**D.I.C.E. Award for Outstanding Achievement in Art Direction**
D.I.C.E. Award for Outstanding Achievement in Art Direction:
The D.I.C.E. Award for Outstanding Achievement in Art Direction is an award presented annually by the Academy of Interactive Arts & Sciences during the academy's annual D.I.C.E. Awards. This award is "presented to the individual or team whose work represents the highest level of achievement in designing a unified graphic look for an interactive title." It originally was presented as Outstanding Achievement in Art/Graphics, with its first winner being Riven: The Sequel to Myst. It was renamed to the Outstanding Achievement Award in Art Direction at the 3rd Annual Interactive Achievement Awards. The award's most recent winner is God of War Ragnarök, developed by Santa Monica Studio and published by Sony Interactive Entertainment.
Multiple nominations and wins:
Developers and publishers Sony has published the most nominees and winners. Sony has had back-to-back wins with games from different developers. Nintendo and Microsoft have also published multiple nominees, and both have two winners. The only other publisher with back-to-back wins has been Square Electronic Arts. Nintendo, Microsoft, Ubisoft, and Square Enix Europe have also had multiple nominees in the same year. Several of Sony's developing subsidiaries have developed multiple nominees and winners. Insomniac Games has developed the most nominees, but only one of its nominated games has won so far (Ratchet & Clank: Rift Apart). Sony subsidiary Naughty Dog is the studio that has developed the most winners in this category. Sony's second-party studios Sucker Punch Productions and Santa Monica Studio, as well as Sony's in-house Japan Studio, have also each won this award twice. The only developer with back-to-back wins has been SquareSoft. Ubisoft Montreal has the distinction of not only developing the most nominees without a single winner (8 nominations, 0 wins), but also publishing the most nominees without a single win (12 nominations, 0 wins).
Franchises The most nominated franchises have been Call of Duty (6 nominations) and Final Fantasy (5 nominations). Final Fantasy, God of War, and Uncharted are the only franchises so far to have won more than once. The Call of Duty franchise has not won any awards in this category despite receiving the most nominations.
**Affine sphere**
Affine sphere:
In mathematics, and especially differential geometry, an affine sphere is a hypersurface for which the affine normals all intersect in a single point. The term affine sphere is used because they play an analogous role in affine differential geometry to that of ordinary spheres in Euclidean differential geometry.
An affine sphere is called improper if all of the affine normals are constant. In that case, the intersection point mentioned above lies on the hyperplane at infinity.
Affine spheres have been the subject of much investigation, with many hundreds of research articles devoted to their study.
Examples:
All quadrics are affine spheres; the quadrics that are also improper affine spheres are the paraboloids.
If ƒ is a smooth function on the plane and the determinant of the Hessian matrix is ±1, then the graph of ƒ in three-space is an improper affine sphere.
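The Hessian criterion above can be checked numerically. A minimal sketch using the hyperbolic paraboloid ƒ(x, y) = xy (a quadric whose Hessian is [[0, 1], [1, 0]], with determinant −1 at every point, so its graph is an improper affine sphere):

```python
def hessian_det(f, x, y, h=1e-4):
    """Approximate the determinant of the Hessian of f at (x, y)
    using central finite differences with step h."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx * fyy - fxy**2

# f(x, y) = x*y: the Hessian determinant is -1 everywhere, so the
# graph z = xy satisfies the improper affine sphere criterion.
f = lambda x, y: x * y
for (x, y) in [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]:
    assert abs(hessian_det(f, x, y) - (-1.0)) < 1e-3
```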
**Server Normal Format**
Server Normal Format:
Server Normal Format (SNF) is a bitmap font format used by the X Window System. It is one of the oldest X Window font formats. Nowadays it is rarely used; however, it is still supported by the latest X.Org server. SNF fonts had the problem of being platform-dependent, so they needed to be compiled on each system. In 1991, X11 moved away from SNF fonts to the Portable Compiled Format, which could be shared between systems.
**Dumbek rhythms**
Dumbek rhythms:
Dumbek rhythms are a collection of rhythms that are usually played with hand drums such as the dumbek. These rhythms are various combinations of three basic sounds: Doom (D), produced with the dominant hand striking the sweet spot of the skin.
Tak (T), produced with the recessive hand striking the rim.
Ka (K), produced with the dominant hand striking the rim.
Notation:
In a simple notation, these three sounds are represented by three letters: D, T, and K. When capitalized, the beat is emphasized, and when lower-case, it is played less emphatically. These basic sounds can be combined with other sounds: Sak or slap (S) (sometimes called 'pa'), produced with the dominant hand. Similar to the doom except the fingers are cupped to capture the air, making a loud terminating sound. The hand remains on the drum head to prevent sustain.
Trill (l), produced by lightly tapping three fingers of one hand in rapid succession on the rim. Roll or rash (r), produced by a rapid alternating pattern of taks and kas. This is the simple dumbek rhythm notation for the 2/4 rhythm known as ayyoub: 1-&-2-&- D--kD-T-
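The letter notation above lends itself to a simple parser. A minimal sketch (the function and variable names are illustrative, not a standard; '-' is read as a rest, and case encodes emphasis as described above):

```python
# Map of notation letters to sound names, per the notation section.
SOUNDS = {"d": "doom", "t": "tak", "k": "ka", "s": "sak", "l": "trill", "r": "roll"}

def parse_rhythm(pattern):
    """Turn a notation string like 'D--kD-T-' into a list of
    (sound, emphasized) pairs; '-' marks a rest (None)."""
    beats = []
    for ch in pattern:
        if ch == "-":
            beats.append(None)
        else:
            beats.append((SOUNDS[ch.lower()], ch.isupper()))
    return beats

# The 2/4 rhythm ayyoub from the text, eight eighth-note positions:
ayyoub = parse_rhythm("D--kD-T-")
# -> doom, rest, rest, soft ka, doom, rest, tak, rest
```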
Rhythms:
There are many traditional rhythms. Some are much more popular than others. The "big six" Middle Eastern rhythms are Ayyoub, Beledi (Masmoudi Saghir), Chiftitelli, Maqsoum, Masmoudi and Saidi.
**FAD-dependent urate hydroxylase**
FAD-dependent urate hydroxylase:
FAD-dependent urate hydroxylase (EC 1.14.13.113, HpxO enzyme, FAD-dependent urate oxidase, urate hydroxylase) is an enzyme with systematic name urate,NADH:oxygen oxidoreductase (5-hydroxyisourate forming). A non-homologous isofunctional enzyme (NISE) to HpxO was found, and named HpyO. HpyO was determined to be a typical Michaelian enzyme. These FAD-dependent urate hydroxylases are flavoproteins.
This enzyme catalyses the following chemical reaction: urate + FADH + H+ + O2 ⇌ 5-hydroxyisourate + FAD+ + H2O
**Companion shadow**
Companion shadow:
Companion shadow is a term used in describing radiographs that denotes the appearance of a smooth, homogeneous radiodensity with a well-defined margin that runs parallel with a bony landmark. Companion shadows represent soft tissue that overlies the respective bony landmark in profile. They are not seen in every radiograph and can be misinterpreted as pathology.
Types of companion shadow:
Clavicular companion shadow is a thin soft-tissue stripe along the upper edge of the clavicle.
Rib companion shadows parallel the ribs, measure 1–5 mm in diameter, and project adjacent to the inferior and inferolateral margins of the first and second ribs and the axillary portions of the lower ribs. These companion shadows of the first and second ribs occur in 35% and 31% of the population, respectively. Rib companion shadows represent the fat and muscles in the intercostal space. The shadows that accompany the ribs may mimic pleural and lung disease.
Scapular companion shadows overlie the scapula, with a smooth, well-defined margin parallel to the medial border of the scapula. The companion shadow results from an unusual radiographic position of the scapula, which causes a soft-tissue fold to occur along its medial border. Winging of the scapula may also be responsible for the shadow. Scapular companion shadows may be mistaken for a soft-tissue or pleural lesion.
**Ubersketch**
Ubersketch:
The Ubersketch is a moniker for a collection of sketches created in The Geometer's Sketchpad by the PRISM-NEO project which mimic (virtual) manipulatives, such as the ones found at The National Library of Virtual Manipulatives. The editor of the collection is Greg Clarke of the Simcoe-Muskoka Catholic District School Board.
There is a related set of Adobe Flash objects called the UberFlash collection which are being implemented as part of the Ontario Ministry of Education's CLIPS project.
The CLIPS calculator has been affectionately called the uberCalc since it is another useful collection of tools compiled by Greg Clarke.
**Sensistor**
Sensistor:
A sensistor is a resistor whose resistance changes with temperature.
The resistance increases exponentially with temperature; that is, the temperature coefficient is positive (e.g. 0.7% per degree Celsius). Sensistors are used in electronic circuits to compensate for temperature influence or as temperature sensors for other circuits. Sensistors are made using very heavily doped semiconductors, so their operation is similar to that of PTC-type thermistors. However, a very heavily doped semiconductor behaves more like a metal, and the resistance change is more gradual than is the case for other PTC thermistors.
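The exponential temperature dependence can be sketched numerically. A minimal sketch using the example coefficient of 0.7% per degree Celsius from the text; the reference resistance and temperature here are hypothetical values chosen only for illustration:

```python
import math

def sensistor_resistance(t_celsius, r0=1000.0, t0=25.0, alpha=0.007):
    """Resistance of a sensistor with positive temperature coefficient
    alpha (fractional change per degree C), relative to r0 ohms at the
    reference temperature t0. (r0 and t0 are illustrative assumptions.)"""
    return r0 * math.exp(alpha * (t_celsius - t0))

# A 10-degree rise increases resistance by roughly 7%:
r_cold = sensistor_resistance(25.0)  # 1000.0 ohms at the reference point
r_hot = sensistor_resistance(35.0)   # about 1072.5 ohms
```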
**Daf-5**
Daf-5:
Daf-5 is an ortholog of the mammalian protein Sno/Ski that is present in the nematode worm Caenorhabditis elegans, acting downstream of the TGFβ signaling pathway. In the absence of the daf-7 signal, daf-5 combines with daf-3, the co-SMAD of C. elegans, to form a heterodimer and initiate dauer development.
**Sleep in space**
Sleep in space:
Sleeping in space is an important part of space medicine and mission planning, with impacts on the health, capabilities and morale of astronauts. Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment.
Astronauts and ground crews frequently suffer from the effects of sleep deprivation and circadian rhythm disruption. Fatigue due to sleep loss, sleep shifting and work overload could cause performance errors that put space flight participants at risk of compromising mission objectives as well as the health and safety of those on board.
Overview:
Sleeping in space requires that astronauts sleep in a crew cabin, a small room about the size of a shower stall. They lie in a sleeping bag which is strapped to the wall. Astronauts have reported having nightmares and dreams, and snoring while sleeping in space. Sleeping and crew accommodations need to be well ventilated; otherwise, astronauts can wake up oxygen-deprived and gasping for air, because a bubble of their own exhaled carbon dioxide has formed around their heads. Brain cells are extremely sensitive to a lack of oxygen and can start dying less than 5 minutes after their oxygen supply disappears; the result is that brain hypoxia can rapidly cause severe brain damage or even death. A decrease of oxygen to the brain can cause dementia and brain damage, as well as a host of other symptoms. In the early 21st century, crew on the ISS were said to average about six hours of sleep per day.
On the ground:
Chronic sleep loss can impact performance similarly to total sleep loss, and recent studies have shown that cognitive impairment after 17 hours of wakefulness is similar to impairment from an elevated blood alcohol level.
It has been suggested that work overload and circadian desynchronization may cause performance impairment. Those who perform shift work suffer from increased fatigue because the timing of their sleep/wake schedule is out of sync with natural daylight (see Shift work syndrome). They are more prone to auto and industrial accidents as well as a decreased quality of work and productivity on the job. Ground crews at NASA are also affected by slam shifting (sleep shifting) while supporting critical International Space Station operations during overnight shifts.
In space:
During the Apollo program, it was discovered that adequate sleep in the small volumes available in the command module and Lunar Module was most easily achieved if (1) there was minimum disruption to the pre-flight circadian rhythm of the crew members; (2) all crew members in the spacecraft slept at the same time; (3) crew members were able to doff their suits before sleeping; (4) work schedules were organized – and revised as needed – to provide an undisturbed (radio quiet) 6-8 hour rest period during each 24-hour period; (5) in zero-gravity, loose restraints were provided to keep the crewmen from drifting; (6) on the lunar surface, a hammock or other form of bed was provided; (7) there was an adequate combination of cabin temperature and sleepwear for comfort; (8) the crew could dim instrument lights and either cover their eyes or exclude sunlight from the cabin; and (9) equipment such as pumps were adequately muffled. NASA management currently has limits in place to restrict the number of hours in which astronauts are to complete tasks and events. This is known as the "Fitness for Duty Standards". Space crews' current nominal number of work hours is 6.5 hours per day, and weekly work time should not exceed 48 hours. NASA defines critical workload overload for a space flight crew as 10-hour work days for 3 days per work week, or more than 60 hours per week (NASA STD-3001, Vol. 1). Astronauts have reported that periods of high-intensity workload can result in mental and physical fatigue. Studies from the medical and aviation industries have shown that increased and intense workloads combined with disturbed sleep and fatigue can lead to significant health issues and performance errors. Research suggests that astronauts' quality and quantity of sleep while in space is markedly reduced compared with that on Earth. The use of sleep-inducing medication could be indicative of poor sleep due to disturbances.
A study in 1997 showed that sleep structure as well as the restorative component of sleep may be disrupted while in space. These disturbances could increase the occurrence of performance errors. Current space flight data shows that accuracy, response time and recall tasks are all affected by sleep loss, work overload, fatigue and circadian desynchronization.
Factors that contribute to sleep loss and fatigue The most common factors that can affect the length and quality of sleep while in space include noise, physical discomfort, voids, disturbances caused by other crew members, and temperature. An evidence-gathering effort is currently underway to evaluate the impact of these individual, physiological and environmental factors on sleep and fatigue. The effects of work-rest schedules, environmental conditions and flight rules and requirements on sleep, fatigue and performance are also being evaluated.
Factors that contribute to circadian desynchronization Exposure to light is the largest contributor to circadian desynchronization on board the ISS. Since the ISS orbits the Earth every 1.5 hours, the flight crew experiences 16 sunrises and sunsets per day. Slam shifting (sleep shifting) is also a considerable external factor that causes circadian desynchronization in the current space flight environment. Other factors that may cause circadian desynchronization in space: shift work, extended work hours, timeline changes, slam shifting (sleep shifting), prolonged light of the lunar day, a Mars sol on Earth, a Mars sol on Mars, and abnormal environmental cues (i.e. unnatural light exposure).
Sleep loss, genetics, and space:
Both acute and chronic partial sleep loss occur frequently in space flight due to operational demands and for physiological reasons not yet entirely understood. Some astronauts are affected more than others. Earth-based research has demonstrated that sleep loss poses risks to astronaut performance, and that there are large, highly reliable individual differences in the magnitude of cognitive performance, fatigue and sleepiness, and sleep homeostatic vulnerability to acute total sleep deprivation and to chronic sleep restriction in healthy adults. The stable, trait-like (phenotypic) inter-individual differences observed in response to sleep loss point to an underlying genetic component. Indeed, data suggest that common genetic variations (polymorphisms) involved in sleep-wake, circadian, and cognitive regulation may serve as markers for prediction of inter-individual differences in sleep homeostatic and neurobehavioral vulnerability to sleep restriction in healthy adults. Identification of genetic predictors of differential vulnerability to sleep restriction will help identify astronauts most in need of fatigue countermeasures in space flight and inform medical standards for obtaining adequate sleep in space.
Computer-based simulation information:
Biomathematical models are being developed to instantiate the biological dynamics of sleep need and circadian timing. These models could predict astronaut performance relative to fatigue and circadian desynchronization.
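As an illustration of what such biomathematical models look like, here is a minimal sketch in the spirit of the classic two-process model of sleep regulation (Process S, homeostatic sleep pressure, plus Process C, a circadian oscillation). All constants and function names here are illustrative assumptions, not parameters of any NASA model:

```python
import math

def sleep_pressure(hours_awake, s0=0.2, upper=1.0, tau=18.0):
    """Process S: homeostatic sleep pressure rises toward an upper
    asymptote during wakefulness (time constant tau, in hours)."""
    return upper - (upper - s0) * math.exp(-hours_awake / tau)

def circadian(hour_of_day, amplitude=0.25, peak_hour=18.0):
    """Process C: a simple sinusoidal circadian alertness component,
    here peaking in the early evening."""
    return amplitude * math.cos(2 * math.pi * (hour_of_day - peak_hour) / 24.0)

def predicted_alertness(hours_awake, hour_of_day):
    """Alertness falls as sleep pressure rises and oscillates with Process C."""
    return circadian(hour_of_day) - sleep_pressure(hours_awake)

# After 17 hours awake (the impairment threshold cited earlier in the
# article), predicted alertness is well below its value after waking:
early = predicted_alertness(1.0, 8.0)
late = predicted_alertness(17.0, 24.0)
```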
**Medial anterior thalamic vein**
Medial anterior thalamic vein:
The paired (right and left) medial anterior thalamic veins (Latin: venae mediales anterior thalami dextra et sinistra) each originate from the medial anterior part of the thalamus. Benno Shlesinger in 1976 classified these veins as belonging to the central group of thalamic veins (venae centrales thalami).
**Defect concentration diagram**
Defect concentration diagram:
The defect concentration diagram (also problem concentration diagram) is a graphical tool that is useful in analyzing the causes of product or part defects. It is a drawing of the product (or other item of interest), with all relevant views displayed, onto which the locations and frequencies of various defects are shown.
Usage:
The defect concentration diagram is used effectively in the following situations: During the data collection phase of problem identification.
Analyzing a part or assembly for possible defects.
Analyzing a product (or a part of a product) being manufactured with several defects.
Steps:
There are a number of steps to follow when constructing a defect concentration diagram: Define the fault or faults being investigated.
Make a map, drawing, or picture.
Mark on the diagram each time a fault occurs and where it occurs.
After a sufficient period of time, analyze it to identify where the faults occur.
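The steps above can be sketched in code: the "map" becomes a set of named regions of the part, and each observed fault increments a counter for its region. The region names and fault data here are hypothetical, chosen only to illustrate the tallying step:

```python
from collections import Counter

# Steps 1-2: define the fault and make a "map" -- here, named regions of a part.
regions = ["top-left", "top-right", "bottom-left", "bottom-right"]

# Step 3: mark each observed fault where it occurs.
observed_faults = ["top-left", "top-left", "bottom-right", "top-left", "bottom-left"]
diagram = Counter(observed_faults)

# Step 4: after enough observations, analyze where the faults concentrate.
hotspot, count = diagram.most_common(1)[0]
# hotspot is "top-left", accounting for 3 of the 5 recorded faults
```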
**Elliptic singularity**
Elliptic singularity:
In algebraic geometry, an elliptic singularity of a surface, introduced by Wagreich (1970), is a surface singularity such that the arithmetic genus of its local ring is 1.
**Multi-Vendor Integration Protocol**
Multi-Vendor Integration Protocol:
The Multi-Vendor Integration Protocol (MVIP) is a hardware bus for computer telephony integration (Audiotex) equipment, a PCM data highway for interconnecting expansion boards inside a PC. It was invented and brought to market by Natural Microsystems Inc (now BPQ Communicationser).
Used to build call center equipment from regular PCs, MVIP provides a second communications bus within the computer that can multiplex up to 256 full-duplex voice channels from one voice card to another. Digital voice, fax and video are bussed over a ribbon cable connected at the top of each ISA or PCI card. MVIP products make a PC perform like a small-scale PBX. The protocol accommodated a variety of expansion boards, including trunk interfaces (usually T1 or ISDN) and voice-processing boards for speech recognition or fax processing. Each board could optionally provide a switch that could interconnect voice channels on the bus, allowing flexible routing of calls within the MVIP bus.
The MVIP bus was promoted as an alternative to the then-dominant PEB bus from Dialogic Corporation, which had much less capacity and was not an open standard.
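The per-board switching capability described above amounts, conceptually, to a timeslot-interchange table: each output channel on the bus is told which input channel to copy. A minimal sketch (the (stream, timeslot) channel layout and function names are simplifications for illustration, not the exact MVIP-90 specification):

```python
# Each bus channel is identified by a (stream, timeslot) pair. A board's
# switch is, conceptually, a table mapping each output channel to the
# input channel whose voice data it should carry.
switch_table = {}

def connect(out_ch, in_ch):
    """Route the voice data arriving on in_ch to out_ch."""
    switch_table[out_ch] = in_ch

def route_frame(frame):
    """Apply the switch table to one frame of bus data, given as a
    dict {(stream, timeslot): sample}."""
    return {out_ch: frame[in_ch] for out_ch, in_ch in switch_table.items()}

# Connect a trunk-interface channel to a voice-processing channel:
connect(out_ch=(4, 7), in_ch=(0, 3))
frame = {(0, 3): 0x5A}
routed = route_frame(frame)   # the sample now appears on channel (4, 7)
```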
**Scaffold/matrix attachment region**
Scaffold/matrix attachment region:
S/MARs (scaffold/matrix attachment regions), otherwise called SARs (scaffold-attachment regions) or MARs (matrix-associated regions), are sequences in the DNA of eukaryotic chromosomes where the nuclear matrix attaches. As architectural DNA components that organize the genome of eukaryotes into functional units within the cell nucleus, S/MARs mediate the structural organization of the chromatin within the nucleus. These elements constitute anchor points of the DNA for the chromatin scaffold and serve to organize the chromatin into structural domains. Studies on individual genes led to the conclusion that the dynamic and complex organization of the chromatin mediated by S/MAR elements plays an important role in the regulation of gene expression.
Overview:
It has been known for many years that a polymer meshwork, a so-called "nuclear matrix" or "nuclear scaffold", is an essential component of eukaryotic nuclei. This nuclear skeleton acts as a dynamic support for many specialized events concerning the readout and spread of genetic information (see below).
S/MARs map to non-random locations in the genome. They occur at the flanks of transcribed regions, in 5′-introns, and also at gene breakpoint cluster regions (BCRs). Being association points for common nuclear structural proteins, S/MARs are required for authentic and efficient chromosomal replication and transcription, for recombination, and for chromosome condensation. S/MARs do not have an obvious consensus sequence. Although prototype elements consist of AT-rich regions several hundred base pairs in length, the overall base composition is definitely not the primary determinant of their activity. Instead, their function requires a pattern of "AT-patches" that confer the propensity for local strand unpairing under torsional strain.
Bioinformatics approaches support the idea that, by these properties, S/MARs not only separate a given transcriptional unit (chromatin domain) from its neighbors, but also provide platforms for the assembly of factors enabling transcriptional events within a given domain. An increased propensity to separate the DNA strands (the so-called 'stress induced duplex destabilization' potential, SIDD) can serve the formation of secondary structures such as cruciforms or slippage structures, which are recognizable features for a number of enzymes (DNAses, topoisomerases, poly(ADP-ribosyl) polymerases and enzymes of the histone-acetylation and DNA-methylation apparatus). S/MARs have been classified as either being constitutive (acting as permanent domain boundaries in all cell types) or facultative (cell type- and activity-related) depending on their dynamic properties.
While the number of S/MARs in the human genome has been estimated to approach 64,000 (chromatin domains) plus an additional 10,000 (replication foci), in 2007 still only a minor fraction (559 for all eukaryotes) had met the standard criteria for an annotation in the S/MARt database.
Context-dependent properties:
Current views of the nuclear matrix envision it as a dynamic entity, which changes its properties along with the requirements of the cell nucleus, much the same as the cytoskeleton adapts its structure and function to external signals. In retrospect, it is of note that the discovery of S/MARs had two major routes: the description of scaffold-attachment elements (SARs) by Laemmli and coworkers, which were thought to demarcate the borders of a given chromatin domain, and the characterization of matrix-associated regions (MARs), the first examples of which supported the immunoglobulin kappa-chain enhancer according to its occupancy with transcription factors. Subsequent work demonstrated both the constitutive (SAR-like) and the facultative (MAR-like) function of the elements depending on the context. Whereas constitutive S/MARs were found to be associated with a DNase I hypersensitive site in 'all' cell types (whether or not the enclosed domain was transcribed), DNase I hypersensitivity of the facultative type depended on the transcriptional status. The major difference between these two functional types of S/MARs is their size: the constitutive elements may extend over several kilobase pairs, whereas the facultative ones are at the lower size limit, around 300 base pairs.
The figure shows our present understanding of these properties, and it incorporates the following findings: the dynamic properties of S/MAR-scaffold contacts as derived by haloFISH investigations; the fact that during transcription DNA is reeled through RNA polymerase, which itself is a fixed component of the nuclear matrix; and the fact that certain domain-intrinsic S/MARs require the support of an adjacent transcription factor to become active.
Use in gene therapy:
As an alternative to viral vectors, which can have unwanted effects in a patient's body, non-viral methods of gene therapy are being studied. One such method uses plasmids with special properties, the so-called episomes. Episomes have the ability to divide together with the rest of the eukaryotic genome during mitosis. Compared with standard plasmids, they are not epigenetically silenced within the nucleus and are not enzymatically destroyed. Episomes acquire this ability through the presence of an S/MAR sequence within their construct.
Additional information:
Recently, Tetko found a strong correlation of intragenic S/MARs with the spatiotemporal expression of genes in Arabidopsis thaliana. On a genome scale, pronounced tissue-, organ- and development-specific expression patterns of S/MAR-containing genes have been detected. Notably, transcription factor genes contain a significantly higher portion of S/MARs. The pronounced difference in the expression characteristics of S/MAR-containing genes emphasizes their functional importance and the importance of structural chromosomal characteristics for gene regulation in plants as well as in other eukaryotes.
**Ultamatix**
Ultamatix:
Ultamatix was a tool to automate the addition of applications, codecs, fonts and libraries not provided directly by the software repositories of Debian-based distributions like Ubuntu.
History:
Ultamatix was based on Automatix, picking up where its development ended. It had many of the same characteristics, but worked on Ubuntu 8.04, and the developer claimed to have fixed many of the problems with Automatix.
Supported software:
Ultamatix allowed the installation of 101 different programs/features, including programs such as the Adobe Flash Player plugin, Adobe Reader, multimedia codecs (including MP3, Windows Media Audio and video-DVD support), fonts, programming software (compilers) and games.
Reception:
Ultamatix has received positive reviews, with Softpedia calling it "Ultamatix: The New Automatix", and Linux.com saying it "may be a worthy successor to Automatix for new Ubuntu and Debian users" and that "The real value of Ultamatix is in making the Linux experience easier for new users". As with its detailed criticism of Automatix, many in the Ubuntu community believe that there are better solutions for installing the programs covered by this tool, many of which can be installed either from the standard Ubuntu repositories or the third-party Medibuntu repository.
Developers and users of Ubuntu have also raised concerns that Ultamatix and Automatix could create longer-term problems, by installing packages in an 'unclean' manner that can prevent the entire Ubuntu system from being upgraded for security and other reasons. The original developer of Automatix has given some positive and negative comments. Other issues are noted in the comments of Softpedia's review and the comments in Linux.com's review.
**O-Desmethylangolensin**
O-Desmethylangolensin:
O-Desmethylangolensin (O-DMA) is a phytoestrogen. It is an intestinal bacterial metabolite of the soy phytoestrogen daidzein. It is produced in some people, deemed O-DMA producers, but not others. O-DMA producers were associated with 69% greater mammographic density and 6% greater bone density.
**Interactive marketing**
Interactive marketing:
Interactive marketing, sometimes called trigger-based or event-driven marketing, is a marketing strategy that uses two-way communication channels to allow consumers to connect with a company directly. Although this exchange can take place in person, in the last decade it has increasingly taken place almost exclusively online through email, social media, and blogs.
History:
As far back as 1995, interactive marketing was seen as the future of e-commerce and digital advertising. By 1997, the Journal of Direct Marketing had re-branded to become the Journal of Interactive Marketing, which continues today. In 1999, Salesforce.com was founded, allowing marketers and salespeople to directly affect and guide potential customers through their company's sales process using Salesforce's cloud-based customer relationship management (CRM) technology. With the advent of content marketing in the late 1990s and the founding of HubSpot in 2006, interactivity has seen a fundamental change from simple two-way communication to gamification and beyond. This particular type of interactive marketing is known as interactive content marketing, and many SaaS companies have been founded to respond to the need for new kinds of content and differentiation between competitors.
Applications:
As interactive marketing relies on having a means of open communication with customers, social media channels have been a large part of this strategy, usually headed up by a company's marketing or customer success departments. The most common application for interactive marketing is using it as a lead generator in a sales funnel. Interactive marketing is nearly inextricably linked to content marketing, so companies can produce audience-relevant content that is shared many times, or "goes viral", and eventually establish themselves as an authority in their particular industry. Consumers tend to trust designated thought leaders in their industry, so this strategy can bring in many inbound leads, for example through gated download pages, which are then nurtured with more content created specifically for them based on the information they have previously shared.
**Interleukin 13 receptor, alpha 1**
Interleukin 13 receptor, alpha 1:
Interleukin 13 receptor, alpha 1, also known as IL13RA1 and CD213A1 (cluster of differentiation 213A1), is a human gene. The protein encoded by this gene is a subunit of the interleukin 13 receptor. This subunit forms a receptor complex with IL4 receptor alpha, a subunit shared by the IL13 and IL4 receptors. This subunit serves as the primary IL13-binding subunit of the IL13 receptor, and may also be a component of IL4 receptors. This protein has been shown to bind the tyrosine kinase TYK2, and thus may mediate the signaling processes that lead to the activation of JAK1, STAT3 and STAT6 induced by IL13 and IL4.
**Hemotympanum**
Hemotympanum:
Hemotympanum, or hematotympanum, refers to the presence of blood in the tympanic cavity of the middle ear, the area behind the eardrum. It is often the result of a basilar skull fracture. In most cases, the blood is trapped behind the eardrum, so no discharge is visible.
Treating hemotympanum depends on the underlying cause.
Presentation:
The most common symptoms of hemotympanum are pain, a sense of fullness in the ear, and hearing loss.
Causes:
Skull fracture: A basal skull fracture is a fracture in one of the bones at the base of the skull. This is almost always caused by impact trauma such as a hard fall or a car crash. If the temporal bone is affected, one or more of the following may co-occur: auricular cerebrospinal fluid discharge; dizziness; bruises around the eyes or behind the ears; facial weakness; and difficulty seeing, smelling, or hearing.
Nasal packing: Following nasal surgery or frequent nosebleeds, gauze or cotton may be inserted into the nose to stop the bleeding, a process called therapeutic nasal packing. Nasal packing sometimes causes blood to back up into the middle ear, causing hemotympanum. Removing the packing may allow the blood to drain from the ear, and antibiotics can prevent an ear infection.
Bleeding disorders: Bleeding disorders, such as hemophilia or idiopathic thrombocytopenic purpura, can also cause hemotympanum. These disorders prevent proper blood clotting; in that circumstance, even a mild head injury or a strong sneeze can cause hemotympanum.
Anticoagulant medications: Anticoagulants, often called blood thinners, are medications that keep blood from clotting too easily. In rare cases, anticoagulants can cause hemotympanum with no underlying cause or injury. Experiencing a head injury while taking anticoagulants increases the likelihood of hemotympanum.
Ear infections: With frequent ear infections, ongoing inflammation and fluid buildup can increase the risk of hemotympanum.
Treatment:
Skull fractures usually heal on their own, but they can also cause several complications. Cerebrospinal fluid leaking out of the ear involves a higher risk of developing meningitis. Treatment may include corticosteroids, antibiotics, or surgery.
**Source Four**
Source Four:
The Electronic Theatre Controls (ETC) Source Four (known by professionals and amateurs alike as the Source 4, S4, or Source 4 Leko, or by its barrel's field angle, e.g. a "19, 26, 36, or 50 degree" fixture or Leko) is an ellipsoidal reflector spotlight (ERS) used in stage lighting. First released in 1992, the Source Four was invented by David Cunningham and features an improved lamp and reflector compared to previous ERS designs, tool-free lamp adjustment, and a rotating, interchangeable shutter barrel. The Source Four is widely used by professional theaters across the globe.
Glass reflector:
The Source Four uses a faceted borosilicate reflector behind the lamp. Nearly all stage lights have some form of reflector positioned behind the lamp to reflect otherwise wasted light out of the front of the instrument. The Source Four's reflector is dichroic, meaning that it reflects light of only certain wavelengths. The Source Four's reflector reflects back 95% of the visible light striking it, while allowing over 90% of the infrared radiation (heat) to pass out the back of the instrument. This produces a much cooler light which is less destructive to gobos or color gels at the front of the fixture and reduces localized heating of the lighting target.
Lamp adjustment:
Lamp adjustment, or bench focus, is used to achieve an even field of light, and to remove hot-spots which can destroy color filters. On the Source Four, adjustment can be done without tools, and can be more accurate than older methods. The most common problem with lamp alignment is the lamp dropping too low in the reflector, causing a hot spot at the bottom of the beam, and a dark area at the top. This may be corrected by a realignment of the lamp.
There are two adjustments that can be made to the cap.
1. The center screw controls the depth of the lamp into the reflector. Loosening this screw causes a spring to push the lamp further inside the reflector, creating a brighter hot spot in the beam. Tightening this screw will draw the lamp backwards, for a flatter, more even light field.
2. The wider nut that sits underneath the center screw moves the lamp's position horizontally and vertically. Loosening this nut will free the lamp assembly from the cap housing. The user can then push the entire adjustment screw and lamp along the plane to center the lamp in the reflector, evening the beam.
Lamps:
HPL incandescent: The proprietary HPL (High Performance Lamp) uses a compact filament, which concentrates light where an ellipsoidal reflector uses it most efficiently. At 575 watts, HPL lamps can produce as much light as an older 1000 watt lamp. The Source Four takes its name from the HPL, its light source, which has four filaments. The lamp is also available in 375 W and 750 W versions, at a variety of rated supply voltages. HPL lamps are also available in longer-life versions that reduce the color temperature from 3250 K to 3050 K, giving the lamp a life of around 1500–2000 hours as opposed to the 300–400 hour life of the standard HPL.
Other versions ETC also manufactures a high-intensity discharge (HID) Source Four body with a metal-halide lamp. The fixture has a small box attached to the yoke of the fixture which contains the ballast and other additional control gear required to strike and operate an HID lamp. HID lamps are not dimmable. The HID fixture uses less energy than the standard HPL lamp.
A range of Source Four instruments using LEDs are also available. The shutter, barrel, and lens tubes are identical to the HID and tungsten versions of the Source Four. Control of parameters such as color and light intensity is available via DMX. The power usage is approximately 25% of that of the equivalent tungsten version, although the purchase cost is significantly higher. Lamp life is significantly higher, retaining 70% of its original intensity at 50,000 hours.
Barrel:
The Source Four is the first fixture to feature a rotating shutter barrel, which makes framing objects much easier regardless of lamp orientation. In previous fixtures, the shutters had only a limited range of motion and could not be rotated. The shutters are made from stainless steel, which does not warp as easily under the heat of the lamp. The shutters are arranged in 3 planes, allowing a degree of freedom in shutter placement by the extreme angles they can be racked to. The top and bottom shutters are on their own plane, with the 2 side shutters sharing a single plane.
ETC also offers a variety of interchangeable lens tubes with various field angles: 90, 70, 50, 36, 26, 19, 14, 10, and 5 degrees, some of which are available as enhanced definition lens tubes (EDLT). The Source Four is also available as a zoom fixture with a non-interchangeable lens tube. According to ETC's manual, the lenses in each lens tube cannot be reoriented, added or removed. Lenses come from the factory with a painted-on dot denoting the front face, and the inside of the lens tube shows which slot is for which lens. The exception is the Source Four Zoom, which has a longer barrel and a lens that changes the focal point by moving forward or backward.

Different field angles are needed for different venues with different catwalk and electric systems (and, therefore, different throws). A lens tube with a smaller field angle will light an area from far away, whereas one with a larger field angle, such as a 90 degree, can be much closer to light the same area. A 90 degree Source Four might be used to project a gobo from the rear, only 5 feet away, on a scrim at the back of the stage, while a 10 degree could be used in the back of the house, for example in the technical booth, where a technician could access it to refocus or change gobos during a show. A zoom gives the option of adjusting the field angle within a specified range without changing the lens tube. This is useful for fixtures that need to be re-focused frequently to different areas of the performing space without being rehung in a different position in the lighting rig. There are two Source Four Zoom fixtures, the 15-30 degree and the 25-50 degree.
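The trade-off between field angle and throw distance follows from simple trigonometry. The sketch below is a rough approximation that ignores beam-edge falloff, and the throw distances are illustrative, not taken from any ETC datasheet:

```python
import math

def field_diameter(throw: float, field_angle_deg: float) -> float:
    """Approximate diameter of the lit pool for a given throw distance and field angle."""
    return 2 * throw * math.tan(math.radians(field_angle_deg) / 2)

# A 90-degree tube 5 feet from a scrim covers roughly the same area
# as a 10-degree tube thrown from a booth about 57 feet away:
print(round(field_diameter(5, 90), 1))   # pool from the 90-degree unit at 5 ft
print(round(field_diameter(57, 10), 1))  # pool from the 10-degree unit at 57 ft
```

Both calls come out near a 10-foot pool, which is why the narrow tube lives in the booth and the wide tube sits just behind the scrim.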
Enhanced definition lens tube:
In November 2005, ETC released the Enhanced Definition Lens Tube (EDLT). The EDLT is designed to produce images from gobos and other focus-critical instrument accessories more clearly and accurately than with the standard lens barrels. The lenses in the tube are coated with an anti-reflective material and are machined to more exact standards than the standard Source Four lens. The EDLT also increases lumen output. It is available in 19, 26, 36 and 50 degree barrels.
Third party offerings:
Although the Source Four is designed to accept the same types of accessories as other lighting instruments, when launched in 1992 the smaller lens and barrel size required accessories that fit in a 6.25" holder rather than the 7.5" that was the standard for Lekos at the time. Many third-party manufacturers have designed products specifically for the Source Four.
Ocean Thin Films, a manufacturer of scientific optics and instruments, offers the SeaChanger Color Engine, which utilizes gradient dichroic disks to control color. The unit is mounted between the lamp assembly and optics and is controlled via DMX512. Great American Market (GAM) offers a special effects unit, the SX4, that is mounted inside a Source Four in a similar manner to the SeaChanger Color Engine and offers a large selection of drop-in accessories ranging from gobo-changers to overlapping looping gobos. Several manufacturers of HMI sources, such as Kobold, provide fixtures that deliver "daylight balanced" light through the Source 4 without the use of gels. This is accomplished by replacing the lamp assembly on the back of the Source 4 with an HMI lamp head.
Other Source Four products:
In 1995, ETC introduced the Source Four PAR which is meant to replace traditional PAR cans. It uses the HPL lamp, and has interchangeable lenses.
In 1999, ETC introduced the Source Four PARNel as an alternative to Fresnel lanterns.
In 2002, ETC introduced the Source Four MultiPAR as an alternative to striplights.
In 2004, ETC introduced the Source Four Revolution, ETC's first moving fixture. The Revolution was awarded both the EDDY and ABTT awards. It uses the same filament structure as the familiar HPL Source Four lamp, as opposed to most other moving lights, which use arc lamps or, more recently, LEDs. The Revolution also uses a gel string color scroller instead of the typical color wheel, allowing lighting designers to use familiar gel choices.
In 2011, ETC introduced the Source Four Fresnel, a Fresnel light that uses the HPL lamp.
**B-factory**
B-factory:
In particle physics, a B-factory, or sometimes a beauty factory, is a particle collider experiment designed to produce and detect a large number of B mesons so that their properties and behavior can be measured with small statistical uncertainty. Tau leptons and D mesons are also copiously produced at B-factories.
History and development:
A sort of "prototype" or "precursor" B-factory was the HERA-B experiment at DESY, which was planned to study B-meson physics in the 1990s–2000s, before the actual B-factories were constructed and operational. However, HERA-B is not usually considered a B-factory.
Two B-factories were designed and built in the 1990s, and they operated from late 1999 onward: the Belle experiment at the KEKB collider in Tsukuba, Japan, and the BaBar experiment at the PEP-II collider at SLAC in California, United States. They were both electron-positron colliders with the center of mass energy tuned to the ϒ(4S) resonance peak, which is just above the threshold for decay into two B mesons (both experiments took smaller data samples at different center of mass energies). BaBar prematurely ceased data collection in 2008 due to budget cuts, but Belle ran until 2010, when it stopped data collection both because it had reached its intended integrated luminosity and because construction was to begin on upgrades to the experiment (see below).
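The kinematics behind this choice of beam energy can be written out; the masses below are standard particle-data values quoted from memory, so treat them as approximate:

```latex
m_{\Upsilon(4S)} \approx 10.579~\mathrm{GeV} \;>\; 2\,m_B \approx 2 \times 5.279~\mathrm{GeV} = 10.558~\mathrm{GeV}
```

Because the resonance sits only about 20 MeV above the open-beauty threshold, the B meson pair is produced almost at rest in the ϒ(4S) frame; PEP-II and KEKB both collided beams of unequal energies so that the pair is boosted in the lab, which makes time-dependent measurements of the B decays practical.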
Current experiments:
Three "next generation" B-factories were to be built in the 2010s and 2020s: SuperB near Rome in Italy; Belle II, an upgrade to Belle, and SuperPEP-II, an upgrade to the PEP-II accelerator. SuperB was canceled, and the proposal for SuperPEP-II was never acted upon. However, Belle II successfully started taking data in 2018 and is currently the only next-generation B-factory in operation.In addition to Belle II there is the LHCb-experiment at the LHC (CERN), which started operations in 2010 and studies primarily the physics of bottom-quark containing hadrons, and thus could be understood to be a B-factory of this "next generation." But LHCb is not usually referred to as a B-factory as the experiment and (perhaps more importantly) the corresponding collider (that is, the LHC) are not used solely for the study of b-quark particles but have other purposes beside b-quark physics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**N-Nitrosoglyphosate**
N-Nitrosoglyphosate:
N-Nitrosoglyphosate is the nitrosamine degradation product and synthetic impurity of glyphosate herbicide.
The US EPA limits the N-nitrosoglyphosate impurity to a maximum of 1 ppm in formulated glyphosate products. N-Nitrosoglyphosate can also form from the reaction of nitrates and glyphosate. Formation of N-nitrosoglyphosate has been observed in soils treated with elevated levels of sodium nitrite and glyphosate, though formation is not expected under typical field conditions.
**Stenotype**
Stenotype:
A steno machine, stenotype machine, shorthand machine, stenograph or steno writer is a specialized chorded keyboard or typewriter used by stenographers for shorthand use. In order to pass the United States Registered Professional Reporter test, a trained court reporter or closed captioner must write speeds of approximately 180, 200, and 225 words per minute (wpm) at very high accuracy in the categories of literary, jury charge, and testimony, respectively. Some stenographers can reach 300 words per minute. The website of the California Official Court Reporters Association (COCRA) gives the official record for American English as 375 wpm.
The stenotype keyboard has far fewer keys than a conventional alphanumeric keyboard. Multiple keys are pressed simultaneously (known as "chording" or "stroking") to spell out whole syllables, words, and phrases with a single hand motion. This system makes realtime transcription practical for court reporting and live closed captioning. Because the keyboard does not contain all the letters of the English alphabet, letter combinations are substituted for the missing letters. There are several schools of thought on how to record various sounds, such as the StenEd, Phoenix, and Magnum Steno theories.
History:
The first shorthand machine (the word "stenotype" was not used for another 80 years or more) punched a paper strip and was built in 1830 by Karl Drais, a German inventor. The first practical machine was made in 1863 by the Italian Antonio Michela Zucco and was in actual use from 1880 in the Italian Senate. In New York City on December 24, 1875, John Celivergos Zachos invented a stenotype and filed patent number 175,892 for a typewriter and phonotypic notation application. In 1879, Miles M. Bartholomew invented the shorthand machine. A French version was created by Marc Grandjean in 1909. The direct ancestor of today's stenotype was created by Ward Stone Ireland in about 1913, and the word "stenotype" was applied to his machine and its descendants sometime thereafter.
Modern hardware:
Most modern stenotype keyboards have more in common with computers than they do with typewriters or QWERTY computer keyboards. Most contain microprocessors, and many allow sensitivity adjustments for each individual key. They translate stenotype to the target language internally using user-specific dictionaries, and most have small display screens. They typically store a full day's work in non-volatile memory of some type, such as an SD card. These factors influence the price, along with economies of scale, as only a few thousand stenotype keyboards are sold each year. As of October 2013, student models, such as a Wave writer, sell for about US$1,500 and top-end models sell for approximately US$5,000. Machines that are 10 to 15 years old still resell for upward of $350.
The Open Steno Project has written free open-source software, including Plover, and has developed cheap open-source hardware for stenography. Plover software translates keypresses to Stenotype on any modern keyboard, with a preference given to ortholinear keyboards that have NKRO functionality.
Manufacturers: Stenograph is by far the largest manufacturer of American stenotype keyboards, with an estimated market share in excess of 90%. Their top models are the Luminex professional writer and the Wave student writer. The Stentura paper-based writers and the paperless Élan writers preceded the current models. There were two other large manufacturers in the 1980s (Xscribe, with the StenoRAM line, and BaronData, with the Transcriptor line). Stenograph purchased both companies and discontinued their products. The current manufacturers in the US include:
- Advantage Software (Passport and Passport Touch)
- Neutrino Group (Gemini, Revolution, & Infinity writers)
- ProCAT (Stenopaq, Flash, Stylus, Impression, and Xpression)
- Stenograph (Stentura, élan Mira, Fusion, élan Cybra, Wave, Diamante and Luminex)
- Stenovations (LightSpeed)
- Word Technologies (Tréal)

Hobbyist keyboards: Many steno enthusiasts are making and selling keyboards designed for use with Plover, the open source steno software. Most of these keyboards range from about $100 to $200 and allow the user to use stenography on their computer through Plover. Vendors include:
- g Heavy Industries (Georgi)
- Nolltronics (EcoSteno)
- SOFT/HRUF (Splitography)
- StenoKeyboards (The Uni)
- Stenomod (TinyMod)
- Stenography Store (Starboard)
Keyboard layout:
Stenotype keys normally are made of a hard, high-luster acrylic material with no markings. The keyboard layout of the American stenotype machine is shown at the top / right.
In "home position", the fingers of the left hand rest along the gap between the two main rows of keys to the left of the asterisk (little finger on the "S" to forefinger on the "H" and "R"). These fingers are used to generate initial consonants. The fingers of the right hand lie in the corresponding position to the right of the asterisk (forefinger on "FR" to little finger on "TS"), and are used for final consonants. The thumbs produce the vowels.
The system is roughly phonetic; for example the word cat would be written by a single stroke expressing the initial K, the vowel A, and the final T.
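In software terms, translating such a stroke is a straight dictionary lookup from chord to text, which is essentially what CAT software and Plover do. The sketch below uses made-up dictionary entries (real steno theories spell strokes differently) just to show the idea:

```python
# Hypothetical stroke-to-text entries; real theories and dictionaries differ.
STENO_DICT = {
    "KAT": "cat",   # one stroke: initial K, vowel A, final T
    "TH": "this",   # a brief: a whole word in a short chord
}

def translate(strokes, dictionary):
    """Translate strokes one at a time, leaving unknown strokes as raw steno."""
    return " ".join(dictionary.get(stroke, stroke) for stroke in strokes)

print(translate(["TH", "KAT"], STENO_DICT))  # this cat
```

Leaving misses as raw steno mirrors how untranslated strokes show up in a transcript draft for a scopist to resolve.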
To enter a number, a user presses the number bar at the top of the keyboard at the same time as the other keys, much like the Shift key on a QWERTY-based keyboard. The illustration shows which lettered keys correspond to which digits. Numbers can be chorded, just as letters can. They read from left to right across the keyboard. It is possible to write 137 in one stroke by pressing the number bar along with SP P, but it takes three separate strokes to write 731. Many court reporters and stenocaptioners write out numbers phonetically instead of using the number bar.
There are various rule sets, known as theories, to combine letters to make different sounds; different court reporters use different theories in their work. Historically, reporters often created "briefs" (abbreviations) on-the-fly, and sometimes mixed theories, which could make it difficult for one reporter to read another reporter's notes, but current versions of theories are primarily designed for computerized translation using a standardized dictionary provided by the company that promulgates the theory, which forces reporters to stick with one theory and use only the specific combinations in that company's dictionary. However, it is not uncommon for students and reporters to add a significant number of entries to a stock dictionary, usually when creating briefs of their own.
Some court reporters use scopists to translate and edit their work. A scopist is a person who is trained in the phonetic writing system, English punctuation, and usually in legal formatting. They are especially helpful when court reporters are working so much that they do not have time to edit their own work. Both scopists and proofreaders work closely with court reporters to ensure an accurate transcript. The widespread use of realtime translation of the strokes has increased the demand for scopists to work simultaneously with the court reporter. With transcripts produced on computer-aided transcription (CAT) software, a scopist no longer needs to have any knowledge of shorthand theories, because the software converts shorthand to text in real time via a dictionary. However, it may still be helpful in some situations while scoping, as misstroked words may not translate and would appear in steno. Depending on availability of scopists and proofreaders, court reporters may use a scopist only to clean up a rough draft of their transcript, then proofread and certify the transcript themselves, or they may use neither and produce a final transcript by themselves, though this is a very time-consuming practice.
Steno paper: Steno paper has become almost obsolete with the advancement of paperless stenotype machines. When it is used, steno paper comes out of a stenotype machine at the rate of one row per chord, with the pressed letters printed out in 22 columns corresponding to the 22 keys, in the following order: STKPWHRAO*EUFRPBLGTSDZ

Chords: This is a basic chart of the letters of this machine. There are, however, different writing theories that represent some letters or sounds differently (e.g., the *F for final v in the chart below), and each court reporter develops personalized "briefs" and alternate ways of writing things.
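The fixed column order lends itself to a direct rendering: each chord becomes one paper row, with every pressed key printed in its own column and the rest left blank. A minimal sketch, where the column indices for "cat" are assumptions read off the order quoted above (duplicated letters such as S, T, P and R occupy distinct columns):

```python
KEY_ORDER = "STKPWHRAO*EUFRPBLGTSDZ"  # the 22 paper columns, left to right

def paper_row(pressed: set) -> str:
    """Render one chord as a steno-paper row: each pressed column index prints
    its key letter; every other column stays blank."""
    return "".join(ch if i in pressed else " " for i, ch in enumerate(KEY_ORDER))

# "cat" as a single stroke: K (column 2), A (column 7), final T (column 18)
print(repr(paper_row({2, 7, 18})))
```

Indexing by column rather than by letter is what keeps, say, the initial-side T (column 1) distinct from the final-side T (column 18).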
Example:
The following example shows how steno paper coming out of the machine represents an English sentence. Notice that key combinations can have different meanings depending on context. In the first stroke of the word example, the PL combination refers to m. In the second stroke of the word, that same key combination refers to the two letters pl.
Many words have been abbreviated: this, of and from are chorded as th, f and fr, and machine and shorthand become mn and shand respectively.
Canada:
There is one NCRA-approved school in all of Canada that teaches stenotype: the captioning and court reporting program at NAIT (Northern Alberta Institute of Technology). This program uses the STKPWHRAO*EUFRPBLGTSDZ keyboard layout. Graduates are trained to be court reporters, broadcast captioners, or CART providers and report a median income of $70,000 CAD between 2017 and 2020.
Other systems:
English: In addition to the American stenotype layout STKPWHRAO*EUFRPBLGTSDZ described above, used internationally (Ward Stone Ireland, 1913), there is also the Possum Palantype system, still in use in the UK.
Italian: Two stenotype layouts are in use for the Italian language: Michela and Melani. The former is used by the Italian Senate.
Korean: The main stenotype systems in Korea are CAS and Sorizava.
Other languages: The Portuguese language has two stenotype systems. The Brazilian system uses the same layout as the American English one.
The Japanese language uses a StenoWord system with ten remapped keys or a more conventional Sokutaipu system.
As with the Korean language, Chinese stenotype layouts depend on the manufacturer, four of which are most commonly encountered in the market. A combination of chording and abbreviation is used.
**Atranorin**
Atranorin:
Atranorin is a chemical substance produced by some species of lichen. It is a secondary metabolite belonging to a group of compounds known as depsides. Atranorin has analgesic, anti-inflammatory, antibacterial, antifungal, cytotoxic, antioxidant, antiviral, and immunomodulatory properties. In rare cases, people can have an allergic reaction to atranorin.
**Silent call**
Silent call:
A silent call is a telephone call in which the calling party does not speak when the call is answered. Most such calls are generated by a cold call telemarketing operation's predictive dialer which makes many calls, and sometimes does not have an agent immediately available to handle an answered call; the called party hears silence ("dead air"), followed by the call being disconnected. This differs from a ghost call, which is not dialed intentionally, and is due to technical issues or pocket dialing.
In the U.S., the Federal Trade Commission (FTC) uses the term "abandoned call" instead of silent call in its regulations applying to telemarketing. "Abandoned call" in non-FTC contexts may refer to a caller who hangs up when the call is not answered promptly.
For more information on why the technology makes silent calls, see predictive dialer.
Other reasons for silent calls:
People using telecommunications devices for the deaf (TDDs) and calling public agencies might expect the agency being called to answer with a teletypewriter (TTY). The caller waits for "TTY tones" to be generated over the phone line. The answerer should place the phone receiver into the TTY coupler and type anything ("Hello, Go Ahead") so the caller knows it is connected. It is inappropriate for public agencies that have a TTY to hang up on a silent call.
Many silent calls are a result of a process known as pinging. This is very similar to an Internet Protocol (IP) ping, where the intention is to see if there is life at the destination. This is often used by data cleaning companies as a method of removing dead telephone numbers from old lists. Each number is dialed and immediately dropped if the call appears to be going through; if it is not dropped quickly enough, it may make the telephone ring briefly.
Telemarketing and the UK DMA:
It is believed that most, although not all, silent calls are made by telemarketing agencies.
Around 10% of dialler users in the UK are members of the Direct Marketing Association (DMA) which is an industry interest group for telemarketing in the UK. The DMA code of conduct rules that no more than 5% of the calls made per day should be silent. Many consumer groups consider this too high, and initially campaigned for this to be lowered to 3%.
The legal position in the UK: There is no law that declares silent calls illegal, although the Communications Act 2003 states: "127.2 A person is guilty of an offence if, for the purpose of causing annoyance, inconvenience or needless anxiety to another, he- .../cut/...
(c) persistently makes use of a public electronic communications network." In 2003, OFCOM investigated two separate companies who were believed to be making silent calls. They were investigated under this section of the legislation because it was believed that they were "causing annoyance, inconvenience or needless anxiety". Unofficial warnings were issued, but OFCOM did not use powers given to it under the act until 3 May 2005, when it issued a "notification" (an official written warning) to one of the companies. The terms of the notification imposed a maximum proportion of 5% silent calls.
On 1 March 2006, OFCOM published their revised policy, requiring that:
- Abandoned calls must carry a recorded message about the company calling.
- Calling line identification must be presented on all outbound calls from call centres using automated calling systems.
- Telephone calls abandoned should not be re-dialed for at least 72 hours, unless a dedicated person is available.
- Abandoned call rates must be less than 3% in any 24-hour period.
- Suitable records must be kept.

OFCOM also announced that it had completed an investigation into seven companies in relation to silent calls. Notifications were issued to four of these, imposing a new limit of 3% abandoned calls. An undertaking (without official notification) was secured from another company for the same performance. The sixth company's actions did not constitute persistent misuse. The final company has stopped accepting contracts to send unsolicited fax communications.
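At its core, the 3% ceiling is a ratio check over a 24-hour window. The sketch below is a simplified reading of the rule (Ofcom's actual counting conventions are more detailed, so treat this as illustrative only):

```python
def abandoned_call_rate(abandoned: int, live_calls: int) -> float:
    """Abandoned calls as a fraction of all live calls in a 24-hour period."""
    return abandoned / live_calls if live_calls else 0.0

def within_limit(abandoned: int, live_calls: int, limit: float = 0.03) -> bool:
    """True if the dialler stayed under the abandoned-call ceiling."""
    return abandoned_call_rate(abandoned, live_calls) < limit

print(within_limit(2, 100))  # 2% abandoned: within the 3% ceiling
print(within_limit(5, 100))  # 5% abandoned: a breach
```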
On 3 November 2006, Ofcom published a statement indicating that it was taking action against four companies: Carphone Warehouse plc, Brakenbay Kitchens Ltd, Space Kitchens Ltd, and IDT Direct Ltd (trading as Toucan). The full notification is at OFCOM.org.uk. In 2008, Barclaycard was fined the maximum penalty at the time of £50,000. A government consultation was started with a view to increasing the maximum penalty, with suggestions ranging from £250,000 to £2,000,000.
On 1 February 2011, new legislation came into force, with a maximum penalty of £2,000,000 for companies that issue silent calls more than once a day to the same customer.
Unfortunately, a large proportion of nuisance telemarketing calls to the UK originate outside the UK, so are not subject to OFCOM regulation.
The No Agent Available (NAA) message: One solution to silent calls is for marketing organizations to play a recorded message to the consumer explaining why the call was silent and why no agent was available. The industry, led by the DMA, resisted this for a long time because it was assumed that it would be deemed illegal under article 19 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PERC): "Use of automated calling systems 19. - (1) A person shall neither transmit, nor instigate the transmission of, communications comprising recorded matter for direct marketing purposes by means of an automated calling system except in the circumstances referred to in paragraph (2)." (Further guidance on the PERC 2003.) However, after extensive campaigning from a consumer called David Hickson, the industry reviewed its position and started to investigate the matter further.
A private consulting firm contacted the information commissioner on 13 July 2005 and received confirmation that an NAA message is acceptable, on the condition that the message itself does not contain any marketing information. On 25 July 2005, this consulting firm released a framework to help call centres implement the NAA message, entitled the Voluntary Dialing Code. However, many believed that even if an NAA message was legal in terms of the PERC, OFCOM could still find that it was persistent misuse.
On 17 August 2005, MP John Hemming received confirmation from OFCOM that usage of the NAA message would not be termed a persistent misuse, and there was now a widespread belief that OFCOM would conclude that the six companies it was investigating should stop making silent calls and use the information message instead.
OFCOM did not indicate what quantity of NAA is acceptable.
On 18 August 2005, the DMA pledged to update its code.
**CoaXPress**
CoaXPress:
CoaXPress (CXP) is a digital interface standard developed for high speed image data transmission in machine vision applications.
The name is a portmanteau of 'express' (as in express train) and 'coaxial' to emphasize CoaXPress is faster than other standards (e.g. Camera Link, or GigE Vision) and uses 75 ohm coaxial cables as the physical transmission medium.
CoaXPress is mostly used in digital imaging applications but it is also suitable for high-speed transmission of universal digital data.
A 'device' that generates and transmits data (e.g. an industrial digital camera) is connected with one or more coaxial cables to a 'host' that receives the data (e.g. a frame grabber board in a computer). The CoaXPress 1.0 and 1.1 standards support bit rates of up to 6.25 Gbit/s per coaxial cable from 'device' to 'host', and the newer 2.0 standard supports up to 12.5 Gbit/s per cable. The number of cables is not limited by the standard. Some recent CoaXPress cameras and frame grabbers use 8 coaxial cables, providing a maximum image data rate of about 4.8 GB/s; the older Camera Link standard can only carry up to 850 MB/s.
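These throughput figures can be cross-checked against the 8b/10b line coding described below (a back-of-the-envelope sketch; `payload_gb_per_s` is a hypothetical helper, and packet/protocol overhead is ignored, which is why the result slightly exceeds the ~4.8 GB/s quoted for real products):

```python
# Rough sketch: estimate usable CoaXPress payload bandwidth from the
# line rate, ignoring packet/protocol overhead (hence slightly above
# the ~4.8 GB/s quoted for real 8-cable products).
def payload_gb_per_s(line_rate_gbps, cables, coding_efficiency=0.8):
    """8b/10b coding carries 8 payload bits per 10 line bits (80%)."""
    return line_rate_gbps * cables * coding_efficiency / 8  # bits -> bytes

print(payload_gb_per_s(6.25, 8))   # 8 cables at 6.25 Gbit/s: 5.0 GB/s upper bound
print(payload_gb_per_s(12.5, 1))   # one 12.5 Gbit/s (2.0-standard) cable: 1.25 GB/s
```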
A low-speed uplink, operating at up to 41.6 Mbit/s, is available to control the 'device' or for triggering. A CoaXPress 'host' can supply 24 V over the coaxial cable, up to 13 W per cable. The CoaXPress standard requires that both the 'device' and the 'host' support GenICam, a standardized generic programming interface.
Unlike Camera Link, which is built on pure LVDS with no transport layer, CoaXPress transmits data in packets (cf. network packet) using 8b/10b encoding and provides CRC error checking.
CoaXPress competes with the Camera Link HS standard by the Automated Imaging Association.
History:
CoaXPress was developed in 2008 by six companies, of which Adimec, EqcoLogic (now part of Microchip) and Active Silicon were among the biggest drivers. After the failure of Visilink, the goal was to develop a successor to the Camera Link standard for high-speed, data-rich vision communication. The standard was first demonstrated in November 2008 at the "Vision" trade show. After a good reception, a standard-writing consortium of industrial companies, consisting of Adimec, EqcoLogic, Active Silicon, AVAL DATA, NED and Components Express, was formed in early 2009. At the next Vision show in 2009, the consortium received the Vision Award for its efforts to further the cause of (machine) vision applications. By this time the Japan Industrial Imaging Association had adopted CoaXPress to mature it into an official standard, and the first draft version 1.0 was presented in December 2010.
Cabling and connectors:
The transmission medium for CoaXPress is coaxial cable with a characteristic impedance of 75 Ω. The maximum transmission distance depends on the bit rate and the quality of the cable. RG11, RG6, RG59 and other cable types can be used. It is also possible to reuse existing coaxial cable when upgrading from an analogue to a digital camera system.
The original connector for CoaXPress is a 75 Ω IEC 61169-8 BNC connector. The smaller DIN 1.0/2.3 connector was added in CoaXPress 1.1, and the Micro-BNC connector was then added in CoaXPress 2.0 for the new speeds faster than CXP-6. Most recent camera and frame grabber products use either DIN 1.0/2.3 or Micro-BNC connectors, and the IEC 61169-8 BNC has become rather rare. Solutions with a 5W5 connector have also been demonstrated, but these are not officially supported by the CoaXPress consortium.
Variants:
CoaXPress is a scalable standard and can be used for connections from 1.25 Gbit/s up to 25 Gbit/s and more. Published cable-length figures represent typical practical lengths; the CoaXPress specification only defines the electrical characteristics of CXP cables for each speed class and does not explicitly specify maximum lengths.
Low speed uplink:
CoaXPress supports a low-speed uplink channel from frame grabber to camera. This uplink channel has a fixed bit rate of 20.833 Mbit/s in versions 1.0 and 1.1 of the standard and 41.667 Mbit/s in version 2.0. The uplink channel uses 8b/10b encoding. The uplink can be used for camera control, triggering and firmware updates.
When using the multilane DIN 1.0/2.3 cabling solution an optional high speed uplink can also be used, allowing 6.25 Gbit/s uplink communication to the camera. This can be used for very accurate triggering.
Usage:
The most common application is to interface cameras to computers (via a frame grabber) on applications (such as Machine vision) which involve automated acquisition and analysis of images. Some cameras and frame grabbers have been introduced which support and utilize the CoaXPress interface standard.
Implementation:
So far, only one company, Microchip (through its acquisition of EqcoLogic), develops CoaXPress-compatible driver and equalizer devices. These devices must be paired with an FPGA in order to implement the CoaXPress protocol.
The protocol itself is implemented in an FPGA IP core designed specifically for CoaXPress, which handles all the features defined by the standard. Each side of the vision system, e.g. camera or frame grabber, requires a dedicated FPGA IP core.
**Worthington's White Shield**
Worthington's White Shield (5.6% ABV) was an India pale ale (IPA) available principally in bottle-conditioned form. White Shield was first brewed by the Worthington Brewery in Burton upon Trent in 1829, primarily for export to the British Empire. Worthington merged with local rival Bass in 1927, which was itself taken over by Coors in 2002.
White Shield won the CAMRA Champion Bottled Beer of Britain Gold award three times, more than any other beer. Molson Coors announced that production of the beer would end in August 2023.
Production:
White Shield was generally available as a bottle-conditioned beer, although it was periodically available in cask-conditioned form. White Shield was brewed using pale malt and a small amount of crystal malt. The hops used were Challenger, Fuggles and Northdown. Different yeasts were used for primary and secondary fermentation. After primary fermentation, the beer was conditioned in bulk for three weeks. Once packaged, it was matured in the bottle for a month before being sent out for distribution. Molson Coors claimed that the beer would continue to mature in the bottle for up to three years. Former brewer Steve Wellington described the product in 2011 as "pretty much unchanged since appearing first in 1829".
History:
Worthington launched East India Pale Ale, their first IPA, in 1829. It was exported to British expatriates across the Empire, mostly officers and civil servants, as the soldiers tended to drink porter, which was more affordable. The growth of the railway network allowed for increased distribution of the beer throughout the United Kingdom. The beer was brewed using the Burton Union system. The White Shield logo was introduced from the 1870s, and by the end of the nineteenth century the beer had become known by this name among drinkers. Worthington officially renamed their India Pale Ale White Shield in 1950.
92,000 barrels of White Shield were brewed in 1952–53. Bass announced that White Shield would be discontinued in 1961: it was unpopular with many publicans as it had to be stored at a certain temperature and could not be served chilled. Bass ultimately reversed their decision, but just 15,000 barrels were brewed in 1965. Bass lowered the alcohol content of the beer in 1967.
White Shield found renewed popularity in the early 1970s as the demand for real ale grew, but lost this position as the availability of cask ale improved. Bass relocated production from Burton to their Hope & Anchor brewery in Sheffield in 1981, and the beer ceased to be brewed using the Burton Union method. Production in 1988 totalled 12,000 barrels. The Hope & Anchor brewery was closed down in 1992, and production was moved to Cape Hill in Birmingham, before being contracted to King and Barnes of Sussex in 1998. By this time, production was down to just 1,000 barrels a year, and the beer's long-term survival was in doubt. The King and Barnes brewery closed down in 2000, and production moved to the Bass-owned White Shield microbrewery in Burton upon Trent. In 2000, a total of 500 barrels were produced. In 2010, production was moved to the newly constructed William Worthington's Brewery, a microbrewery based at the National Brewery Centre in Burton. In 2012, increasing demand saw White Shield production moved to the main Coors brewery in Burton. Roger Protz reported that White Shield was the highest-selling bottle-conditioned beer in Britain in 2013. However, in 2018 he suggested that distribution of the beer had declined.
**Ventricular inversion**
Ventricular inversion is a condition in which the anatomic right ventricle of the heart is on the left side of the interventricular septum and the anatomic left ventricle is on the right.
**Lamin B receptor**
Lamin-B receptor is a protein, and in humans, it is encoded by the LBR gene.
Function:
The protein encoded by this gene belongs to the ERG4/ERG24 family. It localizes to the inner membrane of the nuclear envelope and anchors the lamina and the heterochromatin to the membrane. It may mediate the interaction between chromatin and lamin B. Mutations of this gene have been associated with autosomal recessive HEM/Greenberg skeletal dysplasia. Alternative splicing occurs at this locus, and two transcript variants encoding the same protein have been identified.
Clinical significance:
There is evidence tying it to Greenberg dysplasia and Pelger-Huet anomaly.
Interactions:
Lamin B receptor has been shown to interact with CBX3 and CBX5. LBR also interacts with the long non-coding RNA XIST in mouse cells and potentially assists the spreading of XIST across the X chromosome in differentiating female embryonic stem cells, though it may be redundant for correct XCI in vivo.
**Cross-interleaved Reed–Solomon coding**
In the compact disc system, cross-interleaved Reed–Solomon code (CIRC) provides error detection and error correction. CIRC adds to every three data bytes one redundant parity byte.
Overview:
Reed–Solomon codes are specifically useful in combating mixtures of random and burst errors. CIRC corrects error bursts up to 3,500 bits in sequence (2.4 mm in length as seen on CD surface) and compensates for error bursts up to 12,000 bits (8.5 mm) that may be caused by minor scratches.
Characteristics:
- High random-error correctability
- Long burst-error correctability
- If the burst-correction capability is exceeded, interpolation may provide concealment by approximation
- A simple decoder strategy is possible with reasonably sized external random-access memory
- Very high efficiency
- Room for future introduction of four audio channels without major changes in the format (as of 2008, this had not been implemented)
Interleave:
Errors found in compact discs (CDs) are a combination of random and burst errors. In order to alleviate the strain on the error control code, some form of interleaving is required. The CD system employs two concatenated Reed–Solomon codes, which are interleaved cross-wise. Judicious positioning of the stereo channels as well as the audio samples on even or odd-number instants within the interleaving scheme, provide the error concealment ability, and the multitude of interleave structures used on the CD makes it possible to correct and detect errors with a relatively low amount of redundancy.
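The effect of interleaving can be illustrated with a simple block interleaver (a toy sketch, not the actual CIRC frame layout): a burst of channel errors is spread out so that, after deinterleaving, each codeword sees only a few errors that a Reed–Solomon decoder could correct.

```python
# Toy block interleaver: write symbols row by row, read column by column.
# Not the CIRC layout, but it shows how a channel burst is dispersed.
def interleave(data, depth):
    assert len(data) % depth == 0
    rows = [data[i:i + depth] for i in range(0, len(data), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def deinterleave(data, depth):
    nrows = len(data) // depth
    cols = [data[i:i + nrows] for i in range(0, len(data), nrows)]
    return [cols[c][r] for r in range(nrows) for c in range(depth)]

data = list(range(16))
sent = interleave(data, 4)
sent[4:8] = ["X"] * 4            # a 4-symbol burst error on the channel
received = deinterleave(sent, 4)
# After deinterleaving, each 4-symbol row holds exactly one error:
print([received[i:i + 4] for i in range(0, 16, 4)])
```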
**Castelnuovo curve**
In algebraic geometry, a Castelnuovo curve, studied by Castelnuovo (1889), is a curve in projective space Pn of maximal genus g among irreducible non-degenerate curves of given degree d.
Castelnuovo showed that the maximal genus is given by the Castelnuovo bound g ≤ (n−1)m(m−1)/2 + mε, where m and ε are the quotient and remainder when dividing d−1 by n−1.
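The bound is easy to evaluate directly (a small sketch; `castelnuovo_bound` is a hypothetical helper name):

```python
# Sketch: evaluate the Castelnuovo bound on the genus of an irreducible
# non-degenerate curve of degree d in P^n, following the formula above.
def castelnuovo_bound(d, n):
    m, eps = divmod(d - 1, n - 1)          # quotient and remainder
    return (n - 1) * m * (m - 1) // 2 + m * eps

# Plane curves (n = 2) recover the classical genus formula (d-1)(d-2)/2:
print(castelnuovo_bound(4, 2))  # 3, the genus of a smooth plane quartic
print(castelnuovo_bound(6, 3))  # 4, attained by a (2,3) complete intersection in P^3
```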
Castelnuovo described the curves satisfying this bound, showing in particular that they lie on either a rational normal scroll or on the Veronese surface.
**Genetic saturation**
Genetic saturation is the result of multiple substitutions at the same site in a sequence, or identical substitutions in different sequences, such that the apparent sequence divergence rate is lower than the actual divergence that has occurred. When comparing two or more genetic sequences consisting of single nucleotides, the differences observed reflect only the final state of the nucleotide sequence. Single nucleotides that undergo genetic saturation change multiple times, sometimes back to their original nucleotide or to a nucleotide shared with the compared sequence. Without genetic information from intermediate taxa, it is difficult to know how much saturation, if any, has occurred in an observed sequence. Genetic saturation occurs most rapidly in fast-evolving sequences, such as the hypervariable region of mitochondrial DNA, or in short tandem repeats such as those on the Y-chromosome. In phylogenetics, saturation effects result in long branch attraction, in which the most distant lineages have misleadingly short branch lengths. Saturation also decreases the phylogenetic information contained in the sequences.
Phylogenetic saturation:
Multiple substitutions:
Multiple substitutions take place when single nucleotides undergo multiple changes before reaching their final nucleotide identity. A sequence is said to be saturated because mutation has acted multiple times upon its nucleotides, and the observed change in sequence is, in fact, less than the historical change in sequence.
Detection:
It is possible to estimate the amount of saturation that a sequence might have undergone by estimating the substitution rate of a genetic sequence and how much time has passed since divergence. Divergence rates are estimated from a variety of sources, including ancestral DNA, fossil records and biogeographical events. This use of molecular clocks to determine divergence is controversial because of its potential for inaccuracy and the assumptions made in the model (such as a consistent mutation rate for all branches), and it is used mostly as an estimation tool. Genetic saturation can also be estimated by comparing the number of observed differences in nucleotide sequences between multiple pairs of species. The number of observed substitutions between sequences of different species can be compared to the number of inferred substitutions based on branch length to find the approximate point where the number of inferred substitutions surpasses the number of observed substitutions. This method can give researchers an idea of the level of saturation of a particular gene but is thought to underestimate the amount of saturation, especially for very large branch lengths.
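The point that observed differences understate true divergence can be made concrete under a simple substitution model. The Jukes–Cantor correction below assumes equal substitution rates among the four bases (an assumption not made in the text, used here only for illustration) and recovers the expected number of substitutions per site d from the observed proportion of differing sites p:

```python
import math

# Jukes-Cantor (JC69) distance correction: the observed proportion of
# differing sites p understates the true number of substitutions per
# site d once multiple hits accumulate at the same positions.
def jc69_distance(p):
    return -0.75 * math.log(1 - 4 * p / 3)

for p in (0.05, 0.30, 0.60):
    # the corrected distance grows faster than p as saturation sets in
    print(f"observed p = {p:.2f} -> corrected d = {jc69_distance(p):.3f}")
```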
Impact on phylogenetics:
In the field of molecular phylogenetics, the distances and relationships between species are investigated by looking at the DNA, RNA or amino acid sequences of an organism. When phylogenetic trees are constructed without considering possible saturation, multiple substitutions can cause the distance between taxa to appear much smaller than the true distance. Multiple sequence alignment, a common technique for constructing phylogenies, relies on the comparison of homologous sequences. It can easily be confounded by genetic saturation, because the homologous loci under investigation give no indication of whether more than one substitution per nucleotide separates the taxa being compared.
Saturation decreases the amount of phylogenetic information that can be contained in sequences, especially when deep branches are involved. This is particularly evident in studies examining arthropod groups. Furthermore, saturation effects can lead to a gross underestimation of divergence time, mainly because the phylogenetic signal becomes randomized as observed sequence mutations and substitutions accumulate. The effects of saturation can mask the true divergence time, leading to inaccurate phylogenetic trees.
The principle of parsimony in genetic saturation analysis:
Parsimony plays a fundamental role in genetic saturation analysis. This principle gives preference to the simplest explanation that can account for the data. With regard to genetic saturation, parsimony means that the preferred hypothesis of relationship is the one requiring the smallest number of character changes. Using parsimony to analyze saturated data can lead to conflict when constructing a phylogenetic tree: when only sequence data are used, it is possible to arrive at numerous phylogenetic trees of equal parsimony.
Long branch attraction:
Genetic saturation contributes to long-branch attraction through its ability to greatly scramble genetic sequence without easily observable associated phenotypic changes. Long branch attraction occurs when two relatively distant taxa appear to be closely linked. The more substitution mutations occur, the more likely it is for previously dissimilar sequences to share nucleotides and, as a result, show apparent homology in phylogenetic tree calculations. Long-branch attraction due to saturation has been proposed as the cause of spurious links in ancient phylogenies and puts into question even some of the earliest relationships between eukaryotes, archaea, and eubacteria.
Other uses of "Saturation" in genetics:
Gene site saturation mutagenesis:
Gene site saturation mutagenesis (GSSM) is a mutagenesis technique applied to one or more codons in a gene to create a library of variants covering all other codons at that position. It is used in biochemistry and protein engineering to explore the functions and characteristics of specific amino acid sequences. This systematic identification of amino acid substitutions allows researchers to look at every possible variant at each position, providing crucial structural information about the protein of interest and identifying amino acid positions that are most vital to the function of the protein.
Researchers often favour a one-step PCR-based approach to explore the specific effects of different variations in an amino acid of interest within a protein with GSSM. With a one-step PCR-based approach, researchers create a primer whose two ends have sequences corresponding to the protein of interest; only the codon of interest is substituted. The type of codon set determines the number of sequences that can be derived from GSSM. To determine which codon set to use, researchers need to check the library quality at the DNA level, which means that massive sequence data are needed. If all 3 positions of the codon can be substituted with each of the four nucleotides, researchers can code for all 20 amino acids. Although it is possible to code for all 20 amino acids this way, it is not the most efficient method. The most efficient method is to use NNK codon degeneracy, also known as a limited codon set, which results in only 32 codons rather than 64.
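The NNK arithmetic can be checked directly (a sketch using the standard genetic code, which is assumed here and not spelled out in the text; N = any base at the first two codon positions, K = G or T at the third):

```python
from itertools import product

# Standard genetic code (translation table 1), with codons ordered so
# that the bases T, C, A, G cycle fastest in the third position.
BASES = "TCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AAS)}

# NNK degeneracy: N = A/C/G/T at positions 1-2, K = G/T at position 3.
nnk = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
amino_acids = {CODE[c] for c in nnk} - {"*"}
print(len(nnk), len(amino_acids))  # 32 codons still covering all 20 amino acids
```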
Advantages of GSSM:
In comparison to other techniques, GSSM offers unique advantages, such as a complete analysis of every position in a given gene, which can be helpful in identifying critical positions. Critical positions are identified by analyzing the magnitude of the effects of mutagenesis, both positive and negative. GSSM can also identify positions that are more flexible, as mutagenesis at these positions has less of an impact.
A residue-specific analysis allows researchers to create a schematic representation of the amino acid, enabling more complex and detailed genetic research in further studies.
An ability to examine the effects of various amino acids without knowing any structural information about the protein; the data collected can then provide valuable insight into this area. Fast delivery times and cost-efficiency. GSSM opened up a whole new frontier in genetic research, as it revolutionized fundamental beliefs about DNA. Before GSSM, researchers mutated DNA through radiation or with various chemicals, both of which are imprecise methods.
**COX-inhibiting nitric oxide donator**
COX-inhibiting nitric oxide donators (CINODs), also known as NO-NSAIDs, are a new class of nonsteroidal anti-inflammatory drug (NSAID) developed with the intention of providing greater safety than existing NSAIDs. These compounds were first described by John Wallace and colleagues. CINODs are generated by fusing an existing NSAID with a nitric oxide (NO)-donating moiety by chemical means, usually via an ester linkage. CINODs retain the anti-inflammatory efficacy of NSAIDs via inhibition of cyclooxygenase (COX) while arguably improving upon gastric and vascular safety, most likely via vasorelaxation, inhibition of leukocyte adhesion and inhibition of caspases, all known effects of NO.
The first CINODs were developed in the 1990s, and as yet none have been approved for use by the general public. The importance of developing such drugs was increased when COX-2-specific NSAIDs rofecoxib (Vioxx) and lumiracoxib (Prexige) were removed from major pharmaceutical markets in the mid-2000s due to vascular safety concerns. In addition, traditional NSAIDs increase blood pressure and interfere with the actions of antihypertensive drugs. Several CINODs are currently being tested in clinical trials, the most advanced of which are being conducted by the French pharmaceutical company NicOx, whose flagship compound naproxcinod (NO-naproxen, nitronaproxen) is in phase III trials for the treatment of osteoarthritis. Naproxcinod is a fusion of naproxen and a NO-donating group. Other CINODs are also being tested by NicOx for the treatment of diseases in which inflammation plays a role.
**Dynkin system**
A Dynkin system, named after Eugene Dynkin, is a collection of subsets of a given universal set Ω satisfying a set of axioms weaker than those of a 𝜎-algebra. Dynkin systems are sometimes referred to as 𝜆-systems (Dynkin himself used this term) or d-systems. These set families have applications in measure theory and probability.
A major application of 𝜆-systems is the π-𝜆 theorem, see below.
Definition:
Let Ω be a nonempty set, and let D be a collection of subsets of Ω (that is, D is a subset of the power set of Ω ). Then D is a Dynkin system if:
1. Ω∈D;
2. D is closed under complements of subsets in supersets: if A,B∈D and A⊆B, then B∖A∈D;
3. D is closed under countable increasing unions: if A1⊆A2⊆A3⊆⋯ is an increasing sequence of sets in D then ⋃n=1∞An∈D.
It is easy to check that any Dynkin system D satisfies:
4. Ω∈D;
5. D is closed under complements in Ω : if A∈D, then Ω∖A∈D; taking A:=Ω shows that ∅∈D;
6. D is closed under countable unions of pairwise disjoint sets: if A1,A2,A3,… is a sequence of pairwise disjoint sets in D (meaning that Ai∩Aj=∅ for all i≠j ) then ⋃n=1∞An∈D. To be clear, this property also holds for finite sequences A1,…,An of pairwise disjoint sets (by letting Ai:=∅ for all i>n ).
Conversely, it is easy to check that a family of sets satisfying conditions 4-6 is a Dynkin system. For this reason, some authors adopt conditions 4-6 to define a Dynkin system, as they are easier to verify.
An important fact is that any Dynkin system that is also a π-system (that is, closed under finite intersections) is a 𝜎-algebra. This can be verified by noting that conditions 2 and 3 together with closure under finite intersections imply closure under finite unions, which in turn implies closure under countable unions.
Given any collection J of subsets of Ω, there exists a unique Dynkin system denoted D{J} which is minimal with respect to containing J.
That is, if D~ is any Dynkin system containing J, then D{J}⊆D~.
D{J} is called the Dynkin system generated by J.
For instance, D{∅}={∅,Ω}.
For another example, let Ω={1,2,3,4} and J={1} ; then D{J}={∅,{1},{2,3,4},Ω}.
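On a finite universe the axioms can be checked by brute force. The sketch below (with `is_dynkin` a hypothetical helper, not from the text) verifies the example D{J} above; note that on a finite set, closure under increasing countable unions is automatic, since the union of an increasing chain equals its largest member.

```python
# Brute-force check of the Dynkin-system axioms on a finite universe.
# Axiom 3 (increasing countable unions) is automatic on a finite set,
# so only axioms 1 and 2 need explicit checking.
def is_dynkin(omega, family):
    D = {frozenset(s) for s in family}
    if frozenset(omega) not in D:
        return False                        # axiom 1: Omega in D
    for a in D:                             # axiom 2: A subset of B
        for b in D:                         #   implies B \ A in D
            if a <= b and (b - a) not in D:
                return False
    return True

omega = {1, 2, 3, 4}
print(is_dynkin(omega, [set(), {1}, {2, 3, 4}, omega]))  # the example D{J}
print(is_dynkin(omega, [set(), {1}, omega]))             # fails: {2,3,4} missing
```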
Sierpiński–Dynkin's π-λ theorem:
Sierpiński-Dynkin's π-𝜆 theorem: If P is a π-system and D is a Dynkin system with P⊆D, then σ{P}⊆D.
In other words, the 𝜎-algebra generated by P is contained in D.
Thus a Dynkin system contains a π-system if and only if it contains the 𝜎-algebra generated by that π-system. One application of Sierpiński-Dynkin's π-𝜆 theorem is the uniqueness of a measure that evaluates the length of an interval (known as the Lebesgue measure): Let (Ω,B,ℓ) be the unit interval [0,1] with the Lebesgue measure on Borel sets. Let m be another measure on Ω satisfying m[(a,b)]=b−a, and let D be the family of sets S such that m[S]=ℓ[S].
Let I := {(a,b),[a,b),(a,b],[a,b]:0<a≤b<1}, and observe that I is closed under finite intersections, that I⊆D, and that B is the 𝜎-algebra generated by I.
It may be shown that D satisfies the above conditions for a Dynkin system. From Sierpiński-Dynkin's π-𝜆 theorem it follows that D in fact includes all of B, which is equivalent to showing that the Lebesgue measure is unique on B.
Application to probability distributions:
The π-𝜆 theorem motivates the common definition of the probability distribution of a random variable X:(Ω,F,P)→R in terms of its cumulative distribution function. Recall that the cumulative distribution function of a random variable is defined as FX(a)=P[X≤a], whereas the seemingly more general law of the variable is the probability measure LX(B)=P[X−1(B)] for B∈B(R), where B(R) is the Borel 𝜎-algebra. The random variables X:(Ω,F,P)→R and Y:(Ω~,F~,P~)→R (on two possibly different probability spaces) are equal in distribution (or law), denoted by X=DY, if they have the same cumulative distribution functions; that is, if FX=FY.
The motivation for the definition stems from the observation that if FX=FY, then that is exactly to say that LX and LY agree on the π-system {(−∞,a]:a∈R} which generates B(R), and so by the example above: LX=LY.
A similar result holds for the joint distribution of a random vector. For example, suppose X and Y are two random variables defined on the same probability space (Ω,F,P), with respectively generated π-systems IX and IY.
The joint cumulative distribution function of (X,Y) is FX,Y(a,b)=P[X≤a,Y≤b]=P[X−1((−∞,a])∩Y−1((−∞,b])]. Note that A=X−1((−∞,a])∈IX and B=Y−1((−∞,b])∈IY.
Because IX,Y:={A∩B:A∈IX,B∈IY} is the π-system generated by the random pair (X,Y), the π-𝜆 theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of (X,Y).
In other words, (X,Y) and (W,Z) have the same distribution if and only if they have the same joint cumulative distribution function.
In the theory of stochastic processes, two processes (Xt)t∈T,(Yt)t∈T are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all t1,…,tn∈T, n∈N, (Xt1,…,Xtn)=D(Yt1,…,Ytn). The proof of this is another application of the π-𝜆 theorem.
**Faugère's F4 and F5 algorithms**
In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel.
The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis: If Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev then we will construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev.
This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences.
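The kind of object these algorithms compute can be illustrated with SymPy, whose `groebner()` routine uses a Buchberger-type algorithm rather than F4/F5; the polynomials below are an arbitrary toy example, not one of Faugère's benchmark systems.

```python
from sympy import groebner, symbols

# Illustration only: a Groebner basis of a small ideal, computed with
# SymPy's Buchberger-type implementation. F4/F5 produce the same kind
# of basis, but far faster on large systems.
x, y = symbols("x y")
G = groebner([x**2 + y, x*y + x], x, y, order="lex")
print(list(G.exprs))

# A Groebner basis decides ideal membership by reduction to zero:
print(G.contains(y**2 + y))  # y**2 + y = (y + 1)*(x**2 + y) - x*(x*y + x)
print(G.contains(x))         # x is not in the ideal
```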
Implementations:
The Faugère F4 algorithm is implemented in:
- FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple;
- the Maple computer algebra system, as the option method=fgb of the function Groebner[gbasis];
- the Magma computer algebra system;
- the SageMath computer algebra system.
Study versions of the Faugère F5 algorithm are implemented in:
- the SINGULAR computer algebra system;
- the SageMath computer algebra system.
Applications:
The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*.
**Paratrooper helmet**
A paratrooper helmet is a type of combat helmet used by paratroopers and airborne forces. The main difference from standard combat helmets is that paratrooper helmets have a different harness and lining to withstand impact when jumping from aircraft and to keep the helmet stable in flight, and most have a lower-profile shell to reduce wind resistance. Most modern combat helmets have features making them suitable for airborne use.
**D-Day Daily Telegraph crossword security alarm**
In 1944, codenames related to the D-Day plans appeared as solutions in crosswords in the British newspaper, The Daily Telegraph, which the British Secret Services initially suspected to be a form of espionage.
Background:
Leonard Dawe, Telegraph crossword compiler, created these puzzles at his home in Leatherhead. Dawe was headmaster of Strand School, which had been evacuated to Effingham, Surrey. Adjacent to the school was a large camp of US and Canadian troops preparing for D-Day, and as security around the camp was lax, there was unrestricted contact between the schoolboys and soldiers. Some of the soldiers' chatter, including D-Day codewords, may thus have been heard and learnt by some of the schoolboys.
Background:
Dawe had developed a habit of saving his crossword-compiling work time by calling boys into his study to fill crossword blanks with words; afterwards Dawe would provide clues for those words. As a result, war-related words including those codenames got into the crosswords; Dawe said later that at the time he did not know that these words were military codewords.
Background:
On 18 August 1942, a day before the Dieppe raid, 'Dieppe' appeared as an answer in The Daily Telegraph crossword (set on 17 August 1942) (clued "French port"), causing a security alarm. The War Office suspected that the crossword had been used to pass intelligence to the enemy and called upon Lord Tweedsmuir, then a senior intelligence officer attached to the Canadian Army, to investigate the crossword. Tweedsmuir, the son of author John Buchan, later commented: "We noticed that the crossword contained the word "Dieppe", and there was an immediate and exhaustive inquiry which also involved MI5. But in the end it was concluded that it was just a remarkable coincidence – a complete fluke".
D-Day alarm:
In the months before D-Day the solution words 'Gold' and 'Sword' (codenames for the two D-Day beaches assigned to the British) and 'Juno' (codename for the D-Day beach assigned to Canada) appeared in The Daily Telegraph crossword solutions, but they are common words in crosswords, and were treated as coincidences. The run of D-Day codewords as The Daily Telegraph crossword solutions continued: 2 May 1944: 'Utah' (17 across, clued as "One of the U.S."): code name for the D-Day beach assigned to the US 4th Infantry Division (Utah Beach). This would have been treated as another coincidence.
22 May 1944: 'Omaha' (3 down, clued as "Red Indian on the Missouri"): code name for the D-Day beach to be taken by the US 1st Infantry Division (Omaha Beach).
27 May 1944: 'Overlord' (11 across, clued as "[common]... but some bigwig like this has stolen some of it at times.", code name for the whole D-Day operation: Operation Overlord) 30 May 1944: 'Mulberry' (11 across, clued as "This bush is a centre of nursery revolutions.", Mulberry harbour) 1 June 1944: 'Neptune' (15 down, clued as "Britannia and he hold to the same thing.", codeword for the naval phase: Operation Neptune).
Investigation:
MI5 became involved and arrested Dawe and a senior colleague, crossword compiler Melville Jones. Both were interrogated intensively, but it was decided that they were innocent, although Dawe nearly lost his job as headmaster. Afterwards, Dawe asked at least one of the boys (Ronald French) where he had got the codewords from, and was alarmed at the contents of the boy's notebook. He gave him a severe reprimand about secrecy and national security during wartime, ordered the notebook to be burnt, and made the boy swear secrecy on the Bible. Publicly, the appearance of the codenames was described as coincidence. Dawe kept his interrogation secret until he described it in a BBC interview in 1958.
Aftermath:
In 1984, the approach of the 40th anniversary of D-Day reminded people of the crossword incident, prompting a check of The Daily Telegraph crosswords set around the time of the 1982 Falklands War for any codewords related to that conflict; none were found. This induced Ronald French, then a property manager in Wolverhampton, to come forward and say that in 1944, as a 14-year-old at the Strand School, he had inserted the D-Day codenames into the crosswords. He believed that hundreds of children must have known what he knew. A fictionalised version of the story appeared in The Mountain and the Molehill in series 1 of the BBC One Screen One anthology series, first broadcast on 15 October 1989. Written by David Reid and directed by Moira Armstrong, it starred Michael Gough as Mr Maggs, a school headmaster based on Dawe. Another fictionalised version appeared in the Norwegian children's book Kodeord Overlord (Codeword Overlord), written by Tor Arve Røssland and published by Vigmostad & Bjørke in 2019, whose main character is also based on Dawe.
**CREB3L1**
CREB3L1:
cAMP responsive element binding protein 3 like 1 is a cAMP-responsive element-binding protein that in humans is encoded by the CREB3L1 gene.
Function:
The protein encoded by this gene is normally found in the membrane of the endoplasmic reticulum (ER). However, upon stress to the ER, the encoded protein is cleaved, and the released cytoplasmic transcription factor domain translocates to the nucleus. There it activates the transcription of target genes by binding to box-B elements. [provided by RefSeq, Jun 2013]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cuddle party**
Cuddle party:
A cuddle party (or a cuddle puddle or snuggle party) is an event designed with the intention of allowing people to experience nonsexual group physical intimacy through cuddling.
History:
Reid Mihalko and Marcia Baczynski, a pair of self-described "relationship coaches" in New York City, founded Cuddle Party in New York on February 29, 2004. According to their website, the events were initially created for friends who were too intimidated to attend Mihalko's informal massage workshops. Upon publication of the Cuddle Party website, the events were opened to the general public and, thanks to a swarm of media attention, became a phenomenon in New York. In order to meet the demand for Cuddle Parties in other cities, Mihalko and Baczynski began a training and certification program in January 2005, and have since trained a number of individuals to facilitate Cuddle Parties in various cities.
Media:
A cuddle party was featured on an episode of CSI: New York titled "Grand Murder at Central Station".
The second season of the popular TV series An Idiot Abroad featured a cuddle party in the episode "Route 66". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Runcinated tesseractic honeycomb**
Runcinated tesseractic honeycomb:
In four-dimensional Euclidean geometry, the runcinated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by a runcination of a tesseractic honeycomb creating runcinated tesseracts, and new tesseract, rectified tesseract and cuboctahedral prism facets.
Related honeycombs:
The [4,3,3,4] Coxeter group generates 31 permutations of uniform tessellations, 21 with distinct symmetry and 20 with distinct geometry. The expanded tesseractic honeycomb (also known as the stericated tesseractic honeycomb) is geometrically identical to the tesseractic honeycomb. Three of the symmetric honeycombs are shared in the [3,4,3,3] family. Two alternations (13) and (17), and the quarter tesseractic (2), are repeated in other families.
**Digestive system surgery**
Digestive system surgery:
Digestive system surgery, or gastrointestinal surgery, can be divided into upper GI surgery and lower GI surgery.
Subtypes:
Upper gastrointestinal: Upper gastrointestinal surgery, often referred to as upper GI surgery, is a practice of surgery that focuses on the upper parts of the gastrointestinal tract. Many operations on the upper gastrointestinal tract are, owing to their complexity, best done only by those who keep in constant practice. Consequently, a general surgeon may specialise in 'upper GI' by attempting to maintain currency in those skills.
Upper GI surgeons would have an interest in, and may exclusively perform, the following operations: pancreaticoduodenectomy, esophagectomy and liver resection. The digestive system processes and absorbs nutrients from food; surgery may be required to remedy or treat certain problems or diseases that affect the digestive tract.
There are many different types of digestive system operations, some of the more popular ones being: 1. Appendectomy: The surgical removal of the appendix, typically as a result of acute appendicitis, an appendix inflammation.
2. Gastric bypass: A weight-loss procedure that includes separating the stomach into an upper pouch that is smaller and a lower pouch that is bigger. Then, a section of the stomach and small intestine are skipped in favor of rearranging the small intestine to link to both pouches. As a result, the stomach can contain less food and nutrients are not as well absorbed, which causes weight loss.
3. Cholecystectomy: Surgically removing the gallbladder, frequently as a result of painful gallstones or other problems.
4. Colectomy: The removal of the colon (large intestine) whole or in part. This procedure is typically done to address problems including colorectal cancer, diverticular disease, or inflammatory bowel disease.
5. Resection of the liver in part: This procedure is frequently carried out to treat liver tumors or to remove damaged liver tissue.
6. Esophagectomy: Removal of the esophagus in whole or in part, usually to treat esophageal cancer.
7. Pancreatic Surgery: procedures involving the pancreas, such as the Whipple surgery (pancreaticoduodenectomy), which is used to treat some forms of pancreatic cancer and other serious pancreatic diseases.
8. Hernia Repair: A hernia, which is the protrusion of an organ or tissue through a weak spot in the abdominal wall, is treated surgically.
These operations can be carried out using conventional open surgical procedures or minimally invasive techniques like laparoscopic or robotic-assisted surgery, which require smaller incisions and result in quicker recoveries. Surgery of the digestive system is a complicated topic that calls for specialized education and experience. To make educated decisions regarding their healthcare, individuals must speak with a trained surgeon about their unique situation, treatment options, and potential hazards.
**Phenaglycodol**
Phenaglycodol:
Phenaglycodol (brand names Acalmid, Acalo, Alterton, Atadiol, Felixyn, Neotran, Pausital, Remin, Sedapsin, Sinforil, Stesil, Ultran) is a drug described as a tranquilizer or sedative which has anxiolytic and anticonvulsant properties. It is related pharmacologically to meprobamate, though it is not a carbamate.
Synthesis:
p-Chloroacetophenone and NaCN are reacted to give the corresponding cyanohydrin (cf. Strecker synthesis), CID:12439573. The cyano group is then hydrated in acid to the corresponding amide, p-chloroatrolactamide, CID:15255544 (4). The amide group is hydrolyzed further with a second equivalent of water in concentrated lye to give p-chloroatrolactic acid, [4445-13-0] (5), which is esterified to ethyl p-chloroatrolactate [100126-96-3] (6). Finally, nucleophilic addition of two equivalents of MeMgI to the ester gives phenaglycodol (7) as crystals.
A mixed pinacol coupling reaction between 4-chloroacetophenone [99-91-2] and acetone, with magnesium activated by a small amount of trimethylsilyl chloride, gave a 40% yield of phenaglycodol.
Notes: See also "Novel trifluoromethyl derivatives of substituted diols", U.S. Patent 3,134,819.
A pinacol rearrangement occurs in acidic water.
**Whole-house fan**
Whole-house fan:
A whole house fan is a type of fan, commonly venting into a building's attic, designed to circulate air in an entire home or building. The fan removes hot air from the building and draws in cooler outdoor air through windows and other openings. While sometimes referred to as an "attic fan", this term properly refers to a powered attic ventilator, which exhausts hot air from the attic to the outside through an opening in the roof or gable at a low velocity.
Description:
A whole house fan pulls air out of a building and forces it into the attic space or, in the case of homes without attics, through an opening in the roof or an outside wall. This forces air from the living areas into the attic and out through the gable and/or soffit vents, while at the same time drawing air from the outside into the living areas through open windows.
Powered attic ventilators, by comparison, simply push hot air out of the attic to facilitate the intake of colder air into the structure.
History:
Before the development of electrical power, the principle of cooling a building by designing it to draw in cooler air from below and vent it from the top was known. In the absence of the forced air movement produced by a fan, careful design to promote a cooling air flow was required. Thomas Jefferson, US president from 1801 to 1809, was personally involved in the designs of his residences at Monticello and Poplar Forest, and was aware of these techniques. Monticello had a large central hall and aligned windows designed to allow a cooling air current to pass through the house, with an octagonal cupola at the top of the house drawing hot air up and out through natural convection. Whole house fans were the only method for cooling homes in the early 1900s. Air conditioning was invented by Carrier in 1907 but did not become popular until the 1950s. Whole house fans are still ideal for cooling homes when the air outside is cooler than the air inside.
Types:
There are four types of whole house fans: Ceiling-mounted: Mounted in the ceiling between the attic and living space.
Ducted: Remotely mounted away from the ceiling, typically hung from the rafters; can exhaust heat from multiple locations; operation is extremely quiet.
Window-mounted: Mounted in a window frame. Can also take cool air in from outside.
Rooftop-mounted: Suitable for homes with no attic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thinc**
Thinc:
Thinc is a thin client protocol, currently at the research stage. Thinc is capable of playing full screen video and sound remotely which is notably a difficult problem for thin client protocols. There is a working VMware appliance available which runs Debian Sid. The appliance also works in VirtualBox. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spontaneous absolute asymmetric synthesis**
Spontaneous absolute asymmetric synthesis:
Spontaneous absolute asymmetric synthesis is a chemical phenomenon that stochastically generates chirality based on autocatalysis and small fluctuations in the ratio of enantiomers present in a racemic mixture. In certain reactions which initially do not contain chiral information, stochastically distributed enantiomeric excess can be observed. The phenomenon is different from chiral amplification, where enantiomeric excess is present from the beginning and not stochastically distributed. Hence, when the experiment is repeated many times, the average enantiomeric excess approaches 0%. The phenomenon has important implications concerning the origin of homochirality in nature. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
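The stochastic generation of chirality can be illustrated with a toy autocatalysis model (an illustrative sketch only, not the chemistry of any specific reaction): each new molecule preferentially adopts whichever handedness is momentarily in excess, so tiny random fluctuations are amplified.

```python
import random

def run_experiment(steps: int, rng: random.Random) -> float:
    """Toy autocatalytic model: each new molecule is more likely to adopt
    the handedness already over-represented (quadratic feedback)."""
    r, s = 1, 1  # start from a formally racemic seed
    for _ in range(steps):
        # quadratic (autocatalytic) reinforcement amplifies tiny fluctuations
        if rng.random() < r * r / (r * r + s * s):
            r += 1
        else:
            s += 1
    return (r - s) / (r + s)  # enantiomeric excess, -1 .. +1

rng = random.Random(0)
ees = [run_experiment(500, rng) for _ in range(400)]

# Individual runs end far from ee = 0 (chirality is generated) ...
polarized = sum(abs(ee) > 0.5 for ee in ees) / len(ees)
# ... but the sign is random, so the average over many runs is near 0%.
mean_ee = sum(ees) / len(ees)
print(f"{polarized:.0%} of runs strongly polarized, mean ee = {mean_ee:+.3f}")
```

Most runs polarize strongly toward one enantiomer, yet the sign of the excess is random, so the mean over many repetitions approaches 0%, as described above.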
**Notes from the Internet Apocalypse**
Notes from the Internet Apocalypse:
Notes from the Internet Apocalypse is the first in a trilogy of books written by Cracked.com writer Wayne Gladstone, entitled The Internet Apocalypse Trilogy. The second novel in the trilogy, Agents of the Internet Apocalypse, was released on July 21, 2015.
Plot:
The Internet suddenly stops working and society collapses from its loss. Internet addicts wander the streets talking to themselves, the economy crashes and the government authorizes the NET Recovery Act.
For a man named Gladstone, the Internet's vanishing comes particularly hard, following the death of his wife, when he hears rumors that someone in New York City is still online. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gallic acid reagent**
Gallic acid reagent:
The Gallic acid reagent is used as a simple spot-test to presumptively identify drug precursor chemicals. It is composed of a mixture of gallic acid and concentrated sulfuric acid; 0.05 g of gallic acid is used for every 10 mL of sulfuric acid. The same ratio of gallic acid n-propyl ester in sulfuric acid can also be used. Because of its short shelf life (the reagent turns pale violet), it is sometimes prepared by dissolving the gallic acid in ethanol and adding the sulfuric acid from a separate bottle at the time of testing. In this case 100 mL of ethanol is used, and one drop of sulfuric acid is added per drop of the gallic acid solution.
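The stated proportion (0.05 g of gallic acid per 10 mL of sulfuric acid) scales linearly; a minimal helper for batch preparation (the function name is illustrative, not from any standard protocol):

```python
def gallic_acid_grams(sulfuric_acid_ml: float) -> float:
    """Grams of gallic acid needed for a given volume of concentrated
    sulfuric acid, at the stated ratio of 0.05 g per 10 mL."""
    return 0.05 * sulfuric_acid_ml / 10.0

print(gallic_acid_grams(10))   # 0.05 g for the standard 10 mL batch
print(gallic_acid_grams(250))  # a larger 250 mL batch needs 1.25 g
```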
**Compound of two icosahedra**
Compound of two icosahedra:
This uniform polyhedron compound is a composition of 2 icosahedra. It has octahedral symmetry Oh. As a holosnub, it is represented by Schläfli symbol β{3,4} and Coxeter diagram .
The triangles in this compound decompose into two orbits under action of the symmetry group: 16 of the triangles lie in coplanar pairs in octahedral planes, while the other 24 lie in unique planes.
It shares the same vertex arrangement as a nonuniform truncated octahedron, having irregular hexagons alternating with long and short edges.
The icosahedron, as a uniform snub tetrahedron, is similar to these snub-pair compounds: compound of two snub cubes and compound of two snub dodecahedra.
Together with its convex hull, it represents the icosahedron-first projection of the nonuniform snub tetrahedral antiprism.
Cartesian coordinates:
Cartesian coordinates for the vertices of this compound are all the permutations of (±1, 0, ±τ), where τ = (1+√5)/2 is the golden ratio (sometimes written φ).
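These vertices can be enumerated directly. A single icosahedron uses only the cyclic (even) permutations of (0, ±1, ±τ); taking all six orderings yields both icosahedra at once, 6 × 4 = 24 vertices in total, matching two icosahedra of 12 vertices each:

```python
from itertools import permutations
from math import sqrt

tau = (1 + sqrt(5)) / 2  # golden ratio

# All permutations of (±1, 0, ±tau): sign choices on the nonzero entries
# times all 6 orderings of the three coordinates.
vertices = {
    perm
    for s1 in (1, -1)
    for s2 in (1, -1)
    for perm in permutations((s1 * 1.0, 0.0, s2 * tau))
}

print(len(vertices))  # 24 = 2 icosahedra x 12 vertices

# Every vertex lies at the same distance from the origin,
# so the compound is inscribed in a single sphere.
r2 = {round(x * x + y * y + z * z, 9) for (x, y, z) in vertices}
print(r2)  # a single value: 1 + tau^2
```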
Compound of two dodecahedra:
The dual compound has two dodecahedra as pyritohedra in dual positions: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spark micrometer**
Spark micrometer:
The spark micrometer, also known as a Riess micrometer was a device used by 19th century physicists to measure potential in an electric circuit. It was developed principally by German physicist Peter Riess. It consisted of two electrodes very close together, one of which was attached to a micrometer screw with a calibrated dial, so by turning a knob the width of the gap could be adjusted very precisely. From Paschen's law, the distance between two electrodes when a spark just jumped across a gap was proportional to the potential difference (voltage) between the electrodes, so a spark micrometer could serve as a crude voltage measuring instrument, by widening the gap until the voltage was just able to jump across. In 1887 Heinrich Hertz found that a spark in a nearby apparatus could induce a spark in a spark gap between the ends of a loop of wire not attached to any source of electricity, discovering radio waves. Hertz used spark micrometers attached to small loop and dipole antennas as receivers in historic experiments to investigate the properties of radio waves. Since the voltage induced in the receiving antenna was proportional to the signal strength of the radio wave, by measuring the length of spark it produced Hertz could measure the field strength of the wave. He showed that radio waves, like light, exhibit refraction, diffraction, interference and standing waves, proving that both radio waves and light are electromagnetic waves. This validated Maxwell's 1873 theory of electromagnetism and his prediction that light consisted of electromagnetic waves. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
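The gap-to-voltage relationship Riess and Hertz exploited can be sketched with the standard Paschen's-law formula. The constants below are commonly quoted textbook values for air (A ≈ 15 (cm·Torr)⁻¹, B ≈ 365 V/(cm·Torr), γ ≈ 0.01); they are assumptions for illustration, not values from this article:

```python
from math import log

# Commonly quoted textbook constants for air (assumed, not from the article)
A = 15.0      # saturation ionisation coefficient, 1/(cm*Torr)
B = 365.0     # related to ionisation energy, V/(cm*Torr)
GAMMA = 0.01  # secondary-electron emission coefficient

def breakdown_voltage(p_torr: float, d_cm: float) -> float:
    """Paschen's law: sparking voltage across a gap of d cm at pressure p Torr."""
    pd = p_torr * d_cm
    return B * pd / (log(A * pd) - log(log(1 + 1 / GAMMA)))

# At atmospheric pressure the sparking voltage grows almost linearly with
# the gap width, which is what made the micrometer usable as a voltmeter.
for d in (0.1, 0.5, 1.0):
    print(f"{d} cm gap: {breakdown_voltage(760, d):,.0f} V")
```

Well to the right of the Paschen minimum the curve is nearly linear in the gap width, consistent with the proportionality described above.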
**Atlas Supervisor**
Atlas Supervisor:
The Atlas Supervisor was the program which managed the allocation of processing resources of Manchester University's Atlas Computer so that the machine was able to act on many tasks and user programs concurrently.
Its various functions included running the Atlas computer's virtual memory (Atlas Supervisor paper, section 3, Store Organisation) and is ‘considered by many to be the first recognisable modern operating system’. Brinch Hansen described it as "the most significant breakthrough in the history of operating systems." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flying Star Feng Shui**
Flying Star Feng Shui:
Xuan Kong Flying Star feng shui, or Xuan Kong Fei Xing, is a discipline in Feng Shui. It integrates the principles of Yin Yang, the interactions between the five elements, the eight trigrams, the Lo Shu numbers, and the 24 Mountains, using time, space and objects to create an astrological chart with which to analyze the positive and negative auras of a building. These analyses cover wealth, mental and physiological states, success, relationships with external parties, and the health of the inhabitants. During the Qing Dynasty, the discipline was popularized by grandmaster Shen Zhu Ren with his book Mr. Shen's Study of Xuan Kong (Shen Shi Xuan Kong Xue). Flying Star Feng Shui does not limit itself to buildings for the living (Yang Zhai), where rules pertaining to directions apply equally to all built structures; it also applies to grave sites and buildings for spirits (Yin Zhai).
Fundamentals:
Numbers: In the Lo Shu Square, the flying stars are nine numbers. Each number in the Lo Shu represents one of the Chinese Trigrams and is related to an Element, Family Member, Cardinal direction, Colour, Hour, Season, Organ, Ailment and many others. The numbers always move to the lower right (northwest), middle right (west), lower left (northeast), upper center (south), lower center (north), upper right (southwest), middle left (east), upper left (southeast) and back to the center.
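The fixed movement pattern just described is simply the Lo Shu order of the palaces. A small sketch (the palace names and function name are my own) distributes any center star around the grid:

```python
# Palaces listed in the order the stars "fly": center, then lower right (NW),
# middle right (W), lower left (NE), upper center (S), lower center (N),
# upper right (SW), middle left (E), upper left (SE).
LO_SHU_PATH = ["center", "NW", "W", "NE", "S", "N", "SW", "E", "SE"]

def fly(center_star: int) -> dict:
    """Distribute the nine stars over the palaces, ascending from the center."""
    return {
        palace: (center_star - 1 + step) % 9 + 1
        for step, palace in enumerate(LO_SHU_PATH)
    }

# With star 5 in the center this reproduces the classic Lo Shu square:
print(fly(5))
# {'center': 5, 'NW': 6, 'W': 7, 'NE': 8, 'S': 9, 'N': 1, 'SW': 2, 'E': 3, 'SE': 4}
```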
Time: Time is divided into 20-year cycles. Each cycle of 20 years is a Period or "Yun". A grand cycle comprises 9 Periods in total, covering a span of 180 years. Periods are used to describe the cyclical pattern of Qi; different types of Qi have different strengths and weaknesses with reference to a particular Period.
Timely and Untimely Flying Stars: A timely star is positive for a building, whereas an untimely star is negative. For the current period, Period 8 (2004–2023), Stars Eight, Nine and One are timely (for a building, they are timely if and only if the object placed in that palace is timely). Star Eight, the most timely, is often treated as the Prosperous and Noble Star. Star Nine and Star One belong to Sheng Qi, a growing energy. The other six stars are regarded as having retreating, killing or dead Qi.
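Using only the anchor stated above (Period 8 spans 2004–2023) and the 20-year cycle length, the Period for any year can be computed; the formula below is my own derivation from that anchor, not part of the traditional method:

```python
def period(year: int) -> int:
    """Feng Shui Period for a given year, anchored on Period 8 = 2004-2023.
    Periods run in 20-year blocks and cycle 1..9 over a 180-year grand cycle."""
    return ((year - 2004) // 20 + 7) % 9 + 1

print(period(2004), period(2023))  # 8 8  (both ends of Period 8)
print(period(2024))                # 9   (start of the next Period)
print(period(1996))                # 7   (Period 7 ran 1984-2003)
```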
Space: An accurate measurement of direction must be obtained before any system of Feng Shui can be undertaken. A Luopan is a magnetic compass used to determine the precise direction of a structure or an item.
24 Mountains: The most important ring on the Luopan is the 24 Mountain ring. On this ring, each direction is subdivided into three sectors.
Taking Directions: Using the principles of Yin and Yang, the facing of a building is determined by the side of the built structure that receives the most Yang Qi. A house is conventionally taken to face the direction of its architectural frontage, the side that is most Yang in nature. In apartments or condominiums, the facing of a unit is determined by the facing of the entire building. If the building has no obvious facade, the facing of the unit is determined by the side of the building having the most Yang energy (facing the busiest crowd flow).
Taking Locations: Energy in a building can be tapped into by locating a person within a sector that houses the energy. Ideally, living objects should be located in a sector with positive Qi as determined by the Flying Star chart. The layout of a building is demarcated with a Nine Palace grid, which looks like a tic-tac-toe grid. A door, room or other object's location refers to the square within this grid where the object is found. This may or may not correspond to the direction that the object faces: a door could be located in the southwest sector but face south.
Objects: Objects are essential to evaluating the Feng Shui of a building.
Mountain: Mountains generate Qi. A lush, green mountain or hill generates auspicious Qi, while a barren, rocky rise will in general generate inauspicious energy. In urban areas, skyscrapers, apartment blocks or any structures that rise from the ground play a role similar to mountains, generating energy outside. Inside, cupboards, wardrobes, or any furniture taller or larger than its surroundings is also considered a mountain.
Water: Water conducts Qi. It is essential to identify the cleanliness, the location and the flow of a water formation; these include ponds, lakes, rivers, drains and fountains. In urban areas, highways and lowlands play a role similar to waterways, conducting Qi. Inside a building or a room, a spinning fan or anything lower than ground level is considered water.
Nine-Palace Flying Stars:
Nine Palace Flying Stars or Jiu Gong Fei Xing is another name of the Flying Stars method whereby palaces are the nine sectors overlaid onto a layout of the house.
Flying Star Chart A Flying Star chart consists of three numbers in each Palace of the Luo Shu. These numbers are called the Base Star, the Facing Star and the Sitting Star.
Constructing a Flying Star chart requires the date that the building was occupied by the owners and the facing of the building. For example, if a building is constructed in the year 2003, but the residents do not move in until February 4 of 2004, the Period of the building is 8, not 7.
The period does not change again unless there is major renovation undertaken to the structure.
Rules and Procedures: Creating a Flying Star chart always begins with the Base Star. The Period of the building determines the number that occupies the Base Star position of the Central Palace. Base Stars always fly in the Luo Shu path.
Once all the base stars are distributed amongst the nine palaces, the number in Facing Palace on the Luo Shu grid is determined by the facing direction of the building. This number is the facing star.
The Sitting Palace is always opposite of the Facing Palace. The sitting star is the number in the sitting palace.
For instance, in a Period-8 building that faces southwest, the number that locates in Facing Palace is number 5 whereas the number in Sitting Palace is number 2; thus, 5 is Facing Star and 2 is Sitting Star.
Unlike the Base Star, the Facing Star and Sitting Star can fly in either ascending (Yang) order, or descending (Yin) order. The order depends upon two factors: whether the star is an even number or an odd number, and which mountain the unit faces.
Even-numbered Stars follow a Yin-Yang-Yang form. In a certain number which comprises three mountains, if the mountain that the property faces is Yang, then the numbers fly in ascending order of Lo Shu path, and vice versa.
Odd-numbered Stars follow a Yang-Yin-Yin form. In a certain number which comprises three mountains, if the mountain that the property faces is Yang, then the numbers fly in ascending order of Lo Shu path, and vice versa.
To determine the polarity of number 5 star, go by the polarity of the Period number.
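The worked example above (a Period-8 building facing southwest) can be checked by flying the Base Stars: the Period number 8 occupies the center, the rest follow the Lo Shu path, and the facing and sitting stars are read off the southwest and northeast palaces. This sketch covers only the Base Star step; the mountain-dependent Yin/Yang flight of the facing and sitting stars just described is omitted, and the palace names are my own:

```python
# Palaces in Lo Shu flight order: center first, then NW, W, NE, S, N, SW, E, SE.
LO_SHU_PATH = ["center", "NW", "W", "NE", "S", "N", "SW", "E", "SE"]

def base_star_chart(period: int) -> dict:
    """Base stars for a building of the given Period: the Period number
    occupies the center palace, the rest fly in ascending Lo Shu order."""
    return {
        palace: (period - 1 + step) % 9 + 1
        for step, palace in enumerate(LO_SHU_PATH)
    }

chart = base_star_chart(8)        # a Period-8 building
facing_star = chart["SW"]         # building faces southwest
sitting_star = chart["NE"]        # sitting palace is opposite the facing
print(facing_star, sitting_star)  # 5 2, matching the example in the text
```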
Properties of Nine Stars:
Timely and Untimely Flying stars can be timely or untimely. The nature of flying star depends on which period is to be referred and which star is being activated.
Portents and Natures: Famous Combinations of Stars
Bull fight: the result of untimely Flying Star 3 (Wood) overcoming Star 2 (Earth). Relationship: a son harassing his mother-in-law; a male violating a woman. Activities: problems (conflict, arguments, combat, lawsuits, disharmony) for the mother. Health: a woman hurt at the belly (while pregnant) or suffering stomachache. Cure: introduce a red carpet or a red painting; red represents Fire and changes the controlling effect of Wood on Earth into a supporting Wood, Fire and Earth cycle.
Death and Disastrous: the result of the combination of untimely Flying Stars 2 (Earth) and 5 (Earth). Activities: accidents, bankruptcy, haunted house, death. Health: serious sickness, cancer of the digestive system.
Fire hazard: the result of the combination of untimely Flying Stars 2 (Earth) and 7 (Metal), or of untimely Flying Stars 7 (Metal) and 9 (Fire). Relationship: lesbian; male with strong female personality. Activities: fire, explosion.
Penetrating the heart: the result of the combination of untimely Flying Stars 3 (Wood) and 7 (Metal). Relationship: male and female fight. Activities: crippling injury, armed robbery, burglary, lawsuits, scams. Health: foot disease, liver cancer, arm injury by metal.
Wisdom: the result of the combination of timely Flying Stars 1 (Water) and 4 (Wood), or 3 (Wood) and 9 (Fire), or 1 (Water) and 6 (Metal). Activities: intelligence; splendid for studies and research.
Metal in battle: the result of the combination of untimely Flying Stars 6 and 7.
Relationship: combat and competition between brothers. Activities: conflict, armed robbery, death by metal.
Rich and Authority: the result of the combination of timely Flying Stars 6 and 8, or timely Flying Stars 2 and 6. Activities: success in business, especially real estate or owning land; inheritance; great authority.
Fame and Celebration: the result of the combination of timely Flying Stars 8 (Earth) and 9 (Fire). Activities: promotion, marriage, birth, fame, championship.
Chemistry of Flying Stars:
According to the I Ching, the south direction belongs to Fire. However, in a building the south sector may not be a Fire sector: the nature of a palace depends on the combination of the elements of the Base Star, the Sitting Star, the Facing Star and the Heaven Trigram. For example, consider a house facing bearing 337.6–352.5 that was built in 2001 and occupied by its residents in 2006.
Period of the house: Period 8, since 2006 is the year of occupation.
Facing: Ren mountain in the North direction, bearing 337.6–352.5.
Sitting: Bing mountain in the South direction, opposite the facing.
Timely Flying Star: a timely flying star acts as a catalyst to the phase combination of the Sitting Star, Facing Star, Base Star and Heavenly Trigram, indicating whether the current aura is boosted to its best or to the opposite. For example, annual Star 1 (Water) has the ability to combat the competition of the metallic Stars 6 and 7.
Annual and Monthly Flying Stars: In the sexagenary cycle, a new Chinese year begins at the start of spring, which usually falls on February 4. The annual flying star that visits the center palace decreases by one every Chinese year; once Star 1 is reached, the annual star loops back to 9 the following year. Star 1 occupies the center palace in 1999, 2008, 2017 and so on.
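The annual rule (subtract one each Chinese year, with Star 1 in the center in 1999, 2008, 2017, ...) reduces to simple modular arithmetic; the formula is my own restatement of that rule:

```python
def annual_center_star(year: int) -> int:
    """Annual flying star in the center palace for a given Chinese year.
    The star decreases by one each year and wraps from 1 back to 9;
    Star 1 sits in the center in 1999, 2008, 2017, ..."""
    return (1999 - year) % 9 + 1

print([annual_center_star(y) for y in (1999, 2000, 2001, 2008, 2017)])
# [1, 9, 8, 1, 1]
```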
Annual and monthly stars always follow the Lo Shu path.
Daily Flying Star: Daily Flying Stars are governed by two rules. RULE 1: From the onset of the Winter Solstice until the Summer Solstice of the following year, the daily stars progress in ascending order (..., 7, 8, 9, 1, 2, 3, ...). The stars are distributed around the nine palaces following the Lo Shu path.
On the very first Yang Wood Rat day, or Jia-zi day, after the Winter Solstice, daily Star 1 presides over the center palace.
RULE 2: From the onset of the Summer Solstice until the next Winter Solstice, the daily stars progress in descending order (..., 3, 2, 1, 9, 8, 7, ...). The stars are distributed around the nine palaces following the Lo Shu path in reverse.
On the very first Yang Wood Rat day, or Jia-zi day, after the Summer Solstice, daily Star 9 presides over the center palace.
Bi-hourly Flying Star: Bi-hourly Flying Stars are governed by two rules. RULE 1: From the onset of the Winter Solstice until the Summer Solstice of the following year, the bi-hourly stars are distributed around the nine palaces following the Lo Shu path. The stars progress in ascending order every two hours.
On Rat, Rabbit, Horse, and Rooster days, star 1 occupies the center sector at Rat hour (11 pm of previous day – 1 am). On Ox, Dragon, Goat, and Dog days, star 4 occupies the center sector at Rat hour. On Tiger, Snake, Monkey, Pig days, star 7 occupies the center sector at Rat hour.
RULE 2: From the onset of the Summer Solstice until the next Winter Solstice, the bi-hourly stars are distributed around the nine palaces following the Lo Shu path in reverse. The stars progress in descending order every double-hour.
On Rat, Rabbit, Horse, and Rooster days, Star 9 occupies the center sector at the Rat hour. On Ox, Dragon, Goat, and Dog days, Star 6 occupies the center sector at the Rat hour. On Tiger, Snake, Monkey, and Pig days, Star 3 occupies the center sector at the Rat hour.
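The bi-hourly rules, including the starting stars tabulated above, can be sketched as follows (the names and table layout are our own; the twelve earthly branches are given in English):

```python
# Center star at the Rat hour (the first double-hour), keyed by day branch.
RAT_HOUR_STAR = {
    "winter": {"Rat": 1, "Rabbit": 1, "Horse": 1, "Rooster": 1,
               "Ox": 4, "Dragon": 4, "Goat": 4, "Dog": 4,
               "Tiger": 7, "Snake": 7, "Monkey": 7, "Pig": 7},
    "summer": {"Rat": 9, "Rabbit": 9, "Horse": 9, "Rooster": 9,
               "Ox": 6, "Dragon": 6, "Goat": 6, "Dog": 6,
               "Tiger": 3, "Snake": 3, "Monkey": 3, "Pig": 3},
}

def bihourly_star(day_branch, hour_index, half="winter"):
    """Center star for double-hour `hour_index` (0 = Rat hour, 11 pm - 1 am,
    ..., 11 = Pig hour): ascending in the winter half, descending in summer."""
    start = RAT_HOUR_STAR[half][day_branch]
    step = 1 if half == "winter" else -1
    return (start - 1 + step * hour_index) % 9 + 1
```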
**Rapid strep test**
Rapid strep test:
The rapid strep test (RST) is a rapid antigen detection test (RADT) that is widely used in clinics to assist in the diagnosis of bacterial pharyngitis caused by group A streptococci (GAS), sometimes termed strep throat. There are currently several types of rapid strep test in use, each employing a distinct technology. However, they all work by detecting the presence of GAS in the throat of a person by responding to GAS-specific antigens on a throat swab.
Medical use:
A rapid strep test may assist a clinician in deciding whether to prescribe an antibiotic to a person with pharyngitis, a common infection of the throat. Viral infections are responsible for the majority of pharyngitis, but a significant proportion (20% to 40% in children and 5% to 15% in adults) is caused by bacterial infection. The symptoms of viral and bacterial infection may be indistinguishable, but only bacterial pharyngitis can be effectively treated by antibiotics. Since the major cause of bacterial pharyngitis is GAS, the presence of this organism in a person's throat may be seen as a necessary condition for prescribing antibiotics. GAS pharyngitis is a self-limiting infection that will usually resolve within a week without medication. However, antibiotics may reduce the length and severity of the illness and reduce the risk of certain rare but serious complications, including rheumatic heart disease.

RSTs may also have a public health benefit. In addition to undesirable side-effects in the individual, inappropriate antibiotic use is thought to contribute to the development of drug-resistant strains of bacteria. By helping to identify bacterial infection, RSTs may help to limit the use of antibiotics in viral illnesses, where they are not beneficial.

Some clinical guidelines recommend the use of RSTs in people with pharyngitis, but others do not. US guidelines are more consistently in favor of their use than their European equivalents. The use of RSTs may be most beneficial in the third world, where the complications of streptococcal infection are most prevalent, but their use in these regions has not been well studied.

Microbial culture from a throat swab is a reliable and affordable alternative to an RST which has high sensitivity and specificity. However, a culture requires special facilities and usually takes 48 hours to give a result, whereas an RST can give a result within several minutes.
Procedure:
The person's throat is first swabbed to collect a sample of mucus. In most RSTs, this mucus sample is then exposed to a reagent containing antibodies that will bind specifically to a GAS antigen. A positive result is signified by a certain visible reaction. There are three major types of RST: First, the latex fixation test, which was developed in the 1980s and is largely obsolete. It employs latex beads covered with antibodies that will visibly agglutinate if GAS antigens are present. Second, the lateral flow test, which is currently the most widely used RST. The sample is applied to a strip of nitrocellulose film and, if GAS antigens are present, these will migrate along the film to form a visible line of antigen bound to labeled antibodies. Third, the optical immunoassay, which is the newest and most expensive test. It involves mixing the sample with labeled antibodies and then with a special substrate on a film which changes colour to signal the presence or absence of GAS antigen.
Interpretation:
The specificity of RSTs for the presence of GAS is at least 95%, with some studies finding close to 100% specificity. Therefore, if the test result is positive, the presence of GAS is highly likely. However, 5% to 20% of individuals carry GAS in their throats without symptomatic infection, so the presence of GAS in an individual with pharyngitis does not prove that this organism is responsible for the infection. The sensitivity of lateral flow RSTs is somewhat low at 65% to 80%. Therefore, a negative result from such a test cannot be used to exclude GAS pharyngitis, a considerable disadvantage compared with microbial culture, which has a sensitivity of 90% to 95%. However, optical immunoassay RSTs have been found to have a much higher sensitivity of 94%.

Although an RST cannot distinguish GAS infection from asymptomatic carriage of the organism, most authorities recommend antibiotic treatment in the event of a positive RST result from a person with a sore throat. US guidelines recommend following up a negative result with a microbial culture, whereas European guidelines suggest relying on the negative RST.
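The practical meaning of these figures can be illustrated with standard predictive-value arithmetic (a generic Bayes sketch, not drawn from the studies above; the sensitivity, specificity and prevalence values below are illustrative picks from the ranges quoted):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test with the given characteristics."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# E.g. a lateral flow RST (sensitivity 0.70, specificity 0.95) in a child
# population where 30% of pharyngitis cases are streptococcal:
ppv, npv = predictive_values(0.70, 0.95, 0.30)
```

With these assumed values the PPV is roughly 0.86 and the NPV roughly 0.88: a positive result strongly suggests GAS, while a negative result still leaves a meaningful chance of missed infection, which is consistent with the US recommendation to back up negatives with culture.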
**Adjustable grip hitch**
Adjustable grip hitch:
The adjustable grip hitch is a simple and useful friction hitch which may easily be shifted up and down the rope while slack. It will hold fast when loaded, but slip when shock loaded until tension is relieved enough for it to again hold fast. It serves the same purpose as the taut-line hitch, e.g. tensioning a tent's guy line.
This knot is also called the adjustable loop and Cawley adjustable hitch. It was conceived in 1982 by Canadian climber Robert Chisnall.
Tying:
The working end is wrapped inwards around the standing part (A-B) twice (1). Then another turn is made around both parts and a bight is pulled through the last wrap (2, 3) for the slipped version (left image), or just the end for the non-slipped version (right image). The knot needs to be pulled tight to actually grip (the slack is pulled out of the windings and the knot pulled tight at C and D).
By pushing the knot along the standing part in the direction of A, the line can be tightened.
The grip can be improved by adding a third turn around the standing part.
The slipped adjustable grip hitch can be easily untied by pulling the end E, even under quite heavy load.
Adjustable Bend:
Using the same knot core, an adjustable bend can be made to join two lines. For that, the knot is tied in both working ends, each around the other line's standing part. The length of the line can then be adjusted by pushing the knots together or apart.
Alternatives:
The taut-line hitch and midshipman's hitch are more common for the same purpose. They are made of a rolling hitch around the standing part.
An adjustable bend can also be made with rolling hitches around the other line.
The Farrimond friction hitch can be tied in the bight. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**EFHC2**
EFHC2:
EF-hand domain (C-terminal) containing 2 is a protein that in humans is encoded by the EFHC2 gene.
Gene:
EFHC2 is located on the negative strand (sense strand) of the X chromosome at p11.3. EFHC2 is one of a select few genes with in vitro evidence suggesting it escapes X inactivation. EFHC2 spans 195,796 base pairs and is neighbored by NDP, the gene encoding Norrie disease protein. Preliminary evidence based on genome-wide association studies has linked a SNP in the intron between exons 13 and 14 of EFHC2 with harm avoidance.

The mRNA transcript encoding the EFHC2 protein is 3,269 base pairs. The first ninety base pairs compose the five prime untranslated region and the last 1,913 base pairs compose the three prime untranslated region.
Protein:
The EFHC2 gene encodes a 749-amino acid protein which contains three DM10 domains (InterPro: IPR006602) and three calcium-binding EF-hand motifs.

The isoelectric point of EFHC2 is estimated to be 7.13 in humans. Relative to other proteins expressed in humans, EFHC2 has fewer alanine residues and a greater number of tyrosine residues, and is predicted to reside in the cytoplasm.
Tissue distribution:
EFHC2 is widely expressed in the central nervous system as well as peripheral tissues.
Clinical significance:
A related protein, EFHC1, is encoded by a gene on chromosome 6. It has been suggested that both proteins are involved in the development of epilepsy and that this gene may be associated with fear recognition in individuals with Turner syndrome.

A mutation in EFHC2 which results in a serine-to-tyrosine substitution at amino acid position 430 (S430Y) has been associated with juvenile myoclonic epilepsy in a male German population. Additionally, a single nucleotide polymorphism in EFHC2 correlates with a reduced ability of Turner syndrome patients to recognize fear in facial expressions; however, these findings remain controversial.
**Overtaking criterion**
Overtaking criterion:
In economics, the overtaking criterion is used to compare infinite streams of outcomes. Mathematically, it is used to properly define a notion of optimality for a problem of optimal control on an unbounded time interval.

Often, the decisions of a policy-maker may have influences that extend to the far future. Economic decisions made today may influence the economic growth of a nation for an unknown number of years into the future. In such cases, it is often convenient to model the future outcomes as an infinite stream. Then, it may be required to compare two infinite streams and decide which one of them is better (for example, in order to decide on a policy). The overtaking criterion is one option to do this comparison.
Notation:
X is the set of possible outcomes. E.g., it may be the set of positive real numbers, representing the possible annual gross domestic product.

X∞ is the set of infinite sequences of possible outcomes. Each element in X∞ is of the form x = (x1, x2, …).

⪯ is a partial order. Given two infinite sequences x, y, it is possible that x is weakly better (x ⪰ y), that y is weakly better (y ⪰ x), or that they are incomparable.
≺ is the strict variant of ⪯, i.e., x ≺ y if x ⪯ y and not y ⪯ x.
Cardinal definition:
≺ is called the "overtaking criterion" if there is an infinite sequence of real-valued functions u1,u2,…:X→R such that: x≺y iff ∃N0:∀N>N0:∑t=1Nut(xt)<∑t=1Nut(yt) An alternative condition is: x≻y iff lim inf N→∞∑t=1Nut(xt)−∑t=1Nut(yt) Examples: 1. In the following example, x≺y :x=(0,0,0,0,...) y=(−1,2,0,0,...) This shows that a difference in a single time period may affect the entire sequence.
2. In the following example, x and y are incomparable:

x = (4, 1, 4, 4, 1, 4, 4, 1, 4, …)
y = (3, 3, 3, 3, 3, 3, 3, 3, 3, …)

The partial sums of x are larger, then smaller, then equal to the partial sums of y, so none of these sequences "overtakes" the other.
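The comparisons above can be checked numerically on finite prefixes (an informal sketch with hypothetical names; it takes each u_t to be the identity function, as in the examples):

```python
from itertools import accumulate

def overtakes(xs, ys):
    """True if, within these equal-length prefixes, the partial sums of
    ys strictly exceed those of xs from some index onward (ys "overtakes" xs).
    """
    sx = list(accumulate(xs))
    sy = list(accumulate(ys))
    # Index of the last violation of "sum(ys) > sum(xs)", or -1 if none.
    last_bad = max((i for i, (a, b) in enumerate(zip(sx, sy)) if b <= a),
                   default=-1)
    # ys overtakes xs iff all violations occur strictly before the end.
    return last_bad < len(xs) - 1

# Example 1: y = (-1, 2, 0, ...) overtakes x = (0, 0, 0, ...).
# Example 2: neither of (4, 1, 4, ...) and (3, 3, 3, ...) overtakes the other.
```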
This also shows that the overtaking criterion cannot be represented by a single cardinal utility function. I.e., there is no real-valued function U such that x ≺ y iff U(x) < U(y). One way to see this is: for every a, b ∈ R with a < b:

(a, a, …) ≺ (a+1, a, …) ≺ (b, b, …)

Hence, there is a set of disjoint nonempty segments in (X∞, ≺) with a cardinality like the cardinality of R. In contrast, every set of disjoint nonempty segments in (R, <) must be a countable set.
Ordinal definition:
Define XT as the subset of X∞ in which only the first T elements are nonzero. Each element of XT is of the form (x1, …, xT, 0, 0, 0, …).

≺ is called the "overtaking criterion" if it satisfies the following axioms:

1. For every T, ⪯ is a complete order on XT.

2. For every T, ⪯ is a continuous relation in the obvious topology on XT.

3. For each T > 1, XT is preferentially-independent (see Debreu theorems#Additivity of ordinal utility function for a definition). Also, for every T ≥ 3, at least three of the factors in XT are essential (have an effect on the preferences).
4. x ≺ y iff ∃T0: ∀T > T0: (x1, …, xT, 0, 0, 0, …) ≺ (y1, …, yT, 0, 0, 0, …).

Every partial order that satisfies these axioms also satisfies the first, cardinal definition. As explained above, some sequences may be incomparable by the overtaking criterion. This is why the overtaking criterion is defined as a partial ordering on X∞, and a complete ordering only on XT.
Applications:
The overtaking criterion is used in economic growth theory. It is also used in repeated games theory, as an alternative to the limit-of-means criterion and the discounted-sum criterion. See Folk theorem (game theory)#Overtaking.
**Janus (time-reversible computing programming language)**
Janus (time-reversible computing programming language):
Janus is a time-reversible programming language written at Caltech in 1982. The operational semantics of the language were formally specified, together with a program inverter and an invertible self-interpreter, in 2007 by Tetsuo Yokoyama and Robert Glück. A Janus inverter and interpreter are made freely available by the TOPPS research group at DIKU. Another Janus interpreter was implemented in Prolog in 2009. The below summarises the language presented in the 2007 paper.
Janus is an imperative programming language with a global store (there is no stack or heap allocation). Janus is a reversible programming language, i.e. it supports deterministic forward and backward computation by local inversion.
Syntax:
We specify the syntax of Janus using Backus–Naur form.
A Janus program is a sequence of one or more variable declarations, followed by a sequence of one or more procedure declarations. Note that Janus, as specified in the 2007 paper, allows zero or more variables, but a program that starts with an empty store produces an empty store. A program that does nothing is trivially invertible and not interesting in practice.
A variable declaration defines either a variable or a one-dimensional array. Note that variable declarations carry no type information. This is because all values (and all constants) in Janus are non-negative 32-bit integers, so all values are between 0 and 2^32 − 1 = 4294967295. Note, however, that the Janus interpreter hosted by TOPPS uses regular two's complement 32-bit integers, so all values there are between −2^31 = −2147483648 and 2^31 − 1 = 2147483647. All variables are initialized to the value 0.
There are no theoretical bounds to the sizes of arrays, but the said interpreter demands a size of at least 1.

A procedure declaration consists of the keyword procedure, followed by a unique procedure identifier and a statement. The entry point of a Janus program is a procedure named main. If no such procedure exists, the last procedure in the program text is the entry point.
A statement is either an assignment, a swap, an if-then-else, a loop, a procedure call, a procedure uncall, a skip, or a sequence of statements: For assignments to be reversible, it is demanded that the variable on the left-hand side does not appear in the expressions on either side of the assignment. (Note, array cell assignment has an expression on both sides of the assignment.) A swap (<x> "<=>" <x>) is trivially reversible.
For conditionals to be reversible, we provide both a test (the <e> after "if") and an assertion (the <e> after "fi"). The semantics is that the test must hold before the execution of the then-branch, and the assertion must hold after it. Conversely, the test must not hold before the execution of the else-branch, and the assertion must not hold after it. In the inverted program, the assertion becomes the test, and the test becomes the assertion. (Since all values in Janus are integers, the usual C semantics that 0 indicates false is employed.)

For loops to be reversible, we similarly provide an assertion (the <e> after "from") and a test (the <e> after "until"). The semantics is that the assertion must hold only on entry to the loop, and the test must hold only on exit from the loop. In the inverted program, the assertion becomes the test, and the test becomes the assertion. An additional statement after "loop" allows work to be performed after the test evaluates to false. This work should ensure that the assertion is subsequently false.
A procedure call executes the statements of a procedure in a forward direction. A procedure uncall executes the statements of a procedure in the backward direction. There are no parameters to procedures, so all variable passing is done by side-effects on the global store.
An expression is a constant (integer), a variable, an indexed variable, or an application of a binary operation: The constants in Janus (and the Janus interpreter hosted by TOPPS) have already been discussed above.
A binary operator is one of the following, having semantics similar to their C counterparts: The modification operators are a subset of the binary operators such that for all v, λv′.⊕(v′,v) is a bijective function, and hence invertible, where ⊕ is a modification operator: The inverse functions are "-", "+", and "^", respectively.
The restriction that the variable assigned to does not appear in an expression on either side of the assignment allows us to prove that the inference system of Janus is forward and backward deterministic.
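The bijectivity of the modification operators can be illustrated in Python (an informal sketch; Janus's 32-bit arithmetic is modeled with a mask, and the names are our own):

```python
MASK = 0xFFFFFFFF  # Janus values are 32-bit integers

def modify(x, op, v):
    """Apply a Janus modification operator, i.e. the statement x op v."""
    if op == "+=":
        return (x + v) & MASK
    if op == "-=":
        return (x - v) & MASK
    if op == "^=":
        return x ^ v
    raise ValueError(op)

# Each operator's inverse, as listed above: "-", "+", and "^" respectively.
INVERSE = {"+=": "-=", "-=": "+=", "^=": "^="}

# For a fixed v, each operator is a bijection on 32-bit words, so any
# modification can be undone -- provided the assigned variable does not
# occur in the right-hand-side expression.
for op in ("+=", "-=", "^="):
    assert modify(modify(123456, op, 987654), INVERSE[op], 987654) == 123456
```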
Example:
We write a Janus procedure fib to find the n-th Fibonacci number, for n > 2, i = n, x1 = 1, and x2 = 1:

```
procedure fib
    from i = n
    do  x1 += x2
        x1 <=> x2
        i -= 1
    until i = 2
```

Upon termination, x1 is the (n−1)-th Fibonacci number and x2 is the n-th Fibonacci number. i is an iterator variable that goes from n to 2. As i is decremented in every iteration, the assertion (i = n) is only true prior to the first iteration. The test (i = 2) is only true after the last iteration of the loop (assuming n > 2).
Assuming the following prelude to the procedure, we end up with the 4th Fibonacci number in x2:

```
i n x1 x2

procedure main
    n += 4
    i += n
    x1 += 1
    x2 += 1
    call fib
```

Note that main would have to do a bit more work if we were to make it handle n ≤ 2, especially negative integers.
The inverse of fib is:

```
procedure fib
    from i = 2
    do  i += 1
        x1 <=> x2
        x1 -= x2
    until i = n
```

As you can see, Janus performs local inversion: the loop test and assertion are swapped, the order of statements is reversed, and every statement in the loop is itself inverted. The inverse program can be used to find n when x1 is the (n−1)-th and x2 is the n-th Fibonacci number.
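To see local inversion in action, the forward and inverted procedures can be transliterated into Python (a sketch with our own function names; Janus's global store is modeled with explicit values):

```python
def fib_forward(n):
    """Janus: from i = n do x1 += x2; x1 <=> x2; i -= 1 until i = 2."""
    i, x1, x2 = n, 1, 1
    assert i == n                 # loop entry assertion
    while True:
        x1 += x2
        x1, x2 = x2, x1           # the swap x1 <=> x2
        i -= 1
        if i == 2:                # loop exit test
            break
    return i, x1, x2

def fib_backward(i, x1, x2, n):
    """The local inverse: test and assertion swapped, statement order
    reversed, and each statement replaced by its own inverse."""
    assert i == 2
    while True:
        i += 1
        x1, x2 = x2, x1
        x1 -= x2
        if i == n:
            break
    return i, x1, x2

# fib_forward(4) leaves (i, x1, x2) = (2, 2, 3); running the inverse on
# that state restores the initial (4, 1, 1).
```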
**Feline leukemia virus**
Feline leukemia virus:
Feline leukemia virus (FeLV) is a retrovirus that infects cats. FeLV can be transmitted from infected cats when the transfer of saliva or nasal secretions is involved. If not defeated by the animal's immune system, the virus weakens the cat's immune system, which can lead to diseases which can be lethal. Because FeLV is cat-to-cat contagious, FeLV+ cats should only live with other FeLV+ cats.
FeLV is categorized into four subgroups, A, B, C and T. An infected cat has a combination of FeLV-A and one or more of the other subgroups. Symptoms, prognosis and treatment are all affected by subgroup.

FeLV+ cats often have a shorter lifespan, but can still live "normal", healthy lives.
Signs and symptoms:
The signs and symptoms of infection with feline leukemia virus are quite varied and include loss of appetite, poor coat condition, anisocoria (uneven pupils), infections of the skin, bladder, and respiratory tract, oral disease, seizures, lymphadenopathy (swollen lymph nodes), skin lesions, fatigue, fever, weight loss, stomatitis, gingivitis, litter box avoidance, pancytopenia, recurring bacterial and viral illnesses, anemia, diarrhea and jaundice.

Asymptomatic carriers will show no signs of disease, often for many years.
Progression: The disease has a wide range of effects. The cat can fight off the infection and become totally immune, can become a healthy carrier that never gets sick itself but can infect other cats, or can become a mid-level case in which the cat has a compromised immune system. Nevertheless, the development of lymphomas is considered the final stage of the disease. Although it is thought that viral protein has to be present to induce lymphomas in cats, newer evidence shows that a high percentage of FeLV-antigen-negative lymphomas contain FeLV DNA, indicating a "hit-and-run" mechanism of virus-induced tumor development.

Once the virus has entered the cat, there are six stages to a FeLV infection:

Stage One: The virus enters the cat, usually through the pharynx, where it infects the epithelial cells and the tonsillar B-lymphocytes and macrophages. These white blood cells then filter down to the lymph nodes and begin to replicate.
Stage Two: The virus enters the blood stream and begins to distribute throughout the body.
Stage Three: The lymphoid system (which produces antibodies to attack infected and cancerous cells) becomes infected, with further distribution throughout the body.
Stage Four: The main point in the infection, where the virus can take over the body's immune system and cause viremia. During this stage the hemolymphatic system and intestines become infected.

If the cat's immune system does not fight off the virus, then it progresses to:

Stage Five: The bone marrow becomes infected. At this point, the virus will stay with the cat for the rest of its life. In this phase, the virus replicates and is released four to seven days later in infected neutrophils, and sometimes lymphocytes, monocytes, and eosinophils (all white blood cells formed in the bone marrow).
Stage Six: The cat's body is overwhelmed by infection and mucosal and glandular epithelial cells (tissue that forms a thin protective layer on exposed bodily surfaces and forms the lining of internal cavities, ducts, and organs) become infected. The virus replicates in epithelial tissues including salivary glands, oropharynx, stomach, esophagus, intestines, trachea, nasopharynx, renal tubules, bladder, pancreas, alveolar ducts, and sebaceous ducts from the muzzle.
Transmission:
Cats infected with FeLV can serve as sources of infection of FeLV-A. Cats can pass the virus between themselves through saliva and close contact, by biting another cat, and (rarely) through a litter box or food dish used by an infected cat.

Once a cat has been infected with FeLV-A, additional mutated forms of the original FeLV-A virus may arise, as may FeLV subgroups B, C and T.
In addition to domestic cats, some other members of Felidae are now threatened by FeLV (e.g. lynx and Florida panther). Overwhelming epidemiologic evidence suggests FeLV is not transmissible to either humans or dogs.

Approximately 0.5% of pet cats are persistently infected with FeLV, but many more pet cats (>35%) have specific IgG antibodies which indicate prior exposure and subsequent development of immunity instead of infection. FeLV is highly infectious. Kittens can be born with it, having contracted it from their mother while in utero. Infection is far higher in city cats, stray or owned, than in rural cats: this is entirely due to the amount of contact the cats have with each other.
Diagnosis and prognosis:
Testing for FeLV is possible with ELISA tests that look for viral antigens, free particles found in the bloodstream. These ELISA tests use blood samples most often, but can also use saliva or eye secretions. The sample is added to a container or dish that contains the antibodies to the viral antigens. If the antigens are present in the sample, the antibodies will bind to them and an indicator on the test will change color. These give a definitive diagnosis, but cannot differentiate between acute and persistent infections. Therefore, it is recommended that the cat be retested three to four months after a positive result to determine if the virus has been cleared from the body.

Diagnosis can also be made by reference lab testing, using an immunofluorescence (IFA) test. The IFA test uses a blood sample and will detect the virus once it is in the bone marrow by detecting the virus's presence in white blood cells. IFA testing will not give positive results for transient, primary infections - the infection must be persistent to get a positive result on this test.

Other than ELISA and IFA testing, routine laboratory blood work may show changes that indicate infection but cannot be used as a definitive diagnosis. There may be blood cell count changes like leukopenia, decreased Packed Cell Volume (PCV) and Total Protein (TP) levels due to anemia, hemoconcentration and hypoglycemia due to vomiting and diarrhea, electrolyte imbalance caused by dehydration and anorexia, and recurrent urinary tract infections.

Cats diagnosed as persistently infected by ELISA testing may die within a few months or may remain asymptomatic for longer; median survival time after diagnosis is 2.5 years. FeLV is categorized into four subgroups.
FeLV-A is responsible for the immunosuppression characteristic of the disease. The vast majority of cats with FeLV have FeLV-A. An exception was reported in 2013.
FeLV-B causes an additional increase in the incidence of tumors and other abnormal tissue growths. About half of FeLV infected cats have FeLV-B. It forms by recombination of FeLV-A and cat endogenous FeLV (enFeLV).
FeLV-C causes severe anemia. Approximately 1% of FeLV infected cats have FeLV-C. It forms by mutation of FeLV-A.
FeLV-T leads to lymphoid depletion and immunodeficiency. It forms by mutation of FeLV-A.

The fatal diseases are leukemias, lymphomas, and non-regenerative anemias. Although there is no known cure for the virus infection, in 2006 the United States Department of Agriculture approved Lymphocyte T-Cell Immunomodulator as a treatment aid for FeLV (see Treatment). In Canada, one feline infected with progressive Feline Leukemia Virus Type C and its Immune-Mediated Hemolytic Anemia complication has so far been successfully managed for over 6 months with the use of high-dose corticosteroids, broad-spectrum antibiotics to treat opportunistic and comorbid infections, antiviral medications, and immunomodulators such as cyclosporine, after requiring multiple packed red blood cell transfusions to raise a critically low blood cell count.
Prevention:
Three types of vaccines for FeLV are available: an adjuvanted killed virus noninfectious vaccine, an adjuvanted subunit vaccine, and a nonadjuvanted canarypox virus-vectored recombinant infectious vaccine (ATCvet code QI066AA01 and various combination vaccines), though no currently available vaccine offers 100% protection from the virus. Vaccination is recommended for high-risk cats: those that have access to the outdoors, feral cats, cats that do not have the virus but live with an infected cat, multicat households, and cats with an unknown status, such as cats in catteries and shelters.

Serious side effects have also been reported as a result of FeLV vaccination; in particular, a small percentage of cats who received the adjuvanted killed virus vaccine developed vaccine-associated sarcomas, an aggressive tumour, at the injection site. The development of sarcomas with the use of the old FeLV and other vaccines may be due to the inflammation caused by aluminium adjuvants in the vaccines.

Merial produces a recombinant vaccine consisting of canarypox virus carrying FeLV gag and env genes (sold as PUREVAX FeLV in the US and Eurifel FeLV in Europe). This is thought to be safer than the old vaccine as it does not require an adjuvant to be effective. Although this is a live virus, it originates from a bird host and so does not replicate in mammals.
Viral structure:
Feline leukemia virus (FeLV) is an RNA virus in the subfamily Oncovirinae belonging to the Retroviridae family. The virus comprises 5' and 3' LTRs and three genes: Gag (structural), Pol (enzymes) and Env (envelope and transmembrane); the total genome is about 9,600 base pairs.

See the entry on retroviruses for more details on the life cycle of FeLV.
Treatment:
Approved US treatment: In 2006, the United States Department of Agriculture issued a conditional license for a new treatment aid termed Lymphocyte T-Cell Immunomodulator (LTCI). Lymphocyte T-Cell Immunomodulator is manufactured and distributed exclusively by T-Cyte Therapeutics, Inc.

Lymphocyte T-Cell Immunomodulator is intended as an aid in the treatment of cats infected with feline leukemia virus (FeLV) and/or feline immunodeficiency virus (FIV), and the associated symptoms of lymphocytopenia, opportunistic infection, anemia, granulocytopenia, or thrombocytopenia. The absence of any observed adverse events in several animal species suggests that the product has a very low toxicity profile.
Lymphocyte T-Cell Immunomodulator is a potent regulator of CD-4 lymphocyte production and function. It has been shown to increase lymphocyte numbers and Interleukin 2 production in animals.

Lymphocyte T-Cell Immunomodulator is a single-chain polypeptide. It is a strongly cationic glycoprotein, and is purified with cation exchange resin. Purification of protein from bovine-derived stromal cell supernatants produces a substantially homogeneous factor, free of extraneous materials. The bovine protein is homologous with other mammalian species and is a homogeneous 50 kDa glycoprotein with an isoelectric point of 6.5. The protein is prepared in a lyophilized 1 microgram dose. Reconstitution in sterile diluent produces a solution for subcutaneous injection.
Approved European treatment: Interferon-ω (omega) is sold in Europe, at least under the name Virbagen Omega, and is manufactured by Virbac. When used in the treatment of cats infected with FeLV in non-terminal clinical stages (over the age of 9 weeks), there have been substantial improvements in mortality rates; in non-anemic cats, a mortality rate of 50% was reduced by approximately 20% following treatment.
History:
FeLV was first described in cats in 1964. The disease was originally associated with leukemia; however, it was later realized that the initial signs are generally anemia and immunosuppression. The first diagnostic test became available in 1973, which led to a "test and elimination" regime, dramatically reducing the number of infected cats in the general population. The first vaccine became available in 1986.
Comparison with feline immunodeficiency virus:
FeLV and feline immunodeficiency virus (FIV) are sometimes mistaken for one another, though the viruses differ in many ways. Although they are both in the same retroviral subfamily (Orthoretrovirinae), they are classified in different genera (FeLV is a gamma-retrovirus and FIV is a lentivirus like HIV-1). Their shapes are quite different: FeLV is more circular while FIV is elongated. The two viruses are also quite different genetically, and their protein coats differ in size and composition. Although many of the diseases caused by FeLV and FIV are similar, the specific ways in which they are caused also differ. Also, while the feline leukemia virus may cause symptomatic illness in an infected cat, an FIV infected cat can remain completely asymptomatic its entire lifetime. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adverse event**
Adverse event:
An adverse event (AE) is any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product, which does not necessarily have a causal relationship with the treatment. An adverse event can therefore be any unfavourable and unintended sign (including an abnormal laboratory finding), symptom, or disease temporally associated with the use of a medicinal (investigational) product, whether or not related to that product.

AEs in patients participating in clinical trials must be reported to the study sponsor and, if required, to the local ethics committee. Adverse events categorized as "serious" (those that result in death, illness requiring hospitalization, events deemed life-threatening, persistent or significant incapacity, a congenital anomaly or birth defect, or another medically important condition) must be reported to the regulatory authorities immediately, whereas non-serious adverse events are merely documented in the annual summary sent to the regulatory authority.
Adverse event:
The sponsor collects AE reports from the local researchers, and notifies all participating sites of the AEs at the other sites, as well as both the local investigators' and the sponsors' judgment of the seriousness of the AEs. This process allows the sponsor and all the local investigators access to a set of data that might suggest potential problems with the study treatment while the study is still ongoing.
Types of adverse events:
All clinical trials have the potential to produce AEs. AEs are classified as serious or non-serious; expected or unexpected; and study-related, possibly study-related, or not study-related. For example, while a study that tests the effectiveness of a new blood pressure cuff for a period of 10 minutes might seem innocuous, the potential exists for the patient's skin to be irritated by the device. A patient in that study might also die during that 10-minute period. Both skin irritation and sudden death would be considered AEs. In this case, the skin irritation would be classified as not serious, unexpected, and possibly study-related. The death would be classified as serious and unexpected (unless the patient was already terminally ill). The local researcher would use his or her medical judgment to determine whether the death could have been related to the study device.
Types of adverse events:
Both the skin irritation and the death are unexpected events, and should alert the researcher to the potential existence of a problem with the device (for instance, it could have malfunctioned and shocked the patient). The researcher would report these AEs to the local Institutional Review Board and to the sponsor, and await direction on whether to stop the study. If the researcher feels there is an imminent danger posed by the device, he or she can use medical discretion to stop patients from participating in the study.
Types of adverse events:
An adverse event can also be declared in the normal treatment of a patient which is suspected of being caused by the medication being taken or a medical device used in the treatment of the patient.
In Australia, 'adverse event' refers generically to medical errors of all kinds, whether surgical, medical, or nursing-related. The most recent available official study (1995) indicated that 18,000 deaths per year result from hospital care. The Medical Error Action Group is lobbying for legislation to improve the reporting of AEs and, through quality control, minimize needless deaths.
Reporting of adverse events:
Researchers participating in a clinical trial must report all adverse events to the drug regulatory authority of the country where the drug or device is to be registered (e.g., the Food and Drug Administration (FDA) in the United States). Serious AEs must be reported immediately; minor AEs are 'bundled' by the sponsor and submitted later.
Reporting of adverse events:
The method used to elicit AEs reported by individuals, as evidence of likely adverse drug reactions (ADRs), influences the extent and nature of the data. A 2018 review found that some participants in clinical drug trials were asked simple open questions (e.g., 'how are you feeling?'), while in other trials participants were given lengthy questionnaires about physical symptoms (e.g., 'do you experience muscle soreness or headaches?'). A 2022 review of adverse events in human challenge trials found that reporting improved over time but remains non-standardized in ways that make comparisons difficult. As there is a lack of consensus on how AEs should be assessed, there is a concern that the kinds of questions asked and their phrasing may lead to measurement error and impede comparisons between studies and pooled analyses. However, Allen et al. concluded that the impact of the different AE elicitation methods is unclear.
Grades of AE:
Clinical trial results often report the number of grade 3 and grade 4 adverse events.
Grades are defined as follows:
Grade 1: Mild AE
Grade 2: Moderate AE
Grade 3: Severe AE
Grade 4: Life-threatening or disabling AE
Grade 5: Death related to AE
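As a minimal illustrative sketch (not part of any regulatory standard; the function name is hypothetical), the grade scale above can be represented as a simple lookup table:

```python
# Hypothetical helper mapping the five AE grades described above to their
# descriptions. Note that severity (grade) is distinct from the regulatory
# notion of "seriousness" discussed earlier in this article.
AE_GRADES = {
    1: "Mild AE",
    2: "Moderate AE",
    3: "Severe AE",
    4: "Life-threatening or disabling AE",
    5: "Death related to AE",
}

def describe_grade(grade: int) -> str:
    try:
        return AE_GRADES[grade]
    except KeyError:
        raise ValueError(f"AE grade must be 1-5, got {grade}")

print(describe_grade(3))  # Severe AE
```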
Database of adverse events:
The FDA provides a database for the reporting of adverse events called the Manufacturer and User Facility Device Experience Database (MAUDE). The data consist of voluntary reports since June 1993, user facility reports since 1991, distributor reports since 1993, and manufacturer reports since August 1996, and are open for public view. Two private companies have also recently started providing access to analyzed adverse event information: Clarimed provides adverse event information for medical devices, and AdverseEvents provides adverse event data for drugs.
**Sunset yellow FCF**
Sunset yellow FCF:
Sunset yellow FCF (also known as orange yellow S, or C.I. 15985) is a petroleum-derived orange azo dye with a pH-dependent maximum absorption at about 480 nm at pH 1 and 443 nm at pH 13, with a shoulder at 500 nm. When added to foods sold in the United States it is known as FD&C Yellow 6; when sold in Europe, it is denoted by E number E110.
Uses:
Sunset yellow is used in foods, condoms, cosmetics, and drugs. Sunset yellow FCF is used as an orange or yellow-orange dye; for example, it is used in candy, desserts, snacks, sauces, and preserved fruits. Sunset yellow is often used in conjunction with E123 (amaranth) to produce a brown colouring in both chocolates and caramel.
Safety:
The acceptable daily intake (ADI) is 0–4 mg/kg under both EU and WHO/FAO guidelines. Sunset yellow FCF shows no carcinogenicity, genotoxicity, or developmental toxicity in the amounts at which it is used. It has been claimed since the late 1970s, under the advocacy of Benjamin Feingold, that sunset yellow FCF causes food intolerance and ADHD-like behavior in children, but there is no scientific evidence to support these broad claims. It is possible that certain food colorings may act as a trigger in those who are genetically predisposed, but the evidence is weak.
Regulation as food additive:
Europe "European Parliament and Council Directive 94/36/EC of 30 June 1994 on colours for use in foodstuffs" harmonized the rules and approved sunset yellow FCF for use in foodstuffs throughout the European Union. Before that time, approved amounts were left to each country, though naming and composition were standardized.
Regulation as food additive:
Sunset yellow FCF was not approved in Norway before 2001, when the 94/36/EC directive of 1994 was incorporated into EEA rules and came into effect, after years of delaying tactics from the Norwegian side and a heated political debate. In 2008, the Food Standards Agency of the UK called for food manufacturers to voluntarily stop using six food additive colours (tartrazine, allura red, ponceau 4R, quinoline yellow WS, sunset yellow, and carmoisine, dubbed the "Southampton 6") by 2009, and provided a document to assist in replacing these colours with others. An EU regulation came into effect in 2010 mandating that food manufacturers include a label on foods containing the Southampton 6 stating: "may have an adverse effect on activity and attention in children".
Regulation as food additive:
United States Sunset yellow FCF is known as FD&C Yellow No. 6 in the US and is approved for use in coloring food, drugs, and cosmetics, with an acceptable daily intake of 3.75 mg/kg.
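To make the per-kilogram figure concrete, an ADI scales with body weight. The following sketch (not from the source; the 70 kg body weight is an assumed example) computes the maximum acceptable daily intake in milligrams:

```python
def max_daily_intake_mg(adi_mg_per_kg: float, body_weight_kg: float) -> float:
    """Maximum acceptable daily intake in mg for a given body weight."""
    return adi_mg_per_kg * body_weight_kg

# FDA ADI for FD&C Yellow No. 6 (3.75 mg/kg), for a hypothetical 70 kg adult:
print(max_daily_intake_mg(3.75, 70))  # 262.5
```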
Society and culture:
Since the 1970s and the well-publicized advocacy of Benjamin Feingold, there has been public concern that food colorings may cause ADHD-like behavior in children. These concerns led the FDA and other food safety authorities to regularly review the scientific literature, and led the UK FSA to commission a study by researchers at Southampton University of the effect of a mixture of the "Southampton 6" and sodium benzoate (a preservative) on children in the general population who consumed them in beverages; the study was published in 2007. It found "a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity" in the children. The advisory committee to the FSA that evaluated the study also determined that, because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended.
The European regulatory community, with a stronger emphasis on the precautionary principle, required labelling and temporarily reduced the acceptable daily intake (ADI) for the food colorings; the UK FSA called for voluntary withdrawal of the colorings by food manufacturers. However, in 2009 the EFSA re-evaluated the data at hand and determined that "the available scientific evidence does not substantiate a link between the color additives and behavioral effects", and in 2014, after further review of the data, the EFSA restored the prior ADI levels.
The US FDA did not make changes following the publication of the Southampton study. However, following a citizen petition filed by the Center for Science in the Public Interest in 2008 requesting that the FDA ban several food additives, the FDA reviewed the available evidence and still made no changes.
**Real Analysis Exchange**
Real Analysis Exchange:
The Real Analysis Exchange (RAEX) is a biannual mathematics journal, publishing survey articles, research papers, and conference reports in real analysis and related topics. Its editor-in-chief is Paul D. Humke.
**Feynman slash notation**
Feynman slash notation:
In the study of Dirac fields in quantum field theory, Richard Feynman invented the convenient Feynman slash notation (less commonly known as the Dirac slash notation). If $A$ is a covariant vector (i.e., a 1-form),
$$\slashed{A} \;\stackrel{\text{def}}{=}\; \gamma^1 A_1 + \gamma^2 A_2 + \gamma^3 A_3 + \gamma^4 A_4,$$
where the $\gamma^\mu$ are the gamma matrices. Using the Einstein summation convention, the expression is simply
$$\slashed{A} \;\stackrel{\text{def}}{=}\; \gamma^\mu A_\mu.$$
Identities:
Using the anticommutators of the gamma matrices, one can show that, for any $a_\mu$ and $b_\mu$,
$$\slashed{a}\slashed{a} = a^\mu a_\mu \cdot I_4 = a^2 \cdot I_4,$$
$$\slashed{a}\slashed{b} + \slashed{b}\slashed{a} = 2\, a \cdot b \cdot I_4,$$
where I4 is the identity matrix in four dimensions.
In particular, $\slashed{\partial}^2 = \partial^2 \cdot I_4$.
Further identities can be read off directly from the gamma matrix identities by replacing the metric tensor with inner products. For example, the trace of an odd number of slashed vectors vanishes:
$$\mathrm{tr}\left(\slashed{a}_1 \cdots \slashed{a}_{2n+1}\right) = 0,$$
where, in these identities, $\varepsilon_{\mu\nu\lambda\sigma}$ is the Levi-Civita symbol, $\eta_{\mu\nu}$ is the Minkowski metric, and $m$ is a scalar.
With four-momentum:
This section uses the (+ − − −) metric signature. Often, when using the Dirac equation and solving for cross sections, one finds the slash notation used on four-momentum. Using the Dirac basis for the gamma matrices,
$$\gamma^0 = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix},$$
as well as the definition of contravariant four-momentum in natural units,
$$p^\mu = (E, p_x, p_y, p_z),$$
we see explicitly that
$$\slashed{p} = \gamma^\mu p_\mu = \gamma^0 p^0 - \gamma^i p^i = \begin{pmatrix} p^0 & 0 \\ 0 & -p^0 \end{pmatrix} - \begin{pmatrix} 0 & \vec\sigma\cdot\vec p \\ -\vec\sigma\cdot\vec p & 0 \end{pmatrix} = \begin{pmatrix} E & -\vec\sigma\cdot\vec p \\ \vec\sigma\cdot\vec p & -E \end{pmatrix}.$$
Similar results hold in other bases, such as the Weyl basis.
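The identities above are easy to verify numerically. The following NumPy sketch (not part of the original article) builds the Dirac-basis gamma matrices and checks both the square identity and the anticommutator identity for arbitrary four-vectors:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_x
    np.array([[0, -1j], [1j, 0]]),                # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),   # sigma_z
]

# Dirac-basis gamma matrices, as given in the article
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # (+ - - -) metric

def slash(a):
    """Feynman slash: gamma^mu a_mu, with a given in contravariant components."""
    a_low = eta @ a
    return sum(g * c for g, c in zip(gamma, a_low))

a = np.array([2.0, 0.5, -1.0, 0.3])
b = np.array([1.0, 0.2, 0.4, -0.7])

a2 = a @ eta @ a   # Minkowski square a^mu a_mu
ab = a @ eta @ b   # Minkowski inner product a . b

# a-slash a-slash = a^2 I_4
assert np.allclose(slash(a) @ slash(a), a2 * np.eye(4))
# {a-slash, b-slash} = 2 (a . b) I_4
assert np.allclose(slash(a) @ slash(b) + slash(b) @ slash(a), 2 * ab * np.eye(4))
```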
**Russian Fedora Remix**
Russian Fedora Remix:
Russian Fedora Remix was a remix of the Fedora Linux distribution adapted for Russia that was active in 2008–2019. It was neither a copy of the original Fedora nor a new Linux distribution. The project aimed to ensure that Fedora fully satisfied the needs of Russian users with many additional features provided out of the box (e.g., specific software packages, preinstalled drivers for popular graphics processors, manuals in Russian). In autumn 2019 the project was phased out: its leaders announced that it "had fulfilled its purpose by 100%", all of the Russian-centric improvements were officially included in Fedora repositories, and Russian Fedora software maintainers became regular Fedora maintainers.
History:
The project was originally established by Arkady "Tigro" Shain under the name Tedora. The main inspiration was that Fedora 9 was very inconvenient for Russian users, with a bug impeding successful installation when the packages were customized. The project's official status was announced at a conference held at the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) on 20 November 2008. That day, Tedora merged into the newly established Russian Fedora, founded by the Fedora Project, Red Hat, VDEL, and VNIINS; the latter became the project's technological center.
History:
Starting with version 11, the project name was changed to Russian Fedora Remix to comply with Fedora's regulations regarding the use of its trademark. The project's logo was established on 10 March 2010.
Releases:
New versions were planned to be released simultaneously with Fedora ones.
Releases:
Tedora 9 The following are the general differences from Fedora 9: The first release of Fedora 9 contained a bug which impeded successful installation in the Russian language if the packages were customized. This problem was due to an error in the Russian translation of the Fedora installer, Anaconda. The error also occurred during installation in text mode after the packages had been selected. Both bugs were fixed in Tedora.
Releases:
SELinux was disabled by default.
Support introduced for ReiserFS, ext4, and Journaled File System (JFS).
Tedora was distributed with the patched loader Grub to allow booting the system installed on ext4.
The installation disk included GNOME Desktop Environment, K Desktop Environment, XFCE Desktop Environment and IceWM Window Manager.
The repositories Fedora Updates, Livna, Tigro and Tigro Non-Free were used.
During the network installation the Tedora repository should have been used instead of the Fedora repository in order to receive the packages. This was due to the changed name of the distribution. For other repositories the tick in the package selection window was sufficient.
The font loader was added to the file /etc/rc.sysinit which solved the problem with the incorrect rendering of the starting phrase "udev".
Only European languages were included on the installation disk.
The keyboard layout "English (US)" was the default for the Russian and Ukrainian versions. This allowed the easy creation of a new user profile during the first start of the system.
The packages could be installed directly from the installation DVD.
The keys of the Livna and Tigro repositories were automatically imported to PackageKit during the installation.
There were many programs in Tedora which were not included on the original Fedora DVD. Some notable ones are: XFCE desktop environment; window managers IceWM and Fluxbox; full support of mp3, DVD, DivX and other US-problematic codecs; Flash-plugin which worked "out of the box" even under x86-64; Opera browser; VLC player; Compiz Fusion and Nvidia drivers.
In Tedora all fonts were rendered as they should. Some additional TrueType fonts were also added.
Russian Fedora 10 Russian Fedora 10 was released on 25 November 2008.
The following are the main differences from Fedora 10: Support of all popular audio and video codecs. Many proprietary video card drivers were also supported.
XFCE, LXDE and IceWM were available from the installation medium.
SELinux was set to the Permissive mode by default.
RPM Fusion and Tigro repositories were used by default.
Different base installation modes were added: GNOME Desktop, KDE Desktop, XFCE Desktop, etc..
Package installation from the medium.
KDM was used instead of GDM in case of KDE being the only installed desktop environment.
Russian Fedora 10.1 Russian Fedora 10.1 was released on 24 February 2009.
Improvements: Problems when switching the keyboard layout were fixed. Layout indicators were added to GNOME and KDE.
PackageKit allowed to install/uninstall programs from the installation disk without the internet access.
Folders were opened in the same window in Nautilus.
Accelerators of the GNOME Terminal menu were disabled.
The Tigro repository had been replaced with the Russian Fedora repository.
System installation bugs were fixed.
Russian Fedora 10.2 Russian Fedora 10.2 was released on 14 May 2009. The differences from the previous release are updated software and bug fixes.
Russian Fedora Remix 11 Russian Fedora Remix 11 was released on the same day as Fedora 11, 9 June 2009. The distribution was available on various media: installation DVD, LiveCD (KDE, GNOME, or Xfce) and LiveDVD (KDE, GNOME, Xfce, and LXDE). Two architectures were supported: P5 (i586) and x86-64.
Differences from Fedora 11: The installation DVD contained only languages used in Europe and the Post-Soviet states.
Many keyboard layout switching improvements.
Multimedia codecs, network adapter drivers and NVidia graphic card drivers were added.
Russian Fedora Remix 12 Russian Fedora Remix 12 was released on 17 November 2009. As a result of the adoption of the new compression algorithm (XZ, the new LZMA format) the installation DVD contained more packages compared to previous versions. All languages of the original Fedora were included on this DVD.
Releases:
Russian Fedora Remix 13 The release of RFRemix 13 came out on 25 May 2010. Apart from the usual set of changes, such as added multimedia codecs and additional desktop environments, RFRemix 13 introduced the following features into Fedora 13 (only notable ones are listed): Firstboot contains a special screen for changing some system preferences, for example, disabling IPv6, enabling Ctrl+Alt+Backspace, choosing the login manager, and others.
Releases:
The ability to set up different key combinations for switching the keyboard layout for the Russian language.
Use of Firefox 3.6.4 pre-build 4, believed to be more stable than version 3.6.4.
SELinux set by default to Enforcing mode.
Updated Russian Fedora logos.
Fedora Remix 20 This 13 December 2013 remix adds applications to the Fedora 20 distributions (32-bit and 64-bit versions). Included is a moderate collection of applications for Flash, music, application development, and more.
Fedora Remix 27 The latest version of the Remix was 27, with a beta corresponding to the Fedora 28 beta; Remix 28 was expected around the time of the Fedora 28 release.
Like regular Fedora, it offers the Gnome and KDE Plasma desktop environments. It includes software that is useful for the desktop, programming, gaming, server use, and more.
Fedora Remix 28 This version was available 2 days after the regular Fedora 28 release.
Fedora Remix 29 This version was available 2 days following the regular Fedora 29 release. This version included everything that was provided by Fedora 29. Proprietary media codecs needed to watch videos or listen to podcasts were included.
**Proprotein convertase 2**
Proprotein convertase 2:
Proprotein convertase 2 (PC2), also known as prohormone convertase 2 or neuroendocrine convertase 2 (NEC2), is a serine protease and proprotein convertase. PC2, like proprotein convertase 1 (PC1), is an enzyme responsible for the first step in the maturation of many neuroendocrine peptides from their precursors, such as the conversion of proinsulin to insulin intermediates. To generate the bioactive form of insulin (and many other peptides), a second step involving the removal of C-terminal basic residues is required; this step is mediated by carboxypeptidases E and/or D. PC2 plays only a minor role in the first step of insulin biosynthesis, but a greater role than PC1 in the first step of glucagon biosynthesis. PC2 binds to the neuroendocrine protein named 7B2; if this protein is not present, proPC2 cannot become enzymatically active. 7B2 accomplishes this by preventing the aggregation of proPC2 into inactivatable forms. The C-terminal domain of 7B2 also inhibits PC2 activity until it is cleaved into smaller inactive forms that lack carboxy-terminal basic residues. Thus, 7B2 is both an activator and an inhibitor of PC2. PC2 has been identified in a number of animals, including C. elegans. In humans, proprotein convertase 2 is encoded by the PCSK2 gene. It is related to the bacterial enzyme subtilisin; altogether there are 9 subtilisin-like genes in mammals, including, besides PC1 and PC2: furin, PACE4, PC4, PC5/6, PC7/8, PCSK9, and SKI1/S1P.
**Solar spicule**
Solar spicule:
In solar physics, a spicule, also known as a fibril or mottle, is a dynamic jet of plasma in the Sun's chromosphere about 300 km in diameter. They move upwards with speeds between 15 and 110 km/s from the photosphere and last a few minutes each before falling back to the solar atmosphere. They were discovered in 1877 by Angelo Secchi, but the physical mechanism that generates them is still hotly debated.
Description:
Spicules last for about 15 minutes; at the solar limb they appear elongated (if seen on the disk, they are known as "mottles" or "fibrils"). They are usually associated with regions of high magnetic flux; their mass flux is about 100 times that of the solar wind. They rise at a rate of 20 km/s (or 72,000 km/h) and can reach several thousand kilometers in height before collapsing and fading away.
Prevalence:
There are about 3,000,000 active spicules at any one time on the Sun's chromosphere. An individual spicule typically reaches 3,000–10,000 km altitude above the photosphere.
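As a rough consistency check (a sketch, not from the source, assuming a constant rise speed, which is an idealization since real spicules decelerate), the quoted speed and altitude range imply rise times of a few minutes, in line with the typical spicule lifetime:

```python
def rise_time_minutes(height_km: float, speed_km_s: float = 20.0) -> float:
    # Time to reach a given altitude at constant speed (crude approximation;
    # real spicules decelerate as they rise and then fall back).
    return height_km / speed_km_s / 60.0

# Altitude range quoted above, 3,000-10,000 km, at ~20 km/s:
print(round(rise_time_minutes(3000), 1))   # 2.5 minutes
print(round(rise_time_minutes(10000), 1))  # 8.3 minutes
```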
Causes:
Bart De Pontieu (Lockheed Martin Solar and Astrophysics Laboratory, Palo Alto, California, United States), Robert Erdélyi and Stewart James (both from the University of Sheffield, United Kingdom) hypothesised in 2004 that spicules form as a result of P-mode oscillations in the Sun's surface, sound waves with a period of about five minutes that cause the Sun's surface to rise and fall at several hundred meters per second (see helioseismology). Magnetic flux tubes that are tilted away from the vertical can focus and guide the rising material up into the solar atmosphere to form a spicule. There is still, however, some controversy about the issue in the solar physics community.
Literature:
De Pontieu, B., Erdélyi, R., and James, S.: "Solar chromospheric spicules from the leakage of photospheric oscillations and flows". Nature 430 (2004), pp. 536–539. ISSN 0028-0836.
**Stevens's power law**
Stevens's power law:
Stevens' power law is an empirical relationship in psychophysics between an increased intensity or strength in a physical stimulus and the perceived magnitude increase in the sensation created by the stimulus. It is often considered to supersede the Weber–Fechner law, which is based on a logarithmic relationship between stimulus and sensation, because the power law describes a wider range of sensory comparisons, down to zero intensity.The theory is named after psychophysicist Stanley Smith Stevens (1906–1973). Although the idea of a power law had been suggested by 19th-century researchers, Stevens is credited with reviving the law and publishing a body of psychophysical data to support it in 1957.
Stevens's power law:
The general form of the law is ψ(I)=kIa, where I is the intensity or strength of the stimulus in physical units (energy, weight, pressure, mixture proportions, etc.), ψ(I) is the magnitude of the sensation evoked by the stimulus, a is an exponent that depends on the type of stimulation or sensory modality, and k is a proportionality constant that depends on the units used.
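A concrete property of the power form is that a fixed ratio of stimulus intensities always maps to the same ratio of sensations, regardless of the starting intensity. The sketch below (not from the source; the exponent 0.67, commonly reported for loudness, is an assumed example value) illustrates this:

```python
def stevens(I, k=1.0, a=0.67):
    """Perceived magnitude psi(I) = k * I**a (Stevens's power law)."""
    return k * I ** a

# Doubling the stimulus intensity multiplies the sensation by 2**a,
# independent of where on the scale you start:
r_low = stevens(2.0) / stevens(1.0)
r_high = stevens(200.0) / stevens(100.0)
assert abs(r_low - 2 ** 0.67) < 1e-9
assert abs(r_low - r_high) < 1e-9
```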
Stevens's power law:
A distinction has been made between local psychophysics, where stimuli can only be discriminated with a probability around 50%, and global psychophysics, where the stimuli can be discriminated correctly with near certainty (Luce & Krumhansl, 1988). The Weber–Fechner law and methods described by L. L. Thurstone are generally applied in local psychophysics, whereas Stevens' methods are usually applied in global psychophysics.
Stevens's power law:
The table to the right lists the exponents reported by Stevens.
Methods:
The principal methods used by Stevens to measure the perceived intensity of a stimulus were magnitude estimation and magnitude production. In magnitude estimation with a standard, the experimenter presents a stimulus called a standard and assigns it a number called the modulus. For subsequent stimuli, subjects report numerically their perceived intensity relative to the standard so as to preserve the ratio between the sensations and the numerical estimates (e.g., a sound perceived twice as loud as the standard should be given a number twice the modulus). In magnitude estimation without a standard (usually just called magnitude estimation), subjects are free to choose their own standard, assigning any number to the first stimulus and all subsequent ones, with the only requirement being that the ratio between sensations and numbers is preserved. In magnitude production, a number and a reference stimulus are given, and subjects produce a stimulus that is perceived as that number times the reference. Also used is cross-modality matching, which generally involves subjects altering the magnitude of one physical quantity, such as the brightness of a light, so that its perceived intensity is equal to the perceived intensity of another type of quantity, such as warmth or pressure.
Criticisms:
Stevens generally collected magnitude estimation data from multiple observers, averaged the data across subjects, and then fitted a power function to the data. Because the fit was generally reasonable, he concluded the power law was correct.
Criticisms:
A principal criticism has been that Stevens' approach provides neither a direct test of the power law itself nor of the underlying assumptions of the magnitude estimation/production method: it simply fits curves to data points. In addition, the power law can be deduced mathematically from the Weber–Fechner logarithmic function (MacKay, 1963), and the relation makes predictions consistent with data (Staddon, 1978). As with all psychometric studies, Stevens' approach ignores individual differences in the stimulus–sensation relationship, and there are generally large individual differences in this relationship that averaging the data will obscure (Green & Luce, 1974).
Criticisms:
Stevens' main assertion was that, using magnitude estimations/productions, respondents were able to make judgements on a ratio scale (i.e., if x and y are values on a given ratio scale, then there exists a constant k such that x = ky). In the context of axiomatic psychophysics, Narens (1996) formulated a testable property capturing the implicit underlying assumption this assertion entailed: for two proportions p and q and three stimuli x, y, z, if y is judged p times x and z is judged q times y, then z should equal pq times x. This amounts to assuming that respondents interpret numbers in a veridical way. This property was unambiguously rejected (Ellermeier & Faulhammer 2000, Zimmer 2005). Without assuming veridical interpretation of numbers, Narens (1996) formulated another property that, if sustained, meant that respondents could make ratio-scaled judgments: if y is judged p times x, z is judged q times y, y' is judged q times x, and z' is judged p times y', then z should equal z'. This property has been sustained in a variety of situations (Ellermeier & Faulhammer 2000, Zimmer 2005).
Criticisms:
Critics of the power law also point out that the validity of the law is contingent on the measurement of perceived stimulus intensity employed in the relevant experiments. Luce (2002), under the condition that respondents' numerical distortion function and the psychophysical function could be separated, formulated a behavioral condition equivalent to the psychophysical function being a power function. This condition was confirmed for just over half the respondents, and the power form was found to be a reasonable approximation for the rest (Steingrimsson & Luce 2006).
Criticisms:
It has also been questioned, particularly in terms of signal detection theory, whether any given stimulus is actually associated with a particular and absolute perceived intensity; i.e. one that is independent of contextual factors and conditions. Consistent with this, Luce (1990, p. 73) observed that "by introducing contexts such as background noise in loudness judgements, the shape of the magnitude estimation functions certainly deviates sharply from a power function". Indeed, nearly all sensory judgments can be changed by the context in which a stimulus is perceived.
**Wine label**
Wine label:
Wine labels are important sources of information for consumers since they tell the type and origin of the wine. The label is often the only resource a buyer has for evaluating the wine before purchasing it. Certain information is ordinarily included in the wine label, such as the country of origin, quality, type of wine, alcoholic degree, producer, bottler, or importer. In addition to these national labeling requirements producers may include their web site address and a QR Code with vintage specific information.
Information provided:
Label design Some wineries place great importance on the label design while others do not. There are wineries that have not changed their label's design in over 60 years, as in the case of Château Simone, while others hire designers every year to change it. Labels may include images of works by Picasso, Chagall, and other artists, and these may be collector's pieces. The elegance of the label does not determine the wine's quality. Instead, it is the information contained within the label that can provide consumers with such knowledge.
Information provided:
Most New World consumers, and increasingly European consumers, prefer to purchase wine with varietal labels and/or with brand name labels. A recent study of younger wine drinkers in the U.S. found that they perceived labels with châteaux on them to be stuffy or old-fashioned. Producers often attempt to make selecting and purchasing wine easy and non-intimidating by making their labels playful and inviting. The financial success of New World wine attributed to striking label designs has led some European producers to follow suit, as in the case of the redesign of Mouton Cadet.
Information provided:
Differences by country Wine classification systems differ by country. Wines can be classified by region and area only. For example, there are 151 châteaux in Bordeaux with "Figeac" and 22 estates in Burgundy with "Corton" on their labels. In Burgundy, there are 110 appellations in an area only one-fifth the size of Bordeaux. Complicating the system is the fact that it is common for villages to append the name of their most famous vineyard to that of the village.
Information provided:
In Spain and Portugal, the authenticity of the wine is guaranteed by a seal on the label or a band over the cork under the capsule. This is promulgated by the growers' association in each area.
German wine labels are particularly noted for the detail that they can provide in determining quality and style of the wine.
Information provided:
Almost every New World wine is labelled by grape variety and geographic origin. Semi-generic designations were once quite common in countries such as Australia and the US, but the wine authorities in areas such as Champagne have not been afraid to bring lawsuits against the use of their names outside their region, and semi-generic names are falling out of use.
Information provided:
Wines whose label does not indicate the name of the winery or the winemaker are referred to as "cleanskin" wine, particularly in Australia.
Information provided:
Degree of sweetness information is particularly inconsistent, with some countries' manufacturers always indicating it in standardized fashion in their language (brut, dolce, etc.), some traditionally not mentioning it at all or referring to it informally and vaguely in a rear-label description, and yet other countries' regulators requiring such information to be included (commonly on a secondary label) even when such information has to be added by the importer. In certain cases of conflicting regulations, a wine may, for example, even be labelled "sweet" by a manufacturer, but also "semi-sweet" (as per a different law) in the local language translation on a supplementary label mandated by the jurisdiction where it is sold.
Importance of labels in different types of wine:
The information on labels is important in determining the quality of the wine. Vintage dates matter most where the climate varies from year to year: the taste and quality of the wine can change with each year's weather, so knowing the vintage is especially important when buying fine wines. The quickest way to judge the quality of a year is to use a wine chart. Vintage dates are not always significant; for example, there are no vintage dates on bottles of sherry.
Other wines may or may not carry a vintage. Champagne is usually a blend from more than one year and only sometimes sold as a vintage wine, and Port is sold with a vintage only in years of exceptional quality.
Bottler and importer information:
A wine label may include the names of the producer, the bottler, and the merchant. The bottler's name must always appear on the label; the importer's name is required only for countries outside the Common Market. While it is not necessary for a wine to be bottled at its place of origin, classed growth claret and vintage port must be bottled in Bordeaux and Oporto respectively, and Alsace wine must be bottled within the appellation. It is therefore worth looking for terms such as mis en bouteille au château or mis au domaine, which indicate that the wine is estate bottled.
Misleading information:
Labels may include terms that can be misleading. The term Blanc de blancs, meaning "white wine made from white grapes", may appear on a label; yet white wines are predominantly made from white grapes anyway. The main exception is sparkling wine, where white wine is often made from red grapes, the common use of the red Pinot noir in Champagne being a typical example.
Although the word château is most associated with Bordeaux, it does not mean that the wine comes from Bordeaux, and there may not be any kind of building – let alone a château – associated with the vineyard. The name château can even appear on wines from Australia or California. Labels of Vin de pays never include the word château. Cru, a word used to classify wines, can mean different things. In the Médoc part of Bordeaux, the term means the château is one of the classified growths of the region. In Saint-Émilion, the term cru is of little importance because it bears little relation to quality. In Provence, the term cru classé is included only for historical reasons. In Switzerland, the use of the term cru has no formal foundation and is included at the producer's discretion.
Accessibility:
To better reach the market of blind or sight-impaired wine consumers, labels have appeared printed in Braille. Currently the only known winemaker to print all of its labels in Braille is the Chapoutier winery in France, which began the practice in 1996. Other wineries in a number of countries have followed Chapoutier's lead and offer Braille on at least some of their bottles.
Neck and back labels:
Neck and/or back labels may appear on a bottle. The neck label may carry the vintage date, and the back label usually gives extra (often optional) information about the wine. Government-required warnings and UPCs are usually found on the back label. For example, the United States requires alcoholic beverages to carry a warning about the consumption of alcohol during pregnancy and about the reduced ability to drive while intoxicated. Wine labels in the US must also disclose that the wine contains sulfites.
Wine laws:
There are different reasons for wine laws. Labelling regulations can be intended to prevent wine from sounding better than it is; it is also illegal to say that a wine is made from one grape when it is actually made from another.
The label must also include the name and address of the bottler of the wine. If the producer is not the bottler, the bottle will say that the wine was bottled by X bottler for Y producer. Table wines may carry the name of the bottler and the postal code. The label must also include the country of origin.
The size of the font is also regulated for mandatory information. Alcohol content must be included on the label, and some jurisdictions also require brief nutritional data, such as caloric value and carbohydrate/sugar content. In Australia and the United States, a wine label must also state in certain circumstances that the wine contains sulfites. Regulations may permit table wines to be labelled with only the colour and flavour, with no indication of quality. The use of words such as Cuvée and grand vin on labels is controlled. As mentioned above, a vin de pays must never be from a château, but from a domaine.
Allergen warnings:
New Zealand and Australian labelling regulations have required an allergen warning on wine labels since 2002 because egg whites, milk, and isinglass are used in fining and clarifying wine. The United States is considering similar requirements. Winemakers in the US have resisted the requirement because the decision to put a wine through a fining process normally occurs after the labels have been ordered, which could lead to allergen warnings on wines that have had no exposure to allergens.
Since 30 June 2012, wine labels in the member states of the European Union must also disclose whether the wine was treated with casein or ovalbumin (derived from milk and egg respectively), which are used as fining agents in winemaking.
Collecting:
Paper wine labels have long been collected. This can turn into a full-fledged hobby, with collections organized by theme, country, or region. For others, saving labels may be part of maintaining a wine tasting-notes journal, or simply a way to remember a particular wine.

Wine labels, or bottle tickets, are also an area of interest to collectors; the Wine Label Circle was formed in 1952. These objects of silver, mother-of-pearl, ivory, or enamel were used in the 18th and 19th centuries to identify the contents of the decanter or bottle on which they were hung; in addition to wines and spirits, those contents could include sauces, condiments, flavourings, perfumes, toilet waters, medicines, inks, soft drinks, preserves, and cordials.

While labels were once easily steamed off, recent automatic bottling and labeling processes at wineries have led to the use of stronger glues. Removing these labels is often difficult and may result in considerable damage to the label. A recent, though by no means universal, innovation to bypass this problem is the use of bottles whose labels include a small tear-off portion to remind the drinker of the name and details of the wine.
If full label removal is desired, a common approach is to fill the bottle with hot water, which weakens the glue's hold; a knife can then be used to lift the label off from one side with even pressure.
Commercial label-removal kits apply a strong, transparent sticker over the label surface. The goal is to carefully pull off the sticker, tearing the printed front of the label away from the glued backing. In practice, varying degrees of success are encountered, and extensive damage to the label can occur.
**Bacterial phylodynamics**
Bacterial phylodynamics is the study of the immunology, epidemiology, and phylogenetics of bacterial pathogens, aimed at better understanding their evolution. Phylodynamic analysis includes analyzing the genetic diversity, natural selection, and population dynamics of infectious-disease pathogen phylogenies during epidemics, as well as studying the intra-host evolution of pathogens. Phylodynamics combines phylogenetic analysis with the study of ecological and evolutionary processes to better understand the mechanisms that drive the spatiotemporal incidence and phylogenetic patterns of bacterial pathogens. Bacterial phylodynamics uses genome-wide single-nucleotide polymorphisms (SNPs) to probe the evolutionary mechanisms of bacterial pathogens. Many phylodynamic studies have been performed on viruses, particularly RNA viruses (see Viral phylodynamics), which have high mutation rates. The field of bacterial phylodynamics has grown substantially with the advancement of next-generation sequencing and the amount of data now available.
Methods:
Novel hypothesis (study design):
Studies can be designed to observe intra-host or inter-host interactions. Bacterial phylodynamic studies usually focus on inter-host interactions, with samples from many different hosts in one or several geographical locations. The most important part of study design is the sampling strategy: the number of sampled time points, the sampling interval, and the number of sequences per time point are all crucial to phylodynamic analysis. Sampling bias causes problems when looking at taxonomically diverse samples; for example, sampling from a limited geographical location may bias estimates of effective population size.
Generating data:
Experimental settings. Which genome or genomic regions to sequence, and with which sequencing technique, are important experimental choices for phylodynamic analysis. Whole genome sequencing is often performed on bacterial genomes, although depending on the design of the study, many different methods can be used. Bacterial genomes are much larger and evolve more slowly than those of RNA viruses, which has limited bacterial phylodynamic studies. Advances in sequencing technology have made bacterial phylodynamics feasible, but careful preparation of whole bacterial genomes remains essential.
Alignment. When a new data set of samples for phylodynamic analysis is obtained, the sequences in it are aligned. A BLAST search is frequently run to find similar strains of the pathogen of interest; sequences collected via BLAST need the proper metadata added to the data set, such as the sample collection date and the geographical location of the sample. Multiple sequence alignment algorithms (e.g., MUSCLE, MAFFT, and CLUSTAL W) align the data set with all selected sequences. After running a multiple sequence alignment algorithm, manually editing the alignment is highly recommended, as alignment algorithms can introduce spurious indels. Manually correcting the indels in the data set allows a more accurate phylogenetic tree.
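Once sequences are aligned, pairwise SNP distances can be read straight off the alignment columns. As a minimal illustrative sketch (not tied to any particular tool; the function name is hypothetical), counting mismatching non-gap columns between two aligned sequences:

```python
def snp_distance(seq_a: str, seq_b: str) -> int:
    """Count SNP differences between two sequences taken from the
    same alignment, ignoring columns where either sequence has a gap."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    return sum(
        1
        for a, b in zip(seq_a.upper(), seq_b.upper())
        if a != b and a != "-" and b != "-"
    )

# Example: one true SNP (G vs T); the gap column is skipped.
print(snp_distance("ACGT-A", "ACTTGA"))  # prints 1
```

In a real pipeline these pairwise counts would be computed over whole-genome alignments and fed into downstream distance-based analyses.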
Quality control. For an accurate phylodynamic analysis, quality control must be performed. This includes checking the samples in the data set for possible contamination, measuring the phylogenetic signal of the sequences, and checking the sequences for signs of recombination. Contamination can be excluded by various laboratory methods and by proper DNA/RNA extraction. There are several ways to check for phylogenetic signal in an alignment, such as likelihood mapping, transition/transversion versus divergence plots, and the Xia test for saturation. If the phylogenetic signal of an alignment is too low, a longer alignment or an alignment of another gene in the organism may be necessary. Substitution saturation is typically an issue only in data sets of viral sequences. Most algorithms used for phylogenetic analysis do not take recombination into account, which can distort the molecular clock and coalescent estimates of a multiple sequence alignment; strains that show signs of recombination should either be excluded from the data set or analyzed on their own.
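The transition/transversion check mentioned above can be approximated with a small helper. A sketch (pure Python, illustrative only, hypothetical function name) that classifies each differing non-gap column as a transition (purine-to-purine or pyrimidine-to-pyrimidine) or a transversion:

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def ti_tv_counts(seq_a: str, seq_b: str):
    """Count transitions and transversions between two aligned
    sequences, ignoring gaps and ambiguous characters."""
    ti = tv = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        ti += same_class       # True counts as 1
        tv += not same_class
    return ti, tv

# A->G is a transition; G->T is a transversion.
print(ti_tv_counts("AAGT", "GATT"))  # prints (1, 1)
```

Plotting these counts against overall divergence for all sequence pairs gives the transition/transversion saturation plot described in the text.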
Data analysis:
Evolutionary model. Selecting the best-fitting nucleotide or amino acid substitution model for a multiple sequence alignment is the first step in phylodynamic analysis. This can be accomplished with several different programs (e.g., IQ-TREE, MEGA).
Phylogeny inference. There are several different methods to infer phylogenies, including tree-building algorithms such as UPGMA, neighbor joining, maximum parsimony, maximum likelihood, and Bayesian analysis.
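Of the methods listed, UPGMA is the simplest to illustrate. A toy sketch (pure Python; the function name and Newick-style output are illustrative assumptions, not any particular package's API) that clusters taxa from a pairwise distance table by repeatedly merging the closest pair and size-weighted averaging of distances:

```python
def upgma(dist, names):
    """Build a Newick-like tree string by UPGMA clustering.

    dist  - dict mapping sorted (taxon_a, taxon_b) tuples to distances
    names - list of taxon labels
    """
    sizes = {n: 1 for n in names}   # cluster label -> number of leaves
    d = dict(dist)                  # working distance table

    def get(a, b):
        return d[tuple(sorted((a, b)))]

    while len(sizes) > 1:
        # Merge the closest pair of clusters.
        a, b = min(
            ((x, y) for x in sizes for y in sizes if x < y),
            key=lambda pair: get(*pair),
        )
        merged = f"({a},{b})"
        # Average distances to every other cluster, weighted by
        # cluster size (the "arithmetic mean" in UPGMA).
        for c in sizes:
            if c not in (a, b):
                avg = (get(a, c) * sizes[a] + get(b, c) * sizes[b]) / (
                    sizes[a] + sizes[b]
                )
                d[tuple(sorted((merged, c)))] = avg
        sizes[merged] = sizes.pop(a) + sizes.pop(b)
    return next(iter(sizes))

# Toy example: A and B are closest, so they merge first.
dist = {("A", "B"): 2.0, ("A", "C"): 4.0, ("B", "C"): 4.0}
print(upgma(dist, ["A", "B", "C"]))  # prints ((A,B),C)
```

Real analyses use dedicated software, since UPGMA assumes a strict molecular clock that bacterial data often violate; the sketch only shows the mechanics.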
Hypothesis testing:
Assessing phylogenetic support. Testing the reliability of the tree after inferring a phylogeny is a crucial step in the phylodynamic pipeline. Methods to test the reliability of a tree include bootstrapping, maximum likelihood estimation, and posterior probabilities in Bayesian analysis.
Phylodynamic inference. Several methods are used to assess the phylodynamic reliability of a data set, including estimating its molecular clock, demographic history, population structure, gene flow, and selection. Phylodynamic results can also inform better study designs for future experiments.
Examples:
Phylodynamics of cholera:
Cholera is a diarrheal disease caused by the bacterium Vibrio cholerae. V. cholerae became a popular subject for phylodynamic analysis after the 2010 cholera outbreak in Haiti. The outbreak followed the 2010 earthquake in Haiti, which caused critical infrastructure damage, initially leading to the conclusion that the outbreak was most likely due to V. cholerae being introduced naturally to Haitian waters by the earthquake. Soon after the earthquake, the UN sent MINUSTAH troops from Nepal to Haiti. Rumors circulated about terrible conditions at the MINUSTAH camp, and people claimed that the MINUSTAH troops were disposing of their waste in the Artibonite River, the major water source in the surrounding area. Soon after the MINUSTAH troops' arrival, the first cholera case was reported near the location of their camp. Phylodynamic analysis was used to investigate the source of the Haitian outbreak. Whole genome sequencing of V. cholerae revealed that the outbreak had a single point source and that the strain was similar to O1 strains circulating in South Asia. Before the MINUSTAH troops were sent to Haiti, a cholera outbreak had just occurred in Nepal. In the original research tracing the origin of the outbreak, the Nepalese strains were not available; when they became available, phylodynamic analyses of the Haitian and Nepalese strains confirmed that the Haitian cholera strain was most similar to the Nepalese one. The Haitian outbreak strain showed signs of being an altered or hybrid strain of V. cholerae associated with high virulence. Typically, high-quality single-nucleotide polymorphisms (hqSNPs) from whole genome V. cholerae sequences are used for phylodynamic analysis. Using phylodynamic analysis to study cholera aids the prediction and understanding of V. cholerae evolution during epidemics.
**Ensemble cast**
In a dramatic production, an ensemble cast is one that comprises many principal actors and performers who are typically assigned roughly equal amounts of screen time. The term is also used interchangeably to refer to a production (typically film) with a large cast or a cast with several prominent performers.
Structure:
In contrast to the popular model, which gives precedence to a sole protagonist, an ensemble cast leans more towards a sense of "collectivity and community".
Cinema:
Ensemble casts in film were introduced as early as September 1916, with D. W. Griffith's silent epic Intolerance, featuring four separate though parallel plots. The film follows the lives of several characters over hundreds of years, across different cultures and time periods. The unification of different plot lines and character arcs is a key characteristic of ensemble casting in film, whether it is a location, an event, or an overarching theme that ties the film and its characters together.

Films that feature ensembles tend to emphasize the interconnectivity of the characters, even when the characters are strangers to one another. The interconnectivity is often shown to the audience through examples of the "six degrees of separation" theory, and allows them to navigate the plot lines using cognitive mapping. Productions such as Love Actually, Crash, and Babel all have strong underlying themes interwoven within their plots that unify each film.

The Avengers, X-Men, and Justice League are three examples of ensemble casts in the superhero genre. In The Avengers, there is no need for a single central protagonist, as each character shares equal importance in the narrative, successfully balancing the ensemble cast. Referential acting is a key factor in executing this balance, as ensemble cast members "play off each other rather than off reality".

Hollywood movies with ensemble casts tend to use numerous actors of high renown and/or prestige, instead of one or two "big stars" and a lesser-known supporting cast. Filmmakers known for their use of ensemble casts include Quentin Tarantino, Wes Anderson, and Paul Thomas Anderson, among others.
Television:
Ensemble casting also became more popular in television series because it allows writers flexibility to focus on different characters in different episodes. In addition, the departure of players is less disruptive than it would be with a regularly structured cast. The television series The Golden Girls and Friends are archetypal examples of ensemble casts in American sitcoms, and the science-fiction mystery drama Lost features an ensemble cast. Ensemble casts of 20 or more actors are common in soap operas, a genre that relies heavily on the character development of the ensemble and requires continuous expansion of the cast as a series progresses, with soap operas such as General Hospital, Days of Our Lives, The Young and the Restless, and The Bold and the Beautiful staying on air for decades.

An example of success in ensemble casting for television is the Emmy Award-winning HBO series Game of Thrones. The fantasy series features one of the largest ensemble casts on the small screen and is notorious for major character deaths, resulting in constant changes within the ensemble.
**Reciprocating pump**
A reciprocating pump is a class of positive-displacement pumps that includes the piston pump, plunger pump, and diaphragm pump. Well maintained, reciprocating pumps can last for decades; unmaintained, they succumb to wear and tear. They are often used where a relatively small quantity of liquid is to be handled and where the delivery pressure is quite large. In reciprocating pumps, the chamber that traps the liquid is a stationary cylinder that contains a piston or plunger.
Types:
By source of work:
Simple hand-operated reciprocating pump - the simplest example is the bicycle pump, used ubiquitously to inflate bicycle tires and various types of sporting balls. The name "bicycle pump" is not strictly accurate, since the device works more by compression than by volume displacement.
Power-operated deep well reciprocating pump
By mechanism:
Single-acting reciprocating pump - consists of a piston of which only one side engages the fluid being displaced. The simplest example would be a syringe.
Double-acting reciprocating pump - both sides of the piston engage the fluid, so each stroke of the piston carries out both suction and expulsion at the same time. It therefore requires two inflow pipes and two outflow pipes.
Triple-acting reciprocating pump
By number of cylinders:
Single cylinder - consists of a single cylinder connected to a shaft.
Double cylinder - consists of two cylinders connected to a shaft.
Triple cylinder - consists of three cylinders connected to a shaft.
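The acting type and cylinder count together set the theoretical discharge. As a hedged, textbook-style sketch (symbols assumed here: $A$ piston area, $L$ stroke length, $N$ crank speed in revolutions per minute), the ideal discharge per cylinder, neglecting slip and leakage, is:

```latex
% Theoretical (ideal) discharge of a reciprocating pump;
% for the double-acting case the piston-rod cross-section is neglected.
Q_{\text{single}} = \frac{A L N}{60}, \qquad
Q_{\text{double}} \approx \frac{2 A L N}{60}
```

Multi-cylinder pumps multiply this by the number of cylinders and, as a side effect, deliver a smoother combined flow.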
Main components of reciprocating pump:
Reciprocating pumps have wide application, and to grasp the basic idea it is necessary to know the basic parts.
The basic parts, along with their functions:
Water reservoir - not itself a part of the reciprocating pump, but the main source from which the pump takes the water. It may be a source of another fluid as well.
Strainer - Removes impurities from the liquid to prevent choking (clogging) the pump.
Suction pipe - The pipe through which the pump draws water from the reservoir.
Suction valve - A non-return valve installed on the suction pipe; it allows flow from the reservoir to the pump but not the reverse.
Cylinder or liquid cylinder - The main component, in which the pressure is increased. It is a hollow cylinder with an inner lining and contains the piston along with its piston rings.
Piston or plunger and piston rod - The piston is directly connected to the piston rod, which in turn is connected to the connecting rod. The piston's forward-and-backward reciprocating motion creates the pressure inside the cylinder.
Piston rings - Small but vital parts that protect the piston surface and the cylinder's inner surface from wear and tear, helping the pump operate smoothly.
Packing - Packing is necessary in all pumps to provide a proper seal between cylinder and piston and so stop leakage.
Crank and connecting rod - The crank is connected to the power source, and the connecting rod links the crank to the piston rod. These components convert circular motion into linear motion.
Delivery valve (non-return valve) - Like the suction valve, the delivery valve is a non-return valve; it helps build up pressure and protects the pump from backflow.
Delivery pipe - Carries the fluid to its destination.
Air vessel - Some reciprocating pumps have an air vessel, which helps reduce the frictional head and acceleration head.
Reciprocating pump applications:
Applications of reciprocating pumps are as follows:
Cleaning of vessels, pipes, tanks, tubes, condensate pipes, heat exchangers, etc.
Oil drilling, refineries, production, disposal, and injection.
Pneumatic pressure applications.
Vehicle cleaning.
Sewer line cleaning.
Wet sandblasting.
Boiler feeding.
High-pressure pumps for reverse osmosis (RO) systems.
Hydrostatic testing of tanks, vessels, etc.
Firefighting system.
Wastewater treatment system.
Examples:
Examples of reciprocating pumps include:
Windmill water and oil pumps
Hand pumps
Axial piston pumps
**Visual markers of marital status**
Visual markers of marital status, as well as social status, may include clothing, hairstyle, accessories, jewelry, tattoos, and other bodily adornments. Visual markers of marital status are particularly important because they indicate that a person should not be approached for flirtation, courtship, or sex. In some cultures, married people enjoy special privileges or are addressed differently by members of the community.
Marital status markers are usually gender-specific.
Husband:
Male marital status markers are usually less elaborate than female marital status markers. In many cultures, they may not exist.
Jewelry:
In many Western nations, some husbands wear a wedding ring on the third or fourth finger of the left hand. In parts of Europe, especially in German-speaking regions, as well as in Bulgaria, Cyprus, Denmark, Greece, Hungary, Latvia, Lithuania, North Macedonia, Norway, Poland, Russia, Serbia, Spain, Turkey, and Ukraine, the wedding ring is worn on the ring finger of the right hand. In the Netherlands, Catholics wear their wedding rings on the left hand, while most other people wear them on the right. Some spouses choose to wear their wedding ring on the left hand in Turkey.
In China, Western influence has resulted in some husbands donning wedding rings. In modern times, the material of wedding rings is not strictly prescribed; they may be forged of gold, rose gold, white gold, argentium silver, palladium, platinum, titanium, or tungsten carbide.
Manual laborers sometimes wear rings of inexpensive or more durable materials like tungsten while working, or bear an ink tattoo, to avoid personal injury or damaging a ring of precious metal. Additionally, the use of silicone wedding bands has become more common among men (and women) in gyms or other environments with potential hazards (firefighting, etc.); these bands are flexible enough to snap off if caught and are not typically electrically conductive.
Beard:
Among the Amish and Hutterite communities of Canada and the United States, only married men are permitted to grow and maintain a beard; unmarried men are required to shave.
Tallit:
In some Ashkenazi Jewish communities, men wear a prayer shawl, called a "tallit" or "tallis", only upon marriage. It is customary for the father of the bride to present the groom with a tallit as a wedding present. In other Jewish communities, both Ashkenazic and Sephardic, all males wear the tallit, but only husbands wear it over their heads.
Sacred thread:
In contemporary Hinduism, after the Upanayana ceremony, Brahmin men wear a sacred thread (Yagnopavitam) over the left shoulder and under the right arm, usually with 3 strands and 1 knot, from the time they start their traditional education. When a man marries, he wears a second sacred thread of 6 strands and 2 knots, signifying his marriage to his wife. In traditional attire the sacred thread is usually visible, but in the modern era it is hidden within the shirt.
Wife:
Jewelry:
Engagement ring: In many Western cultures, a proposal of marriage is traditionally accompanied by the gift of a ring. The man proposes and offers the ring; if the woman accepts the proposal, she wears the ring, showing she is no longer available for courtship. In British-American tradition, diamond rings are the most popular type of engagement ring. The engagement ring is usually worn on the left ring finger (sometimes it is switched from the right to the left hand as part of the wedding ceremony).
Wedding ring: Many Western wedding ceremonies include the exchange of a wedding ring or rings. A common custom is for the groom to place a ring on the bride's finger and say, "With this ring I thee wed." Sometimes both bride and groom present each other with rings and repeat these or similar words. After the ceremony, the rings are worn throughout the marriage. In the event of divorce, the couple usually removes their rings, but some widows continue to wear their wedding ring, sometimes switching it to the right hand, while others do not. In Jewish tradition, the wedding ring must be a plain band, without gemstones.

China acquired the custom of wedding rings only in the era of post-Cultural Revolution economic reforms, when rings became affordable and Western influence was allowed in. As an adopted habit, there are variations on how rings are used, if at all, and when. Some women wear the wedding ring on the left hand and men on the right (representing yin and yang); some men wear the ring on the right hand. Many Chinese put the ring away for protection, wearing it only on important occasions such as anniversaries. In Chinese tradition, higher status for men was signified by having several young female partners or concubines, and a ring denies that status; for this reason, many modern Chinese men do not wear a wedding ring. Diamonds and two-partner wedding rings are advertised in modern China.

The Japanese, despite American occupation in the 1950s, only acquired a culture of wedding and engagement rings in the 1960s. In 1959 the importing of diamonds was allowed, and in 1967 a U.S. advertising agency created a marketing campaign on behalf of De Beers diamonds. The campaign equated rings with other symbols of Western culture and resulted in a sharp increase in demand: from 5% in 1967, to 27% in 1972, to 50% in 1978, to 60% in 1980.
Toe rings: Toe rings in India are usually made of silver and worn in pairs (unlike in Western countries, where they are worn singly or in unmatched pairs) on the second toe of both feet. Traditionally they are quite ornate, though more contemporary designs are now being developed to cater to the modern bride. Toe rings may not be made of gold, as gold holds a 'respected' status and may not be worn below the waist by Hindus.
Mangalasutra: In Hindu wedding ceremonies, the groom gives the bride a gold pendant or necklace incorporating black beads or black string. This is called a "mangalasutra". It not only proclaims a woman's marriage, but it is believed by many to exercise a protective influence over the husband. This resembles the karwa-chauth celebration, in which a wife fasts and prays for her husband's welfare.
Bangles: Hindu wives also wear bangles of white ("sankha") and/or red ("pala") colour on both hands, and do not remove them while married. Often made of glass, they are broken when the marriage ends. Bollywood uses this to great dramatic effect in Hindi films, with a woman being informed of the demise of her husband by the messenger, often her son, smashing her glass bangles and wiping the sindoor off her forehead. Bangles made of gold, silver, or other materials are also worn by middle-class wives.
Headwear:
In Orthodox Judaism, married women cover their hair at all times outside of their home. The kind of hair covering may be determined by local custom or personal preference; headscarves, snoods, hats, berets, or sometimes wigs are used. Turkmen wives wear a special hat, similar to a circlet, called an "Alyndaňy".
Hairstyle:
Hairstyles can indicate marital status, as with Zuni hair styles.
Cosmetics:
Sindoor, a red powder (vermilion), is put on Hindu wives' foreheads to indicate their marital status.
Clothes:
Tibetan wives wear aprons.
In western and northern Europe, it was previously common for widows to wear black, at least until the first anniversary of the death of their husbands, and some still practice this custom.
**Phosphate-selective porin**
Phosphate-selective porins are a family of outer bacterial membrane proteins. These are anion-specific porins whose binding site has a higher affinity for phosphate than for chloride ions. Porin O has a higher affinity for polyphosphates, while porin P has a higher affinity for orthophosphate. In Pseudomonas aeruginosa, porin O was found to be expressed only under phosphate-starvation conditions during the stationary growth phase.
**Rotation model of learning**
The rotation model of learning combines traditional face-to-face learning with online learning. Time is divided between the two modes on a fixed schedule or at the teacher's discretion for a given course. The classroom teacher usually monitors both the face-to-face and online learning, and the online learning takes place on a one-to-one basis. Students rotate across online learning, small-group instruction, and pencil-and-paper assignments. This model includes four sub-models:
Station rotation: a rotation model in which, for a given course or subject, the student rotates on a fixed schedule or at the teacher's discretion from one learning station to another, which might involve activities such as online learning, small-group instruction, group projects, and individual tutoring. It differs from the individual-rotation model.
Lab-rotation model: The student rotates to a brick-and-mortar computer lab for the online-learning station.
Flipped-classroom model: In this model, students rotate on a fixed schedule, or at the teacher's discretion, between classroom learning and online learning after school hours. The online learning serves as the primary source for the content to be taught in the next day's class.
Individual-rotation model: In this rotation model, the teacher sets individual timings for students to rotate among different learning modalities. It differs from the other rotation models in that the student does not have to rotate to every available station. Station-rotation, lab-rotation, and flipped-classroom rotation are considered truly blended or hybrid classrooms, while the individual-rotation model borders on a more typical online classroom. Station, lab, and flipped-classroom rotations are considered blended, or hybrid, classrooms because they meet four criteria: they represent an integration of old and new styles; they are designed with the traditional mainstream curriculum in mind, with the addition of online content; they keep students in their seats in traditional brick-and-mortar classrooms; and they are not simpler versions of the class but integrated classrooms where the teacher still needs expertise from traditional styles. The individual-rotation model, while considered a blended classroom, falls closer to online learning: the curriculum is built for the individual, meaning that students could work completely online on their own if this style suits them. Overall, the rotation model of learning consists of the following components: personalized online instruction; teacher-led small-group instruction; and independent and collaborative practice.
Role of teacher:
In the rotation model, teachers navigate the program so that they are aware of the issues students face during their independent work time, which helps in designing assessment tools to monitor their students' learning paths. Teachers should follow certain guidelines to help students use computers effectively, appropriately and safely.
Teachers should always be prepared for technical issues or power outages arising before or during class.
They should address the behaviour of the students on the computer.
The physical layout of the class should be designed for the transitions that happen during rotations.
Advantages:
This model helps students receive differentiated experiences, from face-to-face learning to online learning, along with independent and collaborative practice.
This model helps students learn at their own pace, and researchers have shown that these students outperform students exposed to only one type of instruction.
**SpecC**
SpecC:
SpecC is a System Description Language (SDL), or System-Level Design Language (SLDL), and is an extension of the ANSI C programming language. It is used to aid the design and specification of digital embedded systems, providing improved productivity whilst retaining the ability to change a design at the functional and specification level, unlike HDLs such as Verilog and VHDL. An architectural model can be created which allows other tools to map the design directly onto silicon or an FPGA. The main aim is to enable the reuse, exchange and integration of IP at various levels of abstraction.
The language and design methodology were created by Rainer Dömer and Daniel Gajski at the Center for Embedded Computer Systems at the University of California, Irvine, in 2001.
Similar projects and design methodologies include SystemC, an SDL based on C++. Although this rival language has seen much wider industry adoption (SpecC remains popular in Japan), SpecC retains simplicity whilst also providing the vital features of any SDL, such as concurrency (SpecC provides pipelined and parallel flows), synchronisation, state transitions (not available in Verilog), and composite data types.
**Mutt (email client)**
Mutt (email client):
Mutt is a text-based email client for Unix-like systems. It was originally written by Michael Elkins in 1995 and released under the GNU General Public License version 2 or any later version. The Mutt slogan is "All mail clients suck. This one just sucks less."
Operation:
Mutt supports most mail storing formats (notably both mbox and Maildir) and protocols (POP3, IMAP, etc.). It also includes MIME support, notably full PGP/GPG and S/MIME integration.
Mutt was originally designed as a Mail User Agent (MUA) and relied on locally accessible mailbox and sendmail infrastructure. According to the Mutt homepage "though written from scratch, Mutt's initial interface was based largely on the ELM mail client". New to Mutt were message scoring and threading capabilities. Support for fetching and sending email via various protocols such as POP3, IMAP and SMTP was added later. However, Mutt still relies on external tools for composing and filtering messages.
Mutt has hundreds of configuration directives and commands. It allows for changing all the key bindings and making keyboard macros for complex actions, as well as the colors and the layout of most of the interface. Through variants of a concept known as "hooks", many of its settings can be changed based on criteria such as current mailbox or outgoing message recipients. Mutt supports an optional sidebar, similar to those often found in graphical mail clients. There are also many patches and extensions available that add functionality, such as NNTP support.
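These directives can be combined in a single configuration file. A minimal, illustrative ~/.muttrc sketch follows; the identity values, mailbox names, and key choices here are hypothetical, chosen only to show the `set`, `bind`, `macro`, `color`, and `folder-hook` directive families:

```
# Identity (hypothetical values)
set realname = "Jane Doe"
set from = "jane@example.com"

# Rebind a key and define a macro for a multi-step action
bind index G imap-fetch-mail
macro index,pager y "<save-message>=archive<enter>" "archive message"

# Color new messages in the index
color index brightgreen default ~N

# A hook: change sort order only inside mailing-list folders
folder-hook "lists/.*" 'set sort=threads'
```

The folder-hook illustrates the "hooks" concept from the text: the `sort` setting is applied conditionally, based on the current mailbox, rather than globally.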
Mutt is fully controlled with the keyboard, and has support for mail conversation threading, meaning one can easily move around long discussions such as in mailing lists. New messages are composed with an external text editor, unlike pine, which embeds its own editor known as pico.
Mutt is capable of efficiently searching mail stores by calling on mail indexing tools such as Notmuch, and many people recommend Mutt be used this way. Alternatively, users can search their mail stores from Mutt by calling grep via a Bash script. Mutt is often used by security professionals or security-conscious users because of its smaller attack surface compared with other clients that ship with a web browser rendering engine or a JavaScript interpreter. In relation to Transport Layer Security, Mutt can be configured to trust certificates on first use, and not to use older, less secure versions of the Transport Layer Security protocol.
**Crispy pata**
Crispy pata:
Crispy pata is a Filipino dish consisting of deep fried pig trotters or knuckles served with a soy-vinegar dip. It can be served as party fare or an everyday dish. Many restaurants serve boneless pata as a specialty. The dish is quite similar to the German Schweinshaxe.
**Dump'n'Chase**
Dump'n'Chase:
Dump'n'Chase is a method of play in ice hockey used to penetrate the offensive zone.
The method involves aggressively exerting pressure on the opposing team and forcing scoring chances. The tactic is used prominently in North American ice hockey leagues. It is important that the team's own players do not stray offside.
Tactically, a Dump'n'Chase accomplishes several things. From a defensive standpoint, it moves the puck as far away from the team's own net as possible. It also removes or decreases the risk of offsides or neutral-zone turnovers. Additionally, it advances the attack deep into the offensive zone if an attacking player gains possession of the puck. The tactic may also be used as a way for the attacking play to switch sides from left to right, or right to left, through a diagonal cross-ice dump-in.
In a Dump'n'Chase, the puck carrier crosses the center line and then shoots the puck into the offensive zone, typically off the end boards behind the goal, so that it ricochets back into play. As soon as the puck is on its way to the boards, the attacking players skate around the opposing defenders so that they reach the puck first and can build up pressure in the offensive zone.
This tactic is mainly used when the opponent has chosen a more defensive strategy, for example waiting at the blue line in a chain of four or five players, looking to intercept the attack and launch a counterattack.
**SEMA3B**
SEMA3B:
Semaphorin-3B is a protein that in humans is encoded by the SEMA3B gene.
Function:
The semaphorin/collapsin family of molecules plays a critical role in the guidance of growth cones during neuronal development. The secreted protein encoded by this gene family member is important in axonal guidance and has been shown to act as a tumor suppressor by inducing apoptosis.
**Microoptoelectromechanical systems**
Microoptoelectromechanical systems:
Microoptoelectromechanical systems (MOEMS), also known as optical MEMS, are integrations of mechanical, optical, and electrical systems that involve sensing or manipulating optical signals at a very small size. MOEMS includes a wide variety of devices, for example optical switches, optical cross-connects, tunable VCSELs, and microbolometers. These devices are usually fabricated using micro-optics and standard micromachining technologies with materials like silicon, silicon dioxide, silicon nitride and gallium arsenide.
Merging technologies:
MOEMS comprises two major technologies: microelectromechanical systems and micro-optics. Both technologies independently involve batch processing similar to that of integrated circuits, and micromachining similar to the fabrication of microsensors.
History of MOEMS:
During 1991-1993, Dr. M. Edward Motamedi, a former Rockwell International innovator in the areas of both microelectromechanical systems and micro-optics, used internally the acronym MOEMS for microoptoelectromechanical systems. This was to distinguish between optical MEMS and MOEMS: optical MEMS could include bulk optics, but MOEMS is truly based on microtechnology, with MOEMS devices batch-processed exactly like integrated circuits, which in most cases is not true of optical MEMS.
In 1993, Dr. Motamedi officially introduced MOEMS for the first time, as the powerful combination of MEMS and micro-optics, in an invited talk at the SPIE Critical Reviews of Optical Science and Technology conference in San Diego. In this talk Dr. Motamedi presented a figure showing that MOEMS is the interaction of three major microtechnologies: micro-optics, micromechanics, and microelectronics.
**Propositional directed acyclic graph**
Propositional directed acyclic graph:
A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form: Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable.
Non-leaves are △ (logical and), ▽ (logical or) and ◊ (logical not).
△ - and ▽ -nodes have at least one child.
◊-nodes have exactly one child. Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a △-node is the one that evaluates to 1 if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ▽-node represents the Boolean function that evaluates to 1 if and only if the Boolean function of at least one child evaluates to 1. Finally, a ◊-node represents the complement of the Boolean function of its child, i.e. the one that evaluates to 1 if and only if the Boolean function of its child evaluates to 0.
PDAG, BDD, and NNF:
Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some particular properties. For example, the Boolean function f(x1, x2, x3) = (¬x1 ∧ ¬x2 ∧ ¬x3) ∨ (x1 ∧ x2) ∨ (x2 ∧ x3) can be represented as a BDD, an NNF, or a general PDAG.
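The node semantics above can be sketched as a small evaluator in Python (the node representation and names are illustrative, not a standard API). Note that the variable node for x2 is shared by two parents, which is what makes the structure a DAG rather than a tree:

```python
class Node:
    """A PDAG node: kind is 'T', 'F', 'var', 'and', 'or', or 'not'."""
    def __init__(self, kind, var=None, children=()):
        self.kind = kind
        self.var = var
        self.children = list(children)

def evaluate(node, assignment):
    """Evaluate the Boolean function rooted at `node` under `assignment`."""
    if node.kind == 'T':
        return True
    if node.kind == 'F':
        return False
    if node.kind == 'var':
        return assignment[node.var]
    if node.kind == 'and':   # 1 iff all children evaluate to 1
        return all(evaluate(c, assignment) for c in node.children)
    if node.kind == 'or':    # 1 iff at least one child evaluates to 1
        return any(evaluate(c, assignment) for c in node.children)
    if node.kind == 'not':   # complement of the single child
        return not evaluate(node.children[0], assignment)
    raise ValueError(node.kind)

# f(x1, x2, x3) = (not x1 and not x2 and not x3) or (x1 and x2) or (x2 and x3)
x1, x2, x3 = (Node('var', var=v) for v in ('x1', 'x2', 'x3'))
f = Node('or', children=[
    Node('and', children=[Node('not', children=[x1]),
                          Node('not', children=[x2]),
                          Node('not', children=[x3])]),
    Node('and', children=[x1, x2]),   # x2 is shared: the graph is a DAG
    Node('and', children=[x2, x3]),
])
print(evaluate(f, {'x1': False, 'x2': False, 'x3': False}))  # True
print(evaluate(f, {'x1': True, 'x2': False, 'x3': True}))    # False
```

A naive recursive evaluator like this may revisit shared nodes; a practical implementation would memoize results per node to exploit the sharing.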
**TrkB IRES**
TrkB IRES:
The TrkB internal ribosome entry site (IRES) is an RNA element present in the 5' UTR of the TrkB mRNA. TrkB is a neurotrophin receptor which is essential for the development and maintenance of the nervous system. The IRES element allows cap-independent translation of TrkB, which may be needed for efficient translation in neuronal dendrites.
**Risky sexual behavior**
Risky sexual behavior:
Risky sexual behavior describes activity that increases the probability that a person engaging in sexual activity with another person infected with a sexually transmitted infection will become infected or become pregnant, or make a partner pregnant. It can mean two similar things: the behavior itself, and the description of the partner's behavior. The behavior could be unprotected vaginal, oral, or anal intercourse. The partner could be a nonexclusive partner, HIV-positive, or an intravenous drug user. Drug use is associated with risky sexual behaviors.
Factors:
Risky sexual behavior can be: Barebacking, i.e. sex without a condom.
Mouth-to-genital contact.
Starting sexual activity at a young age.
Having multiple sex partners.
Having a high-risk partner, someone who has multiple sex partners or infections.
Anal sex without condom and proper lubrication.
Sex with a partner who has ever injected drugs.
Engaging in sex work. Risky sexual behavior includes unprotected intercourse, multiple sex partners, and illicit drug use. The use of alcohol and illicit drugs greatly increases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. Trauma from penile-anal sex has been identified as a risky sexual behavior. Risky sexual behaviors can lead to serious consequences both for the person and their partner(s), sometimes including cervical cancer, ectopic pregnancy and infertility. An association exists between those with a higher incidence of body art (body piercings and tattoos) and risky sexual behavior.
Epidemiology:
According to the National Youth Risk Behavior Survey, 19% of all sexually active adolescents in the US consumed alcohol or used other drugs before their last sexual intercourse. In contrast, adolescents who reported no substance use were found to be the least likely to engage in sexual risk-taking. Most Canadian and American adolescents aged 15 to 19 years describe having had sexual intercourse at least one time. In the same population, 23.9% of Canadian and 45.5% of American adolescent females describe having sex with two or more sexual partners during the previous year. Of the males in the same population, 32.1% of Canadian males had two or more partners, and 50.8% of American males describe a similar experience. Alcohol is the most commonly used substance among youth aged 18-25 years; 10% of young adults had an alcohol use disorder in 2018, which is greater than the prevalence among all other age cohorts. Research indicates that alcohol can lead to risky sexual behavior including lack of condom use, sexual intercourse with a non-primary partner, as well as a lower likelihood of using contraception in general. Among older age cohorts, a similar positive trend can be observed in risky sexual behavior when combined with alcohol use. For instance, research on older men who have sex with men (MSM) showed that the likelihood of engaging in risky sexual activities increased with the use of alcohol and other drugs.
Treatment and interventions:
There are several factors linked to risky sexual behaviors. These include inconsistent condom use, alcohol use, polysubstance abuse, depression, lack of social support, recent incarceration, residing with a partner, and exposure to intimate partner violence and childhood sexual abuse. Further research is needed to establish the exact causal relationship between these factors and risky sexual behaviors. Sexual health risk reduction can include motivational exercises, assertiveness skills, and educational and behavioral interventions. Counseling developed and implemented for people with severe mental illness may improve participants' knowledge, attitudes, beliefs, behaviors or practices (including assertiveness skills) and could lead to a reduction in risky sexual behavior. There are several studies on the management of risky sexual behavior among youth, with most focusing on the prevention of sexually transmitted infections (STIs) such as HIV. A meta-analysis evaluating prevention interventions among adolescents offers support for these programs in contributing to successful outcomes such as decreased incident STIs, increased condom use, and decreased or delayed penetrative sex. The findings showed that most interventions were administered in a group format and involved psychoeducation on HIV/AIDS and active interpersonal skills-training, with some additionally focusing on self-management skills-training and condom information/demonstrations. Some evidence suggests that family interventions may be beneficial in preventing long-term risky sexual behavior in early adulthood.
**Hobonichi Techo**
Hobonichi Techo:
The Hobonichi Techo (ほぼ日手帳, Hobonichi Techō) is a popular Japanese brand notebook/daily planner manufactured by Hobo Nikkan Itoi Shinbun (Hobonichi).
Features:
The planner's pages feature a 4 mm lined grid to maximize customization and is printed on Tomoe River (Tomoegawa) paper, a very thin and high quality paper. It is resistant to bleeding and feathering of inks and paints, especially fountain pen inks and watercolors, though alcohol-based pens will bleed through. The book is structured with a yearly overview section, monthly pages, various informational pages in the back, and most importantly, one full page dedicated to each day. Daily pages display the current moon phase, the day/week of the year, and short quotations from a variety of sources. The planner may be used in an optional cover with several useful features like card pockets, bookmarks, and a pen holder which doubles as a clasp. Covers are offered in a variety of materials and are sold in a different set of designs every year. The planner and covers are available in two sizes, the A6 'Original', and the larger A5 'Cousin'. Each Hobonichi Techo has a unique serial number printed on the final page, as a mark of its authenticity.
English-language version:
In 2013, Hobonichi collaborated with designer brand ARTS&SCIENCE to release an English-language version of the planner for the first time. The English planner is available only in A6 size, differentiated by a black cover with gold stamped Japanese characters for planner, "手帳" (Techō), and the ARTS&SCIENCE logo, as opposed to the cream cover of the Japanese version.
**Thermal acoustic imaging**
Thermal acoustic imaging:
Thermal acoustic imaging (TAI) is a proprietary active thermographic inspection process developed by Pratt and Whitney (P&W) in 2005; TAI is a nondestructive testing (NDT) method to detect internal and external cracking of hollow core turbofan engine fan blades. TAI is performed to inspect the PW4000 112-inch (2,800 mm) diameter fan blades in an enclosed air-conditioned room within P&W's overhaul and repair facility in East Hartford, Connecticut.
Technical description:
In the TAI process, sound energy is applied to excite the fan blade. If a discontinuity exists in the metal, the excitation will cause each side of the contacting discontinuity to move, resulting in frictional heating. The frictional heating is detected on the surface of the fan blade by a thermal imaging sensor. To examine a complete fan blade, the convex and concave surfaces of the fan blade airfoil are divided into zones and the computer controlled thermal sensor takes an image of each zone while sound energy is applied. After both sides of the fan blade have been completely scanned, the images are processed by a computer and then displayed on a monitor for evaluation by an inspector. The computer can enhance the image to assist the inspector in evaluating any indications. Some indeterminate indications may require reinspection, which in turn may require repainting of the fan blade and repeating the TAI process. If a fan blade has an indication the inspector is not able to evaluate conclusively, the inspector should forward the images along with the fan blade to a Process Engineer for further evaluation and possible application of alternative NDT methods such as ultrasonic and/or x-ray inspection.
History:
In 2005, when TAI was initiated, P&W, following standard NDT industry practice, categorized the TAI as a new and emerging technology that allowed TAI to be performed without establishing a formal training program and certification requirements.
In 2018, P&W continued to categorize TAI as a new and emerging technology, despite the manufacture and subsequent TAI inspection of over 9,000 fan blades. In the final report on the 2018 United Airlines Flight 1175 (UA1175) contained engine failure of its PW4000-112 series engine, where the fractured fan blade was found to have had a rejectable indication at the previous TAI inspection that was not properly identified, the National Transportation Safety Board faulted P&W for this, concluding the probable cause of the UA1175 incident was: the fracture of a fan blade due to P&W's continued classification of the TAI inspection process as a new and emerging technology that permitted them to continue accomplishing the inspection without having to develop a formal, defined initial and recurrent training program or an inspector certification program. The lack of training resulted in the inspector making an incorrect evaluation of an indication that resulted in a blade with a crack being returned to service where it eventually fractured.
After this incident, P&W initiated an overinspection and reviewed the TAI inspection records for all 9,606 previously inspected PW4000 112-inch fan blades. During the overinspection, two fan blades, in service at Korean Air and United Airlines, had TAI indications that could not be resolved. Subsequent x-ray inspection of both revealed peening shot in the cavity in the area where the previous TAI indication had been reported. P&W also reported that between December 2004 and the time of the UA1175 incident in 2018, cracks had been detected in five PW4000 112-inch fan blades. One was identified visually and the other four were detected by TAI. On February 23, 2021, four days after a similar contained engine failure incident that occurred in another PW4000 engine on United Airlines Flight 328 (UA328), the U.S. Federal Aviation Administration (FAA) issued an Emergency Airworthiness Directive that required U.S. operators of airplanes equipped with Pratt & Whitney PW4000-112 engines to inspect these engines before further flight. After reviewing the available data and considering other safety factors, the FAA determined that operators must conduct a TAI inspection of the large titanium fan blades located at the front of each engine. The FAA noted that TAI technology can detect cracks on the interior surfaces of the hollow fan blades, or in areas that cannot be seen during a visual inspection. The previous inspection interval for this engine was 6,500 flight cycles.
**Tryptamine**
Tryptamine:
Tryptamine is an indoleamine metabolite of the essential amino acid tryptophan. The chemical structure is defined by an indole (a fused benzene and pyrrole ring) and a 2-aminoethyl group at the second carbon (the third aromatic atom, the first being the heterocyclic nitrogen). The structure of tryptamine is a shared feature of certain aminergic neuromodulators including melatonin, serotonin, bufotenin and psychedelic derivatives such as dimethyltryptamine (DMT), psilocybin, psilocin and others. Tryptamine has been shown to activate trace amine-associated receptors expressed in the mammalian brain, and regulates the activity of dopaminergic, serotonergic and glutamatergic systems. In the human gut, symbiotic bacteria convert dietary tryptophan to tryptamine, which activates 5-HT4 receptors and regulates gastrointestinal motility. Multiple tryptamine-derived drugs have been developed to treat migraines, while trace amine-associated receptors are being explored as a potential treatment target for neuropsychiatric disorders. For a list of tryptamine derivatives, see: List of substituted tryptamines.
Natural occurrences:
For a list of plants, fungi and animals containing tryptamines, see List of psychoactive plants and List of naturally occurring tryptamines.
Mammalian brain Endogenous levels of tryptamine in the mammalian brain are less than 100 ng per gram of tissue. However, elevated levels of trace amines have been observed in patients with certain neuropsychiatric disorders, such as bipolar depression and schizophrenia.
Mammalian gut microbiome Tryptamine is relatively abundant in the gut and feces of humans and rodents. Commensal bacteria, including Ruminococcus gnavus and Clostridium sporogenes in the gastrointestinal tract, possess the enzyme tryptophan decarboxylase, which aids in the conversion of dietary tryptophan to tryptamine. Tryptamine is a ligand for gut epithelial serotonin type 4 (5-HT4) receptors and regulates gastrointestinal electrolyte balance through colonic secretions.
Metabolism:
Biosynthesis To yield tryptamine in vivo, tryptophan decarboxylase removes the carboxylic acid group on the α-carbon of tryptophan. Synthetic modifications to tryptamine can produce serotonin and melatonin; however, these pathways do not occur naturally as the main pathway for endogenous neurotransmitter synthesis.
Catabolism Monoamine oxidases A and B are the primary enzymes involved in tryptamine metabolism, producing indole-3-acetaldehyde; however, it is unclear which isoform is specific to tryptamine degradation.
Mechanisms of action and biological effects:
Neuromodulation Tryptamine can weakly activate the trace amine-associated receptor, TAAR1 (hTAAR1 in humans). Limited studies have considered tryptamine to be a trace neuromodulator capable of regulating the activity of neuronal cell responses without binding to the associated postsynaptic receptors.
hTAAR1 hTAAR1 is a stimulatory G-protein coupled receptor (GPCR) that is weakly expressed in the intracellular compartment of both pre- and postsynaptic neurons. Tryptamine and other hTAAR1 agonists can increase neuronal firing by inhibiting neurotransmitter recycling through cAMP-dependent phosphorylation of the monoamine reuptake transporter. This mechanism increases the amount of neurotransmitter in the synaptic cleft, subsequently increasing postsynaptic receptor binding and neuronal activation. Conversely, when hTAAR1 are colocalized with G protein-coupled inwardly-rectifying potassium channels (GIRKs), receptor activation reduces neuronal firing by facilitating membrane hyperpolarization through the efflux of potassium ions. The balance between the inhibitory and excitatory activity of hTAAR1 activation highlights the role of tryptamine in the regulation of neural activity. Activation of hTAAR1 is under investigation as a novel treatment for depression, addiction, and schizophrenia. hTAAR1 is primarily expressed in brain structures associated with dopamine systems, such as the ventral tegmental area (VTA), and serotonin systems in the dorsal raphe nuclei (DRN). Additionally, the hTAAR1 gene is localized at 6q23.2 on the human chromosome, which is a susceptibility locus for mood disorders and schizophrenia. Activation of TAAR1 suggests a potential novel treatment for neuropsychiatric disorders, as TAAR1 agonists produce anti-depressive activity, increased cognition, reduced stress and anti-addiction effects.
Gastrointestinal motility Tryptamine produced by mutualistic bacteria in the human gut activates serotonin GPCRs ubiquitously expressed along the colonic epithelium. Upon tryptamine binding, the activated 5-HT4 receptor undergoes a conformational change which allows its Gs alpha subunit to exchange GDP for GTP, and its liberation from the 5-HT4 receptor and βγ subunit. GTP-bound Gs activates adenylyl cyclase, which catalyzes the conversion of ATP into cyclic adenosine monophosphate (cAMP). cAMP opens chloride and potassium ion channels to drive colonic electrolyte secretion and promote intestinal motility.
**Rasterisation**
Rasterisation:
In computer graphics, rasterisation (British English) or rasterization (American English) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes). The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.
Etymology:
The term "rasterisation" comes from German Raster 'grid, pattern, schema', and Latin rāstrum 'scraper, rake'.
2D images:
Line primitives Bresenham's line algorithm is an example of an algorithm used to rasterize lines.
Circle primitives Algorithms such as the midpoint circle algorithm are used to render circles onto a pixelated canvas.
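Bresenham's algorithm mentioned above can be sketched in Python; this is the standard all-octant, integer-only error-accumulation form:

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize the line from (x0, y0) to (x1, y1) using only
    integer arithmetic; returns the list of covered pixels."""
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction along x
    sy = 1 if y0 < y1 else -1   # step direction along y
    err = dx + dy               # accumulated error term
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:            # error favors a horizontal step
            err += dy
            x0 += sx
        if e2 <= dx:            # error favors a vertical step
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 3, 1))  # [(0, 0), (1, 0), (2, 1), (3, 1)]
```

Because it avoids floating-point arithmetic entirely, this formulation maps well onto the simple fixed-function hardware that historically performed line rasterization.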
3D images:
Rasterization is one of the typical techniques of rendering 3D models. Compared with other rendering techniques such as ray tracing, rasterization is extremely fast and therefore used in most realtime 3D engines. However, rasterization is simply the process of computing the mapping from scene geometry to pixels and does not prescribe a particular way to compute the color of those pixels. The specific color of each pixel is assigned by a pixel shader (which in modern GPUs is completely programmable). Shading may take into account physical effects such as light position, their approximations or purely artistic intent.
The process of rasterizing 3D models onto a 2D plane for display on a computer screen ("screen space") is often carried out by fixed function (non-programmable) hardware within the graphics pipeline. This is because there is no motivation for modifying the techniques for rasterization used at render time and a special-purpose system allows for high efficiency.
Triangle rasterization Polygons are a common representation of digital 3D models. Before rasterization, individual polygons are typically broken down into triangles, so a typical problem in 3D rasterization is rasterizing a triangle. Two properties are usually required of triangle rasterization algorithms: rasterizing two adjacent triangles (i.e. those that share an edge) should leave no holes (non-rasterized pixels) between them, so that the rasterized area is completely filled (just as the surfaces of adjacent triangles are); and no pixel should be rasterized more than once, i.e. the rasterized triangles must not overlap. This guarantees that the result does not depend on the order in which the triangles are rasterized; overdrawing pixels would also waste computing power on pixels that are overwritten. This leads to establishing rasterization rules that guarantee the above conditions. One such set of rules is called the top-left rule: a pixel is rasterized if and only if its center lies inside the triangle, or lies exactly on a triangle edge (or multiple edges, in the case of corners) that is a top or left edge (in the case of corners, all such edges must be top or left edges). A top edge is an edge that is exactly horizontal and lies above the other edges, and a left edge is a non-horizontal edge that is on the left side of the triangle.
3D images:
This rule is implemented e.g. by Direct3D and many OpenGL implementations (even though the specification doesn't define it and only requires a consistent rule).
Quality:
The quality of rasterization can be improved by antialiasing, which creates "smooth" edges. Sub-pixel precision is a method which takes into account positions on a finer scale than the pixel grid and can produce different results even if the endpoints of a primitive fall into the same pixel coordinates, producing smoother animation of movement. Simple or older hardware, such as the PlayStation 1, lacked sub-pixel precision in 3D rasterization. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**English for specific purposes**
English for specific purposes:
English for specific purposes (ESP) is a subset of English as a second or foreign language. It usually refers to teaching the English language to university students or people already in employment, with reference to the particular vocabulary and skills they need. As with any language taught for specific purposes, a given course of ESP will focus on one occupation or profession, such as Technical English, Scientific English, English for medical professionals, English for waiters, English for tourism, etc. Despite the seemingly limited focus, a course of ESP can have a wide-ranging impact, as is the case with Environmental English. English for academic purposes, taught to students before or during their degrees, is one sort of ESP, as is Business English. Aviation English is taught to pilots, air traffic controllers and civil aviation cadets to enable clear radio communications.
Definition:
Absolute characteristics ESP is defined to meet the specific needs of the learners.
ESP makes use of underlying methodology and activities of the discipline it serves.
ESP is centered on the language appropriate to these activities in terms of grammar, lexis, register, study skills, discourse and genre.
Definition:
Variable characteristics Strevens (1988): ESP may be, but is not necessarily: restricted as to the language skills to be learned (e.g. reading only); not taught according to any pre-ordained methodology (pp. 1–2). Dudley-Evans & St John (1998): ESP may be related to or designed for specific disciplines; ESP may use, in specific teaching situations, a different methodology from that of general English; ESP is likely to be designed for adult learners, either at a tertiary level institution or in a professional work situation (it could, however, be for learners at secondary school level); ESP is generally designed for intermediate or advanced students; most ESP courses assume some basic knowledge of the language system, but it can be used with beginners (pp. 4–5).
Teaching:
ESP is taught in many universities around the world. Many professional associations of teachers of English (TESOL, IATEFL) have ESP sections. Much attention is devoted to ESP course design. ESP teaching has much in common with English as a Foreign or Second Language and English for Academic Purposes (EAP). The quickly developing field of Business English can be considered part of the larger concept of English for Specific Purposes.
Teaching:
ESP differs from standard English teaching in that the teacher must not only be proficient in standard English but also knowledgeable in a technical field. When doctors from other countries learn English, they need to learn the names of the tools, the naming conventions, and the methodologies of their profession before they can ethically perform surgery. ESP courses for medicine would be relevant to any medical profession, just as English for electrical engineering would benefit a foreign engineer. Some ESP scholars recommend a "two layer" ESP course: a first layer covering the generic knowledge of the specific field of study, and a second layer focusing on the specifics of the individual's specialization.
Notes:
Hutchinson, T. & A. Waters. 1987. English for Specific Purposes: A learning-centred approach. Cambridge: Cambridge University Press.
Eric.ed.gov, Dudley-Evans, Tony. An Overview of ESP in the 1990s. In: The Japan Conference on English for Specific Purposes Proceedings (Aizuwakamatsu City, Fukushima, Japan, November 8, 1997) Amazon.co.uk, Dudley-Evans, Tony (1998). Developments in English for Specific Purposes: A multi-disciplinary approach. Cambridge University Press.
Basturkmen, Helen. Ideas and Options in English for Specific Purposes. Routledge, 2006. ISBN 978-0-8058-4418-4. Orr, Thomas, Ed. The Japan Conference on English for Specific Purposes Proceedings (Aizuwakamatsu City, Fukushima, November 8, 1997), Eric.ed.gov.
**Biorock**
Biorock:
Biorock (also seacrete) is a cement-like engineering material formed when a small electric current is passed between underwater metal electrodes placed in seawater causing dissolved minerals to accrete onto the cathode to form a thick layer of limestone. This 'accretion process' can be used to create building materials or to create artificial 'electrified reefs' for the benefit of corals and other sea-life. Discovered by Wolf Hilbertz in 1976, biorock was protected by patents and a trademark which have now expired.
History:
During the 1970s Professor Wolf Hilbertz, an architect by training, was studying seashells and reefs at the School of Architecture at the University of Texas. He was thinking about how humans could emulate the way corals grow. After preliminary work in 1975, he discovered in 1976 that by passing electric currents through salt water, over time a thick layer of various materials, including limestone, deposited on the cathode. Later experiments showed that the coating could thicken at a rate of 5 cm per year for as long as current flows.
History:
Hilbertz's original plan was to use this technology to grow low-cost structures in the ocean. He detailed his basic theory in a technical journal in 1979, believing that the process should not be patented so that it could be commercially exploited by anyone. However, having been let down a number of times, he incorporated a company, The Marine Resources Company, raised venture capital and filed a number of patents relating to biorock. He dissolved Marine Resources Company in 1982 as his focus shifted to creating artificial coral reefs (or electrified reefs) after meeting Thomas J. Goreau. Hilbertz formed a partnership with Goreau, who continued work on coral reef restoration and biorock after Hilbertz's death in 2007.
Process:
The chemical process that takes place on the cathode is as follows: calcium carbonate (aragonite) combines with magnesium, chloride and hydroxyl ions to slowly accrete around the cathode, coating it with a thick layer of material similar in composition to magnesium oxychloride cement. Over time, cathodic protection replaces the negative chloride ion (Cl−) with dissolved bicarbonate (HCO3−), hardening the coating into a hydromagnesite-aragonite mixture, with gaseous oxygen evolving through the porous structure. Compressive strength has been measured from 3,720 to 5,350 psi (25.6 to 36.9 MPa), comparable to the concrete used for sidewalks. The material grows rapidly, strengthens with age and is self-repairing whilst power is applied. The process is one that sequesters carbon dioxide rather than emitting it into the atmosphere. The electrical current, supplied at a low DC voltage (often <4 volts) and low current, is required on a continuous, pulsed or intermittent basis, and can therefore be generated nearby by a low-cost integrated renewable energy source such as a small floating solar panel array. One kilowatt hour of electricity accretes about 0.4 to 1.5 kilograms (0.9 to 3.3 lb) of biorock, depending on parameters such as depth, electric current, salinity and water temperature.
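The energy figure above lends itself to a quick back-of-the-envelope estimate. The sketch below is purely illustrative (the 100 W panel and 5 peak-sun hours are hypothetical example numbers, not from the source):

```python
def accretion_range_kg(kwh):
    """Rough biorock yield for a given energy input, using the
    0.4-1.5 kg per kWh range quoted above; actual yield depends on
    depth, electric current, salinity and water temperature."""
    return 0.4 * kwh, 1.5 * kwh

# A hypothetical 100 W floating solar panel at ~5 peak-sun hours/day
# delivers about 0.5 kWh/day, i.e. very roughly 0.2-0.75 kg of
# accreted material per day.
daily_lo, daily_hi = accretion_range_kg(0.5)
```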
Electrified reef:
Electrified reefs can be constructed using the biorock process, which provides a substrate on which corals thrive, being very similar to that of a natural reef. The structural element of the reef can be constructed out of low-cost rebar on which the rock will form, and can be created locally in a shape appropriate to the location and purpose. Power is supplied between this large metal structure (the cathode) and a much smaller anode. Coral also benefits from the electrified and oxygenated reef environment that forms around the cathode. High levels of dissolved oxygen make it highly attractive to marine organisms, particularly fin fish.
Patents:
US 4246075 "Mineral accretion of large surface structures, building components and elements" 1981 (expired) US 4440605 "Repair of reinforced concrete structures by mineral accretion" 1984 (expired) US 4461684 "Accretion coating and mineralization of materials for protection against biodegradation" 1984 (expired) US 5543034 "Method of enhancing the growth of aquatic organisms, and structures created thereby" 1996 (expired)
Trademark:
The term Biorock was protected by a trademark between 2000 and 2010, but can now be used without restriction.
Published works:
Hilbertz, W. H., Marine architecture: an alternative, in: Arch. Sci. Rev., 1976 Hilbertz, W. H., Mineral accretion technology: applications for architecture and aquaculture with D. Fletcher and C. Krausse, Industrial Forum, 1977 Hilbertz, W. H., Building Environments That Grow, in: The Futurist (June 1977): 148–49 Hilbertz, W. H. et al., Electrodeposition of Minerals in Sea Water: Experiments and Applications, in: IEEE Journal on Oceanic Engineering, Vol. OE-4, No. 3, pp. 94–113, 1979 Ortega, Alvaro, Basic Technology: Mineral Accretion for Shelter. Seawater as a Source for Building, MIMAR 32: Architecture in Development, No. 32, pp. 60–63, 1989 Hilbertz, W. H., Solar-generated construction material from sea water to mitigate global warming, in: Building Research & Information, Volume 19, Issue 4, July 1991, pages 242–255 Hilbertz, W. H., Solar-generated building material from seawater as a sink for carbon, Ambio 1992 Balbosa, Enrique Amat, Revista Arquitectura y Urbanismo, Vol. 15, no. 243, 1994 Goreau, T. J. + Hilbertz, W. H. + Evans, S. + Goreau, P. + Gutzeit, F. + Despaigne, C. + Henderson, C. + Mekie, C. + Obrist, R. + Kubitza, H., Saya de Malha Expedition, March 2002, 101 p., Sun&Sea e.V. Hamburg, Germany, August 2002
**Giovanni Ciccotti**
Giovanni Ciccotti:
Giovanni Ciccotti (born 19 December 1943 in Rome, Italy) is an Italian physicist. Ciccotti held the position of Professor of the Structure of Matter at the University of Rome La Sapienza until 2013. He is the author of more than a hundred articles on molecular dynamics and statistical mechanics. He worked with J. P. Ryckaert on new methods for molecular dynamics of constrained systems; see also the SHAKE algorithm.
Giovanni Ciccotti:
He has edited several books on molecular dynamics and statistical mechanics developments, including: "Molecular Dynamics Simulation of Statistical Mechanical Systems", E. Fermi 1985 Summer School. G. Ciccotti and W. G. Hoover Eds. North Holland, Amsterdam, 1986.
"Simulation of Liquids and Solids. Molecular Dynamics and Monte Carlo Methods in Statistical Mechanics. A Reprint Book". G. Ciccotti, D. Frenkel and I. R. McDonald, Eds. North Holland, Amsterdam, 1987.
"Monte Carlo and Molecular Dynamics of Condensed Matter Systems", Euroconference 1995, K. Binder and G. Ciccotti, Eds., SIF, Bologna, 1996.
"Simulation of Classical and Quantum Dynamics in Condensed Phase", Euroconference 1997, B. J. Berne, G. Ciccotti and D. F. Coker, Eds. World Scientific, Singapore, 1998.
**Google Cloud Storage**
Google Cloud Storage:
Google Cloud Storage is a RESTful online file storage web service for storing and accessing data on Google Cloud Platform infrastructure. The service combines the performance and scalability of Google's cloud with advanced security and sharing capabilities. It is an Infrastructure as a Service (IaaS) offering, comparable to Amazon S3. Unlike Google Drive, and judging from the two services' specifications, Google Cloud Storage is more suitable for enterprise use.
Feasibility:
The service is activated through the Google API Developer Console. Google Account holders must first access the service by logging in and agreeing to the Terms of Service, and then enable billing.
Design:
Google Cloud Storage stores objects (originally limited to 100 GiB, currently up to 5 TiB) in projects which are organized into buckets. All requests are authorized using Identity and Access Management policies or access control lists associated with a user or service account. Bucket names and keys are chosen so that objects are addressable using HTTP URLs: https://storage.googleapis.com/bucket/object http://bucket.storage.googleapis.com/object https://storage.cloud.google.com/bucket/object
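The addressing scheme above is simple string composition. A minimal illustrative helper (the bucket and object names are hypothetical, and assumed to be already URL-safe):

```python
def object_urls(bucket: str, obj: str) -> list[str]:
    """Build the three HTTP(S) addressing styles listed above for a
    Google Cloud Storage object."""
    return [
        f"https://storage.googleapis.com/{bucket}/{obj}",    # path style
        f"http://{bucket}.storage.googleapis.com/{obj}",     # virtual-hosted style
        f"https://storage.cloud.google.com/{bucket}/{obj}",  # browser endpoint
    ]

urls = object_urls("my-bucket", "photos/cat.png")
```

Note that object keys may contain slashes, which act only as a naming convention: the bucket namespace is flat, and the "path" is simply part of the object name.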
Features:
Google Cloud Storage offers four storage classes, identical in throughput, latency and durability. The four classes, Multi-Regional Storage, Regional Storage, Nearline Storage, and Coldline Storage, differ in their pricing, minimum storage durations, and availability.
Interoperability - Google Cloud Storage is interoperable with other cloud storage tools and libraries that work with services such as Amazon S3 and Eucalyptus Systems.
Consistency - Upload operations to Google Cloud Storage are atomic, providing strong read-after-write consistency for all upload operations.
Features:
Access Control - Google Cloud Storage uses access control lists (ACLs) to manage object and bucket access. An ACL consists of one or more entries, each granting a specific permission to a scope. Permissions define what someone can do with an object or bucket (for example, READ or WRITE). Scopes define who the permission applies to: for example, a specific user or a group of users (such as Google account email addresses, a Google Apps domain, or public access). Resumable Uploads - Google Cloud Storage provides a resumable data transfer feature that allows users to resume upload operations after a communication failure has interrupted the flow of data.
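The entry/scope/permission model described above can be illustrated with a toy data structure. This is a sketch of the semantics only, not the real Cloud Storage API; the scope strings, the `AclEntry` type and the assumption that OWNER implies WRITE implies READ are illustrative simplifications:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclEntry:
    scope: str        # who: e.g. "user-alice@example.com", "allUsers"
    permission: str   # what: "READ", "WRITE" or "OWNER"

# Toy hierarchy: broader grants subsume narrower ones.
_RANK = {"READ": 0, "WRITE": 1, "OWNER": 2}

def is_allowed(acl, scope, permission):
    """True if any ACL entry grants `scope` (or the public scope
    "allUsers") at least the requested permission."""
    need = _RANK[permission]
    return any(
        entry.scope in (scope, "allUsers") and _RANK[entry.permission] >= need
        for entry in acl
    )
```

For example, an ACL containing only `AclEntry("allUsers", "READ")` makes an object publicly readable but writable by no one.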
**PSB-10**
PSB-10:
PSB-10 is a drug which acts as a selective antagonist for the adenosine A3 receptor (Ki value at the human A3 receptor: 0.44 nM), with high selectivity over the other three adenosine receptor subtypes (Ki values at the human A1, A2A and A2B receptors: 4.1, 3.3 and 30 μM respectively). Further pharmacological experiments in a [35S]GTPγS binding assay using hA3-CHO cells indicated that PSB-10 acts as an inverse agonist (IC50 = 4 nM). It has been shown to produce anti-inflammatory effects in animal studies. Simple xanthine derivatives such as caffeine and DPCPX generally have low affinity for the A3 subtype and must be extended by expanding the ring system and adding an aromatic group to give high A3 affinity and selectivity. The affinity towards the adenosine A3 subtype was measured against the radioligand PSB-11.
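The fold-selectivity implied by the Ki values above is just a ratio; a quick check (units converted to molar for the division):

```python
# Fold-selectivity of PSB-10 for the A3 receptor, computed from the
# Ki values quoted above.
ki_a3 = 0.44e-9                                          # 0.44 nM, in M
ki_other = {"A1": 4.1e-6, "A2A": 3.3e-6, "A2B": 30e-6}   # in M

fold_selectivity = {r: ki / ki_a3 for r, ki in ki_other.items()}
# roughly 9300-fold (A1), 7500-fold (A2A) and 68000-fold (A2B)
```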
**Spark plug**
Spark plug:
A spark plug is an electrical device used in an internal combustion engine to produce a spark which ignites the air-fuel mixture in the combustion chamber. As part of the engine's ignition system, the spark plug receives high-voltage electricity (generated by an ignition coil in modern engines and transmitted via a spark plug wire) which it uses to generate a spark in the small gap between the positive and negative electrodes. The timing of the spark is a key factor in the engine's behaviour, and the spark plug usually operates shortly before the combustion stroke commences.
Spark plug:
The spark plug was invented in 1860; however, its use only became widespread after the invention of the ignition magneto in 1902. Diesel engines use compression ignition (instead of spark ignition), and therefore do not normally use spark plugs.
Design:
The main elements of a spark plug are the shell, insulator, central electrode and side electrode (also known as the "ground strap"). The main part of the insulator is typically made from sintered alumina (Al2O3), a hard ceramic material with high dielectric strength. In marine engines, the shell of the spark plug is often a double-dipped, zinc-chromate coated metal. A spark plug passes through the wall of the combustion chamber, therefore it must also form part of the seal for the high-pressure gases within the combustion chamber.
Design:
Electrodes The central electrode is connected to the terminal through an internal wire. The central electrode is set up as the cathode, from which the electrons are ejected. This is because the central electrode is usually the hottest part of the plug, and thermionic emission principles mean it is easier to eject electrons from a hotter surface. The sharp tip of the central electrode also increases the electrical field strength, thus increasing the emission of electrons. The side electrode (which is colder and blunter) requires up to 45 percent higher voltage, therefore only wasted spark systems use the side electrode as the cathode. The side electrode is made from high-nickel steel and is welded or hot forged to the side of the metal shell. Spark plugs can contain up to four side electrodes surrounding the central electrode. Multiple side electrodes generally provide longer life: as the spark gap widens due to electric discharge wear, the spark moves to another, closer ground electrode. The disadvantage of multiple side electrodes is that a shielding effect can occur for each electrode, leading to a less efficient burn and increased fuel consumption.
Design:
Gap size The distance between the tip of the central electrode and the side electrode is called the "spark plug gap", and it is a key factor in the function of a spark plug. Spark plug gaps for car engines are typically 0.6 to 1.8 mm (0.024 to 0.071 in). Modern engines (using solid-state ignition systems and electronic fuel injection) typically use larger gaps than older engines that use breaker point distributors and carburetors.
Design:
Smaller plug gap sizes usually are more reliable at producing a spark, however the spark may be too weak to ignite the fuel-air mixture. A larger plug gap size will produce a stronger spark, however the spark might not always be produced (such as at high RPM). Gap adjustment is not recommended for iridium and platinum spark plugs, because there is a risk of damaging a metal disk welded to the electrode.
Design:
Wasted spark applications Wasted spark systems place a greater strain upon spark plugs since they alternately fire electrons in both directions (from the ground electrode to the central electrode, not just from the central electrode to the ground electrode). As a result, vehicles with such a system should have precious metals on both electrodes, not just on the central electrode, in order to increase service replacement intervals since they wear down the metal more quickly in both directions, not just one.
Indexing of plugs:
"Indexing" of plugs upon installation involves installing the spark plug so that the open area of the gap (i.e. the side not shrouded by the side electrode), faces the center of the combustion chamber. This is claimed to improve ignition by maximising the exposure of the fuel-air mixture to the spark in every cylinder.
Indexing is accomplished by either: Using thin washers to set the amount of thread engaged by the spark plug, thus determining the orientation of the spark plug within the cylinder head. This must be done individually for each plug, as the orientation of the gap with respect to the threads of the shell is usually random.
Producing spark plugs with a specific orientation of the gap relative to the threads of the shell. These spark plugs are usually designated as such by a suffix to the part number of the spark plug.
Heat range:
An important factor for a spark plug is the temperature that the tip is designed to withstand, called the heat range. Typical heat ranges for passenger car engines are usually between 500 and 850 °C (932 and 1,562 °F). A hotter spark plug has more insulation between itself and the cylinder head, so less heat is dissipated from the spark plug and the plug remains hotter. Temperatures higher than 450 °C (842 °F) are needed to prevent carbon build-up on the spark plug, while temperatures over 800 °C (1,470 °F) can cause overheating of the plug. Switching to a higher heat range is sometimes used to compensate for fuel delivery or oil consumption problems; however, this increases the risk of pre-ignition.
History:
Belgian-French engineer Étienne Lenoir is generally credited with the invention of the spark plug in 1860, due to its use in the early Lenoir gas engine. Several patents relating to electrical ignition systems were filed in the late 1890s, including by Serbian engineer Nikola Tesla, British engineer Frederick Richard Simms and German engineer Robert Bosch. The use of high-voltage spark plugs in commercially viable engines was only made possible after 1902, however, due to the invention of magneto-based ignition systems by Bosch engineer Gottlob Honold. Early manufacturers of spark plugs included the American company Champion, the British company Lodge Brothers and the London-based KLG (who pioneered the use of mica as an insulator).
History:
During the 1930s, American geologist Helen Blair Bartlett developed an alumina ceramic-based insulator for the spark plug. Polonium spark plugs were marketed by Firestone from 1940 to 1953. While the amount of radiation from the plugs was minuscule and not a threat to the consumer, the benefits of such plugs quickly diminished after approximately a month because of polonium's short half-life, and because buildup on the conductors would block the radiation that improved engine performance. The premise behind the polonium spark plug, as well as Alfred Matthew Hubbard's prototype radium plug that preceded it, was that the radiation would improve ionization of the fuel in the cylinder and thus allow the plug to fire more quickly and efficiently.
**Mixed farming**
Mixed farming:
Mixed farming is a type of farming which involves both the growing of crops and the raising of livestock.
Mixed farming:
Such agriculture occurs in countries across Asia such as India, Malaysia, Indonesia, Afghanistan and China, as well as in South Africa, Central Europe, Canada, and Russia. Though at first it mainly served domestic consumption, countries such as the United States and Japan now use it for commercial purposes. The cultivation of crops alongside the rearing of animals for meat, eggs or milk defines mixed farming.
Mixed farming:
For example, a mixed farm may grow cereal crops, such as wheat or rye, and also keep cattle, sheep, pigs or poultry. Often the dung from the cattle serves to fertilize the crops. Also some of the crops might be used as fodder for the livestock. Before horses were commonly used for haulage, many young male cattle on such farms were often not butchered as surplus for meat but castrated and used as bullocks to haul the cart and the plough.
Sources:
Devendra, C.; Thomas, D. (2002). "Crop–animal interactions in mixed farming systems in Asia". 71 (1–2): 27–40. doi:10.1016/S0308-521X(01)00034-8. Schiere, J. B.; Ibrahim, M. N. M.; Keulen, H. van (2002). "The role of livestock for sustainability in mixed farming: criteria and scenario studies under varying resource allocation". Agriculture, Ecosystems & Environment. 90 (2). doi:10.1016/S0167-8809(01)00176-1.
**Soundbar**
Soundbar:
A soundbar, sound bar or media bar is a type of loudspeaker that projects audio from a wide enclosure. It is much wider than it is tall, partly for acoustic reasons, and partly so it can be mounted above or below a display device (e.g. above a computer monitor or under a home theater or television screen). In a soundbar, multiple speakers are placed in a single cabinet, which helps to create stereo sound or a surround-sound effect. A separate subwoofer is typically included with, or may be used to supplement, a soundbar.
History:
Early passive versions simply integrated left, centre and right speakers into one enclosure, sometimes called an "LCR soundbar".
History:
Altec Lansing introduced a multichannel soundbar in 1998 called the Voice Of The Digital Theatre or the ADA106. It was a powered speaker system that offered stereo, Dolby Pro-Logic and AC3 surround sound from the soundbar and a separate subwoofer. The soundbar contained four 3″ full range drivers and two 1″ tweeters while the subwoofer housed one 8″ dual voice coil driver. It used Altec Lansing’s side-firing technology and algorithms to provide surround sound from the sides, rear and front. This configuration eliminated the wiring of separate speakers and the space they would require.
Advantages and disadvantages:
Soundbars are relatively small and can be easily positioned under a display, are easy to set up, and are usually less expensive than other stereo sound systems. However, because of their smaller size and lack of flexibility in positioning, soundbars do not fill a room with sound as well as separate-speaker stereo systems do.
Soundbar hybrid:
To combine the advantages of a soundbar and a separate-speaker stereo system, some manufacturers produce soundbar hybrids in which the soundbar provides the left, center, and right speakers, plus a (wireless) subwoofer and rear-left and rear-right speakers. Some producers make soundbars with left, center, and right speakers plus detachable, rechargeable rear-left and rear-right speakers. With the increasing availability of Dolby Atmos content since 2021, it has become increasingly important for soundbars to produce height effects. To deliver a realistic sense of height from a soundbar hybrid system, audio-specialized companies such as Nakamichi have developed proprietary upmixing algorithms, using a combination of spatial amplification, phase improvements, and height-effect sound-layer interlacing to deliver realistic vertical effects. Another method that soundbars may employ to deliver height effects is the use of up-firing speakers, which bounce height effects off the ceiling of the room towards the listener.
Usage:
Soundbars were primarily designed to generate strong sound with good bass response. Soundbar usage has increased steadily as the world has moved to flat-screen displays. Earlier television sets and display units were primarily CRT-based; hence the box was bigger, facilitating larger speakers with good response. But with flat-screen televisions the depth of the screen is reduced dramatically, leaving little room for speakers. As a result, the built-in speakers lack bass response. Soundbars help to bridge this gap.
**Kindling (sedative–hypnotic withdrawal)**
Kindling (sedative–hypnotic withdrawal):
Kindling due to substance withdrawal refers to the neurological condition which results from repeated withdrawal episodes from sedative–hypnotic drugs such as alcohol and benzodiazepines.
Kindling (sedative–hypnotic withdrawal):
Each withdrawal leads to more severe withdrawal symptoms than in previous episodes. Individuals who have had more withdrawal episodes are at an increased risk of very severe withdrawal symptoms, up to and including seizures and death. Long-term use of GABAergic-acting sedative–hypnotic drugs causes chronic GABA receptor downregulation as well as glutamate overactivity, which can lead to drug and neurotransmitter sensitization, central nervous system hyperexcitability, and excitotoxicity.
Symptoms:
Binge drinking is believed to increase impulsivity due to altered functioning of prefrontal–subcortical and orbitofrontal circuits. Binge drinking in alcoholics who have undergone repeated detoxification is associated with an inability to interpret facial expressions properly; this is believed to be due to kindling of the amygdala with resultant distortion of neurotransmission. Adolescents, females and young adults are most sensitive to the neuropsychological effects of binge drinking. Adolescence, particularly early adolescence, is a developmental stage which is particularly vulnerable to the neurotoxic and neurocognitive adverse effects of binge drinking because it is a time of significant brain development. Approximately 3 percent of people who are alcohol dependent experience psychosis during acute intoxication or withdrawal. Alcohol-related psychosis may manifest itself through a kindling mechanism. The mechanism of alcohol-related psychosis involves distortions to neuronal membranes and gene expression, as well as thiamine deficiency. It is possible in some cases that alcohol abuse via a kindling mechanism can cause the development of a chronic substance-induced psychotic disorder (e.g., schizophrenia). The effects of an alcohol-related psychosis include an increased risk of depression and suicide as well as psychosocial impairments. Repeated acute intoxication followed by acute withdrawal is associated with profound behavioural changes and neurobiological alterations in several brain regions. Much of the documented evidence of kindling caused by repeated detoxification concerns increased seizure frequency. Increased fear and anxiety and cognitive impairments are also associated with alcohol withdrawal kindling due to binge drinking or in alcoholics with repeated alcohol withdrawal experiences.
The impairments induced by binge drinking or repeated detoxification of alcoholics cause a loss of behavioural inhibition of the prefrontal cortex; the prefrontal cortex is mediated by subcortical systems such as the amygdala. This loss of behavioral control due to brain impairment predisposes an individual to alcoholism and increases the risk of an abstaining alcoholic relapsing. This impairment may also result in long-term adverse effects on emotional behavior. Impaired associative learning may make behavioural therapies involving conditioning approaches for alcoholics less effective.
Causes:
Binge drinking regimes are associated with an imbalance between inhibitory and excitatory amino acids and changes in monoamine release in the central nervous system, which increases neurotoxicity; this may result in cognitive impairments and psychological problems, and may cause irreversible brain damage in both adolescent and adult long-term binge drinkers. Similar to binge drinkers, individuals suffering from alcohol dependence develop changes to neurotransmitter systems, which occur as a result of kindling and sensitization during withdrawal. This progressively lowers the threshold needed to cause alcohol-related brain damage and cognitive impairments, leading to altered neurological function. The changes in activity of excitatory and inhibitory neurotransmitter systems are similar to those which occur in individuals suffering from limbic or temporal lobe epilepsy. Adaptational changes at the GABAA benzodiazepine receptor complex do not fully explain tolerance, dependence, and withdrawal from benzodiazepines. Other receptor complexes may be involved; in particular, the excitatory glutamate system is implicated. The involvement of glutamate in benzodiazepine dependence explains long-term potentiation as well as neuro-kindling phenomena. Use of a short-acting benzodiazepine at night as a sleeping pill causes repeated acute dependence followed by acute withdrawal. There is some evidence that a prior history of CNS depressant dependence (e.g. alcohol) increases the risk of dependence on benzodiazepines. Tolerance to drugs is commonly believed to be due to receptor down-regulation; however, there is very limited evidence to support this, and the hypothesis comes from animal studies using very high doses. Instead, other mechanisms, such as receptor uncoupling, may play a more important role in the development of benzodiazepine dependence; this may lead to prolonged conformational changes in the receptors or altered subunit composition of the receptors.
Pathophysiology:
Benzodiazepines Repeated benzodiazepine withdrawal episodes may result in neuronal kindling similar to that seen after repeated withdrawal episodes from alcohol, with resultant increased neuro-excitability. The glutamate system is believed to play an important role in this kindling phenomenon, with AMPA receptors, a subtype of glutamate receptor, being altered by repeated withdrawals from benzodiazepines. The changes which occur after withdrawal in AMPA receptors in animals have been found in regions of the brain which govern anxiety and seizure threshold; thus kindling may result in increased severity of anxiety and a lowered seizure threshold during repeated withdrawal. Changes in the glutamate system and GABA system may play an important role at different time points during benzodiazepine withdrawal syndrome.
Pathophysiology:
Alcohol Binge drinking may induce brain damage due to the repeated cycle of acute intoxication followed by an acute abstinence withdrawal state. Based on animal studies, regular long-term binge drinking is thought to be more likely to result in brain damage than chronic (daily) alcoholism. This is due to the 4- to 5-fold increase in glutamate release in the nucleus accumbens during the acute withdrawal state between binges, though only at a dose of 3 g/kg; at 2 g/kg there is no increase in glutamate release. In contrast, during withdrawal from chronic alcoholism only a 2- to 3-fold increase in glutamate release occurs. The high levels of glutamate release cause a chain reaction in other neurotransmitter systems. Chronic sustained alcoholism is thought by some researchers to be less brain-damaging than binge drinking because tolerance develops to the effects of alcohol and, unlike in binge drinking, repeated periods of acute withdrawal do not occur; however, many alcoholics typically drink in binges followed by periods of no drinking. Excessive glutamate release is a known major cause of neuronal cell death. Glutamate causes neurotoxicity due to excitotoxicity and oxidative glutamate toxicity. Evidence from animal studies suggests that some people may be more genetically sensitive to the neurotoxicity and brain damage associated with binge drinking regimes. Binge drinking activates microglial cells, which leads to the release of inflammatory cytokines and mediators such as tumour necrosis factor and nitric oxide, causing neuroinflammation and leading to neuronal destruction.

Repeated acute withdrawal from alcohol, as occurs in heavy binge drinkers, has been shown in several studies to be associated with cognitive deficits as a result of neural kindling; neural kindling due to repeated withdrawals is believed to be the mechanism of cognitive damage in both binge drinkers and alcoholics.
Neural kindling may explain the advancing pathogenesis and progressively deteriorating course of alcoholism, and may explain continued alcohol abuse as avoidance of distressing acute withdrawal symptoms which get worse with each withdrawal. Multiple withdrawals from alcohol are associated with long-term nonverbal memory impairment in adolescents and with poor memory in adult alcoholics. Adult alcoholics who experienced two or more withdrawals showed more frontal lobe impairments than alcoholics who had a history of one or no prior alcohol withdrawals. The finding of kindling in alcoholism is consistent with the mechanism of brain damage due to binge drinking and subsequent withdrawal.
Diagnosis:
Definition Kindling refers to the phenomenon of increasingly severe withdrawal symptoms, including an increased risk of seizures, that occurs as a result of repeated withdrawal from alcohol or other sedative–hypnotics with related modes of action. Ethanol (alcohol) has a very similar mechanism of tolerance and withdrawal to benzodiazepines, involving the GABAA receptors, NMDA receptors and AMPA receptors, but the majority of research into kindling has focused on alcohol. An intensification of anxiety and other psychological symptoms of alcohol withdrawal also occurs.
Treatment:
Failure to manage the alcohol withdrawal syndrome appropriately can lead to permanent brain damage or death. Acamprosate, an NMDA antagonist drug used to promote abstinence from alcohol, reduces excessive glutamate activity in the central nervous system and thereby may reduce excitotoxicity and withdrawal-related brain damage.
**A Syntopicon**
A Syntopicon:
A Syntopicon: An Index to The Great Ideas (1952; second edition, 1990) is a two-volume index, published as volumes 2 and 3 of Encyclopædia Britannica, Inc.'s collection Great Books of the Western World. Compiled by Mortimer J. Adler, an American philosopher, under the guidance of Robert Hutchins, president of the University of Chicago, the volumes were billed as a collection of and guide to the most important ideas, clustered under 102 "Great Ideas", of the Western canon. The terms “syntopicon” and "Great Ideas" were coined specifically for this undertaking, the former a Neo-Latin word meaning “a collection of topics.” The volumes catalogued what Adler and his team deemed to be the fundamental ideas contained in the works of the Great Books of the Western World, which stretched chronologically from Homer to Freud. The Syntopicon lists, under each idea, where every occurrence of the concept can be located in the collection's famous works. The Syntopicon was revised as part of the second edition of the collection.
History:
The Syntopicon was created to set the Great Books collection apart from previously published sets (such as The Harvard Classics). Robert Hutchins, at the time, in addition to being the president of the University of Chicago, also served as chairman of the Board of Editors of the Encyclopædia Britannica. Hutchins and Adler were recruited by the encyclopedia's publisher, William Benton, for a “special idea” which would increase the collection's marketability.

With this mission, Adler undertook a project that would consume over a decade of his life: identifying and indexing the western world's Great Ideas. In the end, the Syntopicon would require over 400,000 man-hours of reading and cost over two million dollars. Britannica publisher Benton joked at the Great Books presentation dinner that “the Syntopicon is said to be the most expensive two volumes editorially in all publishing history. How Hutchins and Adler achieved that unique distinction, the publisher is still trying to figure out. (Clifton Fadiman) assured me some day the story will be told. I’d like it whenever I can get it.”

The process of cataloging each appearance of one of the “Great Ideas” in all 431 works by 71 authors in the collection was so arduous that the Syntopicon nearly did not make it to print. Before it even came time to print, the budget had topped a million dollars and there was not even “a penny for paper” left. Adler persevered, however, having spent the previous eight years of his life on the project. He single-handedly raised funds by selling more expensive “Founders Editions” of the sets, and disobeyed the order to fire his entire staff. There were times during the process when he admitted: “the question was could we sell the plates for junk! Could we dispose of the plates as old metal?”

Adler felt, through it all, that he was creating something completely new. The Syntopicon, he felt, would be revolutionary, its release on par with such events as the creation of the first dictionary.
It would do for ideas what previous reference books had done for words and facts. He worked with a team of over 100 readers who met twice a week for years to discuss the readings and the ideas within them. Among his editorial team was a young Saul Bellow.
History:
In the end, he produced two volumes, which listed 102 Great Ideas of Western Civilization, intending them to be comprehensive of Western thought, alphabetically, from Angel to World. He realized no hierarchical scheme would command the assent of a pluralistic society where persons centered their thinking and loyalty around so many different ways of looking at things, though he allowed for non-prescriptive hierarchical listings of smaller ideas within the greater ideas. The set was released with much fanfare, including an honorary gifting of the first two editions to U.S. President Harry S. Truman and Elizabeth II. An internal memo from Encyclopædia Britannica, regarding a release party, reads: “The projected sequence of events is: (1) Mr. Hutchins kicks off the conference with a discussion of the Great Books movement, and the university’s and Britannica’s interest in the set; (2) Mr. Adler tells in detail of the contents of the set and the significance of the Syntopicon; (3) Cocktails.”

The second edition of Great Books of the Western World contained six additional volumes of works from the twentieth century, by authors such as James Joyce and Max Weber. The second edition of the Syntopicon added, to each of the articles on the great ideas, references to the works in the additional volumes, and revised the introduction to each great idea to include some mention of how the given idea was understood by the twentieth-century authors in the collection.
Purpose:
In addition to being a “special idea” that would set Great Books of the Western World apart, the Syntopicon serves four other purposes, outlined in its preface. The Syntopicon can serve as a reference book, as a book to be read, as an “instrument of liberal education,” and as “an instrument of discovery and research.”

Above all, however, the Syntopicon was created to unite the western world's canon. It was created to solve what Adler saw as a fundamental problem, that “different authors say the same thing in different ways, or use the same words to say quite different things.” By cataloging the things the great authors were saying in a more scientific manner, Adler hoped to show the underlying unity that ultimately existed in the collected works.
Purpose:
For Hutchins, a chief proponent of classical education, the creation of the Syntopicon held even greater implications. Hutchins felt that the ideas being discussed and cross-listed in the Syntopicon might be "powerful enough to save the world from self-destruction."
Content:
The Syntopicon consists of 102 chapters on the 102 Great Ideas. Each chapter is broken down into five distinct sections: the introduction, an outline of topics, references, cross-references, and additional readings. Adler penned all 102 introductions himself, giving a brief essay on the idea and its connection with the western canon. The outline of topics broke each idea down further, into as many as 15 sub-ideas. For instance, the first idea “Angel” is broken down into “Inferior deities or demi-gods in polytheistic religion,” “the philosophical consideration of pure intelligences, spiritual substances, supra-human persons” and seven other subtopics. After this is the references section (for instance, “inferior deities or demi-gods in polytheistic religion” can be found in Homer, Sophocles, Shakespeare, Milton, Bacon, Locke, Hegel, Goethe and more). Cross-references follow, where similar ideas are listed. Last is the additional readings, in which one could seek out more on the subject of “Angel.” The list of 102 ideas is broken between the two volumes, as follows: Volume I: Angel, Animal, Aristocracy, Art, Astronomy, Beauty, Being, Cause, Chance, Change, Citizen, Constitution, Courage, Custom and Convention, Definition, Democracy, Desire, Dialectic, Duty, Education, Element, Emotion, Eternity, Evolution, Experience, Family, Fate, Form, God, Good and Evil, Government, Habit, Happiness, History, Honor, Hypothesis, Idea, Immortality, Induction, Infinity, Judgment, Justice, Knowledge, Labor, Language, Law, Liberty, Life and Death, Logic, and Love.
Content:
Volume II: Man, Mathematics, Matter, Mechanics, Medicine, Memory and Imagination, Metaphysics, Mind, Monarchy, Nature, Necessity and Contingency, Oligarchy, One and Many, Opinion, Opposition, Philosophy, Physics, Pleasure and Pain, Poetry, Principle, Progress, Prophecy, Prudence, Punishment, Quality, Quantity, Reasoning, Relation, Religion, Revolution, Rhetoric, Same and Other, Science, Sense, Sign and Symbol, Sin, Slavery, Soul, Space, State, Temperance, Theology, Time, Truth, Tyranny and Despotism, Universal and Particular, Virtue and Vice, War and Peace, Wealth, Will, Wisdom, and World.
Reactions:
The release of the Syntopicon and the Great Books of the Western World was covered in the New York Herald, The New York Times, the Los Angeles Times, Time magazine, the Chicago Tribune, and local papers across the United States.
Reactions:
Full-page ads for Time magazine appeared in newspapers across the country, saying “What and Why is a Syntopicon?” Time published an article about the new, pivotal piece of reference material, and Adler wrote them back in thanks, saying that “I think the piece is both thoroughly accurate and very lively, which, considering the subject matter, is quite a feat.”

Look magazine ran a feature on the Syntopicon, displaying behind-the-scenes photos of the index's staff at work. In one picture, editor Joe Roddy is shown weighing the number of entries about love on a scale. “Discussions of love by great authors,” the caption reads, “outweigh sin and eternity.” Life magazine featured the scholars who had worked on the Syntopicon in front of 102 card catalogue drawers.

Despite the extensive press coverage and cross-country speeches, however, sales of the Syntopicon and the Great Books of the Western World were slow. Experienced encyclopedia salesmen had to be brought in to move the product, despite Hutchins’ dislike of this solution.
Reactions:
The Syntopicon's list was "arbitrary", as even Adler admitted. The press and others also found problems with the Syntopicon, and despite Adler’s predictions (“we predict that, as dictionaries are indispensable in the realm of words, and encyclopaedias in the realm of facts, so the Syntopicon will become indispensable in the realm of ideas” ) the Syntopicon has fallen into relative obscurity. At the time of its release, The New York Times Book Review declared: “Its defects are on the surface. One is the implication that great books are concerned only with ideas which can be logically analyzed—whereas many masterpieces of literature live in realms partially or wholly outside the realm of logic. Another is the conception that the chief purpose of reading a book is to crack its shell and reach its kernel—the form itself being unimportant decoration.”
Subsequent books:
Six Great Ideas In a succeeding book, Adler expressed his regret that the civil rights concept of Equality had not been selected. He attempted to rectify the omission with Six Great Ideas: Truth–Goodness–Beauty–Liberty–Equality–Justice (1981).
Subsequent books:
This is a much-repeated expression by Adler. He would often make remarks to cater to frequently posed questions about the Syntopicon, remarks that enjoyed popular re-publication in media outlets but had little bearing on the actual philosophical work or on the usefulness of the index to the set to which it belonged. People, particularly ideologues, would see the list of Great Ideas as representing a kind of popularity contest of significance among ideas, when the Great Ideas were actually employed to group ideas under names as short as possible, much as constellations, older and newer, established over the centuries, are used as a convenience to group stars in the night sky, and as even the most modern astronomers might use them to introduce the study of the stars to novices.
Subsequent books:
Adler deliberately included what could easily be regarded as alternate "constellations" made of smaller ideas in a list accompanying the Great Ideas content within the set called "Inventory of Terms". For example, the alternate "constellation" of the important idea of political equality, which, had it been classed with a Great Idea of Equality, would have been presented with ideas of more of a mathematical quality to express the idea comprehensively, and also would have included a large set of subtopics already in the set, such as:

- Liberty and equality for all under law
- Universal suffrage: the abolition of privileged classes
- The problem of economic justice: the choice between capitalism and socialism
- Justice and equality: the kinds of justice in relation to the measure and modes of equality and inequality
- Despotic and constitutional government with respect to political liberty and equality: the rights of the governed
- The character and extent of citizenship under different types of constitutions
- The qualifications for citizenship: extent of suffrage
- The organization of workmen and the formation of trade unions to protect labor's rights and interests
- The freedom of equals under government: the equality of citizenship
- Love between equals and unequals, like and unlike: the fraternity of citizenship
- The aims of political revolution: the seizure of power; the attainment of liberty, justice, equality

Propaedia Adler attempted another index, in one volume, the Propædia, for the fifteenth edition of the Encyclopædia Britannica. A traditional two-volume alphabetical index has since been produced for the more recent versions of the fifteenth edition, in addition to the Propædia.
Subsequent books:
1992 Edition The Syntopicon was republished as The Great Ideas: a Lexicon of Western Thought (1992), a single volume reprint of the commentary on all 102 ideas without including indexing to the Great Books or cross-references to the other Great Ideas.
**BleachBit**
BleachBit:
BleachBit is a free and open-source disk space cleaner, privacy manager, and computer system optimizer. The BleachBit source code is licensed under the GNU General Public License version 3.
History:
BleachBit was first publicly released on 24 December 2008 for Linux systems. The 0.2.1 release created some controversy by suggesting Linux needed a registry cleaner.
Version 0.4.0 introduced CleanerML, a standards-based markup language for writing new cleaners. On May 29, 2009, BleachBit version 0.5.0 added support for Windows XP, Windows Vista, and Windows 7. On September 16, 2009, version 0.6.4 introduced command-line interface support.

BleachBit is available for download through its website and the repositories of many Linux distributions.
Features:
- Identifying and removing Web cache, HTTP cookies, URL history, temporary files, log files, and Flash cookies for Firefox, Opera, Safari, APT, Google Chrome
- Removing unused localizations (also called locale files), which are translations of software
- Shredding files and wiping unallocated disk space to minimize data remanence
- Wiping unallocated disk space to improve data compression ratio for disk image backups
- Vacuuming Firefox's SQLite database, which suffers fragmentation
- Command line interface for scripting automation and headless operation
Technology:
BleachBit is written in the Python programming language and uses PyGTK.
Most of BleachBit's cleaners are written in CleanerML, an open standard XML-based markup language for writing cleaners. CleanerML deals not only with deleting files, but also executes more specialized actions, such as vacuuming an SQLite database (used, for example, to clean Yum).
Technology:
BleachBit's file shredder uses only a single, "secure" pass because its developers believe that there is a lack of evidence that multiple passes, such as the 35-pass Gutmann method, are more effective. They also assert that multiple passes are significantly slower and may give the user a false sense of security by overshadowing other ways in which privacy may be compromised.
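A single "secure" pass of this kind can be sketched in a few lines of Python. This is an illustrative sketch only, not BleachBit's actual implementation; the file name is hypothetical, and the caveat about filesystems is exactly the kind of limit the developers allude to.

```python
import os

def shred_file(path):
    """Illustrative single-pass shred (not BleachBit's actual code):
    overwrite the file's bytes with random data, force the write to
    disk, then delete it. On journaling or copy-on-write filesystems
    old blocks may still survive elsewhere, a limit of any overwrite."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # one pass of random data
        f.flush()
        os.fsync(f.fileno())        # push the overwrite to physical media
    os.remove(path)

# usage sketch with a throwaway file
with open("secret.txt", "wb") as f:
    f.write(b"confidential")
shred_file("secret.txt")
assert not os.path.exists("secret.txt")
```

The design point mirrors the developers' argument: additional passes only repeat the `os.urandom` write and multiply the runtime without changing what a journaling filesystem may retain elsewhere.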
Controversy:
In August 2016, Republican U.S. Congressman Trey Gowdy announced that he had seen notes from the Federal Bureau of Investigation (FBI), taken during an investigation of Hillary Clinton's emails, that stated that her staff had used BleachBit in order to delete tens of thousands of emails on her private server. Subsequently, then presidential nominee Donald Trump claimed Clinton had “acid washed” and “bleached” her emails, calling it “an expensive process.”

After the announcement, BleachBit's company website reportedly received increased traffic. In October 2016, the FBI released edited documents from their Clinton email investigation.
**Phase-shift keying**
Phase-shift keying:
Phase-shift keying (PSK) is a digital modulation process which conveys data by changing (modulating) the phase of a constant frequency carrier wave. The modulation is accomplished by varying the sine and cosine inputs at a precise time. It is widely used for wireless LANs, RFID and Bluetooth communication.
Phase-shift keying:
Any digital modulation scheme uses a finite number of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique pattern of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal – such a system is termed coherent (and referred to as CPSK).
Phase-shift keying:
CPSK requires a complicated demodulator, because it must extract the reference wave from the received signal and keep track of it, to compare each sample to. Alternatively, the phase shift of each symbol sent can be measured with respect to the phase of the previous symbol sent. Because the symbols are encoded in the difference in phase between successive samples, this is called differential phase-shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary PSK, as it is a 'non-coherent' scheme, i.e. there is no need for the demodulator to keep track of a reference wave. A trade-off is that it has more demodulation errors.
Introduction:
There are three major classes of digital modulation techniques used for transmission of digitally represented data:

- Amplitude-shift keying (ASK)
- Frequency-shift keying (FSK)
- Phase-shift keying (PSK)

All convey data by changing some aspect of a base signal, the carrier wave (usually a sinusoid), in response to a data signal. In the case of PSK, the phase is changed to represent the data signal. There are two fundamental ways of utilizing the phase of a signal in this way:

- By viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against; or
- By viewing the change in the phase as conveying information – differential schemes, some of which do not need a reference carrier (to a certain extent).

A convenient method to represent PSK schemes is on a constellation diagram. This shows the points in the complex plane where, in this context, the real and imaginary axes are termed the in-phase and quadrature axes respectively due to their 90° separation. Such a representation on perpendicular axes lends itself to straightforward implementation. The amplitude of each point along the in-phase axis is used to modulate a cosine (or sine) wave and the amplitude along the quadrature axis to modulate a sine (or cosine) wave. By convention, in-phase modulates cosine and quadrature modulates sine.
Introduction:
In PSK, the constellation points chosen are usually positioned with uniform angular spacing around a circle. This gives maximum phase-separation between adjacent points and thus the best immunity to corruption. They are positioned on a circle so that they can all be transmitted with the same energy. In this way, the moduli of the complex numbers they represent will be the same and thus so will the amplitudes needed for the cosine and sine waves. Two common examples are "binary phase-shift keying" (BPSK) which uses two phases, and "quadrature phase-shift keying" (QPSK) which uses four phases, although any number of phases may be used. Since the data to be conveyed are usually binary, the PSK scheme is usually designed with the number of constellation points being a power of two.
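The uniform angular spacing and equal energy are easy to verify numerically. A minimal sketch (the function name and the π/4 rotation for QPSK are illustrative choices):

```python
import cmath
import math

def psk_constellation(M, offset=0.0):
    """Return M constellation points uniformly spaced around the unit
    circle; `offset` rotates the whole constellation (e.g. pi/4 for the
    usual QPSK layout)."""
    return [cmath.exp(1j * (2 * math.pi * k / M + offset)) for k in range(M)]

qpsk = psk_constellation(4, offset=math.pi / 4)

# All points carry the same energy (unit modulus) ...
assert all(abs(abs(p) - 1.0) < 1e-12 for p in qpsk)

# ... and adjacent points are separated by 360/M = 90 degrees.
angles = sorted(math.atan2(p.imag, p.real) for p in qpsk)
gaps = [angles[i + 1] - angles[i] for i in range(len(angles) - 1)]
assert all(abs(g - math.pi / 2) < 1e-12 for g in gaps)
```

The same function gives BPSK with `M=2` and 8-PSK with `M=8`, which is why PSK constellation sizes are usually powers of two: each point then labels a whole number of bits.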
Binary phase-shift keying (BPSK):
BPSK (also sometimes called PRK, phase reversal keying, or 2PSK) is the simplest form of phase shift keying (PSK). It uses two phases which are separated by 180° and so can also be termed 2-PSK. It does not particularly matter exactly where the constellation points are positioned, and in this figure they are shown on the real axis, at 0° and 180°. Therefore, it handles the highest noise level or distortion before the demodulator reaches an incorrect decision. That makes it the most robust of all the PSKs. It is, however, only able to modulate at 1 bit/symbol (as seen in the figure) and so is unsuitable for high data-rate applications.
Binary phase-shift keying (BPSK):
In the presence of an arbitrary phase-shift introduced by the communications channel, the demodulator (see, e.g. Costas loop) is unable to tell which constellation point is which. As a result, the data is often differentially encoded prior to modulation.
BPSK is functionally equivalent to 2-QAM modulation.
Implementation The general form for BPSK follows the equation:

$$s_n(t) = \sqrt{\frac{2E_b}{T_b}} \cos\bigl(2\pi f t + \pi(1 - n)\bigr), \quad n = 0, 1,$$

where $E_b$ is the energy per bit and $T_b$ the bit duration. This yields two phases, 0 and π.

In the specific form, binary data is often conveyed with the following signals:

$$s_0(t) = \sqrt{\frac{2E_b}{T_b}} \cos(2\pi f t + \pi) = -\sqrt{\frac{2E_b}{T_b}} \cos(2\pi f t) \quad \text{for binary "0"}$$

$$s_1(t) = \sqrt{\frac{2E_b}{T_b}} \cos(2\pi f t) \quad \text{for binary "1"}$$

where f is the frequency of the base band.

Hence, the signal space can be represented by the single basis function

$$\phi(t) = \sqrt{\frac{2}{T_b}} \cos(2\pi f t),$$

where 1 is represented by $\sqrt{E_b}\,\phi(t)$ and 0 is represented by $-\sqrt{E_b}\,\phi(t)$. This assignment is arbitrary.
Binary phase-shift keying (BPSK):
The use of this basis function is shown at the end of the next section in a signal timing diagram. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator would produce. The bit-stream that causes this output is shown above the signal (the other parts of this figure are relevant only to QPSK). After modulation, the base band signal will be moved to the high frequency band by multiplying by $\cos(2\pi f_c t)$.

Bit error rate The bit error rate (BER) of BPSK under additive white Gaussian noise (AWGN) can be calculated as:

$$P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right).$$

Since there is only one bit per symbol, this is also the symbol error rate.
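This BER formula can be evaluated with only the standard library, since the Q-function is a scaled complementary error function. A small sketch (the function names are illustrative):

```python
import math

def q_function(x):
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """Theoretical BPSK bit error rate over AWGN, with Eb/N0 in dB."""
    ebn0 = 10 ** (ebn0_db / 10)
    return q_function(math.sqrt(2 * ebn0))

# The Q and erfc forms of the formula agree numerically:
ebn0 = 10 ** (9.6 / 10)
assert abs(bpsk_ber(9.6) - 0.5 * math.erfc(math.sqrt(ebn0))) < 1e-12
# and at a moderately high Eb/N0 the error rate is already tiny:
assert 1e-6 < bpsk_ber(9.6) < 1e-4
```

The equivalence used here, $Q(\sqrt{2\gamma}) = \tfrac{1}{2}\operatorname{erfc}(\sqrt{\gamma})$, is exactly the identity relating the two forms in the text.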
Quadrature phase-shift keying (QPSK):
Sometimes this is known as quadriphase PSK, 4-PSK, or 4-QAM. (Although the root concepts of QPSK and 4-QAM are different, the resulting modulated radio waves are exactly the same.) QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol, shown in the diagram with Gray coding to minimize the bit error rate (BER) – sometimes misperceived as twice the BER of BPSK.
Quadrature phase-shift keying (QPSK):
The mathematical analysis shows that QPSK can be used either to double the data rate compared with a BPSK system while maintaining the same bandwidth of the signal, or to maintain the data-rate of BPSK but halve the bandwidth needed. In this latter case, the BER of QPSK is exactly the same as the BER of BPSK – and believing differently is a common confusion when considering or describing QPSK. The transmitted carrier can undergo any number of phase changes.
Quadrature phase-shift keying (QPSK):
Given that radio communication channels are allocated by agencies such as the Federal Communications Commission giving a prescribed (maximum) bandwidth, the advantage of QPSK over BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth compared to BPSK - at the same BER. The engineering penalty that is paid is that QPSK transmitters and receivers are more complicated than the ones for BPSK. However, with modern electronics technology, the penalty in cost is very moderate.
Quadrature phase-shift keying (QPSK):
As with BPSK, there are phase ambiguity problems at the receiving end, and differentially encoded QPSK is often used in practice.
Implementation The implementation of QPSK is more general than that of BPSK and also indicates the implementation of higher-order PSK. Writing the symbols in the constellation diagram in terms of the sine and cosine waves used to transmit them:

$$s_n(t) = \sqrt{\frac{2E_s}{T_s}} \cos\!\left(2\pi f_c t + (2n - 1)\frac{\pi}{4}\right), \quad n = 1, 2, 3, 4,$$

where $E_s$ is the energy per symbol and $T_s$ the symbol duration.
This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed.
This results in a two-dimensional signal space with unit basis functions

$$\phi_1(t) = \sqrt{\frac{2}{T_s}} \cos(2\pi f_c t), \qquad \phi_2(t) = \sqrt{\frac{2}{T_s}} \sin(2\pi f_c t).$$

The first basis function is used as the in-phase component of the signal and the second as the quadrature component of the signal.

Hence, the signal constellation consists of the four signal-space points

$$\left(\pm\sqrt{\frac{E_s}{2}},\ \pm\sqrt{\frac{E_s}{2}}\right).$$
The factors of 1/2 indicate that the total power is split equally between the two carriers.
Comparing these basis functions with that for BPSK shows clearly how QPSK can be viewed as two independent BPSK signals. Note that the signal-space points for BPSK do not need to split the symbol (bit) energy over the two carriers in the scheme shown in the BPSK constellation diagram.
QPSK systems can be implemented in a number of ways. An illustration of the major components of the transmitter and receiver structure are shown below.
Quadrature phase-shift keying (QPSK):
Probability of error Although QPSK can be viewed as a quaternary modulation, it is easier to see it as two independently modulated quadrature carriers. With this interpretation, the even (or odd) bits are used to modulate the in-phase component of the carrier, while the odd (or even) bits are used to modulate the quadrature-phase component of the carrier. BPSK is used on both carriers and they can be independently demodulated.
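The two-independent-rails view can be sketched directly: odd-numbered bits set the sign of the in-phase (real) component, even-numbered bits the quadrature (imaginary) component, and each rail is demodulated on its own. This is an illustrative mapping, not any particular standard's bit assignment.

```python
import math

def qpsk_mod(bits):
    """Map bit pairs to complex symbols: odd-numbered bits drive the
    in-phase (real) rail, even-numbered bits the quadrature (imaginary)
    rail. Each rail is an independent BPSK map 0 -> -1, 1 -> +1, scaled
    so every symbol has unit energy."""
    assert len(bits) % 2 == 0
    a = 1 / math.sqrt(2)
    return [complex(a if bits[i] else -a, a if bits[i + 1] else -a)
            for i in range(0, len(bits), 2)]

def qpsk_demod(symbols):
    """Demodulate the two rails independently by the sign of I and Q."""
    bits = []
    for s in symbols:
        bits.append(1 if s.real > 0 else 0)
        bits.append(1 if s.imag > 0 else 0)
    return bits

data = [1, 1, 0, 0, 0, 1, 1, 0]
assert qpsk_demod(qpsk_mod(data)) == data
```

Note that the demodulator never looks at the other rail: a noise event on the I component can only corrupt the odd-numbered bit of its pair, which is exactly why the per-bit error probability matches BPSK.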
Quadrature phase-shift keying (QPSK):
As a result, the probability of bit-error for QPSK is the same as for BPSK:

$$P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right).$$

However, in order to achieve the same bit-error probability as BPSK, QPSK uses twice the power (since two bits are transmitted simultaneously).
The symbol error rate is given by:

$$P_s = 1 - (1 - P_b)^2 = 2Q\!\left(\sqrt{\frac{E_s}{N_0}}\right) - \left[Q\!\left(\sqrt{\frac{E_s}{N_0}}\right)\right]^2.$$
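The symbol error rate follows from the rail picture: a symbol is correct only if both of its independent BPSK rails are, hence $1 - (1 - P_b)^2$. A small numerical check (function names are illustrative):

```python
import math

def q_function(x):
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_symbol_error(esn0):
    """Ps = 1 - (1 - Pb)^2, with Pb = Q(sqrt(Es/N0)) per rail. Since
    Es = 2 Eb, this Pb is the same Q(sqrt(2 Eb/N0)) as for BPSK."""
    pb = q_function(math.sqrt(esn0))
    return 1 - (1 - pb) ** 2

# Algebraically identical to the expanded form 2Q(.) - Q(.)^2:
esn0 = 10.0
pb = q_function(math.sqrt(esn0))
assert abs(qpsk_symbol_error(esn0) - (2 * pb - pb ** 2)) < 1e-15
```

For small $P_b$ the quadratic term is negligible, which is where the high-SNR approximation $P_s \approx 2P_b$ in the next paragraph comes from.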
Quadrature phase-shift keying (QPSK):
If the signal-to-noise ratio is high (as is necessary for practical QPSK systems) the probability of symbol error may be approximated:

$$P_s \approx 2Q\!\left(\sqrt{\frac{E_s}{N_0}}\right) = \operatorname{erfc}\!\left(\sqrt{\frac{E_s}{2N_0}}\right) = \operatorname{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right).$$

The modulated signal is shown below for a short segment of a random binary data-stream. The two carrier waves are a cosine wave and a sine wave, as indicated by the signal-space analysis above. Here, the odd-numbered bits have been assigned to the in-phase component and the even-numbered bits to the quadrature component (taking the first bit as number 1). The total signal – the sum of the two components – is shown at the bottom. Jumps in phase can be seen as the PSK changes the phase on each component at the start of each bit-period. The topmost waveform alone matches the description given for BPSK above.
Quadrature phase-shift keying (QPSK):
The binary data that is conveyed by this waveform is: 11000110.
The odd-numbered bits (the 1st, 3rd, 5th and 7th: 1, 0, 0, 1) contribute to the in-phase component: 11000110. The even-numbered bits (the 2nd, 4th, 6th and 8th: 1, 0, 1, 0) contribute to the quadrature-phase component: 11000110.

Variants

Offset QPSK (OQPSK) Offset quadrature phase-shift keying (OQPSK) is a variant of phase-shift keying modulation using four different values of the phase to transmit. It is sometimes called staggered quadrature phase-shift keying (SQPSK).
Quadrature phase-shift keying (QPSK):
Taking four values of the phase (two bits) at a time to construct a QPSK symbol can allow the phase of the signal to jump by as much as 180° at a time. When the signal is low-pass filtered (as is typical in a transmitter), these phase-shifts result in large amplitude fluctuations, an undesirable quality in communication systems. By offsetting the timing of the odd and even bits by one bit-period, or half a symbol-period, the in-phase and quadrature components will never change at the same time. In the constellation diagram shown on the right, it can be seen that this will limit the phase-shift to no more than 90° at a time. This yields much lower amplitude fluctuations than non-offset QPSK and is sometimes preferred in practice.
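The 90° bound can be checked by enumerating constellation transitions: in plain QPSK both rails may flip between symbols, while in OQPSK only one rail changes per half-symbol instant. An illustrative sketch (the π/4-rotated constellation and bit labels are assumptions):

```python
import cmath
import itertools
import math

# Gray-coded QPSK points on the unit circle (the usual pi/4-rotated
# constellation; the (i, q) bit labels are illustrative assumptions).
points = {(i, q): complex(1 if i else -1, 1 if q else -1) / math.sqrt(2)
          for i, q in itertools.product((0, 1), repeat=2)}

def phase_jump(p1, p2):
    """Magnitude of the phase change between two symbols, in degrees."""
    return abs(math.degrees(cmath.phase(p2 / p1)))

# Plain QPSK: both rails may change at once, so jumps reach 180 degrees.
qpsk_jumps = {round(phase_jump(a, b))
              for a in points.values() for b in points.values()}
assert max(qpsk_jumps) == 180

# OQPSK: the rails change at different instants, so between any two
# consecutive half-symbol transitions at most one bit can flip.
oqpsk_jumps = set()
for (i, q), p in points.items():
    oqpsk_jumps.add(round(phase_jump(p, points[(1 - i, q)])))  # I flips
    oqpsk_jumps.add(round(phase_jump(p, points[(i, 1 - q)])))  # Q flips
assert max(oqpsk_jumps) == 90
```

The enumeration mirrors the argument in the text: offsetting the rails does not change the constellation, only which transitions are reachable at any one instant.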
The picture on the right shows the difference in the behavior of the phase between ordinary QPSK and OQPSK. It can be seen that in the first plot the phase can change by 180° at once, while in OQPSK the changes are never greater than 90°.
The modulated signal is shown below for a short segment of a random binary data-stream. Note the half symbol-period offset between the two component waves. The sudden phase-shifts occur about twice as often as for QPSK (since the signals no longer change together), but they are less severe. In other words, the magnitude of jumps is smaller in OQPSK when compared to QPSK.
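This behaviour can be checked numerically. The sketch below (assuming rectangular, unfiltered pulses and a made-up random bit-stream) samples the I and Q waveforms at half-symbol resolution and measures the largest phase jump with and without the half-symbol offset:

```python
import numpy as np

rng = np.random.default_rng(0)
i_levels = 2 * rng.integers(0, 2, 100) - 1   # ±1 in-phase symbol levels
q_levels = 2 * rng.integers(0, 2, 100) - 1   # ±1 quadrature symbol levels

# Two samples per symbol, so a half-symbol shift is a one-sample shift.
i_wave = np.repeat(i_levels, 2)
q_wave = np.repeat(q_levels, 2)

def max_phase_jump(i_w, q_w):
    """Largest phase change (degrees) between consecutive samples."""
    phase = np.angle(i_w + 1j * q_w)
    d = np.abs(np.angle(np.exp(1j * np.diff(phase))))  # wrapped difference
    return float(np.degrees(d.max()))

# Ordinary QPSK: I and Q can switch together -> jumps of up to 180 degrees.
qpsk_jump = max_phase_jump(i_wave, q_wave)

# OQPSK: delay the quadrature wave by one half-symbol sample, so I and Q
# never change at the same instant -> jumps limited to 90 degrees.
oqpsk_jump = max_phase_jump(i_wave[1:], np.roll(q_wave, 1)[1:])
```

With the offset applied, `oqpsk_jump` never exceeds 90°, while `qpsk_jump` reaches the full 180° whenever both components flip together.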
SOQPSK:

The license-free shaped-offset QPSK (SOQPSK) is interoperable with Feher-patented QPSK (FQPSK), in the sense that an integrate-and-dump offset QPSK detector produces the same output no matter which kind of transmitter is used. These modulations carefully shape the I and Q waveforms such that they change very smoothly, and the signal stays constant-amplitude even during signal transitions. (Rather than traveling instantly from one symbol to another, or even linearly, it travels smoothly around the constant-amplitude circle from one symbol to the next.) SOQPSK modulation can be represented as a hybrid of QPSK and MSK: SOQPSK has the same signal constellation as QPSK, but its phase is always continuous. The standard description of SOQPSK-TG involves ternary symbols. SOQPSK is one of the most widespread modulation schemes in LEO satellite communications.
π/4-QPSK:

This variant of QPSK uses two identical constellations which are rotated by 45° (π/4 radians, hence the name) with respect to one another. Usually, either the even or odd symbols are used to select points from one of the constellations and the other symbols select points from the other constellation. This also reduces the phase-shifts from a maximum of 180°, but only to a maximum of 135°, and so the amplitude fluctuations of π/4-QPSK are between those of OQPSK and non-offset QPSK.
One property this modulation scheme possesses is that if the modulated signal is represented in the complex domain, transitions between symbols never pass through 0. In other words, the signal does not pass through the origin. This lowers the dynamic range of fluctuations in the signal, which is desirable when engineering communications signals.
On the other hand, π/4-QPSK lends itself to easy demodulation and has been adopted for use in, for example, TDMA cellular telephone systems.
The modulated signal is shown below for a short segment of a random binary data-stream. The construction is the same as above for ordinary QPSK. Successive symbols are taken from the two constellations shown in the diagram. Thus, the first symbol (1 1) is taken from the "blue" constellation and the second symbol (0 0) is taken from the "green" constellation. Note that magnitudes of the two component waves change as they switch between constellations, but the total signal's magnitude remains constant (constant envelope). The phase-shifts are between those of the two previous timing-diagrams.
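A small enumeration confirms which phase changes are possible when successive symbols alternate between the two constellations (the specific point placements below are an illustrative, standard choice):

```python
import numpy as np

# Two QPSK constellations rotated 45 degrees with respect to one another.
const_a = np.exp(1j * np.deg2rad([0, 90, 180, 270]))
const_b = np.exp(1j * np.deg2rad([45, 135, 225, 315]))

# All possible phase changes when moving from constellation A to B.
jumps = []
for s0 in const_a:
    for s1 in const_b:
        d = np.angle(s1 / s0)  # wrapped phase difference
        jumps.append(float(abs(np.degrees(d))))

print(sorted(set(round(j) for j in jumps)))  # [45, 135]
```

Only ±45° and ±135° shifts occur; a full 180° reversal is impossible, which is why a transition can never pass through the origin.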
DPQPSK:

Dual-polarization quadrature phase shift keying (DPQPSK), or dual-polarization QPSK, involves the polarization multiplexing of two different QPSK signals, thus improving the spectral efficiency by a factor of 2. It is a cost-effective alternative to moving from QPSK to 16-PSK as a way of doubling the spectral efficiency.
Higher-order PSK:
Any number of phases may be used to construct a PSK constellation but 8-PSK is usually the highest order PSK constellation deployed. With more than 8 phases, the error-rate becomes too high and there are better, though more complex, modulations available such as quadrature amplitude modulation (QAM). Although any number of phases may be used, the fact that the constellation must usually deal with binary data means that the number of symbols is usually a power of 2 to allow an integer number of bits per symbol.
Bit error rate:

For the general M-PSK there is no simple expression for the symbol-error probability if M > 4. Unfortunately, it can only be obtained from

P_s = 1 − ∫_{−π/M}^{π/M} p_{θ_r}(θ_r) dθ_r,

where

p_{θ_r}(θ_r) = (1/2π) e^{−γ_s sin²θ_r} ∫_0^∞ V e^{−(V − √(2γ_s) cos θ_r)²/2} dV,

θ_r = tan⁻¹(r_2/r_1), γ_s = E_s/N_0, and r_1 ~ N(√E_s, N_0/2) and r_2 ~ N(0, N_0/2) are each Gaussian random variables.
This may be approximated for high M and high E_b/N_0 by:

P_s ≈ 2Q(√(2γ_s) sin(π/M)).
The bit-error probability for M-PSK can only be determined exactly once the bit-mapping is known. However, when Gray coding is used, the most probable error from one symbol to the next produces only a single bit-error, and P_b ≈ (1/k) P_s, where k = log₂ M is the number of bits per symbol.
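As a numerical sketch of these approximations (the function names here are made up for illustration; the dB-to-linear conversion 10^(x/10) is assumed):

```python
import math

def q_func(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def psk_symbol_error(M, ebn0_db):
    """High-SNR approximation P_s ~ 2 Q(sqrt(2*gamma_s) * sin(pi/M))."""
    k = math.log2(M)
    gamma_s = k * 10 ** (ebn0_db / 10)  # E_s/N_0 = k * E_b/N_0
    return 2 * q_func(math.sqrt(2 * gamma_s) * math.sin(math.pi / M))

ps = psk_symbol_error(8, 10)  # 8-PSK at Eb/N0 = 10 dB
pb = ps / math.log2(8)        # Gray-coded bit-error approximation
```

Evaluating both orders at the same E_b/N_0 shows the expected ordering: 8-PSK has a noticeably higher error rate than QPSK in exchange for its higher raw data-rate.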
(Using Gray coding allows us to approximate the Lee distance of the errors as the Hamming distance of the errors in the decoded bitstream, which is easier to implement in hardware.) The graph on the right compares the bit-error rates of BPSK, QPSK (which are the same, as noted above), 8-PSK and 16-PSK. It is seen that higher-order modulations exhibit higher error-rates; in exchange however they deliver a higher raw data-rate.
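The Gray-coding property can be illustrated with a short sketch (the labelling below uses the standard binary-reflected Gray code as an assumed mapping):

```python
def gray(n):
    # Binary-reflected Gray code of n.
    return n ^ (n >> 1)

M = 8
# Gray-coded 8-PSK: the point at phase 2*pi*k/M carries bit pattern gray(k).
labels = [gray(k) for k in range(M)]

# Adjacent constellation points are the most likely symbol errors; with this
# labelling each such error flips exactly one bit.
hamming = [bin(labels[k] ^ labels[(k + 1) % M]).count("1") for k in range(M)]
print(hamming)  # [1, 1, 1, 1, 1, 1, 1, 1]
```

Every nearest-neighbour transition has Hamming distance 1, which is exactly why P_b ≈ P_s / k holds for Gray-coded M-PSK.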
Bounds on the error rates of various digital modulation schemes can be computed with application of the union bound to the signal constellation.
Spectral efficiency:

Bandwidth (or spectral) efficiency of M-PSK modulation schemes increases with increasing modulation order M (unlike, for example, M-FSK):

ρ = log₂ M [bit/s/Hz].

The same relationship holds true for M-QAM.
Differential phase-shift keying (DPSK):
Differential encoding:

Differential phase shift keying (DPSK) is a common form of phase modulation that conveys data by changing the phase of the carrier wave. As mentioned for BPSK and QPSK, there is an ambiguity of phase if the constellation is rotated by some effect in the communications channel through which the signal passes. This problem can be overcome by using the data to change rather than set the phase.
For example, in differentially encoded BPSK a binary "1" may be transmitted by adding 180° to the current phase and a binary "0" by adding 0° to the current phase. Another variant of DPSK is Symmetric Differential Phase Shift keying, SDPSK, where encoding would be +90° for a "1" and −90° for a "0".
In differentially encoded QPSK (DQPSK), the phase-shifts are 0°, 90°, 180°, −90° corresponding to data "00", "01", "11", "10". This kind of encoding may be demodulated in the same way as for non-differential PSK but the phase ambiguities can be ignored. Thus, each received symbol is demodulated to one of the M points in the constellation and a comparator then computes the difference in phase between this received signal and the preceding one. The difference encodes the data as described above. Symmetric differential quadrature phase shift keying (SDQPSK) is like DQPSK, but encoding is symmetric, using phase shift values of −135°, −45°, +45° and +135°.
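A toy DQPSK encoder/decoder along these lines (a sketch, with phases tracked in degrees and −90° represented as 270°) shows why a constant channel rotation does not matter:

```python
# DQPSK: each data pair selects a phase *shift* rather than an absolute phase.
SHIFTS = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}  # 270 == -90

def dqpsk_modulate(dibits, start_phase=0):
    phases = [start_phase]
    for d in dibits:
        phases.append((phases[-1] + SHIFTS[d]) % 360)
    return phases

def dqpsk_demodulate(phases):
    # Recover data from the phase difference of successive symbols.
    inv = {v: k for k, v in SHIFTS.items()}
    return [inv[(b - a) % 360] for a, b in zip(phases, phases[1:])]

data = [(0, 0), (1, 1), (0, 1), (1, 0)]
tx = dqpsk_modulate(data)
# An unknown constant rotation leaves the phase *differences* untouched:
rx = [(p + 135) % 360 for p in tx]
assert dqpsk_demodulate(rx) == data
```

Because only differences are compared, the 135° rotation introduced above is invisible to the receiver, which is the point of differential encoding.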
The modulated signal is shown below for both DBPSK and DQPSK as described above. In the figure, it is assumed that the signal starts with zero phase, and so there is a phase shift in both signals at t = 0. Analysis shows that differential encoding approximately doubles the error rate compared to ordinary M-PSK, but this may be overcome by only a small increase in E_b/N_0. Furthermore, this analysis (and the graphical results below) is based on a system in which the only corruption is additive white Gaussian noise (AWGN). However, there will also be a physical channel between the transmitter and receiver in the communication system. This channel will, in general, introduce an unknown phase-shift to the PSK signal; in these cases the differential schemes can yield a better error-rate than the ordinary schemes which rely on precise phase information.
One of the most popular applications of DPSK is the Bluetooth standard, in which π/4-DQPSK and 8-DPSK were implemented.
Demodulation:

For a signal that has been differentially encoded, there is an obvious alternative method of demodulation. Instead of demodulating as usual and ignoring carrier-phase ambiguity, the phase between two successive received symbols is compared and used to determine what the data must have been. When differential encoding is used in this manner, the scheme is known as differential phase-shift keying (DPSK). Note that this is subtly different from just differentially encoded PSK since, upon reception, the received symbols are not decoded one-by-one to constellation points but are instead compared directly to one another.
Call the received symbol in the k-th timeslot r_k and let it have phase φ_k. Assume without loss of generality that the phase of the carrier wave is zero. Denote the additive white Gaussian noise (AWGN) term as n_k. Then

r_k = √E_s e^{jφ_k} + n_k.
The decision variable for the (k−1)-th symbol and the k-th symbol is the phase difference between r_k and r_{k−1}. That is, if r_k is projected onto r_{k−1}, the decision is taken on the phase of the resultant complex number:

r_k r_{k−1}* = E_s e^{j(φ_k − φ_{k−1})} + √E_s e^{jφ_k} n_{k−1}* + √E_s e^{−jφ_{k−1}} n_k + n_k n_{k−1}*,

where superscript * denotes complex conjugation. In the absence of noise, the phase of this is φ_k − φ_{k−1}, the phase-shift between the two received signals, which can be used to determine the data transmitted.
The probability of error for DPSK is difficult to calculate in general, but, in the case of DBPSK, it is:

P_b = (1/2) e^{−E_b/N_0},

which, when numerically evaluated, is only slightly worse than ordinary BPSK, particularly at higher E_b/N_0 values.
Using DPSK avoids the need for possibly complex carrier-recovery schemes to provide an accurate phase estimate and can be an attractive alternative to ordinary PSK.
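Both error-rate formulas can be evaluated directly for comparison (a sketch; the dB-to-linear conversion 10^(x/10) is assumed):

```python
import math

def dbpsk_ber(ebn0_db):
    """Noncoherent DBPSK bit-error probability: Pb = 0.5 * exp(-Eb/N0)."""
    return 0.5 * math.exp(-(10 ** (ebn0_db / 10)))

def bpsk_ber(ebn0_db):
    """Coherent BPSK: Pb = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

# The penalty for skipping carrier recovery shrinks as Eb/N0 grows.
for db in (4, 7, 10):
    print(db, "dB:", dbpsk_ber(db), "vs", bpsk_ber(db))
```

At any given E_b/N_0 the DBPSK curve sits above the BPSK curve, but the gap, expressed in dB, is under 1 dB at practical operating points.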
In optical communications, the data can be modulated onto the phase of a laser in a differential way. The transmitter consists of a laser which emits a continuous wave, and a Mach–Zehnder modulator which receives electrical binary data. For the case of BPSK, the laser transmits the field unchanged for binary '1', and with reverse polarity for '0'. The demodulator consists of a delay line interferometer which delays one bit, so two bits can be compared at one time. In further processing, a photodiode is used to transform the optical field into an electric current, so the information is changed back into its original state.
The bit-error rates of DBPSK and DQPSK are compared to their non-differential counterparts in the graph to the right. The loss for using DBPSK is small enough compared to the complexity reduction that it is often used in communications systems that would otherwise use BPSK. For DQPSK though, the loss in performance compared to ordinary QPSK is larger and the system designer must balance this against the reduction in complexity.
Example: Differentially encoded BPSK

At the k-th time-slot, call the bit to be modulated b_k, the differentially encoded bit e_k and the resulting modulated signal m_k(t). Assume that the constellation diagram positions the symbols at ±1 (which is BPSK). The differential encoder produces:

e_k = e_{k−1} ⊕ b_k

where ⊕ indicates binary exclusive-or (modulo-2 addition).
So ek only changes state (from binary "0" to binary "1" or from binary "1" to binary "0") if bk is a binary "1". Otherwise it remains in its previous state. This is the description of differentially encoded BPSK given above.
The received signal is demodulated to yield ek=±1 and then the differential decoder reverses the encoding procedure and produces bk=ek⊕ek−1, since binary subtraction is the same as binary addition.
Therefore, bk=1 if ek and ek−1 differ and bk=0 if they are the same. Hence, if both ek and ek−1 are inverted, bk will still be decoded correctly. Thus, the 180° phase ambiguity does not matter.
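The encode/decode pair and the ambiguity-immunity argument can be sketched directly:

```python
def diff_encode(bits, e0=0):
    # e_k = e_{k-1} XOR b_k, with an arbitrary reference bit e_0.
    out = [e0]
    for b in bits:
        out.append(out[-1] ^ b)
    return out

def diff_decode(enc):
    # b_k = e_k XOR e_{k-1}: binary subtraction equals binary addition.
    return [a ^ b for a, b in zip(enc, enc[1:])]

data = [1, 0, 1, 1, 0, 0, 1]
enc = diff_encode(data)
flipped = [1 - e for e in enc]  # a 180-degree rotation inverts every symbol

assert diff_decode(enc) == data
assert diff_decode(flipped) == data  # decoded correctly despite inversion
```

Inverting the entire encoded sequence, as a 180° phase ambiguity would, leaves the decoded data unchanged, exactly as argued above.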
Differential schemes for other PSK modulations may be devised along similar lines. The waveforms for DPSK are the same as for differentially encoded PSK given above since the only change between the two schemes is at the receiver.
The BER curve for this example is compared to ordinary BPSK on the right. As mentioned above, whilst the error rate is approximately doubled, the increase needed in Eb/N0 to overcome this is small. The increase in Eb/N0 required to overcome differential modulation in coded systems, however, is larger – typically about 3 dB. The performance degradation is a result of noncoherent transmission – in this case it refers to the fact that tracking of the phase is completely ignored.
Definitions:

For determining error-rates mathematically, some definitions will be needed:

E_b: energy per bit
E_s = nE_b: energy per symbol with n bits
T_b: bit duration
T_s: symbol duration
N_0/2: noise power spectral density (W/Hz)
P_b: probability of bit-error
P_s: probability of symbol-error

Q(x) gives the probability that a single sample taken from a random process with zero-mean and unit-variance Gaussian probability density function will be greater than or equal to x. It is a scaled form of the complementary Gaussian error function:

Q(x) = (1/2) erfc(x/√2), x ≥ 0.

The error rates quoted here are those in additive white Gaussian noise (AWGN). These error rates are lower than those computed in fading channels, and hence are a good theoretical benchmark for comparison.
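The Q function is a one-liner in terms of the complementary error function (a sketch using Python's standard math module):

```python
import math

def q_func(x):
    """Q(x) = P(X >= x) for a standard normal X, via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Sanity checks against known standard-normal tail values.
print(q_func(0.0))       # 0.5: half the mass lies above the mean
print(q_func(1.959964))  # ~0.025: the familiar two-sided 95% point
```

This scaled-erfc form is how the error-rate formulas in the preceding sections are typically evaluated in practice.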
Applications:
Owing to PSK's simplicity, particularly when compared with its competitor quadrature amplitude modulation, it is widely used in existing technologies.
The wireless LAN standard, IEEE 802.11b-1999, uses a variety of different PSKs depending on the data rate required. At the basic rate of 1 Mbit/s, it uses DBPSK (differential BPSK). To provide the extended rate of 2 Mbit/s, DQPSK is used. In reaching 5.5 Mbit/s and the full rate of 11 Mbit/s, QPSK is employed, but has to be coupled with complementary code keying. The higher-speed wireless LAN standard, IEEE 802.11g-2003, has eight data rates: 6, 9, 12, 18, 24, 36, 48 and 54 Mbit/s. The 6 and 9 Mbit/s modes use OFDM modulation where each sub-carrier is BPSK modulated. The 12 and 18 Mbit/s modes use OFDM with QPSK. The fastest four modes use OFDM with forms of quadrature amplitude modulation.
Because of its simplicity, BPSK is appropriate for low-cost passive transmitters, and is used in RFID standards such as ISO/IEC 14443, which has been adopted for biometric passports, credit cards such as American Express's ExpressPay, and many other applications. Bluetooth 2 uses π/4-DQPSK at its lower rate (2 Mbit/s) and 8-DPSK at its higher rate (3 Mbit/s) when the link between the two devices is sufficiently robust. Bluetooth 1 modulates with Gaussian minimum-shift keying, a binary scheme, so either modulation choice in version 2 will yield a higher data rate. A similar technology, IEEE 802.15.4 (the wireless standard used by Zigbee) also relies on PSK using two frequency bands: 868–915 MHz with BPSK and 2.4 GHz with OQPSK.
Both QPSK and 8PSK are widely used in satellite broadcasting. QPSK is still widely used in the streaming of SD satellite channels and some HD channels. High definition programming is delivered almost exclusively in 8PSK due to the higher bitrates of HD video and the high cost of satellite bandwidth. The DVB-S2 standard requires support for both QPSK and 8PSK. The chipsets used in new satellite set top boxes, such as Broadcom's 7000 series, support 8PSK and are backward compatible with the older standard. Historically, voice-band synchronous modems such as the Bell 201, 208, and 209 and the CCITT V.26, V.27, V.29, V.32, and V.34 used PSK.
Mutual information with additive white Gaussian noise:
The mutual information of PSK can be evaluated in additive Gaussian noise by numerical integration of its definition. The curves of mutual information saturate to the number of bits carried by each symbol in the limit of infinite signal to noise ratio Es/N0 . On the contrary, in the limit of small signal to noise ratios the mutual information approaches the AWGN channel capacity, which is the supremum among all possible choices of symbol statistical distributions.
At intermediate values of signal to noise ratios the mutual information (MI) is well approximated by:

MI ≈ (1/2) log₂((4π/e)(E_s/N_0)).
The mutual information of PSK over the AWGN channel is generally farther from the AWGN channel capacity than that of QAM modulation formats.
**Peace gaming**
Peace gaming:
Peace gaming is a neologism coined by Utsumi Takeshi to describe non-military global simulations, or simulations that involve both military and civilian variables.
Peace gaming is based on the idea that global simulations which are modeled entirely on military actions (war gaming) can never be more than zero-sum games. In other words, in order for one side to achieve its objective, all others must lose. Proponents of peace gaming simulations argue that when "civilian" factors which exist in the real world, such as the economy, manufacturing, and trade are brought into play, the simulation becomes not only more realistic but also ceases to be a zero-sum game. Through collaborative rather than purely confrontational action on the part of the competitors, all sides can gain benefit and thus all can theoretically claim victory.
**Canine hydrotherapy**
Canine hydrotherapy:
Canine hydrotherapy is a form of hydrotherapy directed at the treatment of chronic conditions, post-operative recovery, and pre-operative or general fitness in dogs.
Background:
A number of conditions in dogs may be aggravated by or may show slow or no improvement as a result of weight bearing exercise. Among these are hip dysplasia and osteochondritis dissecans (OCD), conditions most common in medium to large purebred dogs, such as German Shepherds, Labrador Retrievers or Golden Retrievers; chronic degenerative radiculomyelopathy (CDRM), a degenerative disease of the spinal cord which causes hind limb problems in German Shepherds; and luxating patella which is seen predominantly in small and toy breeds.Injuries to the cruciate ligament or other ligaments may make post-operative weight-bearing exercise in dogs problematic. Obese dogs, while requiring exercise, may aggravate existing conditions or injure themselves due to the weight exerted on their joints if walked normally.
History:
Hydrotherapy for humans has been used since ancient times, and some attempt at formalizing treatment was made during the 18th century. Similarly, the benefits of seawater for the treatment and prevention of leg injuries in horses has been known for centuries. Because of the financial benefits surrounding the treatment of race horses, around the mid-19th century, inventors began to produce devices to replicate the benefits of cold seawater immersion for horses.
The greyhound racing industry eventually recognized the benefits of the equine treatment, and in the UK was brought to the forefront by a specialized canine hydrotherapy pool. From there, the therapy was extended to dogs in general.
This has led to the development of underwater treadmills to relieve stress on the animal's joints whilst building strength. These have been a turning point in the rehabilitation of dogs as they can be placed in smaller areas, but offer a controlled treatment.
Pool design:
Pool designs for canine hydrotherapy vary, but most have generic elements. The pool tends to be smaller than a human swimming pool and is heated. (This is unlike equine hydrotherapy pools. Horses generate a lot of body heat when swimming, so equine pools use cold water to prevent the animal overheating.) A dog's muscles benefit from the warming effects of the heated water. Most pools have a ramp for entry and exit, and some have harnesses to maintain the dog in position in the water. There may be a manual or electric hoist for lifting dogs in and out of the water. Water is chlorinated or treated with an alternative chemical. Some have jets to add resistance and make the dog swim more strongly.
Uses:
As an alternative or complement to weight-bearing exercise and medication, canine hydrotherapy may speed recovery after operations or slow the progression of degenerative conditions. It may be used as a pre-operative fitness regime to allow a dog to maintain condition before an operation if it can not exercise normally. When a congenital condition is identified in a puppy, it may be the case that surgery is not possible until the animal is physically mature; during the period preceding the surgery, hydrotherapy can be employed to maintain the dog's condition.
Spinal injuries or surgery can cause impairment of motor function, which may be treated by allowing the dog to exercise in water; it provides support and allows the dog to exercise its muscles while nerve regeneration is taking place. Degenerative conditions can make normal weight bearing exercise difficult and pressure on joints and limbs may aggravate some conditions, so hydrotherapy can be used in these cases to allow the dog to exercise in an environment where there is no pressure on the affected areas.
Obese dogs can build fitness and lose weight as a result of exercise in a hydrotherapy pool without putting excessive weight on their joints. Hydrotherapy may be used as part of a general fitness routine for dogs.