**Heavy metal subculture**
Heavy metal subculture:
Fans of heavy metal music, commonly referred to as "metalheads", have created their own subculture that encompasses more than just appreciation of the style of music. Fans affirm their membership in the subculture or scene by attending metal concerts (an activity seen as central to the subculture), buying albums, growing their hair long (although some metalheads wear their hair short; one famous example is Rob Halford in the late 1970s and 1980s), wearing jackets or vests, often made of denim or leather and adorned with band patches and studs, and, since the early 1980s, by contributing to metal publications.
The metal scene, like the rock scene in general, is associated with alcohol, tobacco and drug use, as well as riding motorcycles and having many tattoos. While there are songs that celebrate drinking, smoking, drug use, gambling, tattoos and partying, there are also many songs that warn about the dangers of those activities. The metal fan base was traditionally working class, white and male in the 1970s; since the 1980s, more female fans have developed an interest in the style, and its popularity has also grown among African Americans and other groups.
Nomenclature:
Heavy metal fans go by a number of different names, including metalhead, headbanger, hesher, mosher, and thrasher, the last being used only for fans of thrash metal, which began to differentiate itself from other varieties of metal in the late 1980s. While the aforementioned labels vary across time periods and regions, headbanger and metalhead are universally accepted to mean fans of the music or the subculture itself.
Subculture:
Heavy metal fans have created a "subculture of alienation" with its own standards for achieving authenticity within the group. Deena Weinstein's book Heavy Metal: The Music And Its Culture argues that heavy metal "has persisted far longer than most genres of rock music" due to the growth of an intense "subculture which identified with the music." Metal fans formed an "exclusionary youth community" that was "distinctive and marginalized from the mainstream" society. The heavy metal scene developed a strongly masculine "community with shared values, norms, and behaviors." A "code of authenticity" is central to the heavy metal subculture; this code requires bands to have a "disinterest in commercial appeal" and radio hits as well as a refusal to "sell out." The metal code also includes "opposition to established authority, and separateness from the rest of society." Fans expect that the metal "vocation [for performers] includes total devotion to the music and deep loyalty to the youth subculture that grew up around it;" a metal performer must be an "idealized representative of the subculture."
While the audience for metal is mainly "white, male, lower/middle class youth," this group is "tolerant of those outside its core demographic base who follow its codes of dress, appearance, and behavior." The activities in the metal subculture include the ritual of attending concerts, buying albums, and most recently, contributing to metal websites. Attending concerts affirms the solidarity of the subculture, as it is one of the ritual activities by which fans celebrate their music. Metal magazines help the members of the subculture to connect, find information and evaluations of bands and albums, and "express their solidarity." The long hair, leather jackets, and band patches of heavy metal fashion help encourage a sense of identification within the subculture. However, Weinstein notes that not all metal fans are "visible members" of the heavy metal subculture. Some metal fans may have short hair and dress in regular clothes.
Authenticity:
In the musical subcultures of heavy metal and punk, authenticity is a core value. The term poseur (or poser) is used to describe "a person who habitually pretends to be something he/she is not," as in adopting the appearance and clothing style of the metal scene without truly understanding the culture and its music. In a 1993 profile of heavy metal fans' "subculture of alienation," the author noted that the scene classified some members as "poseurs," that is, heavy metal performers or fans who pretended to be part of the subculture, but who were deemed to lack authenticity and sincerity. Jeffrey Arnett's 1996 book Metalheads: Heavy Metal Music and Adolescent Alienation argues that the heavy metal subculture classifies members into two categories by giving "acceptance as an authentic metalhead or rejection as a fake, a poseur."
Heavy metal fans began using the term sell out in the 1980s to refer to bands who turned their heavy metal sound into radio-friendly rock music (e.g., glam metal). In metal, a sell out is "someone dishonest who adopted the most rigorous pose, or identity-affirming lifestyle and opinions." The metal bands that earned this epithet are those "who adopt the visible aspects of the orthodoxy (sound, images) without contributing to the underlying belief system."
Ron Quintana's article on "Metallica['s] Early History" argues that when Metallica was trying to find a place in the L.A. metal scene in the early 1980s, the "American hard-rock scene was dominated by highly coiffed, smoothly-polished bands such as Styx, Journey, and REO Speedwagon." He claims that this made it hard for Metallica to "play their [heavy] music and win over a crowd in a land where poseurs ruled and anything fast and heavy was ignored." In David Rocher's 1999 interview with Damian Montgomery, the frontman of Ritual Carnage, Rocher praised Montgomery as "an authentic, no-frills, poseur-bashing, nun-devouring kind of gentleman, an enthusiastic metalhead truly in love with the lifestyle he preaches ... and unquestionably practises."
In 2002, "[m]etal guru Josh Wood" claimed that the "credibility of heavy metal" in North America is being destroyed by the genre's demotion to "horror movie soundtracks, wrestling events and, worst of all, the so-called 'Mall Core' groups like Limp Bizkit." Wood claims that the "true [metal] devotee’s path to metaldom is perilous and fraught with poseurs."
Christian metal bands are sometimes criticized within metal circles in a similar light. Some extreme metal adherents argue that Christian bands' adherence to the Christian church is an indicator of membership in an established authority, which renders Christian bands as "posers" and a contradiction to heavy metal's purpose. Some proponents argue personal faith in right-hand path beliefs should not be tolerated within metal. A small number of Norwegian black metal bands have threatened violence (and, in extremely rare instances, exhibited it) towards Christian artists or believers, as demonstrated in the early 1990s through occasional church arsons throughout Scandinavia.
Social aspects:
Gestures and movements:
At concerts, in place of typical dancing, metal fans are more likely to mosh and headbang (a movement in which the head is shaken up and down in time with the music). Fans in the heavy metal subculture often make the corna hand gesture, formed by a fist with the index and little fingers extended. Also known as the "devil’s horns," the "metal fist," and other similar descriptors, the gesture was popularized by heavy metal vocalists Ronnie James Dio and Ozzy Osbourne.
Alcohol and drug use:
The heavy metal scene is associated with alcohol and drug use.
While there are heavy metal songs which celebrate alcohol or drug use (e.g., "Sweet Leaf" by Black Sabbath, which is about cannabis), there are many songs which warn about the dangers of alcohol and drug abuse and addiction. "Master of Puppets" by Metallica (which is about how drug abusers can end up being controlled by the drugs they use) and "Beyond the Realms of Death" by Judas Priest are two examples of songs that warn about such dangers.
Intolerance to other music:
On a 1985 edition of Australian music television show Countdown, music critic Molly Meldrum spoke about intolerance to other music within the subculture, observing "sections who just love heavy metal, and they actually don't like anything else." Queen frontman Freddie Mercury, a guest on the program, readily concurred with Meldrum's view, and opined that his comments were "very true". Directly addressing the resistance to alternate genres seen among certain heavy metal fans, Mercury asserted: "that's their problem".
Interviewed in 2011, Sepultura frontman Derrick Green said: "I find that a lot of people can be very closed minded – they want to listen to metal and nothing else, but I'm not like that. I like doing metal music and having a heavy style, but I don't like to put myself in such a box and be trapped in it." Also that year, Anthrax drummer Charlie Benante admitted that hardened members of the heavy metal subculture "are not the most open-minded people when it comes to music."
Ultimate Guitar reported in 2013 that thrash metal fans had directed "hate" towards Megadeth for venturing into more rock-oriented musical territory on that year's Super Collider album. Singer Dave Mustaine stated that their hostility was informed by an unwillingness to accept other genres and had "nothing to do with Megadeth or the greatness of the band and its music"; he also argued that the labelling of music fans contributed to their inability to appreciate other types of music. That same year Opeth frontman Mikael Åkerfeldt also alleged that most members of the subculture are resistant to the musical evolution of artists within the metal genre, stating that it "doesn't seem to be that important" to those listeners. He added: "I think most metal fans just want their Happy Meals served to them. They don't really want to know about what they're getting. For a while, I thought metal was a more open-minded thing but I was wrong."
Journalists have written about the dismissive attitude of many metal fans. MetalReviews.com published a 2004 article entitled "The True, Real Metalhead: A Selective Intellect Or A Narrow-Minded Bastard?", wherein the writer confessed to being "truly bothered by the narrow-mindedness of a lot of [his] metal brothers and sisters". Critic Ryan Howe, in a 2013 piece for Sound and Motion magazine, penned an open letter to British metal fans, many of whom had expressed disgust about Avenged Sevenfold – whose music they deemed too light to qualify as metal – being booked to headline the 2014 instalment of popular metal event the Download Festival. Howe described the detractors as "narrow minded" and challenged them to attend the Avenged Sevenfold set and "be prepared to have [their] opinions changed."
Despite a widespread lack of appreciation of other music genres, some fans and musicians profess a deep devotion to genres that often have nothing to do with metal music. For instance, Fenriz of Darkthrone is also known to be a techno DJ, and Metallica's Kirk Hammett is seen wearing a T-shirt of post-punk band The Sisters of Mercy in the music video for "Wherever I May Roam". Tourniquet band leader Ted Kirkpatrick is a "great admirer of the classical masters".
Some metal fans are also fond of punk rock, most notably the hardcore punk scene, which helped inspire the extreme metal subgenres and even fusion genres such as crossover thrash, grindcore and the New York hardcore scene.
The term metal elitist is sometimes used by heavy metal fans and musicians to differentiate members of the subculture who display insulated, exclusionary or rigid attitudes from ostensibly more open-minded ones. Elitist attitudes are particularly associated with fans and musicians of the black metal subgenre. Characteristics described as distinguishing metal elitists or "nerds" from other fans of metal music include "constant one-upping," "endless pedantry" and hesitancy to "go against the metal orthodoxy." While the term "metal elitism" is usually used pejoratively, elitism is occasionally defended by members of the subculture as a means of keeping the metal genre insulated, in order to prevent it from selling out.
Heavy metal is also known for its large quantity of fusion subgenres, including nu metal, folk metal and symphonic metal – contradicting the notion of metal as an isolated musical genre. Many popular groups within the genre are also fusion-music acts not represented by any larger subgenre, such as Skindred and Matanza.
Attire:
Another aspect of heavy metal culture is its fashion. Like the metal music itself, these fashions have changed over the decades while keeping some core elements. Typically, the heavy metal fashions of the late 1970s – 1980s comprised tight blue jeans or drill pants, motorcycle boots or hi-top sneakers and black T-shirts, worn with a sleeveless kutte of denim or leather emblazoned with woven patches and button pins from heavy metal bands. Sometimes, a denim vest emblazoned with album art "knits" (cloth patches) would be worn over a long-sleeved leather jacket. As with other musical subcultures of the era, such as punks, this jacket and its emblems and logos helped the wearer to announce their interests. Metal fans often wear T-shirts bearing band emblems.
Around the mid-2000s, interest in 1980s metal was revived among younger audiences, and the rise of newer bands embracing older fashion ideals led to a more 1980s-esque style of dress. Some of the new audience are young, urban hipsters who had "previously fetishized metal from a distance".
International variations:
Heavy metal fans can be found in virtually every country in the world. Even in some of the more orthodox Muslim countries of the Arab World a tiny metal culture exists, though judicial and religious authorities do not always tolerate it. In 2003, more than a dozen members and fans of Moroccan heavy metal bands were imprisoned for "undermining the Muslim faith." Heavy metal fans in many Arab countries have formed metal cultures, with movements such as Taqwacore.
Examples in fiction:
The heavy metal subculture appears in works of fiction, mostly adult cartoons and 1980s and 1990s live-action movies.
The 1986 film Trick or Treat stars Marc Price as Eddie Weinbauer, a bullied metalhead who obtains an unreleased demo record of his deceased heavy metal idol, with startling consequences. The film features guest appearances by Gene Simmons and Ozzy Osbourne.
The titular characters of Mike Judge's animated show Beavis and Butt-Head are among the most notorious examples of heavy metal subculture in fiction, being fond of bands representative of, or marginally associated with, the style (such as Metallica and AC/DC, whose logos are emblazoned on the protagonists' T-shirts, respectively). They also exhibit stereotypical metalhead behavior such as headbanging to songs they like, singing guitar riffs in response to good things happening to them, and deeming glam metal bands "wussy". However, in a subversion of the stereotype that members of the heavy metal subculture are intolerant towards other styles of music, the duo are very responsive to hip hop music, finding it to be just as authentic.
The Saturday Night Live sketch and subsequent film Wayne's World is another well-known example of the heavy metal subculture in fiction.
Bill & Ted's Excellent Adventure is also a well-known example of heavy metal subculture in fiction, in which the titular characters are time travelers driven by the desire to keep their band together.
In the Happy Tree Friends episode entitled "In a Jam", the characters Cuddles, Lumpy, Russell, Handy, and Sniffles are in a rock and roll/heavy metal band. In that episode, they also display stereotypical metalhead/rocker attitudes, such as being rude to people auditioning for the band, being careless, and even using hand gestures that belong to the subculture.
In the Family Guy episode entitled "Saving Private Brian", Chris Griffin is inspired by Marilyn Manson to become part of the heavy metal subculture and becomes mouthy towards Lois and Peter, much to their displeasure.
In the SpongeBob SquarePants episode entitled "Krabby Road", Plankton forms a rock and roll/heavy metal band called "Plankton and the Patty Stealers" and gets SpongeBob, Patrick, and Squidward to be a part of it.
The Disney XD sitcom I'm in the Band featured a teenaged boy named Trip who was the lead guitarist of a heavy metal/rock and roll band called "Iron Weasel"; the show also focused on heavy metal subculture in high school.
The French-Canadian cartoon My Dad the Rock Star, created by Gene Simmons, features Rock Zilla, a rock-star father whose family belongs to the rocker/metalhead subculture. His son, William (aka Willy), is the main protagonist, but he has trouble fitting in with his peers and family since he wants to live a more normal lifestyle.
The Death Metal Epic is a series of comic novels by the writer Dean Swinford. The books tell the story of a death metal guitarist from Florida and are set in the early 1990s, a timeframe in which the genre thrived there.
The 2001 film Rock Star, starring Mark Wahlberg and Jennifer Aniston, features a fictional heavy metal band, "Steel Dragon", of which the leading character eventually becomes the vocalist.
Metalocalypse, a TV show that aired from August 6, 2006, to October 27, 2013, features the fictional death metal band Dethklok.
Rodrick Heffley, the brother of main character Greg Heffley in the book series Diary of a Wimpy Kid (2007–present), is the drummer for a heavy metal band called "Löded Diper" that practices in the Heffley garage, much to the chagrin of his father.
The 2009 video game Brütal Legend is set in a world inspired by heavy metal music, and features characters voiced, and visually inspired, by Jack Black, Rob Halford, Lemmy Kilmister, Lita Ford, and Ozzy Osbourne.
The characters Johnny Klebitz, Jim Fitzgerald, Clay Simons, Terry Thorpe, Patrick McReary and Brucie Kibbutz, from Grand Theft Auto IV and its episodes, are all metalheads.
The 2011 horror novel The Ritual by Adam Nevill features three metalhead antagonists named Loki, Fenris and Surtr who attempt to sacrifice the protagonist, Luke, to Odin.
The 2018 adventure video game Detroit: Become Human features a detective, Hank Anderson, who is shown to be a fan of a fictional heavy metal band called "Knights of the Black Death."
The 2018 film Lords of Chaos, starring Rory Culkin and Emory Cohen, is a historical fiction account of the early 1990s Norwegian black metal scene told from the perspective of Mayhem co-founder Euronymous.
The 2022 film Metal Lords follows two high school best friends and metal music lovers, Hunter and Kevin, who set out to start a metal band against societal norms. It features cameos by Scott Ian from Anthrax, Kirk Hammett from Metallica, Tom Morello from Rage Against the Machine and Rob Halford from Judas Priest.
**Digital learning**
Digital learning:
Digital learning is any type of learning that is accompanied by technology or by instructional practice that makes effective use of technology. It encompasses the application of a wide spectrum of practices, including blended and virtual learning. Digital learning is sometimes confused with online learning or e-learning; digital learning encompasses the aforementioned concepts.
Digital learning became widespread during the COVID-19 pandemic.
Overview:
A digital learning strategy may include any of, or a combination of, the following:
adaptive learning
badging and gamification
blended learning
classroom technologies
e-textbooks
learning analytics
learning objects
mobile learning (e.g. via mobile phones, tablet computers, laptops and computers)
personalised learning
online learning (or e-learning)
open educational resources (OERs)
technology-enhanced teaching and learning
virtual reality
augmented reality
Through the use of mobile technologies, learning while travelling is possible.
Pedagogies that incorporate digital learning:
Digital learning is meant to enhance the learning experience rather than replace traditional methods altogether. Listed below are common pedagogies, or practices of teaching, that combine technology and learning:
Blended/hybrid learning
Online learning
Flipped learning
1:1 learning
Differentiated learning
Individualized learning
Personalized learning
Gamification
Understanding by Design (UBD)
Pros of Digital Learning:
Digital learning has many beneficial outcomes, one of which is the student's ability to work at his or her own pace. With assignments online, students can decide when to complete them: if they work best in the morning, they can do them in the morning; if they work best in the evening, they can complete them in the evening. Without the stress and time limitations of being in a classroom, they can take as much or as little time as they need, which allows them to understand the concept and retain the knowledge fully.
Digital learning offers many environmental benefits. Online education relies strictly on digital documents, thereby reducing paper waste and the number of trees cut down. Studies show that using ebooks instead of traditional textbooks would save more than 28,000 trees per million books. Another environmental benefit of digital learning is that it reduces transportation: completing assignments online rather than commuting to class reduces carbon dioxide emissions by about 148 pounds each semester.
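As a rough sense of scale for the figures above, the sketch below multiplies the cited per-unit estimates (about 28,000 trees per million ebooks, and about 148 pounds of CO2 per semester, read here as a per-student figure) by hypothetical enrolment numbers. The class size, ebook count, and per-student reading of the CO2 figure are assumptions made only for this illustration.

```python
# Back-of-the-envelope scaling of the figures cited above. The per-unit values come from
# the text; the enrolment numbers are hypothetical and only illustrate the arithmetic.
# Reading the 148-pound figure as "per student per semester" is itself an assumption.

TREES_SAVED_PER_MILLION_EBOOKS = 28_000   # from the cited study
CO2_POUNDS_AVOIDED_PER_SEMESTER = 148     # from the cited commuting estimate

students = 10_000          # assumed: online students at one institution
ebooks_per_student = 5     # assumed: ebooks used instead of print textbooks per student

ebooks = students * ebooks_per_student
trees_saved = TREES_SAVED_PER_MILLION_EBOOKS * ebooks / 1_000_000
co2_avoided_tons = students * CO2_POUNDS_AVOIDED_PER_SEMESTER / 2_000  # short tons

print(f"{ebooks:,} ebooks -> roughly {trees_saved:,.0f} trees saved")
print(f"{students:,} students -> roughly {co2_avoided_tons:,.0f} tons of CO2 avoided per semester")
```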
**Global Safety Information Exchange**
Global Safety Information Exchange:
The Global Safety Information Exchange (GSIE) is a system launched by ICAO in September 2010 to confidentially share information about aviation safety incidents; by enabling ICAO to identify trends, it may make it possible to improve safety through risk reduction.
International agreements:
In September 2009, IATA, the European Commission, and the US Department of Transportation signed an agreement for information exchange which is a foundation for the GSIE.
**Bromodichloromethane**
Bromodichloromethane:
Bromodichloromethane is a trihalomethane with formula CHBrCl2.
Bromodichloromethane was formerly used as a flame retardant, as a solvent for fats and waxes and, because of its high density, for mineral separation. Now it is only used as a reagent or intermediate in organic chemistry.
Bromodichloromethane can also occur in municipally treated drinking water as a by-product of the chlorine disinfection process. According to the Environmental Working Group, a non-profit organization that strives to educate consumers about potential chemical and environmental health risks, bromodichloromethane can increase the risk of cancer and of harm to reproduction and child development, and may cause changes to fetal growth and development when present in quantities higher than 0.06 parts per billion (ppb). This data largely comes from studies reviewed or conducted by the California Office of Environmental Health Hazard Assessment. No standards regulating the presence of bromodichloromethane in drinking water currently exist in the United States.
**Nik operon**
Nik operon:
The nik operon is an operon required for uptake of nickel ions into the cell. It is present in many bacteria, but has been extensively studied in Helicobacter pylori. Nickel is an essential nutrient for many microorganisms, where it participates in a variety of cellular processes. However, excessive levels of nickel ions can be fatal to the cell, so nickel ion concentration in the cell is regulated through the nik operon.
Structure of the nik operon:
The nik operon consists of six genes. The first five genes, nikABCDE, encode components of a typical ABC transport system, and the last gene, nikR, encodes a DNA-binding protein that represses transcription of nikABCDE when sufficient Ni2+ is present. The nikR gene is located 5 bp downstream of the end of nikE and is transcribed in the same direction as nikABCDE. In summary, nikA encodes a periplasmic nickel-binding protein, nikB and nikC encode the membrane components of the transporter, nikD and nikE encode the ATP-hydrolyzing subunits, and nikR encodes the nickel-responsive transcriptional repressor.
Regulation:
nikR regulation:
Regulation of expression of the nikR gene is achieved by two promoters. The first is through the FNR regulon. The FNR-controlled regulation of nikABCDE–nikR occurs at a FNR box located upstream of nikA at a putative NikR binding site. The second promoter element regulating nikR expression occurs 51 bp upstream of the nikR transcription start site and results in low-level constitutive expression. There is also evidence that nikR expression is partially autoregulated.
Regulation of Ni2+ uptake:
Ni2+ is taken up into prokaryotic cells by one of two types of high-affinity transport systems. The first method involves ABC-type transporters (discussed in this article) and the second mechanism makes use of transition-metal permeases (such as HoxN of Ralstonia eutropha). The ABC-type transporter system consists of five proteins, NikA–E, that carry out the ATP-dependent transport of Ni2+. NikA is a soluble, periplasmic, Ni-binding protein; NikB and NikC form a transmembrane pore for passage of Ni; and NikD and NikE hydrolyze ATP and couple this energy to Ni2+ transport. When Ni2+ is available in excess, NikR protein represses transcription of nikABCDE.
Repression by NikR binding:
Using profile-based sequence database searches, NikR was shown to be a member of the ribbon-helix-helix (RHH) family of transcription factors. It has been demonstrated that the N-terminal domain of NikR is responsible for binding to DNA and that it only binds in the presence of Ni2+. NikR has two sites for binding to Ni2+ ions. Binding of Ni2+ at concentrations that allow full occupancy of only the high-affinity sites is sufficient for operator binding, but the affinity for the operator is increased 1000-fold and the operator footprints are larger when both nickel-binding sites are occupied. These results, combined with estimates of intracellular Ni2+ and NikR concentrations, lead to the conclusion that NikR is able to sense Ni2+ and regulate nik operon expression over a range of intracellular Ni2+ concentrations from as low as one to as high as 10,000 molecules per cell.
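The wide sensing range described above can be illustrated with a simple, purely hypothetical two-site binding sketch: one tight site and one weak site, each modeled as an independent Langmuir isotherm. The dissociation constants below are placeholder values chosen only to span the 1–10,000 molecules-per-cell range quoted in the text; they are not measured NikR parameters.

```python
# Illustrative sketch only: a minimal two-site equilibrium-binding model showing how a
# repressor with one high-affinity and one low-affinity metal-binding site can respond
# over a very wide range of Ni2+ levels. The constants below are hypothetical placeholders.

def fractional_occupancy(conc, kd):
    """Langmuir-type occupancy of an independent site at a given free Ni2+ level."""
    return conc / (conc + kd)

# Hypothetical dissociation constants, expressed here in molecules per cell
KD_HIGH_AFFINITY = 10.0      # assumed tight site: half-occupied at ~10 Ni2+ per cell
KD_LOW_AFFINITY = 5000.0     # assumed weak site: half-occupied at ~5000 Ni2+ per cell

for ni in (1, 10, 100, 1000, 10000):
    tight = fractional_occupancy(ni, KD_HIGH_AFFINITY)
    weak = fractional_occupancy(ni, KD_LOW_AFFINITY)
    # In the qualitative picture from the text: occupancy of the tight site enables
    # operator binding, and occupancy of the weak site further strengthens it.
    print(f"Ni2+ = {ni:>6} per cell: tight site {tight:.2f}, weak site {weak:.2f}")
```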
**Sodium phosphide**
Sodium phosphide:
Sodium phosphide is the inorganic compound with the formula Na3P. It is a black solid. It is often described as the Na+ salt of the P3− anion. Na3P is a source of the highly reactive phosphide anion. It should not be confused with sodium phosphate, Na3PO4.
In addition to Na3P, five other binary compositions of sodium and phosphorus are known: NaP, Na3P7, Na3P11, NaP7, and NaP15.
Structure and Properties:
The compound crystallizes in a hexagonal motif, often called the sodium arsenide structure. Like K3P, solid Na3P features pentacoordinate P centers.
Preparation:
The preparation of Na3P was first reported in the mid-19th century, when the French researcher Alexandre Baudrimont prepared sodium phosphide by treating molten sodium with phosphorus pentachloride:
8 Na(l) + PCl5 → 5 NaCl + Na3P
Many different routes to Na3P have been described. Due to its flammability and toxicity, Na3P (and related salts) is often prepared and used in situ. White phosphorus is reduced by sodium-potassium alloy:
P4 + 12 Na → 4 Na3P
Phosphorus also reacts with sodium in an autoclave at 150 °C for 5 hours to produce Na3P. Alternatively, the reaction can be conducted at normal pressure but using a temperature gradient to generate nonvolatile NaxP phases (x < 3) that then react further with sodium. In some cases, an electron-transfer agent, such as naphthalene, is used. In such applications, the naphthalene forms the soluble sodium naphthalenide, which reduces the phosphorus.
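As a quick check of the stoichiometry implied by the P4 + 12 Na → 4 Na3P equation above, the sketch below computes how much sodium is consumed and how much Na3P is formed for a given mass of white phosphorus, using standard atomic masses. It is an illustrative calculation only, not a laboratory procedure.

```python
# Minimal stoichiometry sketch for the reduction P4 + 12 Na -> 4 Na3P described above.
# Standard atomic masses are used; the 10 g input is an arbitrary example quantity.

M_NA = 22.990   # g/mol, sodium
M_P = 30.974    # g/mol, phosphorus

M_P4 = 4 * M_P            # molar mass of white phosphorus, g/mol
M_NA3P = 3 * M_NA + M_P   # molar mass of sodium phosphide, g/mol

def sodium_and_yield_for(p4_grams: float) -> tuple[float, float]:
    """Return (grams of Na consumed, grams of Na3P formed) for a given mass of P4,
    assuming complete reaction according to P4 + 12 Na -> 4 Na3P."""
    mol_p4 = p4_grams / M_P4
    na_grams = 12 * mol_p4 * M_NA
    na3p_grams = 4 * mol_p4 * M_NA3P
    return na_grams, na3p_grams

na_needed, na3p_out = sodium_and_yield_for(10.0)  # e.g. 10 g of white phosphorus
print(f"10.0 g P4 requires {na_needed:.1f} g Na and gives {na3p_out:.1f} g Na3P")
```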
Uses:
Sodium phosphide is a source of the highly reactive phosphide anion. The material is insoluble in all solvents but reacts as a slurry with acids and related electrophiles to give derivatives of the type E3P:
Na3P + 3 E+ → E3P + 3 Na+ (E = H, Me3Si)
The trimethylsilyl derivative is volatile (b.p. 30–35 °C at 0.001 mm Hg) and soluble. It serves as a soluble equivalent to "P3−".
Indium phosphide, a semiconductor, arises from treating in-situ generated "sodium phosphide" with indium(III) chloride in hot N,N’-dimethylformamide as solvent. In this process, the phosphide reagent is generated from sodium metal and white phosphorus, whereupon it immediately reacts with the indium salt:
Na3P + InCl3 → InP + 3 NaCl
Sodium phosphide is also employed commercially as a catalyst, in conjunction with zinc phosphide and aluminium phosphide, for polymer production. When Na3P is removed from the ternary catalyst, polymerization of propylene and 4-methyl-1-pentene is not effective.
Precautions:
Sodium phosphide is highly dangerous, releasing toxic phosphine upon hydrolysis, a process that is so exothermic that fires can result. The USDOT has forbidden the transportation of Na3P on aircraft and trains due to the potential fire and toxic hazards.
**UQCC**
UQCC:
Ubiquinol-cytochrome c reductase complex chaperone CBP3 homolog is an enzyme that in humans is encoded by the UQCC gene.
**Pseudoscience**
Pseudoscience:
Pseudoscience consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method. Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited.
The demarcation between science and pseudoscience has scientific, philosophical, and political implications. Philosophers debate the nature of science and the general criteria for drawing the line between scientific theories and pseudoscientific beliefs, but there is widespread agreement "that creationism, astrology, homeopathy, Kirlian photography, dowsing, ufology, ancient astronaut theory, Holocaust denialism, Velikovskian catastrophism, and climate change denialism are pseudosciences." There are implications for health care, the use of expert testimony, and weighing environmental policies. Addressing pseudoscience is part of science education and developing scientific literacy.
Pseudoscience can have dangerous effects. For example, pseudoscientific anti-vaccine activism and promotion of homeopathic remedies as alternative disease treatments can result in people forgoing important medical treatments with demonstrable health benefits, leading to deaths and ill-health. Furthermore, people who refuse legitimate medical treatments for contagious diseases may put others at risk. Pseudoscientific theories about racial and ethnic classifications have led to racism and genocide.
The term pseudoscience is often considered pejorative, particularly by purveyors of it, because it suggests something is being presented as science inaccurately or even deceptively. Therefore, those practicing or advocating pseudoscience frequently dispute the characterization.
Etymology:
The word pseudoscience is derived from the Greek root pseudo meaning false and the English word science, from the Latin word scientia, meaning "knowledge". Although the term has been in use since at least the late 18th century (e.g., in 1796 by James Pettit Andrews in reference to alchemy), the concept of pseudoscience as distinct from real or proper science seems to have become more widespread during the mid-19th century. Among the earliest uses of "pseudo-science" was in an 1844 article in the Northern Journal of Medicine, issue 387: "That opposite kind of innovation which pronounces what has been recognized as a branch of science, to have been a pseudo-science, composed merely of so-called facts, connected together by misapprehensions under the disguise of principles."
An earlier use of the term was in 1843 by the French physiologist François Magendie, who referred to phrenology as "a pseudo-science of the present day". During the 20th century, the word was used pejoratively to describe explanations of phenomena which were claimed to be scientific, but which were not in fact supported by reliable experimental evidence.
Dismissing the separate issue of intentional fraud – such as the Fox sisters' "rappings" in the 1850s – the pejorative label pseudoscience distinguishes the scientific 'us', at one extreme, from the pseudo-scientific 'them', at the other, and asserts that 'our' beliefs, practices, theories, etc., by contrast with that of 'the others', are scientific. There are four criteria: (a) the 'pseudoscientific' group asserts that its beliefs, practices, theories, etc., are 'scientific'; (b) the 'pseudoscientific' group claims that its allegedly established facts are justified true beliefs; (c) the 'pseudoscientific' group asserts that its 'established facts' have been justified by genuine, rigorous, scientific method; and (d) this assertion is false or deceptive: "it is not simply that subsequent evidence overturns established conclusions, but rather that the conclusions were never warranted in the first place".
From time to time, however, the usage of the word occurred in a more formal, technical manner in response to a perceived threat to individual and institutional security in a social and cultural setting.
Relationship to science:
Pseudoscience is differentiated from science because – although it usually claims to be science – pseudoscience does not adhere to scientific standards, such as the scientific method, falsifiability of claims, and Mertonian norms.
Scientific method:
A number of basic principles are accepted by scientists as standards for determining whether a body of knowledge, method, or practice is scientific. Experimental results should be reproducible and verified by other researchers. These principles are intended to ensure experiments can be reproduced measurably given the same conditions, allowing further investigation to determine whether a hypothesis or theory related to given phenomena is valid and reliable. Standards require the scientific method to be applied throughout, and bias to be controlled for or eliminated through randomization, fair sampling procedures, blinding of studies, and other methods. All gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available for peer review, allowing further experiments or studies to be conducted to confirm or falsify results. Statistical quantification of significance, confidence, and error are also important tools for the scientific method.
Falsifiability:
During the mid-20th century, the philosopher Karl Popper emphasized the criterion of falsifiability to distinguish science from non-science. Statements, hypotheses, or theories have falsifiability or refutability if there is the inherent possibility that they can be proven false, that is, if it is possible to conceive of an observation or an argument that negates them. Popper used astrology and psychoanalysis as examples of pseudoscience and Einstein's theory of relativity as an example of science. He subdivided non-science into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other.
Another example which shows the distinct need for a claim to be falsifiable was stated in Carl Sagan's publication The Demon-Haunted World when he discusses an invisible dragon that he has in his garage. The point is made that there is no physical test to refute the claim of the presence of this dragon. Whatever test one thinks can be devised, there is a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. Sagan concludes: "Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?". He states that "your inability to invalidate my hypothesis is not at all the same thing as proving it true", once again explaining that even if such a claim were true, it would be outside the realm of scientific inquiry.
Mertonian norms:
During 1942, Robert K. Merton identified a set of five "norms" which characterize real science. If any of the norms were violated, Merton considered the enterprise to be non-science. These are not broadly accepted by the scientific community. His norms were:
Originality: The tests and research done must present something new to the scientific community.
Detachment: The scientists' reasons for practicing this science must be simply for the expansion of their knowledge. The scientists should not have personal reasons to expect certain results.
Universality: No person should be able to more easily obtain the information of a test than another person. Social class, religion, ethnicity, or any other personal factors should not be factors in someone's ability to receive or perform a type of science.
Skepticism: Scientific facts must not be based on faith. One should always question every case and argument and constantly check for errors or invalid claims.
Public accessibility: Any scientific knowledge one obtains should be made available to everyone. The results of any research should be published and shared with the scientific community.
Refusal to acknowledge problems:
In 1978, Paul Thagard proposed that pseudoscience is primarily distinguishable from science when it is less progressive than alternative theories over a long period of time, and its proponents fail to acknowledge or address problems with the theory. In 1983, Mario Bunge suggested the categories of "belief fields" and "research fields" to help distinguish between pseudoscience and science, where the former is primarily personal and subjective and the latter involves a certain systematic method. The 2018 book about scientific skepticism by Steven Novella et al., The Skeptics' Guide to the Universe, lists hostility to criticism as one of the major features of pseudoscience.
Criticism of the term:
Larry Laudan has suggested pseudoscience has no scientific meaning and is mostly used to describe human emotions: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudo-science' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us". Likewise, Richard McNally states, "The term 'pseudoscience' has become little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites" and "When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?"
Alternative definition:
For philosophers Silvio Funtowicz and Jerome R. Ravetz, "pseudo-science may be defined as one where the uncertainty of its inputs must be suppressed, lest they render its outputs totally indeterminate". The definition, in the book Uncertainty and Quality in Science for Policy, alludes to the loss of craft skills in handling quantitative information, and to the bad practice of achieving precision in prediction (inference) only at the expense of ignoring uncertainty in the input which was used to formulate the prediction. This use of the term is common among practitioners of post-normal science. Understood in this way, pseudoscience can be fought using good practices to assess uncertainty in quantitative information, such as NUSAP and – in the case of mathematical modelling – sensitivity auditing.
History:
The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to be properly called such.
Distinguishing between proper science and pseudoscience is sometimes difficult. One proposal for demarcation between the two is the falsification criterion, attributed most notably to the philosopher Karl Popper. In the history of science and the history of pseudoscience it can be especially difficult to separate the two, because some sciences developed from pseudosciences. An example of this transformation is the science of chemistry, which traces its origins to the pseudoscientific or pre-scientific study of alchemy.
The vast diversity in pseudosciences further complicates the history of science. Some modern pseudosciences, such as astrology and acupuncture, originated before the scientific era. Others developed as part of an ideology, such as Lysenkoism, or as a response to perceived threats to an ideology. Examples of this ideological process are creation science and intelligent design, which were developed in response to the scientific theory of evolution.
Indicators of possible pseudoscience:
A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.
Use of vague, exaggerated or untestable claims:
Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.
Assertion of a claim with little or no explanatory power.
Failure to make use of operational definitions (i.e., publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can measure or test them independently) (See also: Reproducibility).
Failure to make reasonable use of the principle of parsimony, i.e., failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (See: Occam's razor).
Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.
Lack of effective controls, such as placebo and double-blind, in experimental design.
Lack of understanding of basic and established principles of physics and engineering.
Improper collection of evidence:
Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (See also: Falsifiability).
Assertion of claims that a theory predicts something that it has not been shown to predict. Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g., ignoratio elenchi).
Assertion that claims which have not been proven false must therefore be true, and vice versa (See: Argument from ignorance).
Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e., hypothesis generation), but should not be used in the context of justification (e.g., statistical hypothesis testing).
Use of myths and religious texts as if they were fact, or basing evidence on readings of such texts.
Use of concepts and scenarios from science fiction as if they were fact. This technique appeals to the familiarity that many people already have with science fiction tropes through the popular media.
Presentation of data that seems to support claims while suppressing or refusing to consider data that conflict with those claims. This is an example of selection bias or cherry picking, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect.
Repeating excessive or untested claims that have been previously published elsewhere, and promoting those claims as if they were facts; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect.
Reversed burden of proof: science places the burden of proof on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g., an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.
Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.
Lack of openness to testing by other experts:
Evasion of peer review before publicizing results (termed "science by press conference"): Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.
Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.
Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.
Substantive debate on the evidence by knowledgeable proponents of all viewpoints is not encouraged.
Absence of progress:
Failure to progress towards additional evidence of its claims. Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.
Lack of self-correction: scientific research programmes make mistakes, but they tend to reduce these errors over time. By contrast, ideas may be regarded as pseudoscientific because they have remained unaltered despite contradictory evidence. The work Scientists Confront Velikovsky (1976), from Cornell University, also delves into these features in some detail, as does the work of Thomas Kuhn, e.g., The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.
Statistical significance of supporting experimental results does not improve over time and is usually close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows the experiments have just been repeated until a success occurs due to chance variations (a simple simulation of this mechanism is sketched after this list).
Personalization of issues:
Tight social groups and authoritarian personality, suppression of dissent and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.
Assertion of a conspiracy on the part of the mainstream scientific community to suppress pseudoscientific information.
Attacking the motives, character, morality, or competence of critics (See Ad hominem fallacy).
Use of misleading language:
Creating scientific-sounding terms to persuade non-experts to believe statements that may be false or meaningless: for example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.
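To illustrate the point above about results that stay near the significance cutoff, the sketch below simulates an experiment with no real effect (a fair coin) and simply reruns it until the conventional p < 0.05 threshold is crossed by chance. It is a hypothetical illustration using a normal approximation, not a reproduction of any study discussed here.

```python
import math
import random

def one_experiment(n=100):
    """Flip a fair coin n times and return the two-sided p-value for the null
    hypothesis that the coin is fair (normal approximation to the binomial)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n * 0.5) / math.sqrt(n * 0.25)
    return 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(1)
attempts = 0
while True:
    attempts += 1
    p = one_experiment()
    if p < 0.05:
        break

print(f"'Significant' result after {attempts} attempts, p = {p:.3f}")
# There is no real effect, so roughly 1 in 20 attempts crosses the 0.05 cutoff purely
# by chance; repeating until that happens yields a "finding" that larger or better
# replications will not strengthen, which is the pattern described in the text.
```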
Prevalence of pseudoscientific beliefs:
Countries:
The Ministry of AYUSH in the Government of India is purposed with developing education, research and propagation of indigenous alternative medicine systems in India. The ministry has faced significant criticism for funding systems that lack biological plausibility and are either untested or conclusively proven as ineffective. Quality of research has been poor, and drugs have been launched without any rigorous pharmacological studies and meaningful clinical trials on Ayurveda or other alternative healthcare systems. There is no credible efficacy or scientific basis of any of these forms of treatment.
In his book The Demon-Haunted World, Carl Sagan discusses the government of China and the Chinese Communist Party's concern about Western pseudoscience developments and certain ancient Chinese practices in China. He sees pseudoscience occurring in the United States as part of a worldwide trend and suggests its causes, dangers, diagnosis and treatment may be universal.
A large percentage of the United States population lacks scientific literacy, not adequately understanding scientific principles and method. In the Journal of College Science Teaching, Art Hobson writes, "Pseudoscientific beliefs are surprisingly widespread in our culture even among public school science teachers and newspaper editors, and are closely related to scientific illiteracy." However, a 10,000-student study in the same journal concluded there was no strong correlation between science knowledge and belief in pseudoscience.
During 2006, the U.S. National Science Foundation (NSF) issued an executive summary of a paper on science and engineering which briefly discussed the prevalence of pseudoscience in modern times. It said, "belief in pseudoscience is widespread" and, referencing a Gallup Poll, stated that belief in the 10 commonly believed examples of paranormal phenomena listed in the poll were "pseudoscientific beliefs". The items were "extrasensory perception (ESP), that houses can be haunted, ghosts, telepathy, clairvoyance, astrology, that people can communicate mentally with someone who has died, witches, reincarnation, and channelling". Such beliefs in pseudoscience represent a lack of knowledge of how science works. The scientific community may attempt to communicate information about science out of concern for the public's susceptibility to unproven claims. The NSF stated that pseudoscientific beliefs in the U.S. became more widespread during the 1990s, peaked about 2001, and then decreased slightly since, with pseudoscientific beliefs remaining common. According to the NSF report, there is a lack of knowledge of pseudoscientific issues in society and pseudoscientific practices are commonly followed. Surveys indicate about a third of adult Americans consider astrology to be scientific.
In Russia, in the late 20th and early 21st century, significant budgetary funds were spent on programs for the experimental study of "torsion fields", the extraction of energy from granite, the study of "cold nuclear fusion", and astrological and extrasensory "research" by the Ministry of Defense, the Ministry of Emergency Situations, the Ministry of Internal Affairs, and the State Duma (see Military Unit 10003). In 2006, Deputy Chairman of the Security Council of the Russian Federation Nikolai Spassky published an article in Rossiyskaya Gazeta where, among the priority areas for the development of the Russian energy sector, the task of extracting energy from a vacuum was listed first.
The Clean Water project was adopted as a United Russia party project; in the version submitted to the government, the program budget for 2010–2017 exceeded $14 billion.
Racism:
There have been many connections between pseudoscientific writers and researchers and their anti-semitic, racist and neo-Nazi backgrounds. They often use pseudoscience to reinforce their beliefs. One of the most predominant pseudoscientific writers is Frank Collin, a self-proclaimed Nazi who goes by Frank Joseph in his writings. The majority of his works include the topics of Atlantis, extraterrestrial encounters, and Lemuria as well as other ancient civilizations, often with white supremacist undertones. For example, he posited that European peoples migrated to North America before Columbus, and that all Native American civilizations were initiated by descendants of white people.
The Alt-Right using pseudoscience to base their ideologies on is not a new issue. The entire foundation of anti-semitism is based on pseudoscience, or scientific racism. In an article from Newsweek by Sander Gilman, Gilman describes the pseudoscience community's anti-semitic views: "Jews as they appear in this world of pseudoscience are an invented group of ill, stupid or stupidly smart people who use science to their own nefarious ends. Other groups, too, are painted similarly in 'race science', as it used to call itself: African-Americans, the Irish, the Chinese and, well, any and all groups that you want to prove inferior to yourself". Neo-Nazis and white supremacists often try to support their claims with studies that "prove" that their claims are more than just harmful stereotypes. For example, Bret Stephens published a column in The New York Times where he claimed that Ashkenazi Jews had the highest IQ among any ethnic group. However, the scientific methodology and conclusions reached by the article Stephens cited have been called into question repeatedly since its publication. It has been found that at least one of that study's authors has been identified by the Southern Poverty Law Center as a white nationalist.
The journal Nature has published a number of editorials in the last few years warning researchers about extremists looking to abuse their work, particularly population geneticists and those working with ancient DNA. One article in Nature, titled "Racism in Science: The Taint That Lingers", notes that early-twentieth-century eugenic pseudoscience has been used to influence public policy, such as the Immigration Act of 1924 in the United States, which sought to prevent immigration from Asia and parts of Europe. Research has repeatedly shown that race is not a scientifically valid concept, yet some scientists continue to look for measurable biological differences between 'races'.
Explanations:
In a 1981 report, Singer and Benassi wrote that pseudoscientific beliefs have their origin in at least four sources:
Common cognitive errors from personal experience.
Erroneous sensationalistic mass media coverage.
Sociocultural factors.
Poor or erroneous science education.
A 1990 study by Eve and Dunn supported the findings of Singer and Benassi and found pseudoscientific belief being promoted by high school life science and biology teachers.
Psychology:
The psychology of pseudoscience attempts to explore and analyze pseudoscientific thinking by thoroughly clarifying the distinction between what is considered scientific and what is pseudoscientific. The human proclivity for seeking confirmation rather than refutation (confirmation bias), the tendency to hold comforting beliefs, and the tendency to overgeneralize have been proposed as reasons for pseudoscientific thinking. According to Beyerstein, humans are prone to associations based on resemblances only, and often prone to misattribution in cause-effect thinking.
Michael Shermer's theory of belief-dependent realism is driven by the belief that the brain is essentially a "belief engine" which scans data perceived by the senses and looks for patterns and meaning. There is also the tendency for the brain to create cognitive biases, as a result of inferences and assumptions made without logic and based on instinct – usually resulting in patterns in cognition. These tendencies of patternicity and agenticity are also driven "by a meta-bias called the bias blind spot, or the tendency to recognize the power of cognitive biases in other people but to be blind to their influence on our own beliefs".
Explanations:
Lindeman states that social motives (i.e., "to comprehend self and the world, to have a sense of control over outcomes, to belong, to find the world benevolent and to maintain one's self-esteem") are often "more easily" fulfilled by pseudoscience than by scientific information. Furthermore, pseudoscientific explanations are generally not analyzed rationally, but instead experientially. Operating within a different set of rules compared to rational thinking, experiential thinking regards an explanation as valid if the explanation is "personally functional, satisfying and sufficient", offering a description of the world that may be more personal than can be provided by science and reducing the amount of potential work involved in understanding complex events and outcomes. Anyone searching for psychological help that is based in science should seek a licensed therapist whose techniques are not based in pseudoscience. Hupp and Santa Maria provide a complete explanation of what that person should look for.
Explanations:
Education and scientific literacy There is a trend toward believing in pseudoscience over scientific evidence. Some people believe the prevalence of pseudoscientific beliefs is due to widespread scientific illiteracy. Individuals lacking scientific literacy are more susceptible to wishful thinking, since they are likely to turn to immediate gratification powered by System 1, our default operating system, which requires little to no effort. This system encourages one to accept the conclusions one believes and reject the ones one does not. Further analysis of complex pseudoscientific phenomena requires System 2, which follows rules, compares objects along multiple dimensions and weighs options. These two systems have several other differences which are further discussed in the dual-process theory. The scientific and secular systems of morality and meaning are generally unsatisfying to most people. Humans are, by nature, a forward-minded species pursuing greater avenues of happiness and satisfaction, but we are all too frequently willing to grasp at unrealistic promises of a better life. Psychology has much to say about pseudoscientific thinking, as it is the illusory perceptions of causality and effectiveness held by numerous individuals that need to be illuminated. Research suggests that illusory thinking happens in most people when they are exposed to certain circumstances; sources such as a book, an advertisement or the testimony of others can form the basis of pseudoscientific beliefs. It is assumed that illusions are not unusual, and given the right conditions, illusions can occur systematically even in normal emotional situations. One of the things pseudoscience believers complain about most is that academic science usually treats them as fools. Minimizing these illusions in the real world is not simple. To this aim, designing evidence-based educational programs can be effective in helping people identify and reduce their own illusions.
Boundaries with science:
Classification Philosophers classify types of knowledge. In English, the word science is used to indicate specifically the natural sciences and related fields such as the social sciences. Different philosophers of science may disagree on the exact limits – for example, is mathematics a formal science that is closer to the empirical ones, or is pure mathematics closer to the philosophical study of logic and therefore not a science? – but all agree that ideas which are not scientific are non-scientific. The large category of non-science includes all matters outside the natural and social sciences, such as the study of history, metaphysics, religion, art, and the humanities. Dividing the category again, unscientific claims are a subset of the large category of non-scientific claims. This category specifically includes all matters that are directly opposed to good science. Un-science includes both "bad science" (such as an error made in a good-faith attempt at learning something about the natural world) and pseudoscience. Thus pseudoscience is a subset of un-science, and un-science, in turn, is a subset of non-science.
Boundaries with science:
Science is also distinguishable from revelation, theology, or spirituality in that it offers insight into the physical world obtained by empirical research and testing. The most notable disputes concern the evolution of living organisms, the idea of common descent, the geologic history of the Earth, the formation of the solar system, and the origin of the universe. Systems of belief that derive from divine or inspired knowledge are not considered pseudoscience if they do not claim either to be scientific or to overturn well-established science. Moreover, some specific religious claims, such as the power of intercessory prayer to heal the sick, although they may be based on untestable beliefs, can be tested by the scientific method.
Boundaries with science:
Some statements and common beliefs of popular science may not meet the criteria of science. "Pop" science may blur the divide between science and pseudoscience among the general public, and may also involve science fiction. Indeed, pop science is disseminated to, and can also easily emanate from, persons not accountable to scientific methodology and expert peer review.
Boundaries with science:
If claims of a given field can be tested experimentally and standards are upheld, it is not pseudoscience, regardless of how odd, astonishing, or counterintuitive those claims are. If claims made are inconsistent with existing experimental results or established theory, but the method is sound, caution should be used, since science consists of testing hypotheses which may turn out to be false. In such a case, the work may be better described as ideas that are "not yet generally accepted". Protoscience is a term sometimes used to describe a hypothesis that has not yet been tested adequately by the scientific method, but which is otherwise consistent with existing science or which, where inconsistent, offers a reasonable account of the inconsistency. It may also describe the transition from a body of practical knowledge into a scientific field.
Boundaries with science:
Philosophy Karl Popper stated it is insufficient to distinguish science from pseudoscience, or from metaphysics (such as the philosophical question of what existence means), by the criterion of rigorous adherence to the empirical method, which is essentially inductive, based on observation or experimentation. He proposed a method to distinguish between genuine empirical, nonempirical or even pseudoempirical methods. The latter case was exemplified by astrology, which appeals to observation and experimentation. While it had empirical evidence based on observation, on horoscopes and biographies, it crucially failed to use acceptable scientific standards. Popper proposed falsifiability as an important criterion in distinguishing science from pseudoscience.
Boundaries with science:
To demonstrate this point, Popper gave two cases of human behavior and typical explanations from Sigmund Freud and Alfred Adler's theories: "that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child." From Freud's perspective, the first man would have suffered from psychological repression, probably originating from an Oedipus complex, whereas the second man had attained sublimation. From Adler's perspective, both men suffered from feelings of inferiority and had to prove themselves, which drove the first to commit the crime and the second to rescue the child. Popper was not able to find any counterexamples of human behavior in which the behavior could not be explained in the terms of Adler's or Freud's theory. Popper argued that the fact that the observation always fitted or confirmed the theory, rather than being its strength, was actually its weakness. In contrast, Popper gave the example of Einstein's gravitational theory, which predicted "light must be attracted by heavy bodies (such as the Sun), precisely as material bodies were attracted." Following from this, stars closer to the Sun would appear to have moved a small distance away from the Sun, and away from each other. This prediction was particularly striking to Popper because it involved considerable risk. The brightness of the Sun prevented this effect from being observed under normal circumstances, so photographs had to be taken during an eclipse and compared to photographs taken at night. Popper states, "If observation shows that the predicted effect is definitely absent, then the theory is simply refuted." Popper summed up his criterion for the scientific status of a theory as depending on its falsifiability, refutability, or testability.
Boundaries with science:
Paul R. Thagard used astrology as a case study to distinguish science from pseudoscience and proposed principles and criteria to delineate them. First, astrology has not progressed in that it has not been updated nor has it added any explanatory power since Ptolemy. Second, it has ignored outstanding problems such as the precession of equinoxes in astronomy. Third, alternative theories of personality and behavior have grown progressively to encompass explanations of phenomena which astrology statically attributes to heavenly forces. Fourth, astrologers have remained uninterested in furthering the theory to deal with outstanding problems or in critically evaluating the theory in relation to other theories. Thagard intended this criterion to be extended to areas other than astrology. He believed it would delineate as pseudoscientific such practices as witchcraft and pyramidology, while leaving physics, chemistry, astronomy, geoscience, biology, and archaeology in the realm of science. In the philosophy and history of science, Imre Lakatos stresses the social and political importance of the demarcation problem, the normative methodological problem of distinguishing between science and pseudoscience. His distinctive historical analysis of scientific methodology based on research programmes suggests: "scientists regard the successful theoretical prediction of stunning novel facts – such as the return of Halley's comet or the gravitational bending of light rays – as what demarcates good scientific theories from pseudo-scientific and degenerate theories, and in spite of all scientific theories being forever confronted by 'an ocean of counterexamples'". Lakatos offers a "novel fallibilist analysis of the development of Newton's celestial dynamics, [his] favourite historical example of his methodology" and argues, in light of this historical turn, that his account answers for certain inadequacies in those of Karl Popper and Thomas Kuhn. "Nonetheless, Lakatos did recognize the force of Kuhn's historical criticism of Popper – all important theories have been surrounded by an 'ocean of anomalies', which on a falsificationist view would require the rejection of the theory outright...Lakatos sought to reconcile the rationalism of Popperian falsificationism with what seemed to be its own refutation by history".
Boundaries with science:
Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton's is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one's most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime.
Boundaries with science:
Thus a statement may be pseudoscientific even if it is eminently 'plausible' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. A theory may even be of supreme scientific value even if no one understands it, let alone believes in it.
Boundaries with science:
The boundary between science and pseudoscience is disputed and difficult to determine analytically, even after more than a century of study by philosophers of science and scientists, and despite some basic agreements on the fundamentals of the scientific method. The concept of pseudoscience rests on an understanding that the scientific method has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. According to Lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but "a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence".
Boundaries with science:
To Popper, pseudoscience uses induction to generate theories, and only performs experiments to seek to verify them. To Popper, falsifiability is what determines the scientific status of a theory. Taking a historical approach, Kuhn observed that scientists did not follow Popper's rule, and might ignore falsifying data, unless overwhelming. To Kuhn, puzzle-solving within a paradigm is science. Lakatos attempted to resolve this debate, by suggesting history shows that science occurs in research programmes, competing according to how progressive they are. The leading idea of a programme could evolve, driven by its heuristic to make predictions that can be supported by evidence. Feyerabend claimed that Lakatos was selective in his examples, and the whole history of science shows there is no universal rule of scientific method, and imposing one on the scientific community impedes progress. Laudan maintained that the demarcation between science and non-science was a pseudo-problem, preferring to focus on the more general distinction between reliable and unreliable knowledge. [Feyerabend] regards Lakatos's view as being closet anarchism disguised as methodological rationalism. Feyerabend's claim was not that standard methodological rules should never be obeyed, but rather that sometimes progress is made by abandoning them. In the absence of a generally accepted rule, there is a need for alternative methods of persuasion. According to Feyerabend, Galileo employed stylistic and rhetorical techniques to convince his reader, while he also wrote in Italian rather than Latin and directed his arguments to those already temperamentally inclined to accept them.
Politics, health, and education:
Political implications The demarcation problem between science and pseudoscience brings up debate in the realms of science, philosophy and politics. Imre Lakatos, for instance, points out that the Communist Party of the Soviet Union at one point declared that Mendelian genetics was pseudoscientific and had its advocates, including well-established scientists such as Nikolai Vavilov, sent to a Gulag, and that the "liberal Establishment of the West" denies freedom of speech to topics it regards as pseudoscience, particularly where they run up against social mores. Something becomes pseudoscientific when science cannot be separated from ideology, when scientists misrepresent scientific findings to promote themselves or draw publicity, when politicians, journalists and a nation's intellectual elite distort the facts of science for short-term political gain, or when powerful individuals of the public conflate causation and cofactors by clever wordplay. These ideas reduce the authority, value, integrity and independence of science in society.
Politics, health, and education:
Health and education implications Distinguishing science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Treatments with a patina of scientific authority which have not been subjected to actual scientific testing may be ineffective, expensive and dangerous to patients, and may confuse health providers, insurers, government decision makers and the public as to what treatments are appropriate. Claims advanced by pseudoscience may result in government officials and educators making bad decisions in selecting curricula. The extent to which students acquire a range of social and cognitive thinking skills related to the proper usage of science and technology determines whether they are scientifically literate. Education in the sciences encounters new dimensions with the changing landscape of science and technology, a fast-changing culture and a knowledge-driven era. A reinvented school science curriculum is one that prepares students to contend with science's changing influence on human welfare. Scientific literacy, which allows a person to distinguish science from pseudosciences such as astrology, is among the attributes that enable students to adapt to the changing world. Its characteristics are embedded in a curriculum where students are engaged in resolving problems, conducting investigations, or developing projects. Alan J. Friedman mentions why most scientists avoid educating about pseudoscience, including that paying undue attention to pseudoscience could dignify it. On the other hand, Robert L. Park emphasizes how pseudoscience can be a threat to society and considers that scientists have a responsibility to teach how to distinguish science from pseudoscience. Pseudosciences such as homeopathy, even if generally benign, are used by charlatans. This poses a serious issue because it enables incompetent practitioners to administer health care. True-believing zealots may pose a more serious threat than typical con men because of their devotion to homeopathy's ideology. Irrational health care is not harmless, and it is careless to create patient confidence in pseudomedicine. On 8 December 2016, journalist Michael V. LeVine pointed out the dangers posed by the Natural News website: "Snake-oil salesmen have pushed false cures since the dawn of medicine, and now websites like Natural News flood social media with dangerous anti-pharmaceutical, anti-vaccination and anti-GMO pseudoscience that puts millions at risk of contracting preventable illnesses." The anti-vaccine movement has persuaded large numbers of parents not to vaccinate their children, citing pseudoscientific research that links childhood vaccines with the onset of autism. These include the study by Andrew Wakefield, which claimed that a combination of gastrointestinal disease and developmental regression, which are often seen in children with ASD, occurred within two weeks of receiving vaccines. The study was eventually retracted by its publisher, and Wakefield was stripped of his license to practice medicine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Body image**
Body image:
Body image is a person's thoughts, feelings and perception of the aesthetics or sexual attractiveness of their own body. The concept of body image is used in a number of disciplines, including neuroscience, psychology, medicine, psychiatry, psychoanalysis, philosophy, cultural and feminist studies; the media also often uses the term. Across these disciplines, there is no single consensus definition, but broadly speaking body image consists of the ways people view themselves; their memories, experiences, assumptions, and comparisons about their own appearances; and their overall attitudes towards their own respective heights, shapes, and weights—all of which are shaped by prevalent social and cultural ideals.
Body image:
Body image can be negative ("body negativity") or positive ("body positivity"). A person with a negative body image may feel self-conscious or ashamed, and may feel that others are more attractive. At a time when social media holds a very important place and is used frequently in daily life, people of different ages are affected emotionally and mentally by the appearance and body size/shape ideals set by the society they live in. These standards, created and changed by society, have fostered a culture of body shaming: the act of humiliating an individual by mocking or making critical comments about their physical appearance. There is a difference between body shaming someone else and body shaming yourself; according to anad.org, "We are our own worst critic", meaning that we judge and see our own flaws more than anyone else does, and we body shame ourselves by judging or comparing ourselves to someone else. Aside from having low self-esteem, sufferers typically fixate on altering their physical appearances. Such behavior creates body dissatisfaction and higher risks of eating disorders, isolation, and mental illnesses in the long term. In eating disorders, a negative body image may also lead to body image disturbance, an altered perception of one's body as a whole. Body dissatisfaction also characterizes body dysmorphic disorder, an obsessive-compulsive disorder defined by concerns about some specific aspect of one's body (usually face, skin or hair), which the sufferer perceives as severely flawed and as warranting exceptional measures to hide or fix. Often, people who have a low body image will try to alter their bodies in some way, such as by dieting or by undergoing cosmetic surgery. On the other hand, positive body image consists of perceiving one's figure clearly and correctly, celebrating and appreciating one's body, and understanding that one's appearance does not reflect one's character or worth. Many factors contribute to a person's body image, including family dynamics, mental illness, biological predispositions and environmental causes for obesity or malnutrition, and cultural expectations (e.g., media and politics). People who are either underweight or overweight can have poor body image. However, when people are constantly told and shown the cosmetic appeal of weight loss and are warned about the risks of obesity, those who are normal or overweight on the BMI scale have higher risks of poor body image. "We expected women would feel worse about their bodies after seeing ultra-thin models, compared to no models if they have internalized the thin ideal, thus replicating previous findings." A 2007 report by the American Psychological Association found that a culture-wide sexualization of girls and women was contributing to increased female anxiety associated with body image. An Australian government Senate Standing Committee report on the sexualization of children in the media reported similar findings associated with body image. However, other scholars have expressed concern that these claims are not based on solid data.
History:
In Ancient Egypt, the perfect woman was said to have a slender figure, with narrow shoulders and a high waist. Ancient Greece focused more on the male figure, but its female ideal was full-figured and plump with fair skin tones. Han Dynasty China primarily emphasized the face, skin, and hair. Ideal traits included clear, white skin; long, dark hair; and, for men, a stalwart frame. Any feature that implied social status or wealth was ideal. During the Italian Renaissance, a wife's appearance was taken as an important reflection of her husband's status; size was linked to wealth, and so the ideal wife was held to have full hips and an ample bosom. The Victorian Era witnessed a similar movement, but the popularity of the waist-cinching corset led to the desirability of the hourglass figure. In the 1900s, U.S. fashion and media industries celebrated the Gibson Girl: slim and tall, with a large bust and wide hips, but a narrow waist. These girls were also often shown in magazines such as Harper's Bazaar and LIFE, which resulted in a link between trendy fashions and styles and the maintenance of active lifestyles and healthy well-being. After World War I, the Gibson Girl transformed into the Flapper, an ideal type which dominated the period of the "Roaring Twenties". Women transitioned towards androgynous looks, in which hair styles were kept short, and brassieres were worn to flatten the chest. Loose clothing was also a trend, as it downplayed the waist by lowering it below the navel, resulting in a straight boyish figure. Dress sense became more casual as well, perhaps reflecting a postwar relaxation of social and political tension, and a reaction against the matronly image of the women behind the alcohol Prohibition movement in the USA. With advertisements increasingly advocating the need to achieve a thinner frame, many women therefore pursued diets and exercise. Although slimmer body types were favoured, a sporty and healthy appearance was still prized above the frail and sickly look from the Victorian Era.
History:
Austrian neurologist and psychoanalyst Paul Schilder coined the phrase 'body-image' in his book The Image and Appearance of the Human Body (1935). The 1930s and 1940s witnessed the devastating effects of the Second World War. While men were out on the battlefield, women began entering the workforce. This resulted in more formal and traditional military dress styles for women, which caused another shift in body image. While waists remained thin but prominent, the media embraced a more curvaceous look similar to the hourglass figure, through the addition of broad shoulders and large breasts as well. Since this era was part of the golden age of Hollywood, many celebrities continued to influence this trend by wearing tight-fitting clothing that emphasized their figures. Pin-up girls and sex symbols radiating glamour soon followed in the 1950s, and the proportions of the hourglass figure expanded. Notable names include Marilyn Monroe and Sophia Loren. In order to achieve this ideal figure, women consumed weight-gain supplements. The release of Playboy magazine and the Barbie doll during this era reinforced this ideal. Depictions of the perfect woman slowly grew thinner from the 1960s onward. The "Swinging Sixties" saw a similar look to the Flapper with the emergence of high-fashion model Twiggy, who promoted the thin and petite frame, with long slender legs and an adolescent but androgynous figure. Other characteristics included small busts, narrow hips, and flat stomachs. Many women either underwent diets or switched to weight-loss supplements to achieve the new look. This eventually resulted in an increase in anorexia nervosa during the 1970s. Greater importance was soon placed on fitness. Actress Farrah Fawcett introduced a more toned and athletic body type. The exercise craze continued in the 1980s with Jane Fonda and the release of workout videos, motivating women to be thin but fit and svelte. This era also saw the rise of tall, long-legged supermodels, such as Naomi Campbell and Cindy Crawford, who set a new beauty standard for women around the world. In the 1990s, supermodel Kate Moss popularized the stick-thin figure instead. The fashion industry pushed her image further with the heroin chic look, which dominated the catwalks during that time: waifish appearances, bony structures, thin limbs, and an androgynous figure. Although this extreme period was short-lived, the 2000s saw the rise of the Victoria's Secret models, who altered beauty norms to include slim but healthy figures, with large breasts and bottoms, flat and visible abs, and prominent thigh gaps. More women pursued cosmetic surgery practices, on top of diets and exercise regimes, to attain the perfect appearance. As of 2017, efforts to promote a better body image have included an emphasis on fitness and plus-size models in the fashion and media industries. However, the advancement of technologies and pressures from the media have led to even greater importance being placed on the way we look as an indication of our personal value.
History:
Advancements in communication technology have resulted in a "platform of delivery in which we intercept and interpret messages about ourselves, our self-worth, and our bodies." Social media in particular has reshaped the "perfect body". Apps and filters retouch images to make people look beautiful, often with inconsistent ideals for hair, body type, and skin tone. According to a study by Dove, only 4% of women thought they were beautiful, while approximately 70% of women and girls in the UK believed the media's portrayal of impractical beauty standards fueled their appearance anxieties. As a result, the U.S. Department of Health and Human Services reported that 91% of women were mostly unhappy with their bodies, while 40% would consider cosmetic surgery to fix their flaws.
Demographics:
Women "Social currency for girls and women continues to be rooted in physical appearance". Women "all over the world are evaluated and oppressed by their appearances", including their ages, skin tones, or sizes.
Demographics:
Many advertisements promote insecurities in their audiences in order to sell them solutions, and so may present retouched images, sexual objectification, and explicit messages that promote "unrealistic images of beauty" and undermine body image, particularly in female audiences. Body dissatisfaction creates negative attitudes, a damaging mentality, and negative habits in young women. The emphasis on an ideal female body shape and size is especially psychologically detrimental to young women, who may resort to grooming, dieting, and surgery in order to be happy. A negative body image is very common among young adult women. "The prevalence of eating disorder development among college females is especially high, with rates up to 24% among college students." Body dissatisfaction in girls is associated with increased rates of smoking and a decrease in comfort with sexuality when they are older, which may lead them to consider cosmetic surgery. Global rates of eating disorders such as anorexia and bulimia are gradually rising in adolescent girls. The National Eating Disorders Association reported that 95% of individuals who suffer from an eating disorder are aged 12 to 26, and anorexia is the third-most-common illness among teenagers. Teenage girls are most prone "to internalize negative messages and obsess about weight loss to obtain a thin appearance". The pressure on women and girls "to cope with the effects of culturally induced body insecurity" is severe, with many reporting that "their lives would be better if they were not judged by their looks and body shape, [as] this is leading to low self-esteem, eating disorders, mental health problems and depression." "Cultural messages about beauty (i.e. what it is, how it should be cultivated, and how it will be rewarded) are often implicitly conveyed through media representations of women." Women who compare themselves to images in the media believe they are more overweight than they actually are. One reason for this is that "idealised media images are routinely subjected to computer manipulation techniques, such as airbrushing (e.g. slimming thighs and increasing muscle tone). The resulting images present an unobtainable 'aesthetic perfection' that has no basis in biological reality." However, other researchers have contested the claims of the media effects paradigm. An article by Christopher Ferguson, Benjamin Winegard, and Bo Winegard, for example, argues that peer effects are much more likely to cause body dissatisfaction than media effects, and that media effects have been overemphasized. It also argues that one must be careful about making the leap from arguing that certain environmental conditions might cause body dissatisfaction to the claim that those conditions can cause diagnosable eating disorders.
Demographics:
When female undergraduates were exposed to depictions of thin women, their body satisfaction decreased; when they were exposed to larger models, it rose. Many women engage in "fat talk" (speaking negatively about the weight-related size/shape of one's body), a behavior that has been associated with weight dissatisfaction, body surveillance, and body shame. Women who overhear others using fat talk may also experience an increase in body dissatisfaction and guilt. Monteath and McCabe found that 44% of women express negative feelings about both individual body parts and their bodies as a whole. 37.7% of young American males and 51% of young American females express dissatisfaction with their bodies. In America, the dieting industry earns roughly 40 billion dollars per year. A Harvard study (Fat Talk, Harvard University Press) published in 2000 revealed that 86% of teenage girls are on a diet or believe they should be on one. Dieting has become common even among very young children: 51% of 9- and 10-year-old girls feel better about themselves when they are on diets.
Demographics:
Men Similarly, media depictions idealizing a muscular physique have led to body dissatisfaction among young men. As many as 45% of teenage boys may suffer from Body Dysmorphic Disorder (BDD), a mental illness whereby an individual compulsively focuses on self-perceived bodily flaws. Men may also suffer from muscle dysmorphia and may incessantly pursue muscularity without ever becoming fully satisfied with their physiques. Research shows that the greatest impact on men's criticism of their bodies comes from their male peers, including like-minded individuals or people they admire who are around the same age, as opposed to romantic partners, female peers, or male relatives like fathers or brothers. 18% of adolescent males were most worried about their weights and physiques (Malcore, 2016); 29% frequently thought about their appearances; 50% had recently complained about the way they looked. 25% of males report having been teased about their weight, while 33% specify social media as the source for self-consciousness. Following celebrities on social media sites makes it possible to interact personally with celebrities, which has been shown to influence male body image. A number of respondents also admitted to being affected by negative body talk from others.
Demographics:
The ideal male body is perceived to feature a narrow waist and hips, broad shoulders, a well-developed upper body, [and] toned "six-pack" abs. The figure may be traced back to an idealized male doll, G.I. Joe. The "bulked-up action heroes, along with the brawny characters in many video games, present an anatomically impossible ideal for boys, much as Barbie promotes proportions that are physically impossible for girls." Boys who are exposed to depictions of muscular warriors who solve problems with their fists may internalize the lesson that aggression and muscles are essential to masculinity. 53% of boys cited advertisements as a "major source of pressure to look good; [though] social media (57%) and friends (68%) exerted more influence, while celebrities (49%) were slightly less persuasive". In spite of this, 22% of adolescent boys thought that the ideals depicted by the media were aspirational, while 33% called them healthy. Some studies have reported a higher incidence of body dissatisfaction among Korean boys and girls than among boys and girls living in the United States, while noting that these studies fail to control for the slimmer and smaller size of Koreans as compared with Westerners. A cross-cultural analysis of the United States and South Korea focusing on social media found that Korean men are more concerned than American men with their body image in relation to their social media use. Many teenage boys participate in extreme workouts and weight training, and may abuse supplements and steroids to further increase muscle mass. In 2016, 10.5% acknowledged the use of muscle-enhancing substances, while 5 to 6% of respondents admitted to the use of steroids. Although dieting is often overlooked, a significant increase in eating disorders is present among men. Currently, 1 out of 4 men suffer from eating disorders, while 31% have admitted to purging or binge eating in the past. Men often desire up to 26 pounds of additional muscle mass. Men who endorse traditional masculine ideas are more likely to desire additional muscle. The connection between masculinity and muscle can be traced to Classical antiquity. Men with lower, more feminine waist–hip ratios (WHR) feel less comfortable and self-report lower body esteem and self-efficacy than men with higher, more masculine, WHRs.
Demographics:
Gender differences Although body dissatisfaction is more common in women, men are becoming increasingly negatively affected. In a longitudinal study that assessed body image across time and age between men and women, men placed greater significance on their physical appearances than women, even though women reported body image dissatisfaction more often. The difference was strongest among adolescents. One theory to explain the discrepancy is that women have already become accustomed and desensitized to media scrutiny. Studies suggest that the significance placed upon body image improved among women as they got older; men in comparison showed little variation in their attitude. Another suggested that "relative to men, women are considerably more psychologically aware of their appearances. Moreover, women's greater concern over body image has a greater impact on their daily lives." As men and women reach older age, body image takes on a different meaning. Research studies show that the importance attached to physical appearance decreases with age.
Demographics:
Weight The desire to lose weight is highly correlated with poor body image. Kashubeck-West et al. reported that when considering only men and women who desire to lose weight, sex differences in body image disappear. In her book The Beauty Myth, Naomi Wolf reported that "thirty-three thousand women told American researchers they would rather lose ten to fifteen pounds than achieve any other goal." Through repeated images of excessively thin women in media, advertisement, and modeling, thinness has become associated with not only beauty, but happiness and success. As Charisse Goodman put it in her article "One Picture is Worth a Thousand Diets", advertisements have changed society's ideas of beauty and ugliness: "Indeed to judge by the phrasing of the ads, 'slender' and 'attractive' are one word, not two in the same fashion as 'fat' and 'ugly.'" Research by Martin and Xavier (2010) shows that people feel more pressure from society to be thin after viewing ads featuring a slim model. Ads featuring a larger sized model resulted in less pressure to be thin. People also felt their actual body sizes were larger after viewing a slim model as compared to a larger model. Many, like journalist Marisa Meltzer, have argued that this contemporary standard of beauty can be described as anorexic thinness, an unhealthy ideal that is not representative of a natural human body: "Never before has the 'perfect' body been at such odds with our true size." However, these figures do not distinguish between people at a low or healthy weight whose self-perception of being overweight is incorrect and those who are in fact overweight and whose perception is correct.
Demographics:
Post-1997 studies indicate that around 64% of American adults are overweight, such that if the 56%/40% female/male dissatisfaction rates in the Psychology Today study have held steady since its release, those dissatisfaction rates are if anything disproportionately low: although some individuals continue to believe themselves to be overweight when they are not, those persons are now outnumbered by persons who might be expected to be dissatisfied with their bodies but are not.
Demographics:
In turn, although social pressure to lose weight has adverse effects on some individuals who do not need to lose weight, those adverse effects are arguably outweighed by social pressure's positive effect on the overall population, without which the recent increases in obesity and associated health and social problems (described in both popular and academic parlance as an "obesity epidemic") would be even more severe than they already are. Overweight children experience not only discrimination but overall body dissatisfaction, low self-esteem, social isolation and depression. Because of the negative stigma, the child may suffer severely from emotional and physical ailments that could persist past childhood into adulthood.
Demographics:
Race The association of light skin with moral virtue dates back at least to the medieval era, and was reinforced during the Atlantic slave trade. The medieval theory that all races had originated from the white race was an early source of the longstanding association of white bodies and beauty ideals with "normality," and other racial phenotypes as aberrant. The 1960s Black is Beautiful movement explicitly attempted to end that mindset.
Demographics:
A lack of black women in the fashion industry contributes to body image issues among African-American women. However, a 2003 experiment presented 3 photographs of attractive white, black and Asian women to white, black and Asian students. The study concluded that Asian women and white women both reported similar levels of body dissatisfaction, while the black women were less dissatisfied with their own appearances. These findings are consistent with previous research showing that black women generally have higher self-esteem than white or Asian women in America. One study found that, among women, East Asian women are more satisfied with their bodies than white women. East Asian men, however, reported more body dissatisfaction than white males did. Western men desire as much as 30 pounds more muscle mass than do Asian men.
Demographics:
Sexuality There is no scientific consensus on how a person's sexuality affects their body image. For example, a 2013 study found that lesbian-identifying women reported less body dissatisfaction than did heterosexual women. In contrast, a 2015 study found no differences in weight satisfaction between heterosexual and lesbian and bisexual women, and no differences in the amount of pressure to be thin they experienced from the media, sexual partners, friends or family. This research did find that heterosexual women were more likely to have internalised the thin ideal (accepted the Western concept that thinness equals attractiveness) than lesbian and bisexual women. Lesbian and bisexual women have said that while they are often critical of mainstream body size/shape ideals, these are still the ideals that they feel social pressure to conform to. In a study conducted in 2017, Henrichs-Beck and Szymanski claimed that how lesbians define their gender within lesbian culture may dictate whether or not they are dissatisfied with their bodies. They suggested that lesbians who identified as more stereotypically 'feminine' were at greater risk of body dissatisfaction, while those who identified as more 'butch' were traditionally more satisfied with their bodies. Qualitative research with non-heterosexual women found that female sexual/romantic partners were a source of both body confidence and concerns. These women reported that while they compared their body size and shape to that of their partner, and could feel more self-conscious if their partner was slimmer than them, their attraction to women who did not conform to the narrow Western definition of ‘beauty’ gave them confidence in their own appearance. A 2005 study found that gay men were more likely than straight men to have body image dissatisfaction, diet more, and were more fearful of becoming fat. There is some evidence linking the sexual objectification of gay males and heterosexual females by men in general to the increased rates of eating disorders and stimulant addictions in these groups. Bisexual people have historically been overlooked within body image research, either subsumed under gay/lesbian labels or ignored completely.
Causes:
The fashion industry Fashion industry insiders argue that clothes hang better on tall, thin catwalk models, but critics respond that an overemphasis on that body type communicates an unhealthy and unrealistic body image to the public. Fashion magazines directed at females subtly promote thinness and diet practices, and teenagers heavily rely on them for beauty and fashion advice. Seventeen in particular recorded one of the highest numbers of articles devoted to appearances; 69% of girls reported that it had influenced their ideal body shapes. 50% of the advertisements it featured also used beauty appeal to sell products. The U.S. Department of Health and Human Services reported that 90% of teenage girls felt a need to change their appearances, and that 81% of 10-year-olds were already afraid of being fat. According to a survey by the Manchester Metropolitan University, "self-esteem and views of body image suffered after the participants were shown magazine pictures of models, [suggesting] that media portrayal of images can prolong anorexia and bulimia in women and may even be a cause of it". A 2014 survey of 13- to 17-year-old Americans found that 90% "felt pressured by fashion and media industries to be skinny", and that 65% believed that the bodies portrayed were too thin. More than 60% habitually compared themselves to models, and 46% strove to resemble models' bodies. According to Dove's Global Beauty And Confidence report, "a total of 71% of women and 67% of girls want to call on the media to do a better job portraying women of diverse physical appearance, age, race, shape and size." In addition, 67% of men now strongly believe that it is unacceptable for brands to use photo manipulation techniques to alter the body image of a model. In response, the fashion magazine industry has made efforts to include 'real' women, and to reduce or ban the use of airbrushing tools. Likewise, fashion brands and retailers adopt vanity sizing in their assortments to intentionally raise a customer's self-esteem while shopping in stores. This involves labeling clothes with smaller sizes than the actual cut of the items, to trick and attract the consumer.
Causes:
Fashion models themselves have experienced negative body image due to industry pressures: 69% were told to tone up, while 62% reported that their agencies had required them to lose weight or change their body shapes. 54% of models revealed that they would be dropped by their agencies if they failed to comply. Models frequently have an underweight body mass index (BMI): a study published in the International Journal of Eating Disorders discovered that a majority of models had a BMI of 17.41, which qualifies as anorexia. In the past twenty years, runway models have also gone from a typical size 6–8 to 0–2. The average weight of an American model was recorded to be twenty-three percent less than that of an average American woman. In 2006, the fashion industry came under fire due to the untimely deaths of two models, Luisel Ramos and Ana Carolina Reston, both of whom had suffered from eating disorders and been severely underweight. Other models endure intensive exercise regimes, diets, fasts, and detoxes in order to maintain or lose weight. In addition, 17% have admitted to stimulant abuse, while another 8% frequently engaged in self-induced vomiting to lose weight.
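The BMI figures cited here follow the standard formula, weight in kilograms divided by the square of height in metres. As a minimal arithmetic sketch with purely illustrative numbers (the 18.5 underweight cut-off is the one referenced below):

```python
# Minimal BMI arithmetic: weight (kg) / height (m)^2.
# The sample figures are illustrative only; 18.5 is the commonly cited
# underweight threshold mentioned in the regulations discussed below.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

example = bmi(weight_kg=50.0, height_m=1.78)
print(round(example, 1), example < 18.5)  # 15.8 True
```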
Causes:
Attempts to improve Various jurisdictions have taken steps to protect models and promote healthier body images. The UK and US have pursued social education campaigns. Spain, Italy, Brazil, and Israel prohibit models from working with a BMI below 18.5. France similarly forbids the employment of extremely skinny models, and requires medical certificates to verify their health. France is also working on ensuring retailers specify when an image is airbrushed in magazines, websites, and advertisements, although it is unclear whether consumers are already aware of digital retouching techniques. Some brands voluntarily promote better body images. Fashion conglomerates Kering and LVMH recently "announced that they will no longer hire models smaller than a U.S. size 2" in hopes of improving the working conditions of models and inspiring others to follow suit. Critics have objected that banning size-zero models from working constitutes discrimination or thin-shaming. Moreover, the announcement of a small minimum dress size, which does not fit the average body type of most countries, continues to "send the message that super slim body types is the 'ideal'". Plus-size models are slowly emerging in mainstream media, which may improve body image. Prominent plus-size models include Ashley Graham, the face of popular plus-size retailer Lane Bryant, and Iskra Lawrence, classified as a role model for lingerie and swimwear retailer aerie. Christian Siriano cast five plus-size models for his New York Fashion Week shows. Siriano also made global headlines after he designed a gown for plus-sized actress Leslie Jones when other designers would not. Models have notably used Instagram as a tool to "encourage self-acceptance, fight back against body-shamers, and post plenty of selfies celebrating their figure". In the U.S., a group of plus-size models launched the #DearNYFW campaign, which targeted the fashion industry's harmful approach towards their bodies. This movement was broadcast across different social media platforms, with other models using the hashtag to share their experiences, in hopes of persuading the American fashion industry to start "prioritizing health and celebrate diversity on the runway". Fashion photographer Tarik Carroll released a photo series titled the EveryMAN Project to showcase large-framed queer and transgender men of color, with the purpose of "challenging hyper-masculinity and gender norms, while bringing body-positivity to the forefront". The lack of fashion-forward plus-size clothing in the fashion industry has given rise to the #PlusIsEqual movement. High-street brands such as Forever 21 and ASOS have increased plus-size product offerings. Other brands include Victoria Beckham's label, which plans to release a range of high-street clothing with sizes up to XXXL, and Nike, which expanded its plus-size collection to sizes 1X to 3X. In response to the criticism that the term plus-size caused unnecessary labeling, Kmart replaced its numerical sizing with positive tags such as "lovely" and "fabulous". Another tactic to promote body positivity has been to protest against photo retouching. In 2014, the Aerie Real campaign promised to display "campaign spreads and brand imagery with stomach rolls, gapless thighs and other perceived flaws that would normally have been edited out of the ads". Neon Moon, a feminist lingerie brand from London, advocates the beauty of flaws, instead of the need to retouch its models for aesthetic purposes.
Campaigns often feature a range of "diverse models and lack of airbrushing as a marketing tool". U.S. e-tailer ModCloth explored other methods, such as employing its own staff as models for its swimwear collection.
Causes:
Social media Beauty standards are being enforced and shaped by social media. Users are constantly bombarded by notifications, posts, and photos about the lives of others, "sending messages about what we could, should, or would be if we only purchased certain products, made certain choices, or engaged in certain behaviors". Despite the ability to create and control content on social media, the online environment still enforces the same beauty standards that traditional media promoted. Over-engagement with social networking platforms and images can lead to unattainable ideas of beauty standards, which ultimately results in low self-esteem and body image issues. A study by the Florida Health Experience found that "87% of women and 65% of men compare their bodies to images they consume on social and traditional media." They also found that users felt they got more positive attention towards their bodies if they altered them in some way. A study by the University of South Australia discovered that individuals who frequently uploaded or viewed appearance-related items were more likely to internalize the thin ideal. Applications such as Instagram have become a "body-image battleground", while the "selfie" is now the universal lens through which individuals criticize their own bodies and those of others. Facebook and Snapchat also allow users to receive appearance approval and community acceptance through the ratio of views, comments, and likes. Since individuals who use social media platforms often only display the high points of their lives, a survey by Common Sense Media reported that 22% felt bad if their posts were ignored, or if they did not receive the amount of attention they had hoped for. Instagram is ranked as the platform most detrimental to mental health, according to a study done by the Royal Society for Public Health. The increased use of body and facial reshaping applications such as Snapchat and Facetune has been identified as a potential cause of body dysmorphia. Social media apps that offer body-altering filters contribute to body image issues, which can result in eating disorders and body dysmorphia. Recently, the term 'Snapchat dysmorphia' has been used to describe people who request surgery to look like the edited versions of themselves as they appear through Snapchat filters. Many users digitally manipulate the self-portraits they post to social media. According to research by the Renfrew Center Foundation, 50% of men and 70% of 18- to 35-year-old women edited their images before uploading. 35% of respondents were also actively concerned about being tagged in unattractive photos, while 27% fretted about their appearances online. Reports have also shown that the messages delivered by "fitspiration" websites are sometimes identical to the "thinspiration" or pro-anorexia types. This is evident through "language inducing guilt about weight or the body, and promoted dieting". The marketing of restrictive diets to young women as a form of self-care can cause "increasingly disordered eating", and orthorexia, an obsession with the right and wrong types of food. In an international study of social media apps, photo-based apps, predominantly Facebook, Instagram and Snapchat, were found to have a greater negative impact on men's body image than non-photo-based apps.
Causes:
Unrealistic beauty standards Social media has also created unrealistic beauty standards, and many individuals struggle mentally and physically to keep a healthy mind and body. Men and women often harm themselves by trying all sorts of diets or taking all sorts of pills to look like their favorite influencers and celebrities; unfortunately, the look of the people they idealize is often the result of medical procedures. In fact, more and more influencers and celebrities change the way they look with the help of medical procedures. For example, the "plump lip" trend, promoted by celebrities years ago, resulted in a 759% increase in botox procedures since the 2000s. The images posted every day on social media are also often idealized and unrealistic, reflecting society's unreasonably high expectations.
Causes:
Attempts to improve In an attempt to tackle such issues, the UK launched a national campaign called Be Real after findings showed that 76% of secondary school students who learnt about body confidence in class felt more positive about themselves. The goal of this movement was thus to improve body confidence through educational resources provided to schools, and by persuading the media, businesses, and the diet industry to endorse different body shapes and sizes instead. Social media platforms such as Instagram have banned the use of "thinspiration"- and "thinspo"-related hashtags. Other solutions include the promotion of hashtags such as #SelfLove and #BodyPositivity, and the promotion of "transformation photos", side-by-side images displaying an individual's fitness or weight-loss progress, which users have utilized to showcase the deceptiveness of social media. In an effort to alleviate eating disorders, Eating Disorder Hope launched the Pro-Recovery Movement, a live Twitter chat encouraging sufferers to celebrate self-love and a positive body image through recovery subject matters. Project HEAL introduced a campaign called #WhatMakesMeBeautiful, with the aim of celebrating admirable attributes other than appearance. There have been recent demands for social media sites to highlight photos that have been edited and prevent universal publication. Companies in France that want to avoid a fine must label their posts if the image has been altered for enhancement.
Measurement:
Body image can be measured by asking a subject to rate their current and ideal body shape using a series of depictions. The difference between these two values is the measure of body dissatisfaction.
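A minimal sketch of this discrepancy score, assuming a hypothetical rating scale (the scale bounds, function and variable names are illustrative and not taken from any specific instrument):

```python
def body_dissatisfaction(current_rating: int, ideal_rating: int) -> int:
    """Discrepancy score: the current-body rating minus the ideal-body rating.
    Zero means no discrepancy; larger absolute values mean greater dissatisfaction."""
    return current_rating - ideal_rating

# Example: a subject picks figure 6 as their current body and figure 4 as their ideal.
print(body_dissatisfaction(6, 4))  # 2 -> current body rated larger than the ideal
```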
Measurement:
There are currently more than 40 "instruments" used to measure body image. All of these instruments can be put into three categories: figure preferences, video projection techniques, and questionnaires. Because there are so many ways to measure body image, it is difficult to draw meaningful research generalizations. Many factors have to be taken into account when measuring body image, including gender, ethnicity, culture, and age.
Measurement:
Figure rating scales One of the most prominent measures of body image is the figure rating scale, which presents a series of body images graded from thin to muscular or from thin to obese. The subject is asked to indicate which figure best represents their current perceived body, and which represents their ideal or desired body. Bodies depicted in figure rating scales are either hand-drawn silhouettes, computer-rendered images, or photographic images.
Measurement:
Video projection techniques One study showed each participant a series of images of himself or herself with either increased or decreased weight. Each participant was asked to respond to the pictures, and their startle and eyeblink responses were measured. "Objective, psychophysiological measures, like the affect modulated startle eyeblink response, are less subject to reporting bias."
Questionnaires The Body Areas Satisfaction Scale (BASS) is a 9-item subscale of the Multidimensional Body-Self Relations Questionnaire. It uses a rating scale from −2 to +2 and assesses eight body areas and attributes as well as overall appearance (face, hair, lower torso, mid-torso, upper torso, muscle tone, height, and weight). Questionnaires can have confounding variable responses; for instance, "Acquiescent response style (ARS), or the tendency to agree with items on a survey, is more common among individuals from Asian and African cultures."
Body size and shape misperception:
In addition to being associated with dissatisfaction with body size, exposure to idealized images of thin bodies is associated with overestimation of one's own body size. Recent research suggests that this exposure may cause a recalibration of the visual perceptual mechanisms that represent body size in the brain, such that the observer sees subsequently-viewed bodies, including their own, as heavier than they really are, a process known as "visual adaptation". There is evidence that individuals who are less satisfied with their bodies may spend a disproportionate amount of time directing their visual attention towards unusually thin bodies, resulting in an even greater overestimation of the size of subsequently-viewed bodies. Further evidence suggests that a similar mechanism may be at play in people (particularly young men) who underestimate their muscularity, such as those suffering from muscle dysmorphia. The nature of the interaction between body size and shape misperception and body dissatisfaction is not yet fully understood, however.
**Lactamide**
Lactamide:
Lactamide is an amide derived from lactic acid. It is a white crystalline solid with a melting point of 73–76 °C.
Lactamide can be prepared by the catalytic hydration of lactonitrile.
**Mobile social address book**
Mobile social address book:
A mobile social address book is a phonebook on a mobile device that enables subscribers to build and grow their social networks. The mobile social address book transforms the phone book on any standard mobile phone into a social networking platform that makes it easier for subscribers to exchange contact information. The mobile social address book is the convergence of personal information management (PIM) and social networking on a mobile device. While standard mobile phonebooks force users to manually enter contacts, mobile social address books automate this process by enabling subscribers to exchange contact information following a call or SMS. The contact information exchange occurs instantaneously and the user's phonebook updates automatically. Mobile social address books also provide dynamic updates of contacts if their numbers change over time.
History:
The first mobile social address book appeared in 2007 from a company called IQzone Inc., founded by John Kuolt. It was the first company to integrate social networking sites such as Facebook, Myspace, and LinkedIn with the address book (PIM) of a mobile device. Mobile social address books sought to bring the connectivity of social networking to the in-the-moment experience of the mobile phone: users can easily exchange contact information regardless of their handset, mobile carrier, or social networking application. Examples of emerging companies providing technology to support mobile social address books include PicDial, which dynamically augments the existing address book with pictures and status from Facebook, MySpace and Twitter, integrates with the call screen so the caller's latest picture and status appear during every call, works as a network address book manageable from Windows or Mac, and lets users set their own caller ID picture and status for friends to see; FusionOne, whose backup and synchronization solutions let users easily transfer and update mobile content, including contact information, among different devices; Loopt, whose Loopt service provides a social compass alerting users when friends are near; OnePIN, whose CallerXchange person-to-person contact exchange service lets users share contact info with one click on the mobile phone; and VoxMobili, whose Phone Backup and Synchronized Address Book solutions let users safeguard and synchronize their contact information among different devices.
History:
In 2007, the Rich Communication Services (RCS) communication protocol was formed. RCS combines different services defined by 3GPP and the Open Mobile Alliance (OMA) with an enhanced phonebook, with broad implications for the mobile social address book. A February 2017 Wired article quoted industry analyst Dean Bubley, who in 2015 had described RCS as "...infected with bureaucracy, complexity, and irrelevance," calling it a zombie: dead, but somehow still ambling around. The same article goes on to say: "Google sees it differently. For the company with seemingly thousands of messaging platforms, each one with different features and different audiences, RCS presents an opportunity."
**Alternative ribosome-rescue factor A**
Alternative ribosome-rescue factor A:
Alternative ribosome-rescue factor A (ArfA, YhdL), also known as peptidyl-tRNA hydrolase, is a protein that plays a role in the rescue of stalled ribosomes. It recruits release factor 2 (RF2).
**LGA 2011**
LGA 2011:
LGA 2011, also called Socket R, is a CPU socket by Intel released on November 14, 2011. It launched along with LGA 1356 to replace its predecessors, LGA 1366 (Socket B) and LGA 1567. While LGA 1356 was designed for dual-processor or low-end servers, LGA 2011 was designed for high-end desktops and high-performance servers. The socket has 2011 protruding pins that touch contact points on the underside of the processor.
LGA 2011:
The LGA 2011 socket uses QPI to connect the CPU to additional CPUs. DMI 2.0 is used to connect the processor to the PCH. The memory controller and 40 PCI Express (PCIe) lanes are integrated on the CPU. On a secondary processor, an extra ×4 PCIe interface replaces the DMI interface. As with its predecessor LGA 1366, there is no provision for integrated graphics. The socket supports four DDR3 or DDR4 SDRAM memory channels with up to three unbuffered or registered DIMMs per channel, as well as up to 40 PCI Express 2.0 or 3.0 lanes. LGA 2011 is also intended to ensure platform scalability beyond eight cores and 20 MB of cache. The LGA 2011 socket is used by Sandy Bridge-E/EP and Ivy Bridge-E/EP processors with the corresponding X79 (E – enthusiast class) and C600-series (EP – Xeon class) chipsets. It and LGA 1155 are the two last Intel sockets to support Windows XP and Windows Server 2003.
LGA 2011:
LGA 2011-1 (Socket R2), an updated generation of the socket and the successor of LGA 1567, is used for Ivy Bridge-EX (Xeon E7 v2), Haswell-EX (Xeon E7 v3) and Broadwell-EX (Xeon E7 v4) CPUs, which were released in February 2014, May 2015 and July 2016, respectively.
LGA 2011:
LGA 2011-v3 (Socket R3, also referred to as LGA 2011-3) is another updated generation of the socket, used for Haswell-E and Haswell-EP CPUs (released in August and September 2014, respectively) and later Broadwell-E CPUs. Updated socket generations are physically similar to LGA 2011, but different electrical signals, different ILM keying and the integration of a DDR4 rather than DDR3 memory controller prevent backward compatibility with older CPUs. In the server market it was succeeded by LGA 3647, while in the high-end desktop and workstation markets its successor is LGA 2066. The Xeon E3 family of processors, later renamed Xeon E, uses consumer-grade sockets.
Physical design and socket generations:
Intel CPU sockets use the so-called Independent Loading Mechanism (ILM) retention device to apply the specific amount of uniform pressure required to correctly hold the CPU against the socket interface. As part of their design, ILMs have differently placed protrusions which are intended to mate with cutouts in CPU packages. These protrusions, also known as ILM keying, are intended to prevent the installation of incompatible CPUs into otherwise physically compatible sockets, and to prevent ILMs from being mounted with a 180-degree rotation relative to the CPU socket. Different variants (or generations) of the LGA 2011 socket and associated CPUs come with different ILM keying, which makes it possible to install CPUs only into generation-matching sockets. CPUs intended to be mounted into LGA 2011-0 (R), LGA 2011-1 (R2) or LGA 2011-v3 (R3) sockets are all mechanically compatible with regard to their dimensions and ball pattern pitches, but the designations of contacts differ between generations of the LGA 2011 socket and CPUs, which makes them electrically and logically incompatible. The original LGA 2011 socket is used for Sandy Bridge-E/EP and Ivy Bridge-E/EP processors, while LGA 2011-1 is used for Ivy Bridge-EX (Xeon E7 v2) and Haswell-EX (Xeon E7 v3) CPUs, which were released in February 2014 and May 2015, respectively. The LGA 2011-v3 socket is used for Haswell-E and Haswell-EP CPUs, which were released in August and September 2014, respectively. Two types of ILM exist, with different shapes and heatsink mounting hole patterns, both with M4 × 0.7 threads: square ILM (80×80 mm mounting pattern) and narrow ILM (56×94 mm mounting pattern). Square ILM is the standard type, while the narrow one is available as an alternative for space-constrained applications. A matching heatsink is required for each ILM type.
Chipsets:
Information for the Intel X79 (for desktop) and C600 series (for workstations and servers, codenamed Romley) chipsets is in the table below. The Romley (EP) platform was delayed approximately one quarter, allegedly due to a SAS controller bug.
The X79 appears to contain the same silicon as the C600 series, with ECS having enabled the SAS controller for one of their boards, even though SAS is not officially supported by Intel for X79.
Compatible processors:
Desktop processors Desktop processors compatible with the LGA 2011 and LGA 2011-3 sockets are Sandy Bridge-E, Ivy Bridge-E, Haswell-E and Broadwell-E.
Sandy Bridge-E and Ivy Bridge-E processors are compatible with the Intel X79 chipset.
Haswell-E and Broadwell-E processors are compatible with the Intel X99 chipset.
All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit (an NX bit implementation), TXT, Intel VT-x, Intel VT-d, Turbo Boost, AES-NI, Smart Cache, Hyper-threading, except the C1 stepping models, which lack VT-d.
Compatible processors:
Sandy Bridge-E, Ivy Bridge-E and Haswell-E processors are not bundled with standard air-cooled CPU coolers. Intel offers a standard CPU cooler and a liquid-cooled CPU cooler, both sold separately. The X79 chipset allows the base clock (BCLK) to be raised by a ratio Intel calls the CPU strap: 1.00×, 1.25×, 1.66× or 2.50×. The CPU frequency is derived from the BCLK (multiplied by the strap) times the CPU multiplier.
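As a small illustration of how the resulting core clock can be derived from these figures (a sketch of one reading of the text; the numbers below are example values, not a specification):

```python
# Illustrative only: core clock as base clock (BCLK) x CPU strap x CPU multiplier.
def core_clock_mhz(bclk_mhz: float, strap: float, multiplier: int) -> float:
    """Effective base clock is BCLK scaled by the strap; the multiplier then sets the core clock."""
    return bclk_mhz * strap * multiplier

# Example: 100 MHz BCLK with a 1.25x strap and a 32x multiplier -> 4000 MHz (4.0 GHz).
print(core_clock_mhz(100.0, 1.25, 32))
```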
Compatible processors:
Server processors Server processors compatible with the LGA 2011 and LGA 2011-v3 sockets are Sandy Bridge-EP, Ivy Bridge-EP, Haswell-EP and Broadwell-EP.
All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit (an NX bit implementation), TXT, Intel VT-x, Intel VT-d, AES-NI, Smart Cache. Not all support Hyper-threading and Turbo Boost.
Sandy Bridge-EP (Xeon E5), Ivy Bridge-EP (Xeon E5 v2), Ivy Bridge-EX (Xeon E7 v2). All processors were released on February 18, 2014, unless noted otherwise.
Haswell-EP (Xeon E5 v3) Server processors for the LGA 2011-v3 socket are listed in the tables below. As one of the significant changes from the previous generation, they support DDR4 memory. All processors were released on September 8, 2014, unless noted otherwise.
Haswell-EX (Xeon E7 v3) Socket LGA 2011-1 is used for Ivy Bridge-EX (Xeon E7 v2) and Haswell-EX (Xeon E7 v3) CPUs, which were released in February 2014 and May 2015, respectively. All processors were released on May 6, 2015, unless noted otherwise.
Broadwell-EP (Xeon E5 v4) Server processors for the LGA 2011-v3 socket are listed in the tables below. These processors are built on the Broadwell-E architecture with 14 nm lithography, four-channel DDR4 ECC memory supporting up to 1.5 TB, and 40 lanes of PCI Express 3.0. E5-16xx v4 processors do not have QPI links; E5-26xx v4 and E5-46xx v4 processors have two QPI links.
Broadwell-EX (Xeon E7 v4)
**Fractal canopy**
Fractal canopy:
In geometry, a fractal canopy, a type of fractal tree, is one of the easiest-to-create types of fractals. Each canopy is created by splitting a line segment into two smaller segments at the end (symmetric binary tree), and then splitting the two smaller segments as well, and so on, infinitely. Canopies are distinguished by the angle between concurrent adjacent segments and the ratio between the lengths of successive segments.
Fractal canopy:
A fractal canopy must have the following three properties: The angle between any two neighboring line segments is the same throughout the fractal.
The ratio of lengths of any two consecutive line segments is constant.
Points all the way at the end of the smallest line segments are interconnected, which is to say the entire figure is a connected graph. The pulmonary system used by humans to breathe resembles a fractal canopy, as do trees, blood vessels, viscous fingering, electrical breakdown, and crystals with appropriately adjusted growth velocity from seed.
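A minimal sketch of how such a canopy can be generated recursively, assuming an illustrative branching angle, length ratio, and recursion depth (all function names and parameter values below are illustrative, not taken from a specific construction):

```python
import math

def fractal_canopy(x, y, angle, length, ratio, spread, depth, segments):
    """Recursively collect the line segments of a symmetric binary fractal canopy.

    angle  -- direction of the current segment in radians
    ratio  -- length of each child segment relative to its parent
    spread -- angle between the two child segments (split symmetrically)
    """
    if depth == 0 or length < 1e-6:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Each segment splits into two shorter segments at its tip.
    fractal_canopy(x2, y2, angle + spread / 2, length * ratio, ratio, spread, depth - 1, segments)
    fractal_canopy(x2, y2, angle - spread / 2, length * ratio, ratio, spread, depth - 1, segments)

segments = []
# Trunk pointing straight up, children 70% as long, 90-degree spread, 10 levels deep.
fractal_canopy(0.0, 0.0, math.pi / 2, 1.0, 0.7, math.radians(90), 10, segments)
print(len(segments))  # 1023 segments for 10 levels (2**10 - 1)
```

The segment list can then be drawn with any plotting library; the same angle and ratio at every split is what gives the figure the self-similar, tree-like appearance described above.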
**Institute of Group Analysis**
Institute of Group Analysis:
The Institute of Group Analysis is a training organisation for group psychotherapists in the analytical tradition, based on the groundwork begun by S. H. Foulkes in forming the body of theory and practice now known as Group Analysis.
History and background:
The Group Analytic Society (London) was formed by Foulkes and others in 1952, and with a much expanded membership, now functions as an international scientific body. The tasks of training and qualification were delegated to the Institute of Group Analysis (London), which was formed in 1971. The Institute has been actively involved in establishing training programmes in Europe, where there are an increasing number of independent Institutes of Group Analysis and also in the UK. Some of these Institutes and training bodies are listed below.
**Canon EF 400mm lens**
Canon EF 400mm lens:
The Canon EF 400mm lenses are seven super-telephoto lenses made by Canon. These lenses have an EF mount that works with the EOS line of cameras, and they are widely used by sports and wildlife photographers. Canon has manufactured four 400mm prime lenses: the EF 400mm f/2.8L IS III USM, EF 400mm f/2.8L IS II USM, EF 400mm f/4 DO IS II USM, and EF 400mm f/5.6L USM. The 400mm f/4 DO IS II USM, which replaced an earlier version of the same lens in 2014, is one of only two Canon lenses that make use of diffractive optics (the other is the EF 70–300mm f/4.5–5.6 DO IS USM); the use of diffractive optics allows the lens to be significantly lighter than it might otherwise be. These lenses are compatible with the Canon Extender EF teleconverters.
Use in astronomy:
Canon 400 mm f/2.8 L IS II USM lenses are used in the Dragonfly Telephoto Array. The array is designed to image astronomical objects with low surface brightness such as some satellite galaxies. The array started with three lenses but this has since increased to 24 with plans for 50.
**Apotropaic mark**
Apotropaic mark:
An apotropaic mark, also called a witch mark or anti-witch mark, is a symbol or pattern scratched on the walls, beams and thresholds of buildings to protect them from witchcraft or evil spirits. They have many forms; in Britain they are often flower-like patterns of overlapping circles.
Marks on buildings:
Apotropaic marks (from Greek apotrepein "to ward off" from apo- "away" and trepein "to turn") are symbols or patterns scratched into the fabric of a building with the intention of keeping witches out through apotropaic magic. Evil was thought to be held at bay through a wide variety of apotropaic objects such as amulets and talismans against the evil eye. Marks on buildings were one application of this type of belief. Other types of mark include the intertwined letters V and M or a double V (for the protector, the Virgin Mary, alias Virgo Virginum), and crisscrossing lines to confuse any spirits that might try to follow them. At the Bradford-on-Avon Tithe Barn, a flower-like pattern of overlapping circles is incised into a stone in the wall. Similar marks of overlapping circles have been found on a window sill dated about 1616 at Owlpen Manor in Gloucestershire, as well as taper burn marks on the jambs of a medieval door frame.
Marks on buildings:
The marks are most common near places where witches were thought to be able to enter, whether doors, windows or chimneys. For example, during works at Knole, near Sevenoaks in Kent, in 1609, oak beams beneath floors, particularly near fireplaces, were scorched and carved with scratched witch marks to prevent witches and demons from coming down the chimney. Marks have been found in buildings including Knole House, Shakespeare's Birthplace in Stratford-upon-Avon, the Tower of London, and many churches, but little effort has been made to look for them on secular buildings. A collection of over 100 marks (previously thought to be graffiti) was discovered in 2019 on the walls of a cave network at Creswell Crags in Nottinghamshire.
**Leberknödel**
Leberknödel:
Leberknödel is a traditional dish of German, Austrian and Czech cuisines.
Leberknödel:
Leberknödel are usually composed of beef liver, though in the German Palatinate region pork is used as an alternative. The meat is ground and mixed with bread, eggs, parsley and various spices, often nutmeg or marjoram. In Austria, spleen is often mixed with the liver in a 1:3 ratio. Using two moistened tablespoons, the batter is formed into dumplings and boiled in beef broth or fried in lard.
Leberknödel:
Due to their looser consistency, the boiled dumplings are meant to be eaten fresh after preparation, although the fried variant is somewhat less perishable due to the crust formed by frying.
In the Palatinate, Leberknödel are often served with sauerkraut and mashed potatoes. In Bavaria and Austria they are usually served in soup as Leberknödelsuppe (liver dumpling soup).
**Text+Kritik**
Text+Kritik:
Text+Kritik (stylized text+kritik) is a quarterly German journal for literature, music, film, and cultural studies in which German-language writers have their works analysed and presented by fellow writers and experts in literary research and criticism.
It was founded in 1963 by Heinz Ludwig Arnold who edited it from then until his death in 2011. At the time of the first edition, which appeared in 1963 and was dedicated to Günter Grass, the editorial team consisted of Lothar Baier, Gerd Hemmerich, Jochen Meyer, Wolf Wondratschek and Heinz Ludwig Arnold himself.
Each edition is focused on a different theme, which usually means it deals with one specific German-language writer. Featured writers have been as varied as Theodor W. Adorno, Hannah Arendt, Arno Schmidt, Paul Celan, Daniel Kehlmann, Herta Müller, Yoko Tawada, Hubert Fichte, Emine Sevgi Özdamar, Friedrich Dürrenmatt, Felicitas Hoppe and Rainald Goetz.
In 2013 Text+Kritik celebrated its fiftieth anniversary with a special volume, Zukunft der Literatur (Future of Literature). The journal is published four times per year in Munich by edition text+kritik. Co-editors are Hugo Dittberner, Norbert Hummelt, Hermann Korte, Steffen Martus, Axel Ruckaberle, Michael Scheffel, Claudia Stockinger and Michael Töteberg.
**Body adiposity index**
Body adiposity index:
The body adiposity index (BAI) is a method of estimating the amount of body fat in humans. The BAI is calculated without using body weight, unlike the body mass index (BMI). Instead, it uses the size of the hips compared to the person's height.
Based on population studies, the BAI is approximately equal to the percentage of body fat for adult men and women of differing ethnicities.
Formula:
The BAI is calculated as: BAI = (100 × hip circumference in m) / ((height in m) × √(height in m)) − 18, that is, hip circumference in centimetres divided by height in metres raised to the power 1.5, minus 18. Hip circumference (Pearson correlation coefficient R = 0.602) and height (R = −0.524) are strongly correlated with percentage of body fat. Comparing BAI with "gold standard" dual-energy X-ray absorptiometry (DXA) results, the correlation between DXA-derived percentage of adiposity and the BAI in a target population was R = 0.85, with a concordance of C_b = 0.95.
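A small sketch of this calculation (the function and variable names are illustrative, not part of any published implementation):

```python
def body_adiposity_index(hip_circumference_cm: float, height_m: float) -> float:
    """BAI = hip circumference (cm) / height (m)**1.5 - 18,
    an estimate of body fat percentage that does not use body weight."""
    return hip_circumference_cm / (height_m ** 1.5) - 18

# Example: 100 cm hip circumference and 1.70 m height gives a BAI of about 27.
print(round(body_adiposity_index(100, 1.70), 1))
```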
Uses:
The BAI could be a good tool to measure adiposity due, at least in part, to its advantages over more complex mechanical or electrical systems. Probably the most important advantage of the BAI over BMI is that weight is not needed. In general, however, the BAI does not appear to overcome the limitations of BMI. Stated advantages of the BAI are that it approximates body fat percentage, whereas the widely used BMI is known to be of limited accuracy and differs between males and females with similar percentage body adiposity; and that it does not involve weighing, so it can be used in remote locations with limited access to scales. A detailed study published in 2012 concluded that estimates of body fat percentage based on BAI were not more accurate than those based on BMI, waist circumference, or hip circumference. Adiposity indexes that include the waist circumference (for example the waist-to-height ratio, WHtR) may be better than BAI and BMI in evaluating metabolic and cardiovascular risk in both clinical practice and research.
**Slide guitar**
Slide guitar:
Slide guitar is a technique for playing the guitar that is often used in blues music. It involves playing a guitar while holding a hard object (a slide) against the strings, creating the opportunity for glissando effects and deep vibratos that reflect characteristics of the human singing voice. It typically involves playing the guitar in the traditional position (flat against the body) with the use of a slide fitted on one of the guitarist's fingers. The slide may be a metal or glass tube, such as the neck of a bottle. The term bottleneck was historically used to describe this type of playing. The strings are typically plucked (not strummed) while the slide is moved over the strings to change the pitch. The guitar may also be placed on the player's lap and played with a hand-held bar (lap steel guitar).
Slide guitar:
Creating music with a slide of some type has been traced back to African stringed instruments and also to the origin of the steel guitar in Hawaii. Near the beginning of the twentieth century, blues musicians in the Mississippi Delta popularized the bottleneck slide guitar style, and the first recording of slide guitar was by Sylvester Weaver in 1923. Since the 1930s, performers including Robert Nighthawk, Earl Hooker, Elmore James, and Muddy Waters popularized slide guitar in electric blues and influenced later slide guitarists in rock music, including the Rolling Stones, Duane Allman, and Ry Cooder. Lap slide guitar pioneers include Oscar "Buddy" Woods, "Black Ace" Turner, and Freddie Roulette.
History:
The technique of using a hard object against a plucked string goes back to the diddley bow, derived from a one-stringed African instrument. The diddley bow is believed to be one of the ancestors of the bottleneck style. When sailors from Europe introduced the Spanish guitar to Hawaii in the late nineteenth century, the Hawaiians slackened some of the strings from the standard guitar tuning to make a chord – this became known as "slack-key" guitar, today referred to as an open tuning. With the "slack-key" the Hawaiians found it easy to play a three-chord song by moving a piece of metal along the fretboard and began to play the instrument across the lap. Near the end of the nineteenth century, a Hawaiian named Joseph Kekuku became proficient in playing this way using a steel bar against the guitar strings. The bar was called the "steel" and was the source of the name "steel guitar". Kekuku popularized the method and some sources claim he originated the technique. In the first half of the twentieth century, this so-called "Hawaiian guitar" style of playing spread to the US. Sol Hoʻopiʻi was an influential Hawaiian guitarist who in 1919, at age 17, came to the US mainland from Hawaii as a stowaway on a ship heading for San Francisco. Hoʻopiʻi's playing became popular in the late 1920s and he recorded songs like "Hula Blues" and "Farewell Blues". According to author Pete Madsen, "[Hoʻopiʻi's playing] would influence a legion of players from rural Mississippi." Most players of blues slide guitar were from the southern US, particularly the Mississippi Delta, and their music was likely of African origin, handed down to African-American sharecroppers who sang as they toiled in the fields. The earliest Delta blues musicians were largely solo singer-guitarists. W. C. Handy commented on the first time he heard slide guitar in 1903, when a blues player performed in a local train station: "As he played, he pressed a knife on the strings of the guitar in a manner popularised by Hawaiian guitarists who used steel bars. The effect was unforgettable." Blues historian Gérard Herzhaft notes that Tampa Red was one of the first black musicians inspired by the Hawaiian guitarists of the beginning of the century, and he managed to adapt their sound to the blues. Tampa Red, as well as Kokomo Arnold, Casey Bill Weldon, and Oscar Woods, adopted the Hawaiian mode of playing longer melodies with the slide instead of playing short riffs as they had done previously. In the early twentieth century, steel guitar playing divided into two streams: bottleneck-style, performed on a traditional Spanish guitar held flat against the body; and lap-style, performed on an instrument specifically designed or modified for the purpose of being played on the performer's lap. The bottleneck-style was typically associated with blues music and was popularized by African-American blues artists. The Mississippi Delta was the home of Robert Johnson, Son House, Charlie Patton, and other blues pioneers who prominently used the slide. The first known recording of the bottleneck style was in 1923 by Sylvester Weaver, who recorded two instrumentals, "Guitar Blues" and "Guitar Rag". Guitarist and author Woody Mann identifies Tampa Red and Blind Willie Johnson as "developing the most distinctive styles in the recorded idiom" of the time. He adds: Johnson was the first such player to achieve a real balance between treble and bass melodic lines, which acted as complementary voices in his arrangements of Baptist spirituals ...
Tampa Red's [playing was] innovative for the late 1920s ... Thanks to his distinctive approach and suave sound, the Chicago-based Red became the most influential bottleneck player of the blues age, his smooth-sound work echoing in the playing of Blind Boy Fuller, Robert Nighthawk, Elmore James, and Muddy Waters.
Influential early electric slide guitarists:
When the guitar was electrified in the 1930s, it allowed solos on the instrument to be more audible, and thus more prominently featured. In the 1940s, players like Robert Nighthawk and Earl Hooker popularized electric slide guitar; but, unlike their predecessors, they used standard tuning. This allowed them to switch between slide and fretted guitar playing readily, which was an advantage in rhythm accompaniment.
Influential early electric slide guitarists:
Robert Nighthawk Robert Nighthawk (born Robert Lee McCollum) recorded extensively in the 1930s as "Robert Lee McCoy" with bluesmen such as John Lee "Sonny Boy" Williamson (also known as Sonny Boy Williamson I). He performed on acoustic guitar in a style influenced by Tampa Red. Sometime around World War II, after changing his last name to "Nighthawk" (from the title of one of his songs), he became an early proponent of electric slide guitar and adopted a metal slide. Nighthawk's sound was extremely clean and smooth, with a very light touch of the slide against the strings. He helped popularize Tampa Red's "Black Angel Blues" (later called "Sweet Little Angel"), "Crying Won't Help You", and "Anna Lou Blues" (as "Anna Lee") in his electric slide style; these songs later became part of the repertoire of Earl Hooker, B.B. King, and others. His style influenced both Muddy Waters and Hooker. Nighthawk is credited with helping to bring the music of Mississippi into the Chicago style of electric blues.
Influential early electric slide guitarists:
Earl Hooker As a teenager, Earl Hooker (a cousin of John Lee Hooker) sought out Nighthawk as his teacher and in the late 1940s the two toured the South extensively. Nighthawk had a lasting impact on Hooker's playing; however, by the time of his 1953 recording of "Sweet Angel" (a tribute of sorts to Nighthawk's "Sweet Little Angel"), Hooker had developed an advanced style of his own. His solos had a resemblance to the human singing voice and music writer Andy Grigg commented: "He had the uncanny ability to make his guitar weep, moan and talk just like a person ... his slide playing was peerless, even exceeding his mentor, Robert Nighthawk." The vocal approach is heard in Hooker's instrumental, "Blue Guitar", which was later overdubbed with a unison vocal by Muddy Waters and became "You Shook Me". Unusual for a blues player, Hooker explored using a wah-wah pedal in the 1960s to further emulate the human voice.
Influential early electric slide guitarists:
Elmore James Possibly the most influential electric blues slide guitarist of his era was Elmore James, who gained prominence with his 1951 song "Dust My Broom", a remake of Robert Johnson's 1936 song, "I Believe I'll Dust My Broom". It features James playing a series of triplets throughout the song which Rolling Stone magazine called "one immortal lick" and is heard in many blues songs to this day. Although Johnson had used the figure on several songs, James' overdriven electric sound made it "more insistent, firing out a machine-gun triplet beat that would become a defining sound of the early rockers", writes historian Ted Gioia. Unlike Nighthawk and Hooker, James used a full-chord glissando effect with an open E tuning and a bottleneck. Other popular songs by James, such as "It Hurts Me Too" (first recorded by Tampa Red), "The Sky Is Crying", "Shake Your Moneymaker", feature his slide playing.
Influential early electric slide guitarists:
Muddy Waters Although Muddy Waters, born McKinley Morganfield, made his earliest recordings using an acoustic slide guitar, as a guitarist, he was best known for his electric slide playing. Muddy Waters helped bring the Delta blues to Chicago and was instrumental in defining the city's electric blues style. He was also one of the pioneers of electric slide guitar. Beginning with "I Can't Be Satisfied" (1948), many of his hit songs featured slide, including "Rollin' and Tumblin'", "Rollin' Stone" (whose name was adopted by the well-known rock band and the magazine), "Louisiana Blues", and "Still a Fool". Waters used an open G tuning for several of his earlier songs, but later switched to a standard tuning and often used a capo to change keys. He usually played single notes with a small metal slide on his little finger and dampened the strings combined with varying the volume to control the amount of distortion. According to writer Ted Drozdowski, "One last factor to consider is slide vibrato that is achieved by shaking a slide back and forth. Muddy’s slide vibrato was insane, both manic and controlled. That added to the excitement of his playing."
Early developments in rock music:
Rock musicians began exploring electric slide guitar in the early 1960s. In the UK, groups such as the Rolling Stones, who were fans of Chicago blues and Chess Records artists in particular, began recording songs by Muddy Waters, Howlin' Wolf, and others. The Stones' second single, "I Wanna Be Your Man" (1963), featured a slide guitar break by Brian Jones, which may be the first appearance of a slide on a rock record. Critic Richie Unterberger commented, "Particularly outstanding was Brian Jones's slide guitar, whose wailing howl gave the tune a raunchy bluesiness missing in the Beatles' more straightforward rock 'n' roll arrangement." Jones also played slide on their 1964 single "Little Red Rooster", which reached number one on the British charts. One of his last contributions to a Stones recording was his acoustic guitar slide playing on "No Expectations", which biographer Paul Trynka describes as "subtle, totally without bombast or overemphasis ... the perfect embodiment of the journey he'd embarked on in 1961." In Chicago, Mike Bloomfield frequented blues clubs as early as the late 1950s – by the early 1960s Muddy Waters and harmonica virtuoso Little Walter encouraged him and occasionally allowed him to sit in on jam sessions. Waters recalled: "Mike was a great guitar player. He learned a lot of slide from me. Plus I guess he picked up a little lick or two from me, but he learned how to play a lot of slide and pick a lot of guitar." Bloomfield's slide playing attracted Paul Butterfield and, together with guitarist Elvin Bishop, they formed the classic lineup of the Paul Butterfield Blues Band. Their first album, The Paul Butterfield Blues Band (1965), features Bloomfield's slide guitar work on the band's adaptations of two Elmore James songs. "Shake Your Moneymaker" shows his well-developed slide style and "Look Over Yonders Wall" is ranked at number 27 on Rolling Stone magazine's list of the "100 Greatest Guitar Songs of All Time". Around the same time, he recorded with Bob Dylan for the Highway 61 Revisited album and contributed the distinctive slide guitar to the title track. On the second Butterfield album, East-West (1966), songs such as "Walkin' Blues" and "Two Trains Running" include slide playing that brought him to the audience's attention.
Early developments in rock music:
Ry Cooder was a child music prodigy and at age 15 began working on bottleneck guitar techniques and learned Robert Johnson songs. In 1964, Cooder, along with Taj Mahal, formed the Rising Sons, one of the earliest blues rock bands. His early guitar work appears on Captain Beefheart's debut Safe as Milk album (1967) and several songs on Taj Mahal's self-titled 1968 debut album. Also in 1968, he collaborated with the Rolling Stones on recording sessions, which resulted in Cooder playing slide on "Memo from Turner". The Jagger/Richards song was later included on the soundtrack to the 1970 film Performance; Rolling Stone included it at number 92 on its "100 Greatest Guitar Songs of All Time" list. In 1970, he recorded his own self-titled debut album, which included the Blind Willie Johnson classic slide instrumental "Dark Was the Night, Cold Was the Ground" (re-recorded in 1984 for the soundtrack to Paris, Texas). Recognized as a master of slide guitar by 1967, he was ranked at number eight on Rolling Stone magazine's 2003 list of the "100 Greatest Guitarists of All Time". Duane Allman's slide playing with the Allman Brothers Band was one of the formative influences in the creation of Southern rock. He also added memorable slide guitar to Derek and the Dominos' Layla and Other Assorted Love Songs album, notably its title track, which was ranked at number 13 on Rolling Stone's "100 Greatest Guitar Songs". Allman, who died in a motorcycle accident at age 24, was hailed by NPR's Nick Morrison as "the most inventive slide guitarist of his era". He extended the role of the slide guitar by mimicking the harmonica effects of Sonny Boy Williamson II, most clearly in the Allman Brothers' rendition of Williamson's "One Way Out", recorded live at the Fillmore East and heard on their album Eat a Peach.
Technique:
The slide guitar, according to music educator Keith Wyatt, can be thought of as a "one-finger fretless guitar". The placement of a slide on a string determines the pitch, functioning in the manner of a steel guitar. The slide is pressed lightly against the treble strings to avoid hitting against the frets. The frets are used here only as a visual reference, and playing without their pitch-constraint enables the smooth expressive glissandos that typify blues music. This playing technique creates a hybrid of the attributes of a steel guitar and a traditional guitar in that the player's remaining (non-slide) fingers and thumb still have access to the frets, and may be used for playing rhythmic accompaniment or reaching additional notes. The guitar itself may be tuned in the traditional tuning or an open tuning. Most early blues players used open tunings, but most modern slide players use both. The major limitation of open tuning is that usually only one chord or voicing is easily available, and it is dictated by how the guitar is originally tuned. Two-note intervals can be played by slanting the slide on certain notes. In the sixteenth century, the notes of A–D–G–B–E were adopted as a tuning for guitar-like instruments, and the low E was added later to make E–A–D–G–B–E the standard guitar tuning. In open tuning the strings are tuned to sound a chord when not fretted, which is most often major. Open tunings commonly used with slide guitar include open D or Vestapol tuning: D–A–D–F♯–A–D; and open G or Spanish tuning: D–G–D–G–B–D. Open E and open A, formed by raising each of those tunings a whole tone, are also common. Other tunings are also used; in particular the drop D tuning (low E string tuned down to D) is used by many slide players. This tuning allows for power chords, which contain root, fifth and eighth (octave) notes in the bass strings, and conventional tuning for the rest of the strings. Robert Johnson, whose playing has been cited by Eric Clapton, Keith Richards, Jimi Hendrix, and Johnny Winter as being a powerful influence on them, used tunings of standard, open G, open D, and drop D.
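As a small illustration of the retuning involved, the sketch below compares standard tuning with the open G and open D tunings named above, expressed as signed semitone offsets per string (the note-to-semitone mapping is standard music theory; the helper names are illustrative):

```python
# Semitone offsets needed to retune from standard tuning to two common open tunings.
NOTE_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def semitone(note: str) -> int:
    """Pitch class of a note name such as 'F#' or 'Bb'."""
    return NOTE_TO_SEMITONE[note[0]] + note.count("#") - note.count("b")

STANDARD = ["E", "A", "D", "G", "B", "E"]   # low string to high string
OPEN_G   = ["D", "G", "D", "G", "B", "D"]   # "Spanish" tuning
OPEN_D   = ["D", "A", "D", "F#", "A", "D"]  # "Vestapol" tuning (F# written here as F-sharp)

def retune_offsets(from_tuning, to_tuning):
    """Signed semitone change per string, choosing the smaller move up or down."""
    offsets = []
    for a, b in zip(from_tuning, to_tuning):
        diff = (semitone(b) - semitone(a)) % 12
        offsets.append(diff - 12 if diff > 6 else diff)
    return offsets

print(retune_offsets(STANDARD, OPEN_G))  # [-2, -2, 0, 0, 0, -2]: three strings lowered a whole tone
print(retune_offsets(STANDARD, OPEN_D))  # [-2, 0, 0, -1, -2, -2]
```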
Resonator guitars:
The National String Instrument Corporation produced the first metal-body resonator guitars in the late 1920s. Popular with early slide players, these featured a large aluminum cone, resembling an inverted loudspeaker, attached under the instrument's bridge to increase its volume. It was patented in the late 1920s by the Dopyera brothers and became widely used on many types of guitars, and was adapted to the mandolin and ukulele.
Resonator guitars:
Tampa Red played a gold-plated National Tricone style 4, and was one of the first black musicians to record with it. Delta blues pioneer, Son House, played this type of guitar on many songs including the classic, "Death Letter". A resonator guitar with a metal body was played by Bukka White ("Parchman Farm Blues" and "Fixin' to Die Blues").
Lap slide guitar:
"Lap slide guitar" does not refer to a specific instrument, rather a style of playing blues or rock music with the guitar placed horizontally, a position historically known as Hawaiian style. This is a lap-steel guitar, but musicians in these genres prefer the term slide instead of steel; they sometimes play the style with a flat pick or with fingers instead of finger picks. There are various instruments specifically made (or adapted) to play in the horizontal position, including the following: a traditional guitar that has been adapted for lap slide playing by raising the bridge and/or the nut to make the strings higher off the fretboard; steel guitars, (electrified) including lap steel, console steel, and pedal steel, in which a solid metal bar, typically referred to as a "steel", is pressed against the strings and is the source of the name steel guitar; a National or Dobro-type guitar. These are typically acoustic steel guitars with a resonator. Each manufacturer made wood and steel-bodied versions, but National is most associated with the latter.: 38 The types do not sound the same — the Nationals are brassier and are usually preferred by blues players.: 38 Nationals can be played either in the traditional position or horizontally.
Lap slide guitar:
a Hawaiian-style guitar modified by adding drone and sympathetic strings, used in Indian classical music and known as a mohan veena.
Lap slide guitar pioneers:
Buddy Woods was a Louisiana street performer who recorded in the 1930s. He was called "The Lone Wolf" after the title of his most successful song, "Lone Wolf Blues". Between 1936 and 1938, he recorded ten songs which are today considered classics, including "Don't Sell It, Don't Give It Away". Woods recorded five songs for the US Library of Congress in 1940 in Shreveport, Louisiana, including "Boll Weevil Blues" and "Sometimes I Get a Thinkin'". "Black Ace" Turner (born Babe Karo Turner), a blues artist from Texas, was befriended and mentored by Buddy Woods. Historian Gérard Herzhaft said, "Black Ace is one of the few blues guitarists to have played in the purest Hawaiian style, that is, with the guitar flat on the knees." Turner played a square-neck National "style 2" Tri-cone metal body guitar and used a glass medicine bottle as a slide. Turner was also a good storyteller, which enabled him to host a radio program in Fort Worth called The Black Ace. His career effectively ended when he entered military service in 1943. His album, I Am the Boss Card in Your Hand, contained Turner's original 1930s recordings as well as new songs recorded in 1960. Turner was featured in a 1962 documentary film entitled The Blues. Freddie Roulette (born Frederick Martin Roulette) is a San Francisco-based lap steel blues artist who became interested in the lap steel guitar at an early age and became proficient enough to play in Chicago blues clubs with prominent players. He played an A7 tuning with a slant-bar style and never used finger picks. He earned a spot in Earl Hooker's band and recorded with Hooker in the 1960s. Roulette had played lap steel in other genres before focusing on blues – he stated this helped him add more complex chords to the basic blues played by Hooker and said, "it worked". Roulette was recruited to San Francisco in the mid-1970s by Charlie Musselwhite. In 1997, he recorded a solo album, Back in Chicago: Jammin' with Willie Kent and the Gents, which was named Best Blues Album of 1997 by Living Blues Magazine. Roulette's contribution to the lap slide guitar was to prove that a lap-played instrument was capable of holding its own in the Chicago blues style.
Slides and steels:
A slide used around a player's finger can be made with any type of smooth hard material that allows tones to resonate. Different materials cause subtle differences in sustain, timbre, and loudness; glass or metal are the most common choices. Longer slides are used to bridge across all six guitar strings at once, but take away the fretting ability of that finger entirely. A shorter slide allows the fingertip to protrude from the slide and allows that finger to be used to fret. Improvised slides are common, including pipes, rings, knives, spoons, and glass bottle necks. Early blues players sometimes used a knife, such as Blind Willie Johnson (pocket- or penknife) and CeDell Davis (butterknife). Duane Allman used a glass Coricidin medicine bottle. Pink Floyd founder Syd Barrett was fond of using a Zippo lighter as a slide, but this was largely for special effects. Jimi Hendrix also used a cigarette lighter for part of his solo on "All Along the Watchtower". It is one of the few recordings with Hendrix on slide, and biographer Harry Shapiro notes he performed it with the guitar on his lap. For guitars designed to be played on the lap, the performer uses a solid piece of steel rather than a hollow tube. The choice of shape and size is a matter of personal preference. The most common steel is a solid metal cylinder with one end rounded into a dome shape. Some lap slide guitar players choose a steel with a deep indentation or groove on each side so it can be held firmly, and it may have squared-off ends. The better grip may facilitate playing the rapid vibratos in blues music. This design facilitates hammer-on and pull-off notes.
**LGA 4094**
LGA 4094:
LGA 4094 may refer to four physically identical but electrically incompatible CPU sockets for AMD processors: Socket SP3, an AMD server processor socket for Epyc-branded CPUs.
Socket TR4, or Socket SP3r2, an AMD desktop processor socket for first- and second-generation Ryzen Threadripper-branded CPUs; Socket sTRX4, or Socket SP3r3, an AMD desktop processor socket for third-generation Ryzen Threadripper-branded CPUs; and Socket sWRX8, or Socket SP3r4, an AMD desktop processor socket for third- and fourth-generation Ryzen Threadripper Pro-branded CPUs.
**Electrical cable**
Electrical cable:
An electrical cable is an assembly of one or more wires running side by side or bundled, which is used to carry electric current.
Electrical cable:
One or more electrical cables and their corresponding connectors may be formed into a cable assembly, which is not necessarily suitable for connecting two devices but can be a partial product (e.g. to be soldered onto a printed circuit board with a connector mounted to the housing). Cable assemblies can also take the form of a cable tree or cable harness, used to connect many terminals together.
Etymology:
The original meaning of cable in the electrical wiring sense was for submarine telegraph cables that were armoured with iron or steel wires. Early attempts to lay submarine cables without armouring failed because they were too easily damaged. The armouring in these early days (mid-19th century) was carried out in factories separate from those making the cable cores. These companies were specialists in manufacturing wire rope of the kind used for nautical cables. Hence, the finished armoured cores were also called cables. The term was later extended to any bundle of electrical conductors (or even a single conductor) enclosed in an outer sheath, whether or not it was armoured. The term is now also applied to telecommunications cables with fibre-optic cores within the outer sheath rather than copper conductors.
Uses:
Electrical cables are used to connect two or more devices, enabling the transfer of electrical signals or power from one device to the other. Long-distance communication takes place over undersea communication cables. Power cables are used for bulk transmission of alternating and direct current power, especially using high-voltage cable. Electrical cables are extensively used in building wiring for lighting, power and control circuits permanently installed in buildings. Since all the circuit conductors required can be installed in a cable at one time, installation labor is saved compared to certain other wiring methods.
Uses:
Physically, an electrical cable is an assembly consisting of one or more conductors with their own insulations and optional screens, individual covering(s), assembly protection and protective covering(s). Electrical cables may be made more flexible by stranding the wires. In this process, smaller individual wires are twisted or braided together to produce larger wires that are more flexible than solid wires of similar size. Bunching small wires before concentric stranding adds the most flexibility. Copper wires in a cable may be bare, or they may be plated with a thin layer of another metal, most often tin but sometimes gold, silver or some other material. Tin, gold, and silver are much less prone to oxidation than copper, which may lengthen wire life and makes soldering easier. Tinning is also used to provide lubrication between strands, and it was once used to help with the removal of rubber insulation. Tight lays during stranding make the cable extensible (CBA – as in telephone handset cords). In the 19th century and early 20th century, electrical cable was often insulated using cloth, rubber or paper. Plastic materials are generally used today, except for high-reliability power cables. The first thermoplastic used was gutta-percha (a natural latex), which was found useful for underwater cables in the 19th century. The first, and still very common, man-made plastic used for cable insulation was polyethylene. It was invented in 1930, but not available outside military use until after World War II, during which a telegraph cable using it was laid across the English Channel to support troops following D-Day. Cables can be securely fastened and organized, such as by using trunking, cable trays, cable ties or cable lacing. Continuous-flex or flexible cables used in moving applications within cable carriers can be secured using strain relief devices or cable ties.
Uses:
At high frequencies, current tends to run along the surface of the conductor. This is known as the skin effect.
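For intuition, the depth at which current density falls to about 1/e of its surface value (the skin depth) can be estimated from the standard textbook formula δ = √(2ρ / (ωμ)); this formula and the material constants below are general physics, not taken from this article, and the helper names are illustrative:

```python
import math

def skin_depth_m(frequency_hz: float, resistivity_ohm_m: float, mu_r: float = 1.0) -> float:
    """Approximate skin depth: delta = sqrt(2 * rho / (omega * mu))."""
    mu = mu_r * 4e-7 * math.pi           # permeability (mu_r = 1 for copper)
    omega = 2 * math.pi * frequency_hz   # angular frequency
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu))

RHO_COPPER = 1.68e-8  # ohm-metres, near room temperature
for f in (50.0, 1e6, 100e6):  # mains, 1 MHz, 100 MHz
    print(f"{f:>12g} Hz: {skin_depth_m(f, RHO_COPPER) * 1000:.4f} mm")
```

The rapidly shrinking depth at higher frequencies is why high-frequency cables are often stranded or built with conductors whose useful cross-section is concentrated near the surface.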
Characteristics:
Any current-carrying conductor, including a cable, radiates an electromagnetic field. Likewise, any conductor or cable will pick up energy from any existing electromagnetic field around it. These effects are often undesirable, in the first case amounting to unwanted transmission of energy which may adversely affect nearby equipment or other parts of the same piece of equipment; and in the second case, unwanted pickup of noise which may mask the desired signal being carried by the cable, or, if the cable is carrying power supply or control voltages, pollute them to such an extent as to cause equipment malfunction.
Characteristics:
The first solution to these problems is to keep cable lengths in buildings short since pick up and transmission are essentially proportional to the length of the cable. The second solution is to route cables away from trouble. Beyond this, there are particular cable designs that minimize electromagnetic pickup and transmission. Three of the principal design techniques are shielding, coaxial geometry, and twisted-pair geometry.
Characteristics:
Shielding makes use of the electrical principle of the Faraday cage. The cable is encased for its entire length in foil or wire mesh. All wires running inside this shielding layer will be to a large extent decoupled from external electrical fields, particularly if the shield is connected to a point of constant voltage, such as earth or ground. Simple shielding of this type is not greatly effective against low-frequency magnetic fields, however, such as magnetic "hum" from a nearby power transformer. A grounded shield on cables operating at 2.5 kV or more gathers leakage current and capacitive current, protecting people from electric shock and equalizing stress on the cable insulation.
Characteristics:
Coaxial design helps to further reduce low-frequency magnetic transmission and pickup. In this design the foil or mesh shield has a circular cross section and the inner conductor is exactly at its center. This causes the voltages induced by a magnetic field between the shield and the core conductor to consist of two nearly equal magnitudes which cancel each other.
Characteristics:
A twisted pair has two wires of a cable twisted around each other. This can be demonstrated by putting one end of a pair of wires in a hand drill and turning while maintaining moderate tension on the line. Where the interfering signal has a wavelength that is long compared to the pitch of the twisted pair, alternate lengths of wires develop opposing voltages, tending to cancel the effect of the interference.
Fire protection:
Electrical cable jacket material is usually constructed of flexible plastic, which will burn. The fire hazard of grouped cables can be significant. Cable jacketing materials can be formulated to prevent fire spread (see mineral-insulated copper-clad cable). Alternatively, fire spread among combustible cables can be prevented by applying fire-retardant coatings directly to the cable exterior, or the fire threat can be isolated by installing boxes constructed of noncombustible materials around the bulk cable installation.
Types:
Coaxial cable – used for radio frequency signals, for example in cable television distribution systems.
Types:
Direct-buried cable
Flexible cables
Filled cable
Heliax cable
Non-metallic sheathed cable (or nonmetallic building wire, NM, NM-B)
Armored cable (or BX)
Multicore cable (consists of more than one wire and is covered by a cable jacket)
Paired cable – Composed of two individually insulated conductors that are usually used in DC or low-frequency AC applications
Portable cord – Flexible cable for AC power in portable applications
Ribbon cable – Useful when many wires are required. This type of cable can easily flex, and it is designed to handle low-level voltages.
Types:
Shielded cable – Used for sensitive electronic circuits or to provide protection in high-voltage applications.
Types:
Single cable (from time to time this name is used for wire)
Structured cabling
Submersible cable
Twin and earth
Twinax cable
Twin-lead – This type of cable is a flat two-wire line. It is commonly called a 300 Ω line because the line has an impedance of 300 Ω. It is often used as a transmission line between an antenna and a receiver (e.g., TV and radio). These cables are stranded to lower skin effects.
Types:
Twisted pair – Consists of two interwound insulated wires. It resembles a paired cable, except that the paired wires are twisted. CENELEC HD 361 is a ratified standard published by CENELEC which relates to wire and cable marking type, whose goal is to harmonize cables. Deutsches Institut für Normung (DIN, VDE) has released a similar standard (DIN VDE 0292).
Hybrid cables:
Hybrid optical and electrical cables can be used in wireless outdoor fiber-to-the-antenna (FTTA) applications. In these cables, the optical fibers carry information, and the electrical conductors are used to transmit power. These cables can be placed in several environments to serve antennas mounted on poles, towers or other structures. Local safety regulations may apply.
**Karen McGrane**
Karen McGrane:
Karen McGrane is a content strategist and website accessibility advocate who wrote a book called Content Strategy for Mobile. McGrane teaches Design Management at the School of Visual Arts in New York. Her design philosophy is that "every company is a technology company" and "every business is in the user experience business." McGrane was an early proponent of designing web content for mobile devices and is a frequent speaker at technology conferences. She was also the co-executive producer, with Jared Spool, of the UX Advantage Conference and cohost of the UX Advantage podcast. She co-hosted the Responsive Web Design podcast from 2014 to 2018 with Ethan Marcotte. McGrane has done user experience design work for many major media companies including Condé Nast, Disney, and Citibank; in her position at Razorfish she was the design lead on the New York Times' 2006 redesign. Prior to that she was Vice President and National Lead for User Experience at Razorfish, where she was their first information architect hire in 1998. In August 2020 she co-founded the consultancy Autogram with Ethan Marcotte and Jeff Eaton.
Content Strategy for Mobile:
Content Strategy for Mobile was published in 2012 by A Book Apart. It has been called an essential guide for people publishing serial content online, one that has a clear "plan of action." McGrane advocates for "adaptive content," small chunks of content that can appear on different platforms and in different contexts. For this to happen, content needs to have good metadata and exist within a content management system which is itself easy to use. Companies also need to do research into both their audience's needs and the approaches of their competition in order to do this effectively. If done correctly, web content will "work everywhere, all the time." She published her second book, Going Responsive, with A Book Apart in 2015.
Early life and education:
McGrane has a BA in American Studies and Philosophy from the University of Minnesota and a master's degree in Human-Computer Interaction from Rensselaer Polytechnic Institute. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Battle royale game**
Battle royale game:
A battle royale game is an online multiplayer video game genre that blends last-man-standing gameplay with the survival, exploration and scavenging elements of a survival game. Battle royale games involve dozens to hundreds of players, who start with minimal equipment and then must eliminate all other opponents while avoiding being trapped outside of a shrinking "safe area", with the winner being the last player or team alive.
Battle royale game:
The name for the genre is taken from the 2000 Japanese film Battle Royale, itself based on the novel of the same name, which presents a similar theme of a last-man-standing competition in a shrinking play zone. The genre's origins arose from mods for large-scale online survival games like Minecraft and ARMA 2 in the early 2010s. By the end of the decade, the genre became a cultural phenomenon, with standalone games such as PUBG: Battlegrounds (2017), Fortnite Battle Royale (2017), Apex Legends (2019) and Call of Duty: Warzone (2020) each having received tens of millions of players within months of their releases.
Concept:
Battle royale games are played between many individual players, pairs of players, or a number of small squads (typically of 3–5 players). In each match, the goal is to be the last player or team standing by eliminating all other opponents. A match starts by placing the player-characters into a large map space, typically by having all players skydive from a large aircraft within a brief time limit. The map may have random distribution or allow players to have some control of where they start. All players start with minimal equipment, giving no player an implicit advantage at the onset. Equipment, usually used for combat, survival or transport, is randomly scattered around the map, often at landmarks, such as within buildings in ghost towns. Players need to search the map for these items while avoiding being killed by other players, who are not visually marked or otherwise distinguishable either on-screen or on the map, requiring the player to rely solely on their own eyes and ears to deduce opponents' positions. Equipment from eliminated players can usually be looted as well. These games often include some mechanic to push opponents closer together as the game progresses, usually taking the form of a gradually shrinking safe zone, with players outside the zone facing elimination.
Concept:
Typically, battle royale contestants are only given one life to play, not multiple lives; any players who die are rarely allowed to respawn. Games with team support may allow players to enter a temporary, not permanent, near-death state once health is depleted, giving allies the opportunity to revive them before they give out or are finished off by an opponent. The match is over when only one player or team remains, and the game typically provides some type of reward, such as in-game currency used for cosmetic items, to all players based on how long they survived. The random nature of starting point, item placement, and safe area reduction enables the battle royale genre to challenge players to think and react quickly and improve strategies throughout the match so as to be the last player or team standing. In addition to standalone games, the battle royale concept may also be present as part of one of many game modes within a larger game, or may be applied as a user-created mod for another game. There are various modifications that can be implemented atop the fundamentals of the battle royale. For example, Fortnite introduced a temporary 50-versus-50 player mode as an event in its free-to-play Fortnite Battle Royale; players are assigned to one of the two teams and work with their teammates to collect resources and weapons towards constructing fortifications as the safe area of the game shrinks down, with the goal of eliminating all the players on the other team.
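As a purely illustrative sketch of the loop described above, and not any specific game's implementation, the snippet below scatters players on a map, shrinks a safe zone each phase, and eliminates players until one remains; all numbers and mechanics are invented for the example.

```python
import random

# Toy model of the battle royale loop: random starting positions, a shrinking
# safe zone that eliminates players caught outside it, and one fight per phase.
random.seed(0)
players = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(20)}
centre, radius = (50.0, 50.0), 80.0

while len(players) > 1:
    radius *= 0.8                                     # safe zone shrinks each phase
    players = {p: (x, y) for p, (x, y) in players.items()
               if (x - centre[0]) ** 2 + (y - centre[1]) ** 2 <= radius ** 2}
    if len(players) > 1:                              # one random elimination per phase
        del players[random.choice(list(players))]

print("winner:", next(iter(players), None))
```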
History:
Formative elements of the battle royale genre had existed prior to the 2010s. Gameplay modes featuring last-man-standing rules have been a frequent staple of multiplayer online action games, though generally with fewer total players, as early as 1990's Bomberman, which introduced multiplayer game modes in which all players start with the same minimal abilities, collect power-ups, and fight until the last player is left standing. The elements of scavenging and surviving on a large open-world map were popularized through survival games. The 2000 Japanese film Battle Royale, along with Koushun Takami's earlier 1999 novel of the same name and its 2000 manga adaptation, set out the basic rules of the genre, including players being forced to kill each other until there is a single survivor, taking place on a shrinking map, and the need to scavenge for weapons and items. It soon inspired a wave of battle royale-themed Japanese manga and anime, such as Gantz (2000), Future Diary (2006), and Btooom! (2009), and the battle royale formula eventually appeared in The Hunger Games franchise. Fictional battle royale video games were depicted in Btooom!, and in the Phantom Bullet (Gun Gale Online) arc of the light novel series Sword Art Online (2010 in print) as the "Bullet of Bullets" tournament. Initial attempts at adapting the Battle Royale formula into video games came in the form of Japanese visual novel games that focused on storytelling and puzzle-solving, such as Higurashi: When They Cry (2002), Zero Escape (2009) and Danganronpa (2010). However, these visual novel games are distinct from the genre which became known as battle royale games, which emerged when Western developers later adapted the Battle Royale formula into a shooter game format.
History:
Early mods and games (2012–2016) Shortly after the release of the 2012 film The Hunger Games, which had a similar premise to the earlier film Battle Royale, a server plug-in named Hunger Games (later changed to Survival Games) was developed for Minecraft. Survival Games takes inspiration from the film, initially placing players at the center of the map near a set of equipment chests. When the game commences, players can compete over the central resources or spread out to find items stored in chests scattered around the play area. Players killed are eliminated and the last surviving player wins the match. In DayZ, a mod for Arma 2, released in August 2012, players struggle alongside or against each other to obtain basic necessities to continue living in a persistent sandbox filled with various dangers. The mod was designed to include player versus player encounters, but generally these events were infrequent due to the size of the game's map and the persistence of the game world. This led to the development of game mods that sacrificed DayZ's open-endedness in favor of focusing on more frequent hostile interactions between players to determine an eventual winner.
History:
The most influential battle royale mod was created by Brendan Greene, known by his online alias "PlayerUnknown", whose Battle Royale mod for DayZ was first released in 2013. This mod was directly inspired by Battle Royale. In contrast to Hunger Games-inspired mods, in Greene's mod weapons were randomly scattered around the map. Greene recreated this mod for Arma 3 in 2014. Greene continued to use his format as a consultant for H1Z1: King of the Kill before becoming the creative developer at Bluehole of a standalone game representing his vision of the battle royale genre, which would later be released as PUBG: Battlegrounds.
History:
Games from other developers took inspiration from highly played battle royale-style mods, as well as the popularity of The Hunger Games film series. Ark: Survival Evolved by Studio Wildcard introduced its "Survival of the Fittest" mode in July 2015, which was geared to be used for esports tournaments. The mode was temporarily broken off as its own free-to-play game during 2016 before the developers opted to merge it back into the main game for ease of maintenance of the overall game. In 2016, a battle royale mobile game, Btooom Online, based on the 2009 manga Btooom!, was developed and released in Japan. Despite some initial success on the Japanese mobile charts, Btooom Online was ultimately a commercial failure in Japan.
History:
Formation of standalone games (2017–2018) While formative elements of the battle royale genre had been established before 2017, the genre grew through 2017 and 2018 from two principal titles: PUBG: Battlegrounds and the Fortnite Battle Royale mode it soon inspired. Both games drew tens of millions of players in short periods of time, proving them commercial successes and leading to further growth after 2018. H1Z1: King of the Kill, which predated these two titles in the genre, had become a fixture among the most-played games on Steam by the start of 2017, but was not able to maintain its player base. PUBG: Battlegrounds was created by Brendan Greene, its title based on his online alias "PlayerUnknown". The game was based on his previous Battle Royale mod for ARMA 2 and DayZ, first released in 2013. Building on his earlier work, Greene went to work at Bluehole in South Korea, becoming the creative developer of a standalone game representing his vision of the battle royale genre, PUBG: Battlegrounds. While Battlegrounds was not the first battle royale game, its release to early access in March 2017 drew a great deal of attention, selling over twenty million copies by the end of the year, and it is considered the defining game of the genre. In September 2017, the game broke the previous record for highest number of concurrent players on Steam, with over 1.3 million users playing the game simultaneously. Battlegrounds' explosive growth and how it established the battle royale genre was considered one of the top trends in the video game industry in 2017. Battlegrounds' popularity created new interest in the battle royale genre, and numerous games that copied its fundamental gameplay appeared in China shortly after its release. Epic Games had released Fortnite, a cooperative survival game, into early access near the release of Battlegrounds. Epic saw the potential to create their own battle royale mode, and by September 2017 released the free-to-play Fortnite Battle Royale, which combined some of the survival elements and mechanics from the main Fortnite game with the battle royale gameplay concept. The game saw similar player counts as Battlegrounds, with twenty million unique players reported by Epic Games by November 2017. Bluehole expressed concern at this move, less because Fortnite Battle Royale was a clone of Battlegrounds than because they had been working with Epic Games for technical support of the Unreal Engine in Battlegrounds, and thus were worried that Fortnite might add planned features to its battle royale mode before they could release them in Battlegrounds. Battlegrounds' developer, PUBG Corporation, filed a lawsuit against Epic in South Korea in January 2018 claiming that Fortnite Battle Royale infringed on Battlegrounds' copyrights. Market observers predicted that there would be little likelihood of Bluehole winning the case, as it would be difficult to establish the originality of PUBG in court due to its own derivation from Battle Royale. By the end of June 2018, the lawsuit had been closed by PUBG Corporation for undisclosed reasons. In 2018, Fortnite Battle Royale rivaled Battlegrounds in player numbers and surpassed it in revenue, which was attributed to its free-to-play business model and cross-platform support, as well as its accessibility to casual players. Battlegrounds creator Brendan Greene credited it with further growing the battle royale genre.
Its mainstream publicity further increased following a stream by Tyler "Ninja" Blevins with Drake, JuJu Smith-Schuster and Travis Scott, which set a Twitch record for concurrent viewership. It accumulated a total player base of 45 million in January and 3.4 million concurrent players in February. Polygon labeled it "the biggest game of 2018" and "a genuine cultural phenomenon", with "everyone from NFL players to famous actors" playing it, including Red Sox player Xander Bogaerts and Bayern Munich's youth team borrowing celebrations from the game. In Asia, however, PUBG remains the most popular battle royale game. Other popular battle royale games released in 2017, inspired by the success of PUBG: Battlegrounds, include two NetEase titles, Rules of Survival (which was discontinued in July 2022) and the mobile game Knives Out, and the mobile game Free Fire by Garena, which had over 150 million daily active players as of 2021. Each of these games had received hundreds of millions of downloads, mostly in Asia, by 2018.
History:
Mainstream popularity (2018–present) With the success of Battlegrounds and Fortnite, the battle royale genre expanded greatly. Major publishers, including Electronic Arts, Activision, and Ubisoft, acknowledged the impact of the growing genre on their future plans and on the industry as a whole. Activision's Call of Duty series features a battle royale mode titled Blackout in its 2018 installment, Call of Duty: Black Ops 4, as does EA's Battlefield V. Other established games added battle royale-inspired game modes in updates, such as Grand Theft Auto Online, Paladins, Dota 2, Battlerite, and Counter-Strike: Global Offensive. In February 2019, EA released the free-to-play Apex Legends, which exceeded 50 million player accounts within a month. The second main battle royale installment in the Call of Duty franchise, titled Call of Duty: Warzone, was released in March 2020 as a part of the Call of Duty: Modern Warfare video game but does not require its purchase; the game reached more than 50 million players in its first month of release. The battle royale approach has also been used in games from genres not normally associated with shooter games. Tetris 99 is a 2019 game released by Nintendo for the Nintendo Switch that has 99 players simultaneously competing in a game of Tetris. Players can direct "attacks" on other players for each line they complete, attempting to remain the last player standing. Tetris 99 served as a template for the Switch games Super Mario Bros. 35 and Pac-Man 99. Blizzard Entertainment added a battle royale-inspired "Battlegrounds" mode to its digital card game Hearthstone, where eight players vie to win over the others through several rounds of drafting new cards and fighting in one-on-one events. The racing game Forza Horizon 4 from Playground Games added a battle royale mode called "The Eliminator" where players all start with the same car, but can gain upgrades by beating other players and discovering "drops" around the map; Microsoft stated in 2021 that it was the most popular multiplayer mode in the game. Babble Royale is a game developed by Frank Lantz that uses Scrabble as a basis for a word-based battle royale game. As of December 2019, dozens of battle royale games have debuted but, similar to the MOBA genre, only two or three titles have maintained mainstream popularity at the same time. Other games and battle royale modes had briefly become popular before their concurrent player counts dropped and players returned to Fortnite or Battlegrounds; Apex Legends was the year's only new successful battle royale game. In contrast to other multiplayer-only games, the large number of players typically involved in battle royale games generally requires a large enough concurrent player base for matchmaking in a reasonable amount of time. The Culling, by Xaviant Studios, was released in early access in 2016, and was designed to be a streaming-friendly battle royale mode for 16 players. However, following the release of Battlegrounds, The Culling lost much of its player base, and a few months after releasing the full version of the game, Xaviant announced they were ending further development on it to move on to other projects. Radical Heights by Boss Key Productions was launched in April 2018 but within two weeks had lost 80% of its player base. SOS, a battle royale game released by Outpost Games in December 2017, had its player counts drop into the double digits by May 2018, leading Outpost to announce the game's closure by November 2018.
While several major battle royale announcements occurred at E3 in 2018, only Fallout 76's battle royale mode appeared at the trade show in 2019. The Chinese government, through its Audio and Video and Numeral Publishing Association, stated in October 2017 that it would discourage its citizens from playing battle royale games as it deems them too violent, which "deviates from the values of socialism and is deemed harmful to young consumers", as translated by Bloomberg. Gaming publications in the West speculated that this would make it difficult or impossible to publish battle royale games within the country. In November 2017, PUBG Corporation announced its partnership with Tencent to publish the game in China, making some changes in the game to "make sure they accord with socialist core values, Chinese traditional culture and moral rules" to satisfy Chinese regulations and censors. However, during mid-2018, the Chinese government revamped how it reviewed and classified games to be published in China, and by December 2018, after the formation of the new Online Ethics Review Committee, several battle royale titles, including Fortnite and PUBG, were listed as prohibited or required to be withdrawn from play. While PUBG Corporation was working with Tencent on a Chinese release, many clones of Battlegrounds were released in China, creating a new genre called "chicken-eating game", named after the congratulatory line to the last player standing in Battlegrounds, "Winner Winner Chicken Dinner!"
Impact:
The rapid growth and success of the battle royale genre has been attributed to several factors, including the way all players start in the same vulnerable state, which eliminates any intrinsic advantage, and the genre's suitability as a spectator esport. Other factors include specific games' business models, such as Fortnite Battle Royale being free and available across computers, consoles, and mobile devices. A University of Utah professor also considers that battle royale games realize elements of Maslow's Hierarchy of Needs, a scheme to describe human motivation, more so than video games have in the past. While the lowest tiers of Maslow's hierarchy, physiological and safety, are met by the survival elements of battle royales, the love/belonging and esteem tiers are a result of the battle royale being necessarily a social and competitive game, and the final tier of self-actualization comes from becoming skilled in the game to win frequently. Business Insider projected that battle royale games would bring in over $2 billion during 2018 alone, and would generate a total of $20 billion by the end of 2019. SuperData Research reported that, in 2018, the three top-grossing battle royale games (Fortnite, PUBG and Call of Duty: Black Ops 4) generated nearly $4 billion in combined digital revenue that year. SuperData Research reported that the top four highest-grossing battle royale games of 2020 (PUBG Mobile, Garena Free Fire, Call of Duty: Warzone and Fortnite) generated more than $7 billion worldwide in combined digital revenue that year. Fortnite grossed over $9 billion worldwide by 2019, while PUBG Mobile grossed over $8 billion by early 2022. Sensor Tower reported that 2018's top three most-downloaded mobile battle royale games (PUBG Mobile, Garena Free Fire and Fortnite) received over 500 million downloads combined that year. As of 2020, the most-played battle royale games include PUBG Mobile with 600 million players, Fortnite with 350 million players, NetEase's mobile game Knives Out with over 250 million players, Rules of Survival with 230 million players, and Garena Free Fire with over 180 million players. Turtle Beach Corporation, a manufacturer of headphones and microphones for gaming, reported an increase of over 200% in net revenues for the second quarter of 2018 over the same quarter in 2017, which it attributed to the popularity of the battle royale genre. In a 2022 study conducted on Japanese students who regularly play online games, battle royale gameplay was shown to have statistically significant correlations with gaming addiction and a sense of underachievement. The study also suggested that the battle royale genre requires more attention than other esports genres, particularly in terms of its link with aggressive feelings. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Barefoot skiing**
Barefoot skiing:
Barefoot skiing is water skiing behind a motorboat without the use of water skis, commonly referred to as "barefooting". Barefooting requires the skier to travel at higher speeds (30–45 mph/48–72 km/h) than conventional water skiing (20–35 mph/32–56 km/h). The speed required to keep the skier upright varies with the weight of the barefooter and can be approximated by the following formula: (W / 10) + 20, where W is the skier's weight in pounds and the result is in miles per hour. Barefooting is performed both in show skiing and as a discipline on its own.
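A minimal sketch of the rule of thumb above; the skier weights are illustrative examples only.

```python
def barefoot_speed_mph(weight_lb: float) -> float:
    """Approximate boat speed (mph) needed to keep a barefooter upright,
    using the rule of thumb (weight / 10) + 20 quoted above."""
    return weight_lb / 10 + 20

# Example: a 160 lb skier needs roughly 36 mph (about 58 km/h).
for w in (120, 160, 200):
    print(f"{w} lb -> {barefoot_speed_mph(w):.0f} mph")
```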
History of barefooting:
Barefoot water skiing originated in Winter Haven, Florida. According to the Water Ski Hall of Fame, and witnesses of the event, 17-year-old A.G. Hancock became the first person ever to barefoot water ski in 1947. That same year, Richard Downing "Dick" Pope Jr. was the first person ever to be photographed barefooting, stepping off his skis on a training boom alongside the boat. In 1950, the first barefoot competition was held in Cypress Gardens, with Pope and Mexican competitor Emilio Zamudio as the only two known barefooters in the world at the time. The first woman to waterski barefoot was Charlene Zint in 1951. Throughout the 1950s, additional barefoot starting techniques were invented, including the two-ski jump out, the beach start (invented by Ken Tibado in 1955), and the deep water start (invented by Joe Cash in 1958). The tumble-turn maneuver was 'invented' by accident during a double barefoot routine in 1960 when Terry Vance fell onto his back during a step-off and partner Don Thomson (still on his skis) spun him around forward, enabling Vance to regain a standing posture. In 1961, Randy Rabe became the first backward barefooter by stepping off a trick ski backwards, a maneuver Dick Pope had first tried in 1950 but vowed never to try again after a painful fall. The early 1960s saw Don Thomson emerge as the first "superstar" of the sport, developing both back-to-front and front-to-back turnarounds, and performing the first barefoot tandem ride in a show at Cypress Gardens. During this time barefooting began developing in Australia as well. In April 1963, the first national competition was held in Australia, with 38 competitors. The Australians were the first to develop barefoot jumping, one of the three events in modern barefoot competition, as well as to pioneer many new tricks. In November 1978, the first world championships were held in Canberra, Australia, where 54 skiers from a total of 10 different countries competed. Australians Brett Wing and Colleen Wilkinson captured the men's and women's titles.
History of barefooting:
In 1976 Briton Keith Donnelly set the first (officially recognized) World Barefoot Jump record of 13.25 meters.
Equipment:
Equipment required for barefooting: Boat – Barefooting requires a boat or other towing object that can travel at a speed of 30–45 mph with a barefooter under tow. Some boats are made specifically for barefooting, as they have small wakes and can travel at fast speeds. ABC Boats maintains a current list of boats approved by the American Barefoot Club.
Equipment:
Handles and ropes – Normally a handled rope is used, but it may be optionally replaced with a ski boom (see below). A safety release may be used with the rope so that it can be detached from the boat in the event the barefooter becomes tangled in the rope. Though it is possible to barefoot with a normal 75-foot nylon tow rope and handle, many skiers use special ropes made out of Poly-E or Spectra to reduce stretch. Barefoot handles have plastic tubing around them, so the skier can wrap their feet around the rope without getting rope burn, and they can have small modifications for frontward and backward toe holds.
Equipment:
Personal Flotation Device – It is recommended and in many locations required that skiers and barefooters wear a flotation device or padded wetsuit.
Equipment:
Helmet – In many locations, a helmet is required only for the jump event.
Optional equipment: Barefoot wetsuit – The skier wears a fitted, padded neoprene barefoot wetsuit which has built-in flotation, so a separate life jacket is unnecessary. It is possible to ski with a Coast Guard–approved Type III flotation vest, though this does not pad the skier well and the skier will not be able to perform many tricks.
Equipment:
Padded shorts – Though not necessary, many barefooters wear padded neoprene shorts. These help pad the skier's buttocks which is very helpful in performing the deep water start and tumble turns.
Equipment:
Booms – Barefoot booms are used for learning barefooting and also, learning new barefoot tricks. The boom is a long pole that hangs over the edge of the boat and allows the barefooter to ski directly alongside the boat. Because the pole is fixed the barefooter may lean his or her body weight onto the pole and recover from falls more easily than on a rope.
Equipment:
Shoe Skis – Shoe skis may be used for training. Shoe skis are small 'skis' put on the foot that are only a few inches longer and wider than the skier's foot. Shoe skiing is performed at a much lower speed (approximately 18 mph) than barefooting because of the increased lift provided by the surface area of the ski. As an intermediate step to barefooting, flat-soled street shoes may also be worn. This provides more lift than bare feet, but a more similar experience to barefooting than actual wooden 'shoe skis'.
Competition:
Barefoot water skiing has a competitive aspect which is well established. In traditional competition, there are three events: Tricks – The skier has two passes of 15 seconds to complete as many different tricks as possible. All tricks have specific point values depending on difficulty. The skier also is awarded points for the start trick they performed to get up. Mikey Caruso is the youngest barefoot water skier to ever compete longline in a tournament, at the age of three at the 1988 Banana George Blairfoot Bananza, where he borrowed Parks Bonifay's wetsuit. The current world record for the Men's Open division is 13,350 points, set by David Small on August 14, 2018. For the Women's Open division, the world record is 10,100 points, set by Ashleigh Stebbeings on March 13, 2014.
Competition:
In the Boy's division, Jackson Gerard set a record of 12,850 points on July 28, 2018. It also counted as the Men's Open record until it was broken two weeks later by World Barefoot Center teammate David Small.
For the Girls division a record of 7400 points was set by Georgia Groen on April 1, 2013.
Slalom – The skier has two passes of 15 seconds to cross the wake as many times as possible. The skier can cross the wake forwards or backwards and on two feet or one foot. The world record for the Men's Open division, a score of 20.6, was set by Keith St. Onge on January 6, 2006.
Ashleigh Stebbeings set the Women's Open division world record on October 8, 2014, with a score of 17.2 points.
The Boys division world record of 19.2 pts. was set on January 6, 2006, by Heinrich Sam. It was tied by Jackson Gerard on August 16, 2018.
Nadine De Villiers set the Girl's division world record of 16.1 pts. on April 5, 1997.
Jump – The skier travels over a small fiberglass jump ramp. They have three jumps and the longest one successfully landed counts. Professionals can jump as far as 90 feet (27 meters). The current world record for the Men's Open division of 29.9 meters (98 feet) was set by David Small on August 11, 2010.
With a jump of 23.4 meters (77 feet) Ashleigh Stebbeings set the Women's Open world record on February 19, 2017.
Competition:
Tee-Jay Russo jumped 26.7 meters (88 feet) to set the Boy's division world record on December 29, 2018. The Girl's division world record of 12.1 meters (40 feet) was set by Kim Rowswell on August 13, 2010. Some other barefoot competitions feature endurance events. These include: Figure 8 – Two skiers on opposite sides of the wake ski while the boat drives in the pattern of a figure 8. The skier who is the last one standing wins.
Competition:
Team Endurance – This is a race between a variety of teams. Each team has a boat and the skiers take turns skiing. This generally takes place on a long river, where race distances can be up to about 45 miles. The first team to cross the finish line wins. The newest form of barefoot competition is an event which brings together all three events (Tricks, Slalom, and Jump) into a single set. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LED filament**
LED filament:
An LED filament light bulb is an LED lamp designed to resemble a traditional incandescent light bulb with visible filaments for aesthetic and light distribution purposes, but with the high efficiency of light-emitting diodes (LEDs). It produces its light using LED filaments, which are series-connected strings of diodes that resemble in appearance the filaments of incandescent light bulbs. They are direct replacements for conventional clear (or frosted) incandescent bulbs, as they are made with the same envelope shapes and the same bases that fit the same sockets, and they work at the same supply voltage. They may be used for their appearance, similar when lit to a clear incandescent bulb, or for their wide angle of light distribution, typically 300°. They are also more efficient than many other LED lamps.
History:
An LED filament-type light bulb was produced by Ushio Lighting in 2008, intended to mimic the appearance of a standard light bulb. Contemporary bulbs typically used a single large LED or matrix of LEDs attached to one large heatsink. As a consequence, these bulbs typically produced a beam only 180 degrees wide. By about 2015, LED filament bulbs had been introduced by several manufacturers. These designs used several LED filament light emitters, similar in appearance when lit to the filament of a clear, standard incandescent bulb, and very similar in detail to the multiple filaments of the early Edison incandescent bulbs. LED filament bulbs were patented by Ushio and Sanyo in 2008. Panasonic described a flat arrangement with modules similar to filaments in 2013. Two other independent patent applications were filed in 2014 but were never granted. The early filed patents included a heat drain under the LEDs. At that time, luminous efficacy of LEDs was under 100 lm/W. By the late 2010s, this had risen to near 160 lm/W.
Design:
The LED filament consists of multiple series-connected LEDs on a transparent substrate, referred to as chip-on-glass (COG). These transparent substrates are made of glass or sapphire materials. This transparency allows the emitted light to disperse evenly and uniformly without any interference. An even coating of yellow phosphor in a silicone resin binder material converts the blue light generated by the LEDs into light approximating white light of the desired colour temperature—typically 2700 K to match the warm white of an incandescent bulb. Degradation of silicone binder and leakage of blue light are design issues in LED filament lights.
Design:
A benefit of the filament design is potentially higher efficiency due to the use of more LED emitters with lower driving currents. A major benefit of the design is the ease with which near-full "global" (360°) illumination can be obtained from arrays of filaments, although two zones emitting less light appear diagonally to the substrate.
Design:
The lifespan of LED emitters is reduced by high operating temperatures. LED filament bulbs have many smaller, lower-power LED chips than other types, avoiding the need for a heatsink, but thermal management must still be considered; multiple heat-dissipation paths are needed for reliable operation. The lamp may contain a high-thermal-conductivity gas (helium) blend to better conduct heat from the LED filament to the glass bulb. The LED filaments can be arranged to optimize heat dissipation. The life expectancy of the LED chips correlates with the junction temperature (Tj); light output falls faster with time at higher junction temperatures. Achieving a 30,000 hour life expectancy while maintaining 90% luminous flux requires the junction temperature to be maintained below 85 °C. Also worth noting is that LED filaments can burn out quickly if the controlled gas fill is ever lost for any reason.
Design:
The power supply in a clear bulb must be very small to fit into the base of the lamp. The large number of LEDs (typically 28 per filament) simplifies the power supply compared to other LED lamps, as the voltage per blue LED is between 2.48 and 3.7 volts DC. Some types may additionally use red LEDs (1.63 to 2.03 V DC). Two filaments with a mix of red and blue LEDs thus add up to close to 110 V DC, and four add up to close to 220–240 V DC, compared with the reduction of the mains AC voltage to between 3 and 12 V DC needed for other LED lamps. Four filaments are usually used, and the appearance is similar to an overrun carbon filament lamp. Typically, a mix of phosphors is used to give a higher color rendering index (as distinct from color temperature) than the early blue LEDs with yellow phosphor. The simple linear regulator used by some cheaper bulbs will cause some flickering at twice the frequency of the mains alternating current, which can be difficult to detect, but possibly contributes to eyestrain and headaches. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
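A minimal sketch of the series-string arithmetic described above: forward voltages of series-connected LEDs add, so a small number of filaments can be matched to the mains voltage. The average per-LED forward voltage below is an assumed nominal for a mixed red/blue filament, chosen within the ranges quoted in the text.

```python
# Series LED string: total forward voltage is the per-LED voltage times the
# number of LEDs, so a few 28-LED filaments approach typical mains voltages.
LEDS_PER_FILAMENT = 28    # typical count quoted above
AVG_VF = 2.0              # volts, assumed average for a mixed red/blue filament

def string_voltage(n_filaments: int, vf: float = AVG_VF) -> float:
    """Total DC forward voltage of n_filaments filaments wired in series."""
    return n_filaments * LEDS_PER_FILAMENT * vf

print(string_voltage(2))  # ~112 V, close to 110 V mains
print(string_voltage(4))  # ~224 V, close to 220-240 V mains
```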
**Ellerman bombs**
Ellerman bombs:
Ellerman bombs are small-scale brightenings in the Sun's lower chromosphere. They typically take place in areas with strong magnetic fields and near emerging flux regions. They are named after Ferdinand Ellerman, who studied them in detail in the 20th century. The phenomenon was first reported by W. M. Mitchell in early 1900. Although Ellerman described them in detail in 1917, the physical mechanism behind them is still debated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Climbing Silver**
Climbing Silver:
Climbing Silver (棒銀 bōgin, literally "pole-silver") is a shogi strategy.
Climbing Silver:
Climbing Silver involves advancing a silver upward along with an advanced or dropped pawn supported by the rook, aiming to break through the opponent's camp on their bishop's side. Many different Static Rook shogi openings include a Climbing Silver component. For instance, Climbing Silver can be played as part of Double Wing Attack, Fortress, or Bishop Exchange openings. (However, there are other variants of these openings that don't include Climbing Silver.) Climbing Silver can also be used against Ranging Rook opponents.
Climbing Silver:
Diagonal Climbing Silver or Oblique Climbing Silver (斜め棒銀 naname bōgin) is a Climbing Silver attack involving the left silver which moves diagonally from its starting position on 7i to attack on the third or second files. This type of Climbing Silver is typical in Static Rook vs Ranging Rook games.
Positioning:
In the adjacent diagrams, Black's silver advances to rank 5.
Positioning:
Once the silver has reached the e file (S-15 in the adjacent diagram), Black can attempt to attack White's bishop pawn at 23 by advancing their pawn (P-24). White can capture Black's pawn, but the silver can recapture White's pawn. Because White did not properly defend their bishop's head here, White's camp is somewhat weaker and more susceptible to subsequent attacks from Black.
Positioning:
Similarly, it's also possible to play Climbing Silver when Black has no pawn on the second file. Here the silver can climb to the empty 25 square. And, if there's a pawn in hand, then that pawn can be dropped to 24.
In the board diagram to the right, Black's silver has successfully climbed to rank 5 on the first file (15). A subsequent attack by Black, for example, could aim to sacrifice this silver in order to remove White's lance and then drop a dangling pawn within White's camp that threatens to promote.
Climbing Silver formations may be used with several different Static Rook openings such as Fortress, Double Wing, and Bishop Exchange as well as Ranging Rook openings such as Fourth File Rook. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Darcy friction factor formulae**
Darcy friction factor formulae:
In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the Darcy friction factor, a dimensionless quantity used in the Darcy–Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow.
The Darcy friction factor is also known as the Darcy–Weisbach friction factor, resistance coefficient or simply friction factor; by definition it is four times larger than the Fanning friction factor.
Notation:
In this article, the following conventions and definitions are to be understood: The Reynolds number Re is taken to be Re = V D / ν, where V is the mean velocity of fluid flow, D is the pipe diameter, and ν is the kinematic viscosity μ / ρ, with μ the fluid's dynamic viscosity and ρ the fluid's density.
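The sketch below evaluates the notation just defined for water in a circular pipe; the numerical values are illustrative assumptions, not taken from the article.

```python
# Reynolds number Re = V*D/nu and relative roughness eps/D for a pipe flow.
def reynolds(V: float, D: float, nu: float) -> float:
    """Re = V*D/nu with V in m/s, D in m, nu in m^2/s."""
    return V * D / nu

V = 2.0          # mean velocity, m/s (assumed)
D = 0.05         # pipe inside diameter, m (assumed)
nu = 1.0e-6      # kinematic viscosity of water near 20 C, m^2/s
eps = 4.5e-5     # effective roughness height, m (assumed, commercial steel)

Re = reynolds(V, D, nu)     # 1.0e5 -> fully turbulent regime
rel_rough = eps / D         # 9.0e-4
print(Re, rel_rough)
```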
Notation:
The pipe's relative roughness ε / D, where ε is the pipe's effective roughness height and D the pipe (inside) diameter.
f stands for the Darcy friction factor. Its value depends on the flow's Reynolds number Re and on the pipe's relative roughness ε / D.
The log function is understood to be base-10 (as is customary in engineering fields): if x = log(y), then y = 10^x.
The ln function is understood to be base-e: if x = ln(y), then y = e^x.
Flow regime:
Which friction factor formula may be applicable depends upon the type of flow that exists:
Laminar flow
Transition between laminar and turbulent flow
Fully turbulent flow in smooth conduits
Fully turbulent flow in rough conduits
Free surface flow
Transition flow Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime.
Flow regime:
Turbulent flow in smooth conduits The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is sometimes used in rough pipes because of its simplicity. The Blasius correlation is valid up to a Reynolds number of 100,000.
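A minimal sketch of the Blasius correlation for smooth pipes, using the constant 0.3164 given later in this article; the validity check is a simple guard based on the range stated above.

```python
def blasius(Re: float) -> float:
    """Darcy friction factor for smooth pipes, f = 0.3164 * Re**-0.25."""
    if not 4000 <= Re <= 1e5:
        raise ValueError("Blasius correlation is intended for 4e3 <= Re <= 1e5")
    return 0.3164 * Re ** -0.25

print(blasius(1e5))  # ~0.0178
```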
Flow regime:
Turbulent flow in rough conduits The Darcy friction factor for fully turbulent flow (Reynolds number greater than 4000) in rough conduits can be modeled by the Colebrook–White equation.
Free surface flow The last formula in the Colebrook equation section of this article is for free surface flow. The approximations elsewhere in this article are not applicable for this type of flow.
Choosing a formula:
Before choosing a formula it is worth knowing that in the paper on the Moody chart, Moody stated the accuracy is about ±5% for smooth pipes and ±10% for rough pipes. If more than one formula is applicable in the flow regime under consideration, the choice of formula may be influenced by one or more of the following:
Required accuracy
Speed of computation required
Available computational technology:
calculator (minimize keystrokes)
spreadsheet (single-cell formula)
programming/scripting language (subroutine)
Choosing a formula:
Colebrook–White equation The phenomenological Colebrook–White equation (or Colebrook equation) expresses the Darcy friction factor f as a function of Reynolds number Re and pipe relative roughness ε / Dh, fitting the data of experimental studies of turbulent flow in smooth and rough pipes.
The equation can be used to (iteratively) solve for the Darcy–Weisbach friction factor f.
Choosing a formula:
For a conduit flowing completely full of fluid at Reynolds numbers greater than 4000, it is expressed as:

$$\frac{1}{\sqrt{f}} = -2\log\left(\frac{\varepsilon}{3.7\,D_\mathrm{h}} + \frac{2.51}{\mathrm{Re}\,\sqrt{f}}\right)$$

or

$$\frac{1}{\sqrt{f}} = -2\log\left(\frac{\varepsilon}{14.8\,R_\mathrm{h}} + \frac{2.51}{\mathrm{Re}\,\sqrt{f}}\right)$$

where:
Hydraulic diameter, Dh (m, ft) – For fluid-filled, circular conduits, Dh = D = inside diameter
Hydraulic radius, Rh (m, ft) – For fluid-filled, circular conduits, Rh = D/4 = (inside diameter)/4
Note: Some sources use a constant of 3.71 in the denominator for the roughness term in the first equation above.
Choosing a formula:
Solving The Colebrook equation is usually solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain explicit reformulation of the Colebrook equation.
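As a minimal sketch of one common numerical approach, the snippet below solves the implicit Colebrook equation given above by fixed-point iteration on 1/√f; the test values are the illustrative Re and relative roughness used earlier.

```python
import math

def colebrook(Re: float, rel_rough: float, tol: float = 1e-12) -> float:
    """Iterate x = -2*log10(eps/(3.7*D) + 2.51*x/Re), where x = 1/sqrt(f)."""
    x = 8.0                          # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new ** 2

print(colebrook(1e5, 1e-4))          # roughly 0.0185
```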
Choosing a formula:
Writing $x = 1/\sqrt{f}$, $b = \varepsilon/(14.8\,R_\mathrm{h})$ and $a = 2.51/\mathrm{Re}$, the Colebrook equation becomes

$$x = -2\log(ax + b)$$

or

$$10^{-x/2} = ax + b.$$

Setting $p = 10^{-1/2}$ will get:

$$p^{x} = ax + b$$

whose solution is

$$x = \frac{W\!\left(\frac{\ln p}{a}\,p^{-b/a}\right)}{\ln p} - \frac{b}{a}$$

then:

$$\frac{1}{\sqrt{f}} = \frac{2}{\ln 10}\,W\!\left(\frac{\ln 10}{2a}\,10^{\,b/(2a)}\right) - \frac{b}{a}$$

Expanded forms Additional, mathematically equivalent forms of the Colebrook equation are:

$$\frac{1}{\sqrt{f}} = 1.7384\ldots - 2\log\left(\frac{2\varepsilon}{D_\mathrm{h}} + \frac{18.574}{\mathrm{Re}\,\sqrt{f}}\right)$$

where:
1.7384... = 2 log (2 × 3.7) = 2 log (7.4)
18.574 = 2.51 × 3.7 × 2

and

$$\frac{1}{\sqrt{f}} = 1.1364\ldots + 2\log\left(\frac{D_\mathrm{h}}{\varepsilon}\right) - 2\log\left(1 + \frac{9.287}{\mathrm{Re}\,(\varepsilon/D_\mathrm{h})\,\sqrt{f}}\right)$$

or

$$\frac{1}{\sqrt{f}} = 1.1364\ldots - 2\log\left(\frac{\varepsilon}{D_\mathrm{h}} + \frac{9.287}{\mathrm{Re}\,\sqrt{f}}\right)$$

where:
1.1364... = 1.7384... − 2 log (2) = 2 log (7.4) − 2 log (2) = 2 log (3.7)
9.287 = 18.574 / 2 = 2.51 × 3.7.

The additional equivalent forms above assume that the constants 3.7 and 2.51 in the formula at the top of this section are exact. The constants are probably values which were rounded by Colebrook during his curve fitting; but they are effectively treated as exact when comparing (to several decimal places) results from explicit formulae (such as those found elsewhere in this article) to the friction factor computed via Colebrook's implicit equation.
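The Lambert-W form above can be evaluated directly with SciPy, as sketched below; note that for large Re·(ε/D) the power of ten inside W overflows double precision, so this direct evaluation is only a sketch for moderate parameters.

```python
import math
from scipy.special import lambertw

def colebrook_lambertw(Re: float, rel_rough: float) -> float:
    """Explicit Colebrook solution via the Lambert W function (see above)."""
    a = 2.51 / Re
    b = rel_rough / 3.7                  # equivalently eps/(14.8*Rh) for a full pipe
    x = (2.0 / math.log(10)
         * lambertw(math.log(10) / (2 * a) * 10 ** (b / (2 * a))).real
         - b / a)                        # x = 1/sqrt(f)
    return 1.0 / x ** 2

print(colebrook_lambertw(1e5, 1e-4))     # ~0.0185, matching the iterative solution
```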
Choosing a formula:
Equations similar to the additional forms above (with the constants rounded to fewer decimal places, or perhaps shifted slightly to minimize overall rounding errors) may be found in various references. It may be helpful to note that they are essentially the same equation.
Free surface flow Another form of the Colebrook-White equation exists for free surfaces. Such a condition may exist in a pipe that is flowing partially full of fluid. For free surface flow:

$$\frac{1}{\sqrt{f}} = -2\log\left(\frac{\varepsilon}{12\,R_\mathrm{h}} + \frac{2.51}{\mathrm{Re}\,\sqrt{f}}\right)$$
Choosing a formula:
The above equation is valid only for turbulent flow. Another approach for estimating f in free surface flows, which is valid under all the flow regimes (laminar, transition and turbulent), is the following:

$$f = \left(\frac{24}{\mathrm{Re}_h}\right)^{a}\left[0.86\,\frac{e^{W(1.35\,\mathrm{Re}_h)}}{\mathrm{Re}_h}\right]^{2(1-a)b}\left[\frac{1.34}{\left(\ln\!\left(12.21\,\frac{R_h}{\epsilon}\right)\right)^{2}}\right]^{(1-a)(1-b)}$$

where a is:

$$a = \frac{1}{1 + \left(\dfrac{\mathrm{Re}_h}{678}\right)^{8.4}}$$

and b is:

$$b = \frac{1}{1 + \left(\dfrac{\mathrm{Re}_h}{150\,R_h/\epsilon}\right)^{1.8}}$$

where Re_h is the Reynolds number in which h is the characteristic hydraulic length (hydraulic radius for 1D flows or water depth for 2D flows) and R_h is the hydraulic radius (for 1D flows) or the water depth (for 2D flows). The Lambert W function can be calculated as follows:

$$W(1.35\,\mathrm{Re}_h) = \ln(1.35\,\mathrm{Re}_h) - \ln\!\big[\ln(1.35\,\mathrm{Re}_h)\big] + \frac{\ln\!\big[\ln(1.35\,\mathrm{Re}_h)\big]}{\ln(1.35\,\mathrm{Re}_h)} + \frac{\big[\ln\!\big(\ln(1.35\,\mathrm{Re}_h)\big)\big]^{2} - 2\ln\!\big[\ln(1.35\,\mathrm{Re}_h)\big]}{2\big[\ln(1.35\,\mathrm{Re}_h)\big]^{2}}$$
Approximations of the Colebrook equation:
Haaland equation The Haaland equation was proposed in 1983 by Professor S.E. Haaland of the Norwegian Institute of Technology. It is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation, but the discrepancy from experimental data is well within the accuracy of the data.
The Haaland equation is expressed:

$$\frac{1}{\sqrt{f}} = -1.8\log\left[\left(\frac{\varepsilon/D}{3.7}\right)^{1.11} + \frac{6.9}{\mathrm{Re}}\right]$$

Swamee–Jain equation The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation.
$$f = \frac{0.25}{\left[\log\left(\frac{\varepsilon}{3.7\,D} + \frac{5.74}{\mathrm{Re}^{0.9}}\right)\right]^{2}}$$

Serghides's solution Serghides's solution is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. It was derived using Steffensen's method. The solution involves calculating three intermediate values and then substituting those values into a final equation.
$$A = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{12}{\mathrm{Re}}\right)$$
$$B = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{2.51\,A}{\mathrm{Re}}\right)$$
$$C = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{2.51\,B}{\mathrm{Re}}\right)$$
$$\frac{1}{\sqrt{f}} = A - \frac{(B - A)^{2}}{C - 2B + A}$$

The equation was found to match the Colebrook–White equation within 0.0023% for a test set with a 70-point matrix consisting of ten relative roughness values (in the range 0.00004 to 0.05) by seven Reynolds numbers (2500 to 10^8).
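As a sanity-check sketch, the snippet below implements Serghides's three-step formula as reconstructed above and compares it with a brute-force bisection of the implicit Colebrook–White equation; the test parameters are illustrative.

```python
import math

def serghides(Re, rr):
    """Serghides's explicit approximation of the Colebrook friction factor."""
    A = -2 * math.log10(rr / 3.7 + 12 / Re)
    B = -2 * math.log10(rr / 3.7 + 2.51 * A / Re)
    C = -2 * math.log10(rr / 3.7 + 2.51 * B / Re)
    x = A - (B - A) ** 2 / (C - 2 * B + A)       # x = 1/sqrt(f)
    return 1 / x ** 2

def colebrook_bisect(Re, rr, lo=0.005, hi=0.1):
    """Bisection on the residual of the implicit Colebrook-White equation."""
    g = lambda f: 1 / math.sqrt(f) + 2 * math.log10(rr / 3.7 + 2.51 / (Re * math.sqrt(f)))
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(serghides(1e5, 1e-4), colebrook_bisect(1e5, 1e-4))   # both ~0.0185
```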
Approximations of the Colebrook equation:
Goudar–Sonnad equation The Goudar equation is the most accurate approximation to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. The equation has the following form:

$$a = \frac{2}{\ln 10},\qquad b = \frac{\varepsilon/D}{3.7},\qquad d = \frac{\ln 10}{5.02}\,\mathrm{Re},\qquad s = bd + \ln(d)$$
$$q = s^{\,s/(s+1)},\qquad g = bd + \ln\!\frac{d}{q},\qquad z = \ln\!\frac{q}{g}$$
$$D_{LA} = z\,\frac{g}{g+1},\qquad D_{CFA} = D_{LA}\left(1 + \frac{z/2}{(g+1)^{2} + (z/3)(2g-1)}\right)$$
$$\frac{1}{\sqrt{f}} = a\left[\ln\!\left(\frac{d}{q}\right) + D_{CFA}\right]$$

Brkić solution Brkić shows one approximation of the Colebrook equation based on the Lambert W-function:

$$S = \ln\frac{\mathrm{Re}}{1.816\,\ln\!\dfrac{1.1\,\mathrm{Re}}{\ln(1.1\,\mathrm{Re})}}$$
$$\frac{1}{\sqrt{f}} = -2\log\left(\frac{\varepsilon}{3.71\,D} + \frac{2.18\,S}{\mathrm{Re}}\right)$$

The equation was found to match the Colebrook–White equation within 3.15%.
Approximations of the Colebrook equation:
Brkić-Praks solution Brkić and Praks show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function:

$$\frac{1}{\sqrt{f}} \approx 0.8686\left[B - C + \frac{1.038\,C}{0.332 + x}\right]$$

where $A \approx \dfrac{\mathrm{Re}\,(\varepsilon/D)}{8.0884}$, $B \approx \ln(\mathrm{Re}) - 0.7794$, $C = \ln(x)$, and $x = A + B$. The equation was found to match the Colebrook–White equation within 0.0497%.
Praks-Brkić solution Praks and Brkić show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function:

$$\frac{1}{\sqrt{f}} \approx 0.8685972\left[B - C + \frac{C}{x - 0.5588\,C + 1.2079}\right]$$

where $A \approx \dfrac{\mathrm{Re}\,(\varepsilon/D)}{8.0897}$, $B \approx \ln(\mathrm{Re}) - 0.779626$, $C = \ln(x)$, and $x = A + B$. The equation was found to match the Colebrook–White equation within 0.0012%.
Approximations of the Colebrook equation:
Niazkar's solution Since Serghides's solution was found to be one of the most accurate approximations of the implicit Colebrook–White equation, Niazkar modified Serghides's solution to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. Niazkar's solution is shown in the following:

$$A = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{4.5547}{\mathrm{Re}^{0.8784}}\right)$$
$$B = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{2.51\,A}{\mathrm{Re}}\right)$$
$$C = -2\log\left(\frac{\varepsilon}{3.7\,D} + \frac{2.51\,B}{\mathrm{Re}}\right)$$
$$\frac{1}{\sqrt{f}} = A - \frac{(B - A)^{2}}{C - 2B + A}$$

Niazkar's solution was found to be the most accurate correlation based on a comparative analysis conducted in the literature among 42 different explicit equations for estimating the Colebrook friction factor.
Approximations of the Colebrook equation:
Blasius correlations Early approximations for smooth pipes by Paul Richard Heinrich Blasius in terms of the Darcy–Weisbach friction factor are given in a 1913 article:

$$f = 0.3164\,\mathrm{Re}^{-1/4}.$$

Johann Nikuradse in 1932 proposed that this corresponds to a power law correlation for the fluid velocity profile. Mishra and Gupta in 1979 proposed a correction for curved or helically coiled tubes, taking into account the equivalent curve radius, Rc:

$$f = 0.316\,\mathrm{Re}^{-1/4} + 0.0075\sqrt{\frac{D}{2 R_c}},\qquad \text{with}\qquad R_c = R\left[1 + \left(\frac{H}{2\pi R}\right)^{2}\right]$$

where f is a function of:
Pipe diameter, D (m, ft)
Curve radius, R (m, ft)
Helicoidal pitch, H (m, ft)
Reynolds number, Re (dimensionless)
valid for:
Re_tr < Re < 10^5
6.7 < 2Rc/D < 346.0
0 < H/D < 25.4
Table of Approximations The following table lists historical approximations to the Colebrook–White relation for pressure-driven flow. The Churchill equation (1977) is the only equation that can be evaluated for very slow flow (Reynolds number < 1), but the Cheng (2008) and Bellos et al. (2018) equations also return an approximately correct value for the friction factor in the laminar flow region (Reynolds number < 2300). All of the others are for transitional and turbulent flow only. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sandwich-structured composite**
Sandwich-structured composite:
In materials science, a sandwich-structured composite is a special class of composite materials that is fabricated by attaching two thin-but-stiff skins to a lightweight but thick core. The core material is normally low strength, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density.
Open- and closed-cell-structured foams like polyethersulfone, polyvinylchloride, polyurethane, polyethylene or polystyrene foams, balsa wood, syntactic foams, and honeycombs are commonly used core materials. Sometimes, the honeycomb structure is filled with other foams for added strength. Open- and closed-cell metal foam can also be used as core materials.
Laminates of glass or carbon fiber-reinforced thermoplastics or mainly thermoset polymers (unsaturated polyesters, epoxies...) are widely used as skin materials. Sheet metal is also used as skin material in some cases.
The core is bonded to the skins with an adhesive or, for metal components, by brazing the parts together.
History:
A summary of the important developments in sandwich structures is given below.
230 BC Archimedes describes the laws of levers and a way to calculate density.
25 BC Vitruvius reports about the efficient use of materials in Roman truss roof structures.
1493 Leonardo da Vinci discovers the neutral axis and load deflection relation in three-point bending.
1570 Palladio presents truss-beam constructions with diagonal beams to prevent shear deformations.
1638 Galileo Galilei describes the efficiency of tubes versus solid rods.
1652 Wendelin Schildknecht reports about sandwich beam structures with curved wooden beam reinforcements.
1726 Jacob Leupold documents tubular bridges with compression loaded roofs.
1786 Victor Louis uses iron sandwich beams in the galleries of the Palais-Royal in Paris.
1802 Jean-Baptiste Rondelet analyses and documents the sandwich effect in a beam with spacers.
1820 Alphonse Duleau discovers and publishes the moment of inertia for sandwich constructions.
1830 Robert Stephenson builds the Planet locomotive using a sandwich beam frame made of wood plated with iron.
1914 R. Höfler and S. Renyi patent the first use of honeycomb structures for structural applications.
1915 Hugo Junkers patents the first honeycomb cores for aircraft application.
1934 Edward G. Budd patents welded steel honeycomb sandwich panel from corrugated metal sheets.
1937 Claude Dornier patents a honeycomb sandwich panel with skins pressed in a plastic state into the core cell walls.
1938 Norman de Bruyne patents the structural adhesive bonding of honeycomb sandwich structures.
1940 The de Havilland Mosquito was built with sandwich composites; a balsawood core with plywood skins.
Types of sandwich structures:
Metal composite material (MCM) is a type of sandwich formed from two thin skins of metal bonded to a plastic core in a continuous process under controlled pressure, heat, and tension. Recycled paper is also now being used over a closed-cell recycled kraft honeycomb core, creating a lightweight, strong, and fully repulpable composite board. This material is being used for applications including point-of-purchase displays, bulkheads, recyclable office furniture, exhibition stands, wall dividers and terrace boards. To join different panels, among other solutions, a transition zone is normally used: a gradual reduction of the core height until the two fiber skins meet. In this region, the fixation can be made by means of bolts, rivets, or adhesive.
Types of sandwich structures:
With respect to the core type and the way the core supports the skins, sandwich structures can be divided into the following groups: homogeneously supported, locally supported, regionally supported, unidirectionally supported, bidirectionally supported. The latter group is represented by honeycomb structure which, due to an optimal performance-to-weight ratio, is typically used in most demanding applications including aerospace.
Properties of sandwich structures:
The strength of the composite material is dependent largely on two factors: The outer skins: If the sandwich is supported on both sides and then stressed by means of a downward force in the middle of the beam, the bending moment will put the bottom skin in tension and the top skin in compression, while shear forces develop in the core. The core material spaces these two skins apart. The thicker the core material, the stronger the composite. This principle works in much the same way as an I-beam does.
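As a rough illustration of this I-beam analogy, the sketch below evaluates the standard textbook thin-face approximation for sandwich flexural rigidity, D ≈ E_f·t_f·d²/2 per unit width; the formula and the material values are assumptions for illustration, not taken from this article.

```python
# Thin-face sandwich approximation: bending stiffness grows with the square of
# the distance between the face sheets, which is why a thicker core stiffens
# the panel so effectively.
def flexural_rigidity(E_face: float, t_face: float, core_thickness: float) -> float:
    d = core_thickness + t_face           # distance between face-sheet centroids, m
    return E_face * t_face * d ** 2 / 2   # N*m^2 per metre of panel width

E_face = 70e9       # aluminium face sheets, Pa (assumed)
t_face = 0.001      # 1 mm faces (assumed)
for core in (0.005, 0.010, 0.020):        # thicker core -> much stiffer panel
    print(core, flexural_rigidity(E_face, t_face, core))
```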
Properties of sandwich structures:
The interface between the core and the skin: Because the shear stresses in the composite material change rapidly between the core and the skin, the adhesive layer also sees some degree of shear force. If the adhesive bond between the two layers is too weak, the most probable result will be delamination. The failure of the interface between the skin and core is critical and the most common damage mode. The propensity of this damage to propagate through the interface or dive either into the skin or core is governed by the shear component.
Application of sandwich structures:
Sandwich structures can be widely used in sandwich panels of different types, such as FRP sandwich panels and aluminium composite panels. An FRP polyester-reinforced composite honeycomb panel (sandwich panel) is made of polyester-reinforced plastic, multi-axial high-strength glass fiber, and a PP honeycomb core, formed in a special antiskid tread-pattern mold through a process of constant-temperature vacuum adsorption, bonding, and curing.
Theory:
Sandwich theory describes the behaviour of a beam, plate, or shell which consists of three layers - two face sheets and one core. The most commonly used sandwich theory is linear and is an extension of first order beam theory. Linear local buckling sandwich theory is of importance for the design and analysis of Sandwich plates or sandwich panels, which are of use in building construction, vehicle construction, airplane construction and refrigeration engineering. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Measured quantity**
Measured quantity:
In a physical setting, a measurement instrument may be gauged to measure a specific physical quantity of a substance. In such a context the specific physical quantity is called a measured quantity.
The synonymous notion "observable" often is used in the context of quantum mechanics.
Scientific models, and the mathematical models that follow from them, of a physical setting permit the calculation of expected values of related non-measured physical quantities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aquasar**
Aquasar:
Aquasar is a supercomputer (a high-performance computer) prototype created by IBM Labs in collaboration with ETH Zurich in Zürich, Switzerland and ETH Lausanne in Lausanne, Switzerland. While most supercomputers use air as their coolant of choice, Aquasar uses hot water to achieve its high computing efficiency. Along with using hot water as the main coolant, an air-cooled section is also included so that the cooling efficiency of both coolants can be compared. The comparison could later be used to help improve the hot-water coolant's performance. The research program was originally titled "Direct use of waste heat from liquid-cooled supercomputers: the path to energy saving, emission-free high performance computers and data centers." The waste heat produced by the cooling system can be recycled into the building's heating system, potentially saving money. Beginning in 2009, the three-year collaborative project was introduced and developed in the interest of saving energy and being environmentally friendly while delivering top-tier performance.
History:
Development The Aquasar supercomputer first came into use at the Department of Mechanical and Process Engineering at the Swiss Federal Institute of Technology Zurich (ETH Zurich) in 2010. ETH Zurich is one of the two schools that are part of the Swiss Federal Institute of Technology, the other being ETH Lausanne. High energy efficiency, environmentally friendly computing, and high computing performance were a few of the main interests in the development of Aquasar. A key part of being environmentally friendly was the attempt to lower carbon dioxide emissions: 50% of an air-cooled data center's energy consumption and carbon pollution actually comes from the cooling system rather than from the actual computing process. The creation of Aquasar started in 2009. It was part of IBM's First-Of-A-Kind (FOAK) program (a program encouraging IBM researchers and clients to develop potential new technologies to assist with real-world problems in business). One other supercomputer, SuperMUC, would later use the same idea of a hot-water coolant in its development. Future development of more powerful supercomputers also explored the possibility of using on-chip cooling as the main cooling source to achieve greater computing efficiency.
History:
Further Exploration of Hot Water Coolant An academic paper written in 2018 explored the possibilities for developing exascale computing (a higher scale of supercomputing performance). Exascale supercomputers will be needed in future computing, which means these machines must deliver both high energy efficiency and high cooling efficiency to achieve peak performance. The scientists looked to the possibility of "on-chip" cooling, inspired by the Aquasar supercomputer.
Cooling:
The Aquasar supercomputer employs "on-chip" cooling: micro-channel coolers are attached directly to the computer's processing units (the main circuits that perform most of the computer's processing), which produce much of the heat in the system. Micro-channels are small channels, under 1 mm in diameter, with the warm coolant liquid running through them. Water's high thermal conductivity (the ability to conduct heat) and specific heat capacity (the amount of heat required to raise the temperature of 1 gram by 1 °C) allow the warm-water coolant to be set at approximately 60 °C (roughly 140 °F). Because of its high thermal conductivity, water carries heat away from the processing units effectively, and because water has approximately 4,000 times the heat capacity of air (per unit volume), heat transport works far more efficiently and the water can absorb a great amount of heat. The coolant temperature keeps the processing units below their maximum operating temperature of 85 °C (roughly 185 °F).
Mechanical Description:
Hardware The Aquasar contains water-cooled IBM BladeCenter servers (IBM's version of the bare-bones server computer) and air-cooled IBM BladeCenter servers in order to contrast the performance of hot-water cooling and air cooling. Both the air-cooled and water-cooled systems are built on the IBM BladeCenter H chassis, using a combination of IBM BladeCenter QS22 and HS22 servers. The system delivers 6 teraflops (flops are a unit used to measure computing speed) and attains an energy efficiency of about 450 megaflops per watt. Pipelines connect the individual BladeCenter servers to the main network, which is then further connected to the water-transport pipeline network; these pipelines can also be disconnected and reconnected. The cooling loop uses about 10 liters of water, circulated by a pump at a flow of approximately 30 liters per minute. A sensor system has also been installed to monitor performance, and the scientists hope to optimize the system using the information obtained from these sensors.
Mechanical Description:
Heat Recycling The warm-water cooling system is a closed loop. The coolant is constantly heated by the processing units and is then cooled back down via a heat exchanger (a device for transferring heat between fluids). The transferred heat is fed directly into the building's heating system, such as that of the ETH Zurich building, allowing the heat to be reused effectively; up to around 80% of the produced heat is recaptured and reused to heat the buildings. At the SuperMUC supercomputer, the heat created by the hot-water coolant is used to heat the rest of the campus, saving the Leibniz-Rechenzentrum around US$1.25 million per year. Approximately nine kilowatts of thermal energy are put into the heating system, where the waste heat is used to heat the ETH Zurich building.
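As a rough back-of-the-envelope check (using the figures quoted above and the standard specific heat of water; this is an illustration, not an official IBM calculation), transporting roughly nine kilowatts with a flow of about 30 litres per minute (about 0.5 kg/s) only requires a temperature rise of a few kelvin across the loop:

```latex
Q = \dot{m}\, c_p\, \Delta T
\quad\Longrightarrow\quad
\Delta T = \frac{Q}{\dot{m}\, c_p}
\approx \frac{9\,000\ \mathrm{W}}{0.5\ \mathrm{kg/s} \times 4186\ \mathrm{J/(kg\cdot K)}}
\approx 4.3\ \mathrm{K}
```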
Benefits:
Supercomputer data centers spend about 50% of their electricity on conventional air-cooling systems, and the use of computers worldwide consumes an estimated 330 terawatt-hours of energy; air cooling is thus the main culprit behind the high energy consumption of supercomputers. The Aquasar consumes approximately 40% less energy than comparable air-cooled supercomputers. In addition, the ability to recycle heat back into the heating system reduces Aquasar's carbon emissions by approximately 85%, since fewer fossil fuels need to be burned to supply the heating system. Thanks to this low energy usage, liquid-cooled supercomputers can operate at roughly one third of the energy cost of air-cooled data-center supercomputers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dodecameric protein**
Dodecameric protein:
A dodecameric protein has a quaternary structure consisting of 12 protein subunits in a complex. Dodecameric complexes can have a number of subunit 'topologies', but typically only a few of the theoretically possible subunit arrangements are observed in protein structures.
A dodecamer (protein) is a protein complex with 12 protein subunits.
A common subunit arrangement involves a tetrahedral distribution of subunit trimers (or 3-4-point symmetry). Another observed arrangement of subunits puts two rings of six subunits side by side along the sixfold axis (or 2-6-point symmetry).
Dodecameric proteins include:
Complete gap junction channel, composed of two hexamers.
Glutamine synthetase (PDB code: 2gls)
Dodecameric ferritin (PDB code: 1qgh)
Aβ42 (amyloid-beta 42)
Helicobacter pylori urease
HHV capsid portal protein
Propionyl-CoA carboxylase:
When multiple copies of a polypeptide encoded by a gene form an aggregate, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone; in such a case, the phenomenon is referred to as intragenic complementation or interallelic complementation. Propionyl-CoA carboxylase (PCC) is a dodecameric heteropolymer composed of α and β subunits in an α6β6 structure. Mutations in PCC, either in the α subunit (PCCα) or the β subunit (PCCβ), can cause propionic acidemia in humans. When different mutant skin fibroblast cell lines defective in PCCβ were fused in pairwise combinations, the β heteromultimeric protein formed as a result often exhibited a higher level of activity than would be expected based on the activities of the parental enzymes. This finding of intragenic complementation indicated that the dodecameric structure of PCC allows cooperative interactions between the constituent PCCβ monomers that can generate a more functional form of the holoenzyme. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Elbs reaction**
Elbs reaction:
The Elbs reaction is an organic reaction describing the pyrolysis of an ortho methyl substituted benzophenone to a condensed polyaromatic. The reaction is named after its inventor, the German chemist Karl Elbs, also responsible for the Elbs oxidation. The reaction was published in 1884. Elbs however did not correctly interpret the reaction product due to a lack of knowledge about naphthalene structure.
Scope:
The Elbs reaction enables the synthesis of condensed aromatic systems. As already demonstrated by Elbs in 1884 it is possible to obtain anthracene through dehydration. Larger aromatic systems like pentacene are also feasible. This reaction does not take place in a single step but leads first to dihydropentacene that is dehydrogenated in a second step with copper as a catalyst.
Scope:
The acyl compounds required for this reaction can be obtained through a Friedel-Crafts acylation with aluminum chloride. The Elbs reaction is sometimes accompanied by elimination of substituents and can therefore be unsuited for substituted polyaromatics.
Mechanism:
At least three plausible mechanisms for the Elbs reaction have been suggested. The first mechanism, suggested by Fieser, begins with a heat-induced cyclisation of the benzophenone, followed by a [1,3]-hydride shift to give an intermediate; a dehydration reaction then affords the polyaromatic.
Alternatively, in the second mechanism, due to Cook, the methylated aromatic compound instead first undergoes a tautomerization followed by an electrocyclic reaction to give the same intermediate, which then similarly undergoes a [1,3]-hydride shift and dehydration.
A third mechanism has also been proposed, involving pyrolytic radical generation.
Variations:
It is also possible to synthesise heterocyclic compounds via the Elbs reaction. In 1956 an Elbs reaction of a thiophene derivative was published. The expected linear product was not obtained due to a change in reaction mechanism after formation of the first intermediate which caused multiple free radical reaction steps. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Palladium black**
Palladium black:
Palladium black is a coarse, sponge-like form of elemental palladium which offers a large surface area for catalytic activity. It is used in organic synthesis as a catalyst for hydrogenation reactions.The term palladium black is also used colloquially to refer to a black precipitate of elemental palladium, which forms via decomposition of various palladium complexes.
Preparation:
Palladium black is typically prepared from palladium(II) chloride or palladium(II)-ammonium chloride. The palladium chloride process entails the formation of palladium hydroxide using lithium hydroxide followed by reduction under hydrogen gas while the palladium(II)-ammonium chloride route employs a solution of formic acid followed by the precipitation of the catalyst using potassium hydroxide. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Intel 80486SL**
Intel 80486SL:
The Intel i486SL is the power-saving variant of the i486DX microprocessor. The SL was designed for use in mobile computers. It was produced between November 1992 and June 1993. Clock speeds available were 20, 25 and 33 MHz. The i486SL contained all features of the i486DX.
In addition, the System Management Mode (SMM) (the same mode introduced with i386SL) was included with this processor. The system management mode makes it possible to shut down the processor without losing data. To achieve this, the processor state is saved in an area of static RAM (SMRAM).
In mid-1993, Intel incorporated the SMM feature in all its new 80486 processors and discontinued the SL series.
Refer to the respective section of the list of Intel microprocessors for technical details. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Paola Sebastiani**
Paola Sebastiani:
Paola Sebastiani is a biostatistician and a professor at Boston University working in the field of genetic epidemiology, building prognostic models that can be used for the dissection of complex traits. Her research interests include Bayesian modeling of biomedical data, particularly genetic and genomic data.
Education and career:
Sebastiani obtained a first degree in mathematics from the University of Perugia, Italy (1987), an M.Sc. in statistics from University College London (1990), and a Ph.D. in statistics from the Sapienza University of Rome (1992). She came to Boston University in 2003, after previously having been an assistant professor in the Department of Mathematics and Statistics at the University of Massachusetts Amherst.
Contributions:
Her most important contribution is a model based on a Bayesian network that integrates more than 60 single-nucleotide polymorphisms (SNPs) and other biomarkers to compute the risk for stroke in patients with sickle cell anemia. This model was shown to have high sensitivity and specificity and demonstrated, for the first time, how an accurate risk prediction model of a complex genetic trait that is modulated by several interacting genes can be built using Bayesian networks.A controversial paper regarding the genetics of aging with which she was associated was retracted from the journal Science in 2011 due to flawed data. The corrected version was published in PLOS ONE, and several of the genes found associated with exceptional human longevity were replicated in other studies of centenarians.
Publications:
She has published several peer-reviewed papers. According to Scopus the most cited ones are: Ramoni M.F.; Sebastiani P.; Kohane I.S. (2002). "Cluster analysis of gene expression dynamics". Proceedings of the National Academy of Sciences of the United States of America. 99 (14): 9121–9126. doi:10.1073/pnas.132656399. PMC 123104. PMID 12082179.
Sebastiani P.; Ramoni M.F.; Nolan V.; Baldwin C.T.; Steinberg M.H. (2005). "Genetic dissection and prognostic modeling of overt stroke in sickle cell anemia". Nature Genetics. 37 (4): 435–440. doi:10.1038/ng1533. PMC 2896308. PMID 15778708.
Mandl K.D.; Overhage J.M.; Wagner M.M.; Lober W.B.; Sebastiani P.; Mostashari F.; Pavlin J.A.; Gesteland P.H.; Treadwell T.; Koski E.; Hutwagner L.; Buckeridge D.L.; Aller R.D.; Grannis S. (2003). "Implementing syndromic surveillance: A practical guide informed by the early experience". Journal of the American Medical Informatics Association. 11 (2): 141–150. doi:10.1197/jamia.m1356. PMC 353021. PMID 14633933.
Sebastiani P.; Gussoni E.; Kohane I.S.; Ramoni M.F.; Baker H.V. (2003). "Statistical challenges in functional genomics". Statistical Science. 18 (1): 33–70. doi:10.1214/ss/1056397486.
Awards and honors:
She became a fellow of the American Statistical Association in 2017. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CADPAC**
CADPAC:
CADPAC, the Cambridge Analytic Derivatives Package, is a suite of programs for ab initio computational chemistry calculations. It has been developed at Cambridge University since 1981 by R. D. Amos, with contributions from I. L. Alberts, J. S. Andrews, S. M. Colwell, N. C. Handy, D. Jayatilaka, P. J. Knowles, R. Kobayashi, K. E. Laidig, G. Laming, A. M. Lee, P. E. Maslen, C. W. Murray, J. E. Rice, E. D. Simandiras, A. J. Stone, M.-D. Su and D. J. Tozer. It is capable of molecular Hartree–Fock calculations, Møller–Plesset calculations, various other correlated calculations and density functional theory calculations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Utility (C++)**
Utility (C++):
utility is a header file in the C++ Standard Library. This file has two key components: rel_ops, a namespace containing a set of templates which define default behavior for the relational operators !=, >, <=, and >= between objects of the same type, based on the user-defined operators == and <.
pair, a container template which holds two member objects (first and second) of arbitrary type(s). Additionally, the header defines default relational operators for pairs which have both types in common.
rel_ops:
GCC's implementation declares the rel_ops namespace (nested within namespace std) as a set of operator templates defined purely in terms of == and <. Consider a class A which defines equality and less-than operators for comparison against other objects of the same type: by invoking the rel_ops templates, one can assign a default meaning to the remaining relational operators. However, if a similar type-specific (i.e. non-template) operator exists in the current scope, even outside the class definition, the compiler will prefer it instead.
rel_ops:
One could of course declare, in tandem with rel_ops, an operator== written in terms of <, allowing the derivation of all relational operators from < alone, as in the sketch below.
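A minimal sketch of how this fits together (the class A below is hypothetical, and the commented-out namespace shows only the gist of the library code, not the verbatim GCC source):

```cpp
#include <iostream>
#include <utility>   // declares std::rel_ops

// In essence, <utility> provides templates of this shape inside std::rel_ops,
// deriving !=, >, <= and >= from user-defined == and <:
//
//   namespace std { namespace rel_ops {
//       template<class T> bool operator!=(const T& x, const T& y) { return !(x == y); }
//       template<class T> bool operator> (const T& x, const T& y) { return y < x;     }
//       template<class T> bool operator<=(const T& x, const T& y) { return !(y < x);  }
//       template<class T> bool operator>=(const T& x, const T& y) { return !(x < y);  }
//   }}

// Hypothetical class defining only == and < against its own type.
class A {
    int value;
public:
    explicit A(int v) : value(v) {}
    bool operator==(const A& other) const { return value == other.value; }
    bool operator< (const A& other) const { return value <  other.value; }
};

// To derive everything from < alone, one could additionally declare, in tandem with rel_ops:
// template<class T> bool operator==(const T& x, const T& y) { return !(x < y) && !(y < x); }

int main() {
    using namespace std::rel_ops;   // bring the derived operators into scope
    A a(1), b(2);
    std::cout << std::boolalpha
              << (a != b) << ' '    // derived from ==
              << (a >  b) << ' '    // derived from <
              << (a >= b) << '\n';  // derived from <
}
```

Note that rel_ops was deprecated in C++20 in favour of defaulted and rewritten comparison operators, although it is still shipped in the header.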
pair:
An object declared, for example, as std::pair<int, float> will contain two members, int first; and float second;, plus three constructor functions.
The first (default) constructor initializes both members with the default values 0 and 0.0, whereas the second one accepts one parameter of each type. The third is a template copy-constructor which will accept any std::pair<_U1, _U2>, provided the types _U1 and _U2 are capable of implicit conversion to int and float respectively.
In GCC's implementation, as in the standard, pair is a simple struct template with two public data members, first and second.
pair:
Additionally, this header defines all six relational operators for pair instances with both types in common. These define a strict weak ordering for objects of type std::pair<_T1, _T2>, based on the first elements and then upon the second elements only when the first ones are equal. The header also contains a template function make_pair() which deduces its return type from its parameters, as in the usage sketch below. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
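A short usage sketch (the values and types are arbitrary; this illustrates the constructors, make_pair() and the lexicographic ordering described above, not GCC's internal definition):

```cpp
#include <iostream>
#include <string>
#include <utility>

int main() {
    std::pair<int, float> p0;                 // default constructor: members value-initialized (0, 0.0f)
    std::pair<int, float> p1(42, 3.14f);      // one argument per member

    std::pair<short, double> q(1, 2.5);
    std::pair<int, float> p2(q);              // converting copy-constructor: short->int, double->float

    auto p3 = std::make_pair(std::string("answer"), 42);   // deduces pair<std::string, int>

    std::cout << p0.first << ' ' << p1.second << ' '
              << p2.first << ' ' << p3.first << '\n';

    // The relational operators compare the first elements, then the second elements on ties.
    std::pair<int, float> a(1, 2.0f), b(1, 3.0f);
    std::cout << std::boolalpha << (a < b) << '\n';         // true: firsts equal, 2.0f < 3.0f
}
```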
**Mafenide**
Mafenide:
Mafenide (INN; usually as mafenide acetate, trade name Sulfamylon) is a sulfonamide-type medication used as an antibiotic. It was approved by the FDA in 1948.
Uses:
Mafenide is used to treat severe burns. It is used topically as an adjunctive therapy for second- and third-degree burns. It is bacteriostatic against many gram-positive and gram-negative organisms, including Pseudomonas aeruginosa. Some sources state that mafenide is more appropriate for non-facial burns, while chloramphenicol/prednisolone or bacitracin are more appropriate for facial burns.
Mechanism of action:
Mafenide works by reducing the bacterial population present in the avascular tissues of burns and permits spontaneous healing of deep partial-thickness burns.
Adverse reactions:
Adverse reactions can include superinfection, pain or burning upon application, rash, pruritus, tachypnea, or hyperventilation. Mafenide is metabolized to a carbonic anhydrase inhibitor, which could potentially result in metabolic acidosis.
Contraindications:
Mafenide is contraindicated in those with sulfonamide hypersensitivity or renal impairment.
Dosage:
For use as adjunctive therapy for second- and third-degree burns to prevent infection, adults and children should apply topically to a thickness of approximately 1.6 mm to cleaned and debrided wound once or twice per day with a sterile gloved hand. The burned area should be covered with cream at all times. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Data-driven programming**
Data-driven programming:
In computer programming, data-driven programming is a programming paradigm in which the program statements describe the data to be matched and the processing required rather than defining a sequence of steps to be taken. Standard examples of data-driven languages are the text-processing languages sed and AWK, and the document transformation language XSLT, where the data is a sequence of lines in an input stream – these are thus also known as line-oriented languages – and pattern matching is primarily done via regular expressions or line numbers.
Related paradigms:
Data-driven programming is similar to event-driven programming, in that both are structured as pattern matching and resulting processing, and are usually implemented by a main loop, though they are typically applied to different domains. The condition/action model is also similar to aspect-oriented programming, where when a join point (condition) is reached, a pointcut (action) is executed. A similar paradigm is used in some tracing frameworks such as DTrace, where one lists probes (instrumentation points) and associated actions, which execute when the condition is satisfied.
Related paradigms:
Adapting abstract data type design methods to object-oriented programming results in a data-driven design. This type of design is sometimes used in object-oriented programming to define classes during the conception of a piece of software.
Applications:
Data-driven programming is typically applied to streams of structured data, for filtering, transforming, aggregating (such as computing statistics), or calling other programs. Typical streams include log files, delimiter-separated values, or email messages, notably for email filtering. For example, an AWK program may take as input a stream of log statements and send all of them to the console, write those starting with WARNING to a "WARNING" file, and send an email to a sysadmin whenever a line starts with "ERROR"; it could also record how many warnings are logged per day (a condition/action table of this kind is sketched below). Alternatively, one can process streams of delimiter-separated values, processing each line or aggregating lines, for example computing the sum or the maximum. In email, a language like procmail can specify conditions to match on some emails, and what actions to take (deliver, bounce, discard, forward, etc.).
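The condition/action structure of such a program can be sketched in any language; the following C++ fragment (with made-up rules and log lines) keeps the "program" as a table of pattern/action pairs driven by a single main loop, which is the essence of the paradigm:

```cpp
#include <functional>
#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // The "program" is data: a table of (condition, action) pairs.
    using Rule = std::pair<std::function<bool(const std::string&)>,
                           std::function<void(const std::string&)>>;
    int warnings = 0;

    std::vector<Rule> rules = {
        { [](const std::string& l) { return l.rfind("WARNING", 0) == 0; },
          [&warnings](const std::string& l) { ++warnings; std::cout << "-> warning file: " << l << '\n'; } },
        { [](const std::string& l) { return l.rfind("ERROR", 0) == 0; },
          [](const std::string& l) { std::cout << "-> mail sysadmin: " << l << '\n'; } },
        // Catch-all default action: echo the line to the console.
        { [](const std::string&)   { return true; },
          [](const std::string& l) { std::cout << l << '\n'; } },
    };

    // The main loop: apply the first matching rule to every input line.
    std::istringstream log("INFO started\nWARNING low disk space\nERROR crashed\n");
    for (std::string line; std::getline(log, line); ) {
        for (const auto& rule : rules) {
            if (rule.first(line)) { rule.second(line); break; }
        }
    }
    std::cout << warnings << " warning(s) seen today\n";
}
```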
Applications:
Some data-driven languages are Turing-complete, such as AWK and even sed, while others are intentionally very limited, notably for filtering. An extreme example of the latter is pcap, which only consists of filtering, with the only action being “capture”. Less extremely, sieve has filters and actions, but in the base standard has no variables or loops, only allowing stateless filtering statements: each input element is processed independently. Variables allow state, which allow operations that depend on more than one input element, such as aggregation (summing inputs) or throttling (allow at most 5 mails per hour from each sender, or limiting repeated log messages).
Applications:
Data-driven languages frequently have a default action: if no condition matches, line-oriented languages may print the line (as in sed), or deliver a message (as in sieve). In some applications, such as filtering, matching may be done exclusively (so only the first matching statement is applied), while in other cases all matching statements are applied. In either case, failure to match any pattern may be the "default behavior" or can be seen as an error, to be caught by a catch-all statement at the end.
Benefits and issues:
While the benefits and issues may vary between implementations, there are a few big potential benefits of, and problems with, this paradigm. Functionality simply requires knowing the abstract data type of the variables it is working with. Functions and interfaces can be used on all objects with the same data fields, for instance the object's "position". Data can be grouped into objects or "entities" according to preference with little to no consequence.
Benefits and issues:
While data-driven design does prevent coupling of data and functionality, in some cases, data-driven programming has been argued to lead to bad object-oriented design, especially when dealing with more abstract data. This is because a purely data-driven object or entity is defined by the way it is represented. Any attempt to change the structure of the object would immediately break the functions that rely on it.
Benefits and issues:
As an example, one might represent driving directions as a series of intersections (two intersecting streets) where the driver must turn right or left. If an intersection (in the United States) is represented in data by the zip code (5-digit number) and two street names (strings of text), bugs may appear when a city where streets intersect multiple times is encountered. While this example may be oversimplified, restructuring of data is a fairly common problem in software engineering, either to eliminate bugs, increase efficiency, or support new features.
Languages:
AWK
Oz
Perl – data-driven programming as in AWK and sed is one paradigm supported by Perl
sed
Lua
Clojure
Tab (language)
fdm
maildrop
procmail
Sieve
BASIC
XSLT | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Germline mosaicism**
Germline mosaicism:
Germline mosaicism, also called gonadal mosaicism, is a type of genetic mosaicism where more than one set of genetic information is found specifically within the gamete cells; conversely, somatic mosaicism is a type of genetic mosaicism found in somatic cells. Germline mosaicism can be present at the same time as somatic mosaicism or individually, depending on when the conditions occur. Pure germline mosaicism refers to mosaicism found exclusively in the gametes and not in any somatic cells. Germline mosaicism can be caused either by a mutation that occurs after conception, or by epigenetic regulation, alterations to DNA such as methylation that do not involve changes in the DNA coding sequence.
Germline mosaicism:
A mutation in an allele acquired by a somatic cell early in its development can be passed on to its daughter cells, including those that later specialize to gametes. With such mutation within the gamete cells, a pair of medically typical individuals may have repeated succession of children who suffer from certain genetic disorders such as Duchenne muscular dystrophy and osteogenesis imperfecta because of germline mosaicism. It is possible for parents unaffected by germline mutations to produce an offspring with an autosomal dominant (AD) disorder due to a random new mutation within one’s gamete cells known as sporadic mutation; however, if these parents produce more than one child with an AD disorder, germline mosaicism is more likely the cause than a sporadic mutation. In the first documented case of its kind, two offspring of a French woman who had no phenotypic expression of the AD disorder hypertrophic cardiomyopathy, inherited the disease.
Inheritance:
Germline mosaicism disorders are usually inherited in a pattern that suggests that the condition is dominant in either or both of the parents. That said, diverging from Mendelian gene inheritance patterns, a parent with a recessive allele can produce offspring expressing the phenotype as dominant through germline mosaicism. A situation may also arise in which the parents have milder phenotypic expression of a mutation yet produce offspring with more expressive phenotypic variance and a more frequent sibling recurrences of the mutation.Diseases caused by germline mosaicism can be difficult to diagnose as genetically-inherited because the mutant alleles are not likely to be present in the somatic cells. Somatic cells are more commonly used for genetic analysis because they are easier to obtain than gametes. If the disease is a result of pure germline mosaicism, then the disease causing mutant allele would never be present in the somatic cells. This is a source of uncertainty for genetic counselling. An individual may still be a carrier for a certain disease even if the disease causing mutant allele is not present in the cells that were analyzed because the causative mutation could still exist in some of the individual's gametes.Germline mosaicism may contribute to the inheritance of many genetic conditions. Conditions that are inherited by means of germline mosaicism are often mistaken as being the result of de novo mutations. Various diseases are now being re-examined for presence of mutant alleles in the germline of the parents in order to further our understanding of how they can be passed on. The frequency of germline mosaicism is not known due to the sporadic nature of the mutations causing it and the difficulty in obtaining the gametes that must be tested to diagnose it.
Diagnosis:
Autosomal dominant or X-linked familial disorders often prompt prenatal testing for germline mosaicism. This diagnosis may involve minimally invasive procedures, such as blood sampling or amniotic fluid sampling. Collected samples can be sequenced via common DNA testing methods, such as Sanger Sequencing, MLPA, or Southern Blot analysis, to look for variations on relevant genes connected to the disorder.
Recurrence rate:
The recurrence rate of conditions caused by germline mosaicism varies greatly between subjects. Recurrence is proportional to the number of gamete cells that carry the particular mutation with the condition. If the mutation occurred earlier on in the development of the gamete cells, then the recurrence rate would be higher because a greater number of cells would carry the mutant allele.
Case studies:
A Moroccan family consisting of two healthy unrelated parents and three offspring, two of whom have Noonan syndrome (a rare autosomal dominant disorder with varying expression and genetic heterogeneity), underwent genetic testing revealing that both siblings with NS share the same PTPN11 haplotype from both parents, while the unaffected sibling inherited a distinct paternal and maternal haplotype. In the paper "Germline and somatic mosaicism in transgenic mice", published in 1986, Thomas M. Wilkie, Ralph L. Brinster, and Richard D. Palmiter analyzed a germline mosaicism experiment done on 262 transgenic mice and concluded that 30% of founder transgenic mice are mosaic in the germline. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yahalom (protocol)**
Yahalom (protocol):
Yahalom is an authentication and secure key-sharing protocol designed for use on an insecure network such as the Internet. Yahalom uses a trusted arbitrator to distribute a shared key between two people. This protocol can be considered as an improved version of Wide Mouth Frog protocol (with additional protection against man-in-the-middle attack), but less secure than the Needham–Schroeder protocol.
Protocol description:
If Alice (A) initiates the communication to Bob (B), with S a server trusted by both parties, the protocol can be specified as follows using security protocol notation:
A and B are the identities of Alice and Bob respectively
KAS is a symmetric key known only to A and S
KBS is a symmetric key known only to B and S
NA and NB are nonces generated by A and B respectively
KAB is a symmetric, generated key, which will be the session key of the session between A and B
A→B: A,NA
Alice sends a message to Bob requesting communication.
Protocol description:
B→S: B,{A,NA,NB}KBS
Bob sends a message to the server, encrypted under KBS.
S→A: {B,KAB,NA,NB}KAS, {A,KAB}KBS
The server sends Alice a message containing the generated session key KAB and a message to be forwarded to Bob.
A→B: {A,KAB}KBS, {NB}KAB
Alice forwards the message to Bob and verifies that NA has not changed. Bob will verify that NB has not changed when he receives the message.
BAN-Yahalom:
Burrows, Abadi and Needham proposed a variant of this protocol in their 1989 paper as follows:
A→B: A,NA
B→S: B,NB,{A,NA}KBS
S→A: NB,{B,KAB,NA}KAS,{A,KAB,NB}KBS
A→B: {A,KAB,NB}KBS,{NB}KAB
In 1994, Paul Syverson demonstrated two attacks on this protocol. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**G.165**
G.165:
G.165 is an ITU-T standard for echo cancellers. It is primarily used in telephony. Echo can occur on telephone lines when a user's voice is reflected back to them from further down the line. This can be distracting for the user and even make conversation unintelligible. Echo can also interfere with data transmission. The standard was released for use in 1993 and was later superseded by G.168. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Special needs**
Special needs:
In clinical diagnostic and functional development, special needs (or additional needs) refers to individuals who require assistance for disabilities that may be medical, mental, or psychological. Guidelines for clinical diagnosis are given in both the Diagnostic and Statistical Manual of Mental Disorders and the International Classification of Diseases 9th edition. Special needs can range from people with autism, cerebral palsy, Down syndrome, dyslexia, dyscalculia, dyspraxia, dysgraphia, blindness, deafness, ADHD, and cystic fibrosis. They can also include cleft lips and missing limbs. The types of special needs vary in severity, and a student with a special need is classified as being a severe case when the student's IQ is between 20 and 35. These students typically need assistance in school, and have different services provided for them to succeed in a different setting.In the United Kingdom, special needs usually refers to special needs within an educational context. This is also referred to as special educational needs (SEN) or special educational needs and disabilities (SEND). In the United States, 19.4 percent of all children under the age of 18 (14,233,174 children) had special health care needs as of 2018.The term is seen as a dysphemism by many disability rights advocates and is deprecated by a number of style guides (e.g. APA style).
U.S. special needs and adoption statistics:
In the United States "special needs" is a legal term applying in foster care, derived from the language in the Adoption and Safe Families Act of 1997. It is a diagnosis used to classify children as needing more services than those children without special needs who are in the foster care system. It is a diagnosis based on behavior, childhood and family history, and is usually made by a health care professional.
U.S. special needs and adoption statistics:
More than 150,000 children with special needs in the US have been waiting for permanent homes. Traditionally, children with special needs have been considered harder to place for adoption than other children, but experience has shown that many children with special needs can be placed successfully with families who want them. The Adoption and Safe Families Act of 1997 (P.L. 105–89) has focused more attention on finding homes for children with special needs and making sure they receive the post-adoption services they need. Pre-adoption services are also of critical importance to ensure that adoptive parents are well prepared and equipped with the necessary resources for a successful adoption. The United States Congress enacted the law to ensure that children in foster care who cannot be reunited with their birth parents are freed for adoption and placed with permanent families as quickly as possible.
U.S. special needs and adoption statistics:
The disruption rate for special needs adoption is found to be somewhere between ten and sixteen percent. A 1989 study performed by Richard Barth and Marianne Berry found that of the adoptive parents that disrupted, 86% said they would likely or definitely adopt again. 50% said that they would adopt the same child, given a greater awareness of what the adoption of special needs children requires. Also, within disrupted special needs adoption cases, parents often said that they were not aware of the child's history or the severity of the child's issues before the adoption. There is also more care that goes into it when a child of special needs is in the process of getting adopted. Because of the Adoption Assistance and Child Welfare Act of 1980 P.L. 96-272, the child's needs have to be met within the home before allowing adoption, including being able to financially support the child.
Education:
The term special needs is a short form of special education needs and is a way to refer to students with disabilities, in which their learning may be altered or delayed compared to other students. The term special needs in the education setting comes into play whenever a child's education program is officially altered from what would normally be provided to students through an Individual Education Plan, which is sometimes referred to as an Individual Program plan. Special education aids the student's learning environment to create a uniform system for all children.In the past, individuals with disabilities were often shunned or kept in isolation in mental hospitals or institutions. In many countries, disabled people were seen as an embarrassment to society, often facing punishments of torture and even execution. In the US, after the creation of the Individuals with Disabilities Education Act and many other regulations, students with disabilities could not be excluded or discriminated against in the education system.
Education:
Integrated learning environments In many cases, the integration of special needs students into general-learning classrooms has had many benefits. A study done by Douglas Marston tested the effects of an integrated learning environment on the academic success of students with special needs. He first gathered students in from three different categories: those in isolated learning environments, those in integrated learning environments, and those in a combination of both isolated and integrated learning environments. He calculated the average number of words read by each group in the fall and again in the spring, and compared the outcome. The findings showed that those in integrated learning environments or a combination of isolated and integrated environments experienced greater improvements in their reading skills than those in strictly isolated environments.Integrated classrooms can also have many social benefits on students with special needs. By surrounding special needs students with their fully functioning peers, they are exposed to diversity. Their close contact with other students will allow them to develop friendships and improve interpersonal skills.
Education:
Special needs and education worldwide The integration of children with special needs into school systems is an issue that is being addressed worldwide. In Europe, the number of students with special needs in regular classrooms is rising, while the number of those in segregated exclusive special needs classrooms is declining. However, in other countries such as China, educational opportunities for those with disabilities have been a longstanding issue. Certain cultural beliefs and ideologies have prevented the integration of all students regardless of ability, yet in recent years, China has progressed significantly by allocating more funding to programs to support disabled people and striving to create more inclusive communities within schools. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Decision tree pruning**
Decision tree pruning:
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting.
Decision tree pruning:
One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information.Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Techniques:
Pruning processes can be divided into two types (pre- and post-pruning).
Techniques:
Pre-pruning procedures prevent a complete induction of the training set by applying a stop() criterion in the induction algorithm (e.g. maximum tree depth, or information gain(Attr) > minGain). Pre-pruning methods are considered more efficient because they do not induce the full tree, but rather trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stop() criterion.
Techniques:
Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy of unseen objects. It may be the case that the accuracy of the assignment on the train set deteriorates, but the accuracy of the classification properties of the tree increases overall.
Techniques:
The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up).
Bottom-up pruning These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method.
These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), or Minimum Error Pruning (MEP).
Techniques:
Top-down pruning In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which brings quite good results with unseen items.
Pruning algorithms:
Reduced error pruning One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected then the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
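A minimal sketch of reduced error pruning on a toy tree (the tree, the single numeric feature and the validation examples are all made up; a real implementation would prune against a held-out validation set of the actual task):

```cpp
#include <iostream>
#include <memory>
#include <vector>

// A tiny binary decision tree over a single numeric feature (illustration only).
struct Node {
    bool is_leaf = false;
    int  label = 0;            // class predicted if this node is a leaf
    int  majority = 0;         // most popular class among training rows reaching this node
    double threshold = 0.0;    // internal node: go left if x < threshold
    std::unique_ptr<Node> left, right;
};

struct Example { double x; int y; };

static int classify(const Node* n, double x) {
    while (!n->is_leaf) n = (x < n->threshold) ? n->left.get() : n->right.get();
    return n->label;
}

static double accuracy(const Node* root, const std::vector<Example>& data) {
    int ok = 0;
    for (const auto& e : data) ok += (classify(root, e.x) == e.y);
    return data.empty() ? 1.0 : static_cast<double>(ok) / data.size();
}

// Reduced error pruning: bottom-up, tentatively replace each subtree by a leaf
// predicting its majority class, and keep the change unless validation accuracy drops.
static void reduced_error_prune(Node* root, Node* n, const std::vector<Example>& val) {
    if (n->is_leaf) return;
    reduced_error_prune(root, n->left.get(), val);
    reduced_error_prune(root, n->right.get(), val);
    const double before = accuracy(root, val);
    Node backup = std::move(*n);   // remember the subtree
    n->is_leaf = true;
    n->label = backup.majority;    // replace the node by its most popular class
    if (accuracy(root, val) < before) *n = std::move(backup);   // revert if pruning hurt
}

int main() {
    // Hypothetical over-grown tree: x < 5 -> class 0; otherwise split again at x < 7.
    auto root = std::make_unique<Node>();
    root->threshold = 5.0;
    root->left = std::make_unique<Node>();
    root->left->is_leaf = true;                 // predicts class 0
    root->right = std::make_unique<Node>();
    root->right->threshold = 7.0;
    root->right->majority = 1;
    root->right->left = std::make_unique<Node>();
    root->right->left->is_leaf = true;
    root->right->left->label = 1;               // predicts class 1 for 5 <= x < 7
    root->right->right = std::make_unique<Node>();
    root->right->right->is_leaf = true;         // noisy leaf predicting class 0 for x >= 7

    std::vector<Example> val = { {2, 0}, {6, 1}, {8, 1}, {9, 1} };
    std::cout << "accuracy before pruning: " << accuracy(root.get(), val) << '\n';  // 0.5
    reduced_error_prune(root.get(), root.get(), val);
    std::cout << "accuracy after pruning:  " << accuracy(root.get(), val) << '\n';  // 1
}
```

Here the inner split at x < 7 is pruned away because replacing it with its majority class improves validation accuracy, while the root split is kept because pruning it would hurt.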
Pruning algorithms:
Cost complexity pruning Cost complexity pruning generates a series of trees T0, …, Tm where T0 is the initial tree and Tm is the root alone. At step i, the tree is created by removing a subtree from tree i−1 and replacing it with a leaf node with value chosen as in the tree building algorithm. The subtree that is removed is chosen as follows: define the error rate of tree T over data set S as err(T, S). The subtree t that minimizes (err(prune(T,t), S) − err(T, S)) / (|leaves(T)| − |leaves(prune(T,t))|) is chosen for removal. The function prune(T,t) defines the tree obtained by pruning the subtree t from the tree T. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation.
Examples:
Pruning could be applied in a compression scheme of a learning algorithm to remove the redundant details without compromising the model's performances. In neural networks, pruning removes entire neurons or layers of neurons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Matrix (protocol)**
Matrix (protocol):
Matrix (sometimes stylized as [matrix]) is an open standard and communication protocol for real-time communication. It aims to make real-time communication work seamlessly between different service providers, in the way that standard Simple Mail Transfer Protocol email currently does for store-and-forward email service, by allowing users with accounts at one communications service provider to communicate with users of a different service provider via online chat, voice over IP, and videotelephony. It therefore serves a similar purpose to protocols like XMPP, but is not based on any existing communication protocol.
Matrix (protocol):
From a technical perspective, it is an application layer communication protocol for federated real-time communication. It provides HTTP APIs and open source reference implementations for securely distributing and persisting messages in JSON format over an open federation of servers. It can integrate with standard web services via WebRTC, facilitating browser-to-browser applications.
History:
The initial project was created inside Amdocs, while building a chat tool called "Amdocs Unified Communications", by Matthew Hodgson and Amandine Le Pape. Amdocs then funded most of the development work from 2014 to October 2017. Matrix was the winner of the Innovation award at WebRTC 2014 Conference & Expo, and of the "Best in Show" award at WebRTC World in 2015. The protocol received praise mixed with some cautionary notes after it launched in 2014. Reviewers noted that other attempts at defining an open instant messaging or multimedia signalling protocol of this type had difficulties becoming widely adopted—e.g. XMPP and IRCv3—and have highlighted the challenges involved, both technological and political. Some were unclear if there was enough demand among users for services which interoperate among providers. In 2015, a subsidiary of Amdocs was created, named "Vector Creations Limited", and the Matrix staff was moved there.In July 2017, the funding by Amdocs was announced to be cut and in the following weeks the core team created their own UK-based company, "New Vector Limited", which was mainly built to support the development of Matrix and Riot, which was later renamed to Element. During this time period, there were multiple calls for support to the community and companies that build on Matrix, to help pay for the wages of at least part of the core team. Patreon and Liberapay crowdfunding accounts were created, and the core team started a video podcast, called Matrix "Live" to keep the contributors up to speed with ongoing developments. This was expanded by a weekly blog format, called "This Week in Matrix", where interested community members could read, or submit their own, Matrix-related news. The company was created with the goal of offering consultancy services for Matrix and paid hosting of Matrix servers (as a platform called modular.im, which was later renamed to Element matrix services) to generate income.In the early weeks after its creation, the Matrix team and the company Purism published plans to collaborate in the creation of the Librem 5 phone. The Librem 5 was intended to be a Matrix native phone, where the default pre-installed messaging and caller app should use Matrix for audio and video calls and instant messaging.In 2017, KDE announced it was working on including support for the protocol in its IRC client Konversation.In late January 2018, the company received an investment of US$5 million from Status, an Ethereum based startup.
History:
In April 2018, the French Government announced plans to create their own instant messaging tool. Work on the application based on Riot and Matrix protocol—called Tchap after French scientists Claude Chappe—had started in early 2018, and the program was open-sourced and released on iOS and Android in April 2019.In October 2018, a Community Interest Company called "The Matrix.org Foundation C.I.C." was incorporated, to serve as a neutral legal entity for further development of the standard.In February 2019, the KDE community announced plans to adopt Matrix for its internal communications needs, as a decentralized alternative to other instant messaging servers like Telegram, Slack, and Discord, and operate its own server instance.In April 2019, Matrix.org suffered a security breach in which the production servers were compromised.
History:
This breach was not an issue with the Matrix protocol and did not directly affect home servers other than matrix.org.
History:
In June 2019, the Matrix protocol is out of beta with the version 1.0 across all APIs (and Synapse, at the time the reference home server), and the Matrix foundation is officially launched.In October 2019, New Vector raised an additional US$8.5 million to develop Matrix.In December 2019, German Ministry of Defense announced a pilot project called BwMessenger for secure instant messaging tool based on Matrix protocol, Synapse server and Riot application. This is modeled after French Tchap project. The long-term goal of the Federal Government is the secure use of messenger services that covers all ministries and subordinate authorities.In December 2019, Mozilla announced that it would begin to use Matrix as a replacement for IRC. In the announcement, they said that they would be completing the move in late January 2020. The Mozilla IRC server, irc.mozilla.org, is said to be removed "no later than March of next year [2020]". In March 2020, the IRC server was turned off and users were directed to join chat.mozilla.org, Mozilla's Element instance.In May 2020, Matrix enabled end-to-end encryption by default for private conversations.In October 2020, Element acquired Gitter from GitLab. This meant that all Gitter users would be transitioned over to Matrix.In March 2021, matrix.org announced that there are 28 million global visible accounts.In September 2022, some security issues were found in the implementation of one client-side encryption library. Due to the interoperable architecture, only the affected client applications needed upgrade and third-party implementations were not affected. All critical issues were fixed, with the remaining ones being either non-exploitable in practice, or already prominently warned for in the client.
Protocol:
Matrix targets use cases like voice over IP, Internet of Things and instant messaging, including group communication, along with a longer-term goal to be a generic messaging and data synchronization system for the web. The protocol supports security and replication, maintaining full conversation history, with no single points of control or failure. Existing communication services can integrate with the Matrix ecosystem.Client software is available for open-federated Instant Messaging (IM), voice over IP (VoIP) and Internet of Things (IoT) communication.
Protocol:
The Matrix standard specifies RESTful HTTP APIs for securely transmitting and replicating JSON data between Matrix-capable clients, servers and services. Clients send data by PUTing it to a ‘room’ on their server, which then replicates the data over all the Matrix servers participating in this ‘room’. This data is signed using a git-style signature to mitigate tampering, and the federated traffic is encrypted with HTTPS and signed with each server's private key to avoid spoofing. Replication follows eventual consistency semantics, allowing servers to function even if offline or after data-loss by re-synchronizing missing history from other participating servers.
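To make the "PUT JSON into a room" flow concrete, here is a rough sketch of a single message send using libcurl; the homeserver name, room ID, transaction ID and access token are placeholders, and the endpoint path and fields follow the Matrix client-server specification as recalled here, so treat them as illustrative rather than authoritative:

```cpp
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Placeholder homeserver, room ID, transaction ID and token -- not real values.
    const char* url =
        "https://matrix.example.org/_matrix/client/v3/rooms/"
        "!someroom:example.org/send/m.room.message/txn1";
    const char* body = "{\"msgtype\": \"m.text\", \"body\": \"hello from a sketch\"}";

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer ACCESS_TOKEN_HERE");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");  // events are PUT, not POST
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);      // the JSON event content
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    CURLcode rc = curl_easy_perform(curl);                 // the server replies with an event ID

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```

The server that receives this PUT then replicates the resulting event to every other server participating in the room, as described above.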
Protocol:
The Olm library provides for optional end-to-end encryption on a room-by-room basis via a Double Ratchet Algorithm implementation. It can ensure that conversation data at rest is only readable by the room participants. With it configured, data transmitted over Matrix is only visible as ciphertext to the Matrix servers, and can be decrypted only by authorized participants in the room. The encryption protocol is called Olm; Megolm is an expansion of Olm to better suit the need for bigger rooms. There are two main implementations: vodozemac, the current reference implementation, written in Rust. In 2022, it has been audited by Least Authority, whose findings are publicly available and have been addressed by the Matrix team. The review was partially funded by Germany's national agency for the healthcare system digitalisation (Gematik).
Protocol:
libolm, the former reference implementation, has been the subject of a cryptographic review by NCC Group, whose findings are publicly available and have been addressed by the Matrix team. The review was sponsored by the Open Technology Fund.
Bridges:
Matrix supports bridging messages from different chat applications into Matrix rooms. These bridges are programs that run on the server and communicate with the non-Matrix servers. Bridges can either be acting as puppets or relays, where in the former the individual user's account is visibly posting the messages, and in the latter a bot posts the messages for non-puppeteered user accounts.
Bridges:
Currently there are official bridges for Gitter, IRC, Slack/Mattermost and XMPP. Bridges for several other notable applications are maintained by the community.
Clients:
Element is the reference implementation of a client. Many other client implementations exist; a possibly more complete list can be found on Matrix's website.
Servers:
Synapse is the reference implementation of a Matrix home server, written in Python.
A "second generation Matrix home server" called Dendrite is being developed by the Matrix core team. Dendrite is in beta.
Several other server implementations exist; a possibly more complete list can be found on Matrix's website.
Adoption:
Communication among the public agents of France's central administration happens on a Matrix-based internal network, named Tchap.
The project is developed by the Interministerial Directorate for Digital Affairs (DINUM) with the explicit goals of security and digital sovereignty, both of which were deemed to be impossible through WhatsApp, Telegram and Slack.
Germany's national healthcare system's internal communication network uses a Matrix-based system (Ti-Messenger) for real-time communication among Germany's healthcare organizations and sharing of sensitive patient data, and is developed by the national agency for the digitalisation of the healthcare system (Gematik GmbH).
Reasons for choosing Matrix included federated identity management, which allows the existing identity infrastructure to be reused in the new chat system; the decentralized architecture, which allows cross-linking data from disparate sources; and the open protocol, which ensures interoperability and future-proof data exchange and prevents vendor lock-in.
Employees of the Bundeswehr (Germany's armed forces) communicate with each other, and share classified documents (German VS-NfD), on a private Matrix network, with a customized version of the Matrix Element app.
Luxembourg has developed a Matrix-based chat service for government officials, named Luxchat4Gov, planned to be released in the second quarter of 2023.
The Swedish Social Insurance Agency (Försäkringskassan) is using Matrix for internal communications.
Rocket.Chat has supported Matrix since version 4.7.0.
It is used in private networks of public governmental offices, private companies and NGOs, across the world.
The FOSDEM was held on Matrix in 2021 and 2022.
The hosting was provided by Element Matrix Services, which published the technical details for public review soon after the event for 2021 and 2022. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**StealthNet**
StealthNet:
StealthNet is anonymous P2P file-sharing software based on the original RShare client, with additional enhancements. It was first named 'RShare CE' (RShare Community Edition). It uses the same network and protocols as RShare.
In 2011 a fork named DarkNode was released, but one year later its website was taken down along with the source code.
History:
Development was stopped in March 2011, with version 0.8.7.9, with no official explanation on the website. In 2012, the developers had an incident with their Apache Subversion repository, and part of StealthNet's source history was lost: versions 0.8.7.5 (February 2010) to 0.8.7.9 (October 2010). However, the source code is still available as plain files.
In 2017 it was forked into version 0.8.8.0.
Features:
Some of the features of StealthNet:
Mix network (used for the pseudo-anonymous process)
Easy to use (same principles as eMule)
Multi-source download: 'swarming' (segmented file transfer)
Resumption of interrupted downloads
Filtering of the file types searched (allowing searches only among video/archive/music/... files)
SNCollection: a file type containing a list of files shared on StealthNet, similar to the "eMule collection" file and torrent files
Point-to-point traffic encryption with the AES standard (Advanced Encryption Standard, 256 bits)
Endpoint-to-endpoint traffic encryption with the RSA standard (1024 bits)
Strong file hashes based on the SHA-512 algorithm
Anti-flooding measures
Text-mode client available for operating systems with Mono support, such as Linux, OS X and others
Drawbacks:
It has no support for UPnP (Universal Plug and Play); the user must manually open a port (e.g. 6097) on their router.
Anonymity is unproven.
The source code contains no documentation at all.
The encryption level (AES 256-bit and RSA 1024-bit in v0.8.7.9), which was considered strong in the 2000s, has been considered only moderate since the 2010s. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Well-structured transition system**
Well-structured transition system:
In computer science, specifically in the field of formal verification, well-structured transition systems (WSTSs) are a general class of infinite state systems for which many verification problems are decidable, owing to the existence of a kind of order between the states of the system which is compatible with the transitions of the system. WSTS decidability results can be applied to Petri nets, lossy channel systems, and more.
Formal definition:
Recall that a well-quasi-ordering ≤ on a set X is a quasi-ordering (i.e., a preorder or reflexive, transitive binary relation) such that any infinite sequence of elements x0,x1,x2,… , from X contains an increasing pair xi≤xj with i<j . The set X is said to be well-quasi-ordered, or shortly wqo.
For our purposes, a transition system is a structure S=⟨S,→,⋯⟩ , where S is any set (its elements are called states), and →⊆ S×S (its elements are called transitions). In general a transition system may have additional structure like initial states, labels on transitions, accepting states, etc. (indicated by the dots), but they do not concern us here.
A well-structured transition system is a transition system ⟨S, →, ≤⟩ such that:
≤ ⊆ S × S is a well-quasi-ordering on the set of states, and
≤ is upward compatible with →: for all transitions s1 → s2 (by this we mean (s1, s2) ∈ →) and for all t1 such that s1 ≤ t1, there exists t2 such that t1 →* t2 (that is, t2 can be reached from t1 by a sequence of zero or more transitions) and s2 ≤ t2.
Well-structured systems:
A well-structured system is a transition system (S, →) with state set S = Q × D, made up from a finite control state set Q and a data values set D, furnished with a decidable pre-order ≤ ⊆ D × D which is extended to states by (q, d) ≤ (q′, d′) ⇔ q = q′ ∧ d ≤ d′, which is well-structured as defined above (→ is monotonic, i.e. upward compatible, with respect to ≤), and which in addition has a computable set of minima for the set of predecessors of any upward-closed subset of S. Well-structured systems adapt the theory of well-structured transition systems to the modelling of certain classes of systems encountered in computer science and provide the basis for decision procedures to analyse such systems, hence the supplementary requirements: the definition of a WSTS itself says nothing about the computability of the relations ≤ and →.
Uses in Computer Science:
Well-structured systems:
Coverability can be decided for any well-structured system, and so can reachability of a given control state, by the backward algorithm of Abdulla et al., or, for specific subclasses of well-structured systems (subject to strict monotonicity, e.g. unbounded Petri nets), by a forward analysis based on a Karp–Miller coverability graph.
Uses in Computer Science:
Backward algorithm:
The backward algorithm allows the following question to be answered: given a well-structured system and a state s, is there any transition path that leads from a given start state s0 to a state s′ ≥ s (such a state is said to cover s)? An intuitive explanation for this question is: if s represents an error state, then any state containing it should also be regarded as an error state. If a well-quasi-order can be found that models this "containment" of states and which also fulfils the requirement of monotonicity with respect to the transition relation, then this question can be answered.
Uses in Computer Science:
Instead of one minimal error state s, one typically considers an upward-closed set Se of error states.
The algorithm is based on two facts about a well-quasi-order (A, ≤): any upward-closed set has a finite set of minima, and any increasing sequence S1 ⊆ S2 ⊆ … of upward-closed subsets of A converges after finitely many steps (1).
Uses in Computer Science:
The algorithm needs to store an upward-closed set Ss of states in memory, which it can do because an upward-closed set is representable by its finite set of minima. It starts from the upward closure of the set of error states Se and, at each iteration, computes the (by monotonicity also upward-closed) set of immediate predecessors and adds it to the set Ss. This iteration terminates after a finite number of steps, due to property (1) of well-quasi-orders. If s0 is in the set finally obtained, then the output is "yes" (a state of Se can be reached); otherwise it is "no" (no such state can be reached). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
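To make the backward procedure concrete, here is a minimal, illustrative Python sketch (not from the source article) that decides coverability for a toy Petri-net-like system: states are markings ordered componentwise (a well-quasi-ordering), and the minimal-predecessor computation is the standard one for Petri-net transitions. All function names and the example net are assumptions made purely for illustration.

```python
# Backward coverability sketch for a WSTS, illustrated on a tiny Petri net.
# States are markings (tuples of naturals) ordered componentwise (a wqo).

def leq(a, b):
    """Componentwise order on markings: a <= b."""
    return all(x <= y for x, y in zip(a, b))

def minimize(states):
    """Keep only the minimal elements of a finite set of markings."""
    states = set(states)
    return {s for s in states if not any(leq(t, s) and t != s for t in states)}

def min_pre_basis(m, transitions):
    """Minimal markings from which a single transition reaches the upward
    closure of m. Each transition is a (consume, produce) pair of vectors."""
    basis = set()
    for consume, produce in transitions:
        # smallest m_pre with m_pre >= consume and m_pre - consume + produce >= m
        basis.add(tuple(c + max(0, x - p) for c, p, x in zip(consume, produce, m)))
    return basis

def coverable(s0, error_basis, transitions):
    """Is some state covering a member of error_basis reachable from s0?"""
    basis = minimize(error_basis)
    while True:
        preds = {p for m in basis for p in min_pre_basis(m, transitions)}
        new = minimize(basis | preds)
        if new == basis:          # fixpoint; property (1) guarantees termination
            break
        basis = new
    return any(leq(m, s0) for m in basis)

# Example: two places; the single transition moves a token from place 0 to place 1.
transitions = [((1, 0), (0, 1))]
print(coverable((2, 0), {(0, 2)}, transitions))  # True: (2,0) -> (1,1) -> (0,2)
print(coverable((1, 0), {(0, 2)}, transitions))  # False: only one token available
```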
**Federation (information technology)**
Federation (information technology):
A federation is a group of computing or network providers agreeing upon standards of operation in a collective fashion.
The term may be used when describing the inter-operation of two distinct, formally disconnected telecommunications networks that may have different internal structures. The term "federated cloud" refers to facilitating the interconnection of two or more geographically separate computing clouds. The term may also be used when groups attempt to delegate collective authority of development to prevent fragmentation.
In a telecommunication interconnection, the internal modi operandi of the different systems are irrelevant to the existence of a federation.
Joining two distinct networks: Yahoo! and Microsoft announced that Yahoo! Messenger and MSN Messenger would be interoperable.
Collective authority: The MIT X Consortium was founded in 1988 to prevent fragmentation in development of the X Window System.
Federation (information technology):
Federated identity: OpenID is a form of federated identity.
In networking systems, to be federated means users are able to send messages from one network to the other. This is not the same as having a client that can operate with both networks but interacts with each independently. For example, in 2009, Google allowed Gmail users to log into their AOL Instant Messenger (AIM) accounts from Gmail. One could not, however, send messages from Google Talk accounts or XMPP (with which Google Talk is federated; the XMPP term for federation is s2s, which Facebook's and MSN Live's implementations do not support) to AIM screen names, nor vice versa. In May 2011, AIM and Gmail federated, allowing users of each network to add and communicate with each other. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Autoprotolysis**
Autoprotolysis:
In chemistry, autoprotolysis is a chemical reaction in which a proton is transferred between two identical molecules, one of which acts as a Brønsted acid, releasing a proton which is accepted by the other molecule acting as a Brønsted base. For example, water undergoes autoprotolysis in the self-ionization of water reaction. It is a type of molecular autoionization.
2 H2O ⇌ OH− + H3O+
Any solvent that contains both acidic hydrogen and lone pairs of electrons to accept H+ can undergo autoprotolysis.
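In general terms (a schematic only, with HA standing for any such amphiprotic solvent rather than a specific compound), autoprotolysis can be written as:
2 HA ⇌ H2A+ + A−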
For example, pure ammonia may undergo autoprotolysis: 2 NH3 ⇌ NH2− + NH4+. Another example is acetic acid: 2 CH3COOH ⇌ CH3COO− + CH3COOH2+. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mapnik**
Mapnik:
Mapnik is an open-source mapping toolkit for desktop and server based map rendering, written in C++. Artem Pavlenko, the original developer of Mapnik, set out with the explicit goal of creating beautiful maps by employing the sub-pixel anti-aliasing of the Anti-Grain Geometry (AGG) library. Mapnik now also has a Cairo rendering backend. For handling common software tasks such as memory management, file system access, regular expressions, and XML parsing, Mapnik utilizes the Boost C++ libraries. An XML file can be used to define a collection of mapping objects that determine the appearance of a map, or objects can be constructed programmatically in C++, Python, and Node.js.
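As a concrete illustration of the programmatic route, the following is a minimal sketch using Mapnik's Python bindings; it assumes the python-mapnik package is installed and that "style.xml" (a hypothetical stylesheet defining the map's layers and styles) exists in the working directory.

```python
# Minimal Mapnik rendering sketch: load an XML stylesheet and render a PNG.
import mapnik

m = mapnik.Map(800, 600)             # map canvas, width x height in pixels
mapnik.load_map(m, "style.xml")      # mapping objects defined in XML
m.zoom_all()                         # zoom to the combined extent of all layers
mapnik.render_to_file(m, "map.png", "png")
```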
Data format:
A number of data formats are supported in Mapnik using a plugin framework. Current plugins exist that utilize OGR and GDAL to read a range of vector and raster datasets. Mapnik also has custom Shapefile, PostGIS and GeoTIFF readers. There is also the osm2pgsql utility, which converts OpenStreetMap data into a format that can be loaded into PostgreSQL. Mapnik can then be used to render the OSM data into maps with the appearance the user wants.
Platforms:
Mapnik is a cross-platform toolkit that runs on Windows, Mac OS, and Unix-like systems such as Linux and Solaris (since release 0.4).
Usage:
One of its many users is the OpenStreetMap project (OSM), which uses it in combination with an Apache Web Server module (mod_tile) and openstreetmap-carto style to render tiles that make up the OSM default layer. Mapnik is also used by CloudMade, MapQuest, and MapBox.
License:
Mapnik is free software and is released under LGPL (GNU Lesser General Public Licence). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uranium ore**
Uranium ore:
Uranium ore deposits are economically recoverable concentrations of uranium within the Earth's crust. Uranium is one of the most common elements in the Earth's crust, being 40 times more common than silver and 500 times more common than gold. It can be found almost everywhere in rock, soil, rivers, and oceans. The challenge for commercial uranium extraction is to find those areas where the concentrations are adequate to form an economically viable deposit. The primary use for uranium obtained from mining is in fuel for nuclear reactors.
Uranium ore:
Globally, the distribution of uranium ore deposits is widespread on all continents, with the largest deposits found in Australia, Kazakhstan, and Canada. To date, high-grade deposits are only found in the Athabasca Basin region of Canada.
Uranium deposits are generally classified based on host rocks, structural setting, and mineralogy of the deposit. The most widely used classification scheme was developed by the International Atomic Energy Agency (IAEA) and subdivides deposits into 15 categories.
Uranium:
Uranium is a silvery-gray, weakly radioactive metallic chemical element. It has the chemical symbol U and atomic number 92. The most common isotopes in natural uranium are 238U (99.27%) and 235U (0.72%). All uranium isotopes present in natural uranium are radioactive and fissionable, and 235U is fissile (will support a neutron-mediated chain reaction). Uranium, thorium, and one radioactive isotope of potassium (40K), as well as their decay products, are the main elements contributing to natural terrestrial radioactivity. Cosmogenic radionuclides are of less importance, but unlike the aforementioned primordial radionuclides, which date back to the formation of the planet and have since slowly decayed away, they are replenished at roughly the same rate they decay by the bombardment of Earth with cosmic rays.
Uranium:
Uranium has the highest atomic weight of the naturally occurring elements and is approximately 70% denser than lead, but not as dense as tungsten, gold, platinum, iridium, or osmium. It is always found combined with other elements. Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernova explosions.
Uranium minerals:
The primary uranium ore mineral is uraninite (UO2) (previously known as pitchblende). A range of other uranium minerals can be found in various deposits. These include carnotite, tyuyamunite, torbernite and autunite. The davidite-brannerite-absite type uranium titanates, and the euxenite-fergusonite-samarskite group are other uranium minerals.
A large variety of secondary uranium minerals are known, many of which are brilliantly coloured and fluorescent. The most common are gummite (a mixture of minerals), autunite (with calcium), saleeite (magnesium) and torbernite (with copper); and hydrated uranium silicates such as coffinite, uranophane (with calcium) and sklodowskite (magnesium).
Ore genesis:
There are several themes of uranium ore deposit formation, which are caused by geological and chemical features of rocks and the element uranium. The basic themes of uranium ore genesis are host mineralogy, reduction-oxidation potential, and porosity.
Uranium is a highly soluble, as well as a radioactive, heavy metal. It can be easily dissolved, transported and precipitated within ground waters by subtle changes in oxidation conditions. Uranium also does not usually form very insoluble mineral species, which is a further factor in the wide variety of geological conditions and places in which uranium mineralization may accumulate.
Uranium is an incompatible element within magmas, and as such it tends to become accumulated within highly fractionated and evolved granite melts, particularly alkaline examples. These melts tend to become highly enriched in uranium, thorium and potassium, and may in turn create internal pegmatites or hydrothermal systems into which uranium may dissolve.
Classification schemes:
IAEA Classification (1996):
The International Atomic Energy Agency (IAEA) assigns uranium deposits to 15 main categories of deposit types, according to their geological setting and genesis of mineralization, arranged according to their approximate economic significance.
Classification schemes:
Unconformity-related deposits
Sandstone deposits
Quartz-pebble conglomerate deposits
Breccia complex deposits
Vein deposits
Intrusive deposits (alaskites)
Phosphorite deposits
Collapse breccia pipe deposits
Volcanic deposits
Surficial deposits
Metasomatite deposits
Metamorphic deposits
Lignite deposits
Black shale deposits
Other types of deposits
Alternate scheme:
The IAEA classification scheme works well but is far from ideal, as it does not consider that similar processes may form many deposit types, yet in a different geological setting. An alternate scheme groups the above deposit types based on their environment of deposition.
Deposit types (IAEA Classification):
Unconformity-related deposits:
Unconformity-type uranium deposits host high grades relative to other uranium deposits and include some of the largest and richest deposits known. They occur in close proximity to unconformities between relatively quartz-rich sandstones comprising the basal portion of relatively undeformed sedimentary basins and deformed metamorphic basement rocks. These sedimentary basins are typically of Proterozoic age; however, some Phanerozoic examples exist.
Deposit types (IAEA Classification):
Phanerozoic unconformity-related deposits occur in Proterozoic metasediments below an unconformity at the base of overlying Phanerozoic sandstone. These deposits are small and low-grade (Bertholene and Aveyron deposits, in France). The two most significant areas for this style of deposit are currently the Athabasca Basin in Saskatchewan, Canada, and the McArthur Basin in the Northern Territory, Australia.
Deposit types (IAEA Classification):
Athabasca Basin:
The highest grade uranium deposits are found in the Athabasca Basin in Canada, including the two largest high grade uranium deposits in the world, Cigar Lake with 217 million pounds (99,000 t) U3O8 at an average grade of 18% and McArthur River with 324 million pounds (147,000 t) U3O8 at an average grade of 17%. These deposits occur below, across and immediately above the unconformity. Additionally, another high grade discovery is in the development stage at Patterson Lake (Triple R deposit), with an estimated mineral resource identified as follows: "Indicated Mineral Resources" estimated to total 2,291,000 tons at an average grade of 1.58% U3O8, containing 79,610,000 pounds of U3O8; "Inferred Mineral Resources" estimated to total 901,000 tons at an average grade of 1.30% U3O8, containing 25,884,000 pounds of U3O8.
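As a quick sanity check of the contained-metal arithmetic quoted above, the figures are consistent if the quoted "tons" are read as metric tonnes (1 t ≈ 2,204.62 lb); the small residual differences come from rounding in the published grades. This is an illustrative check only, not part of the source estimate.

```python
# Contained U3O8 from tonnage and grade, assuming metric tonnes (1 t = 2204.62 lb).
LB_PER_TONNE = 2204.62

def contained_lb(tonnes, grade_pct):
    return tonnes * (grade_pct / 100.0) * LB_PER_TONNE

print(round(contained_lb(2_291_000, 1.58)))  # ~79.8 million lb (quoted: 79,610,000)
print(round(contained_lb(901_000, 1.30)))    # ~25.8 million lb (quoted: 25,884,000)
```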
Deposit types (IAEA Classification):
McArthur Basin:
The deposits of the McArthur River basin in the East Alligator Rivers region of the Northern Territory of Australia (including Jabiluka, Ranger, and Nabarlek) are below the unconformity and are at the low-grade end of the unconformity deposit range but are still high grade compared to most uranium deposit types. There has been very little exploration in Australia to locate deeply concealed deposits lying above the unconformity similar to those in Canada. It is possible that very high grade deposits occur in the sandstones above the unconformity in the Alligator Rivers/Arnhem Land area.
Deposit types (IAEA Classification):
Sandstone deposits:
Sandstone deposits are contained within medium to coarse-grained sandstones deposited in a continental fluvial or marginal marine sedimentary environment. Impermeable shale or mudstone units are interbedded in the sedimentary sequence and often occur immediately above and below the mineralised horizon. Uranium is mobile under oxidising conditions and precipitates under reducing conditions, and thus the presence of a reducing environment is essential for the formation of uranium deposits in sandstone. Primary mineralization consists of pitchblende and coffinite, with weathering producing secondary mineralization. Sandstone deposits constitute about 18% of world uranium resources. Orebodies of this type are commonly low to medium grade (0.05–0.4% U3O8) and individual orebodies are small to medium in size (ranging up to a maximum of 50,000 t U3O8). Sandstone-hosted uranium deposits are widespread globally and span a broad range of host rock ages. Some of the major provinces and production centers include the Wyoming basins, the Grants District of New Mexico, and deposits in Central Europe and Kazakhstan. Significant potential remains in most of these centers as well as in Australia, Mongolia, South America, and Africa.
Deposit types (IAEA Classification):
This model type can be further subdivided into the following sub-types: tabular, roll front, basal channel, and structurally related. Many deposits represent combinations of these types.
Deposit types (IAEA Classification):
Tabular:
Tabular deposits consist of irregular tabular or elongate lenticular zones of uranium mineralisation within selectively reduced sediments. The mineralised zones are oriented parallel to the direction of groundwater flow, but on a small scale the ore zones may cut across sedimentary features of the host sandstone. Deposits of this nature commonly occur within palaeochannels cut in the underlying basement rocks.
Deposit types (IAEA Classification):
Tabular sandstone uranium deposits contain many of the highest grades of the sandstone class; however, the average deposit size is very small.
Deposit types (IAEA Classification):
Roll front:
Roll-front uranium deposits are generally hosted within permeable and porous sandstones or conglomerates. The mechanism for deposit formation is dissolution of uranium from the formation or nearby strata and the transport of this soluble uranium into the host unit. When the fluids change redox state, generally in contact with carbon-rich organic matter, uranium precipitates to form a 'front'.
Deposit types (IAEA Classification):
The Rollfront subtype deposits typically represent the largest of the sandstone-hosted uranium deposits and one of the largest uranium deposit types with an average of 21 million lb (9,500 t) U3O8. Included in this class are the Inkai deposit in Kazakhstan and the Smith Ranch deposit in Wyoming. Probably more significant than their larger size, rollfront deposits have the advantage of being amenable to low cost in-situ leach recovery.
Deposit types (IAEA Classification):
Typical characteristics: roll-front deposits are crescent-shaped bodies that transect the host lithology.
typically the convex side points down the hydraulic gradient.
the limbs or tails tend to be peneconcordant with the lithology.
most ore-bodies consist of several interconnected rolls.
individual roll-front deposits are quite small but collectively can extend for considerable distances.
Deposit types (IAEA Classification):
Basal channel (palaeochannel):
Basal channel deposits are often grouped with tabular or rollfront deposits, depending on their unique characteristics. The model for formation of palaeochannel deposits is similar to that for roll-front deposits, above, except that the source of uranium may be in the watershed leading into a stream, or the bed load of the palaeochannel itself. This uranium is transported through the groundwaters and is deposited either at a reduced boundary or, in ephemeral drainage systems such as those in the deserts of Namibia and Australia, in calcretised evaporation sites or even in saline lakes as the groundwater evaporates.
Deposit types (IAEA Classification):
Some particularly rich uranium deposits are formed in palaeochannels which are filled in the lower parts by lignite or brown coal, which acts as a particularly efficient reductive trap for uranium. Sometimes, elements such as scandium, gold and silver may be concentrated within these lignite-hosted uranium deposits. The Frome Embayment in South Australia hosts several deposits of this type, including Honeymoon, Oban, Beverley and Four Mile (the largest deposit of this class). These deposits are hosted in palaeochannels filled with Cainozoic sediments and sourced their uranium from uranium-rich Palaeo- to Mesoproterozoic rocks of the Mount Painter Inlier and the Olary Domain of the Curnamona Province.
Deposit types (IAEA Classification):
Structurally related:
Tectonic-lithologic controlled uranium deposits occur in sandstones adjacent to a permeable fault zone which cuts the sandstone/mudstone sequence. Mineralisation forms tongue-shaped ore zones along the permeable sandstone layers adjacent to the fault. Often there are a number of mineralised zones 'stacked' vertically on top of each other within sandstone units adjacent to the fault zone.
Deposit types (IAEA Classification):
Quartz-pebble conglomerate deposits:
Quartz pebble conglomerate hosted uranium deposits are of historical significance as the major source of primary production for several decades after World War II. This type of deposit has been identified in eight localities around the world; however, the most significant deposits are in the Huronian Supergroup in Ontario, Canada, and in the Witwatersrand Supergroup of South Africa. These deposits make up approximately 13% of the world's uranium resources. Two main sub-types have been identified: Elliot Lake and Witwatersrand. Quartz pebble conglomerate hosted uranium deposits formed from the transport and deposition of uraninite in a fluvial sedimentary environment and are defined as stratiform and stratabound paleoplacer deposits. Host rocks are typically submature to supermature, polymictic conglomerates and sandstones deposited in alluvial fan and braided stream environments. The host conglomerates of the Huronian deposits in Canada are situated at the base of the sequence, whereas the mineralized horizons in the Witwatersrand are arguably along tectonized intraformational unconformities.
Deposit types (IAEA Classification):
Uranium minerals were derived from uraniferous pegmatites in the sediment source areas. These deposits are restricted to the Archean and early Paleoproterozoic and do not occur in sediments younger than about 2,200 million years, when oxygen levels in the atmosphere reached a critical level, making simple uranium oxides no longer stable in near-surface environments. Quartz pebble conglomerate uranium deposits are typically low grade but characterized by high tonnages. The Huronian deposits in Canada generally contain higher grades (0.15% U3O8) and greater resources (as shown by the Denison and Quirke mines); however, some of the South African gold deposits also contain sizeable low grade (0.01% U3O8) uranium resources.
Deposit types (IAEA Classification):
Witwatersrand sub-type:
In the Witwatersrand deposits, ores are found along unconformities, shale and siltstone beds, and carbonaceous seams. The West Rand Group of sediments tends to host the most uranium within the Witwatersrand Supergroup. The uranium-rich Dominion Reef is located at the base of the West Rand Supergroup. The Vaal Reef is the most uranium-rich reef of the Central Rand Group of sediments. Structural controls on the regional scale are normal faults, while on the deposit scale they are bedding-parallel shears and thrusts. Textural evidence indicates that the uranium and gold have been remobilized to their current sites; however, the debate continues as to whether the original deposition was detrital, entirely hydrothermal, or alternatively related to high-grade diagenesis.
Deposit types (IAEA Classification):
Uranium minerals in the Witwatersrand deposits are typically uraninite with lesser uranothorite, brannerite, and coffinite. The uranium is especially concentrated along thin carbonaceous seams or carbon leaders. Strong regional scale alteration consists of pyrophyllite, chloritoid, muscovite, chlorite, quartz, rutile, and pyrite. The main elements associated with the uranium are gold and silver. Gold contents are much higher than in the Elliot Lake type with U:Au ranging between 5:1 and 500:1, which indicates that these gold-rich ores are essentially very low grade uranium deposits with gold.
Deposit types (IAEA Classification):
Elliot Lake sub-type:
Sedimentological controls on the Huronian deposits of the Elliot Lake district appear to be much stronger than in the Witwatersrand deposits. Ores grade from uranium through thorium to titanium-rich with decreasing pebble size and increasing distance from their source. While evidence of post-diagenetic remobilization has been identified, these effects appear far subordinate to the sedimentological controls.
Deposit types (IAEA Classification):
Ore consists of uraninite with lesser brannerite and thucholite. These occur in thin beds exhibiting graded bedding reminiscent of placer sorting. Alteration is nonexistent to very weak at best and the weak chlorite and sericite are believed to be mainly post-ore effects. Other post-depositional alteration includes pyritization, silicification, and alteration of titanium minerals. The most prominent geochemical associations with the uranium are thorium and titanium.
Deposit types (IAEA Classification):
This schematic model represents the original depositional setting. The Huronian underwent mild post-depositional folding during the Penokean orogeny around 1.9 billion years ago. The main regional structure is the Quirke syncline, along the margins of which the majority of the known deposits are situated. Due to this structural overprint, ore bodies range from subhorizontal to steeply dipping.
Deposit types (IAEA Classification):
Breccia complex deposits (IOCG-U):
Only one iron oxide copper gold (IOCG) deposit of this type is known to contain economically significant quantities of uranium. Olympic Dam in South Australia is the world's largest resource of low-grade uranium and accounts for about 66% of Australia's reserves plus resources. Uranium occurs with copper, gold, silver, and rare earth elements (REE) in a large hematite-rich granite breccia complex in the Gawler Craton, overlain by approximately 300 metres of flat-lying sedimentary rocks of the Stuart Shelf geological province.
Deposit types (IAEA Classification):
Another example of the breccia type is the Mount Gee area in the Mount Painter Inlier, South Australia. Uranium-mineralised quartz-hematite breccia is related to Palaeoproterozoic granites with uranium contents of up to 100 ppm. Hydrothermal processes about 300 million years ago remobilised uranium from these granites and enriched it in the quartz-hematite breccias. The breccias in the area host a low grade resource of about 31,400 t U3O8 at 615 ppm on average.
Deposit types (IAEA Classification):
Vein deposits:
Vein deposits play a special role in the history of uranium: the term "pitchblende" ("Pechblende") originates from German vein deposits when they were mined for silver in the 16th century. F.E. Brückmann made the first mineralogical description of the mineral in 1727, and the vein deposit Jachymov in the Czech Republic became the type locality for uraninite. In 1789 the German chemist M. H. Klaproth discovered the element uranium in a sample of pitchblende from the Johanngeorgenstadt vein deposit. The first industrial production of uranium was made from the Jachymov deposit, and Marie and Pierre Curie used the tailings of the mine for their discovery of polonium and radium.
Deposit types (IAEA Classification):
Vein deposits consist of uranium minerals filling in cavities such as cracks, veins, fractures, breccias, and stockworks associated with steeply dipping fault systems. There are three major subtypes of vein-style uranium mineralisation:
intragranitic veins (Central Massif, France)
veins in metasedimentary rocks in the exocontacts of granites: quartz-carbonate uranium veins (Erzgebirge Mts, Germany/Czech Republic; Bohemian Massif, Czech Republic) and uranium-polymetal veins (Erzgebirge Mts, Germany/Czech Republic; Saskatchewan, Canada)
mineralised fault and shear zones (central Africa; Bohemian Massif, Czech Republic)
Intragranitic veins form in the late phase of magmatic activity when hot fluids derived from the magma precipitate uranium in cracks within the newly formed granite. Such mineralisation contributed much to the uranium production of France. Veins hosted by metasedimentary units in the exocontact of granites are the most important sources of uranium mineralisation in central Europe, including the world-class deposits Schneeberg-Schlema-Alberoda in Germany (96,000 t uranium content) as well as Pribram (50,000 t uranium content) and Jachymov (~10,000 t uranium content) in the Czech Republic. Although they are closely related to the granites, the mineralisation is much younger, with a time gap between granite formation and mineralisation of 20 million years. The initial uranium mineralisation consists of quartz, carbonate, fluorite and pitchblende. Remobilisation of uranium occurred at later stages, producing polymetal veins containing silver, cobalt, nickel, arsenic and other elements. Large deposits of this type can contain more than 1,000 individual mineralised veins. However, only 5 to 12% of the vein areas carry mineralisation, and although massive lenses of pitchblende can occur, the overall ore grade is only about 0.1% uranium. The Bohemian Massif also contains shear-zone-hosted uranium deposits, the most important one being Rozna-Olsi in Moravia northwest of Brno. Rozna is currently the only operating uranium mine in central Europe, with a total uranium content of 23,000 t and an average grade of 0.24%. The formation of this mineralisation occurred in several stages. After the Variscan Orogeny, extension took place and hydrothermal fluids overprinted fine-grained materials in shear zones with a sulfide-chlorite alteration. Fluids from the overlying sediments entered the basement, mobilising uranium; as they rose along the shear zones, the chlorite-pyrite material caused precipitation of uranium minerals in the form of coffinite, pitchblende and U-Zr silicates. This initial mineralisation event took place at about 277 million to 264 million years ago. During the Triassic a further mineralisation event took place, relocating uranium into quartz-carbonate-uranium veins. Another example of this mineralisation style is the Shinkolobwe deposit in Congo, Africa, containing about 30,000 t of uranium.
Deposit types (IAEA Classification):
Intrusive associated deposits:
Intrusive deposits make up a large proportion of the world's uranium resources. Included in this type are those associated with intrusive rocks including alaskite, granite, pegmatite and monzonites. Major world deposits include Rossing (Namibia), the Ilimaussaq intrusive complex (Greenland) and Palabora (South Africa).
Phosphorite deposits:
Marine sedimentary phosphorite deposits can contain low grade concentrations of uranium, up to 0.01–0.015% U3O8, within fluorite or apatite. These deposits can have a significant tonnage. Very large phosphorite deposits occur in Florida and Idaho in the United States, in Morocco, and in some Middle Eastern countries.
Deposit types (IAEA Classification):
Collapse breccia pipe deposits:
Collapse breccia pipe deposits occur within vertical, circular solution collapse structures, formed by the dissolution of limestone by groundwater. Pipes are typically filled with down-dropped coarse fragments of limestone and overlying sediments and can be from 30 to 200 metres (100 to 660 ft) wide and up to 1,000 metres (3,300 ft) deep. Primary ore minerals are uraninite and pitchblende, which occur as cavity fills and coatings on quartz grains within permeable sandstone breccias within the pipe. Resources within individual pipes can range up to 2,500 tonnes U3O8 at an average grade of between 0.3 and 1.0% U3O8. The best known examples of this deposit type are in the Arizona breccia pipe uranium mineralization in the US, where several of these deposits have been mined.
Deposit types (IAEA Classification):
Volcanic deposits:
Volcanic deposits occur in felsic to intermediate volcanic to volcaniclastic rocks and associated caldera subsidence structures, comagmatic intrusions, ring dykes and diatremes. Mineralization occurs either as structurally controlled veins and breccias discordant to the stratigraphy or, less commonly, as stratabound mineralization either in extrusive rocks or permeable sedimentary facies. Mineralization may be primary, that is magmatic-related, or secondary mineralization due to leaching, remobilization and re-precipitation. The principal uranium mineral in volcanic deposits is pitchblende, which is usually associated with molybdenite and minor amounts of lead, tin and tungsten mineralization. Volcanic-hosted uranium deposits occur in host rocks spanning the Precambrian to the Cenozoic, but because of the shallow levels at which they form, preservation favors younger deposits. Some of the more important deposits or districts are Streltsovskoye, Russia; Dornod, Mongolia; and McDermitt, Nevada.
Deposit types (IAEA Classification):
The average deposit size is rather small with grades of 0.02% to 0.2% U3O8. These deposits make up only a small proportion of the world's uranium resources. The only volcanic hosted deposits currently being exploited are those of the Streltsovkoye district of eastern Siberia. This is in fact not a single stand-alone deposit, but 18 individual deposits occurring within the Streltsovsk caldera complex. Nevertheless, the average size of these deposits is far greater than the average volcanic type.
Deposit types (IAEA Classification):
Surficial deposits (calcretes):
Surficial deposits are broadly defined as Tertiary to Recent near-surface uranium concentrations in sediments or soils. Mineralization in calcrete (calcium and magnesium carbonates) constitutes the largest of the surficial deposits. These calcretes are interbedded with Tertiary sand and clay, which are usually cemented by calcium and magnesium carbonates. Surficial deposits also occur in peat bogs, karst caverns and soils.
Deposit types (IAEA Classification):
Surficial deposits account for approximately 4% of world uranium resources. The Yeelirrie deposit is by far the world's largest surficial deposit, averaging 0.15% U3O8. Langer Heinrich in Namibia is another significant surficial deposit.
Deposit types (IAEA Classification):
Metasomatite deposits:
Metasomatite deposits consist of disseminated uranium minerals within structurally deformed rocks that have been affected by intense sodium metasomatism. Ore minerals are uraninite and brannerite. The Th/U ratio in the ores is mostly less than 0.1. Metasomatites are typically small in size and generally contain less than 1,000 t U3O8. Giant (up to 100 thousand t U) uranium deposits in sodium metasomatites (albitites) are known in central Ukraine and Brazil. Two subtypes are defined based on host lithologies: metasomatized granite, e.g. the Ross Adams deposit in Alaska, United States, and the Novokostantynivka deposit in Kirovogradska oblast, Ukraine.
Deposit types (IAEA Classification):
metasomatised metasediment, e.g. the Zhovta River and Pervomayske deposits in Dnipropetrovska oblast, Ukraine, and the Valhalla deposit in northwestern Queensland, Australia.
Metamorphic deposits:
Metamorphic deposits are those that occur in metasediments or metavolcanic rocks where there is no direct evidence for mineralization post-dating metamorphism. These deposits were formed during regional metamorphism of uranium-bearing or mineralized sediments or volcanic precursors.
The most prominent deposits of this type are Mary Kathleen, Queensland, Australia, and Forstau, Austria.
Deposit types (IAEA Classification):
Lignite:
Lignite deposits (soft brown coal) can contain significant uranium mineralization. Mineralization can also be found in clay and sandstone immediately adjacent to lignite deposits. Uranium has been adsorbed onto carbonaceous matter and as a result no discrete uranium minerals have formed. Deposits of this type are known from the Serres Basin in Greece, and in North and South Dakota in the USA. The uranium content in these deposits is very low, on average less than 0.005% U3O8, and does not currently warrant commercial extraction.
Deposit types (IAEA Classification):
Black shale deposits:
Black shale mineralisations are large low-grade resources of uranium. They form in submarine environments under oxygen-free conditions. Organic matter in clay-rich sediments will not be converted to CO2 by biological processes in this environment, and it can reduce and immobilise uranium dissolved in seawater. Average uranium grades of black shales are 50 to 250 ppm. The largest explored resource is Ranstad in Sweden, containing 254,000 t of uranium. However, there are estimates for black shales in the US and Brazil assuming a uranium content of over 1 million tonnes, but at grades below 100 ppm uranium. The Chattanooga Shale in the southeastern USA, for example, is estimated to contain 4 to 5 million tonnes at an average grade of 54 ppm. Because of their low grades, no black shale deposit has ever produced significant amounts of uranium, with one exception: the Ronneburg deposit in eastern Thuringia, Germany. The Ordovician and Silurian black shales at Ronneburg have a background uranium content of 40 to 60 ppm. However, hydrothermal and supergene processes caused remobilisation and enrichment of the uranium. Production between 1950 and 1990 was about 100,000 t of uranium at average grades of 700 to 1,000 ppm. Measured and inferred resources containing 87,000 t of uranium at grades between 200 and 900 ppm remain.
Deposit types (IAEA Classification):
Other types of deposits:
There are also uranium deposits of other types, for example in the Jurassic Todilto Limestone in the Grants District, New Mexico, USA.
The Freital/Dresden-Gittersee deposit in eastern Germany produced about 3,700 t of uranium from Permian hard coal and its host rocks. The average ore grade was 0.11%. The deposit formed by a combination of syngenetic and diagenetic processes.
In some countries, for example China, trials are underway to extract uranium from fly ash.
Additional sources:
Dahlkamp, Franz (1993). Uranium Ore Deposits. Berlin, Germany: Springer-Verlag. ISBN 3-540-53264-1.
Burns, P.C.; Finch, R., eds. (1999). Reviews in Mineralogy, vol. 38: Uranium: Mineralogy, Geochemistry and the Environment. Washington, D.C., U.S.A.: Mineralogical Society of America. ISBN 0-939950-50-2.
"Geoscience Australia Uranium factsheet" (PDF). Archived from the original (PDF) on 2007-09-11. Retrieved 2007-08-14.
"Uranium Ore Deposits". WISE Uranium Project. Retrieved 2008-09-20. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PBR theorem**
PBR theorem:
The Pusey–Barrett–Rudolph (PBR) theorem is a no-go theorem in quantum foundations, proved in 2012 by Matthew Pusey, Jonathan Barrett, and Terry Rudolph (for whom the theorem is named). It has particular significance for how one may interpret the nature of the quantum state.
With respect to certain realist hidden variable theories that attempt to explain the predictions of quantum mechanics, the theorem rules that pure quantum states must be "ontic" in the sense that they correspond directly to states of reality, rather than "epistemic" in the sense that they represent probabilistic or incomplete states of knowledge about reality.
PBR theorem:
The PBR theorem may also be compared with other no-go theorems like Bell's theorem and the Bell–Kochen–Specker theorem, which, respectively, rule out the possibility of explaining the predictions of quantum mechanics with local hidden variable theories and noncontextual hidden variable theories. Similarly, the PBR theorem could be said to rule out preparation independent hidden variable theories, in which quantum states that are prepared independently have independent hidden variable descriptions.
PBR theorem:
This result was cited by theoretical physicist Antony Valentini as "the most important general theorem relating to the foundations of quantum mechanics since Bell's theorem".
Theorem:
This theorem, which first appeared as an arXiv preprint and was subsequently published in Nature Physics, concerns the interpretational status of pure quantum states. Under the classification of hidden variable models of Harrigan and Spekkens, the interpretation of the quantum wavefunction |ψ⟩ can be categorized as either ψ-ontic, if "every complete physical state or ontic state in the theory is consistent with only one pure quantum state", or ψ-epistemic, "if there exist ontic states that are consistent with more than one pure quantum state." The PBR theorem proves that either the quantum state |ψ⟩ is ψ-ontic, or else non-entangled quantum states violate the assumption of preparation independence, which would entail action at a distance. The authors conclude: "In conclusion, we have presented a no-go theorem, which—modulo assumptions—shows that models in which the quantum state is interpreted as mere information about an objective physical state of a system cannot reproduce the predictions of quantum theory. The result is in the same spirit as Bell's theorem, which states that no local theory can reproduce the predictions of quantum theory." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
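For readers who prefer symbols, the Harrigan–Spekkens distinction can be restated compactly; this is a standard sketch of the definitions, not the authors' own notation. In an ontological model, preparing a pure state $|\psi\rangle$ samples an ontic state $\lambda$ from a distribution $\mu_\psi(\lambda)$:

$$\text{$\psi$-ontic:}\quad \psi \neq \phi \;\Rightarrow\; \mu_\psi \text{ and } \mu_\phi \text{ do not overlap (no } \lambda \text{ is compatible with both)};\qquad \text{$\psi$-epistemic: otherwise.}$$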
**Cetomacrogol 1000**
Cetomacrogol 1000:
Cetomacrogol 1000 is the tradename for polyethylene glycol hexadecyl ether, a nonionic surfactant produced by the ethoxylation of cetyl alcohol to give a material with the general formula HO(C2H4O)nC16H33. Several grades of this material are available depending on the level of ethoxylation performed, with the number of polyethylene glycol repeat units (n) varying between 2 and 20. Commercially it can be known as Brij 58 (when n = 20) or Brij 56 (when n = 10). Brij is a trademark of Croda International.
Cetomacrogol 1000:
It is used as a solubilizer and emulsifying agent in foods, cosmetics, and pharmaceuticals, often as an ointment base. It is also used as an oil-in-water (O/W) emulsifier for creams and lotions, and as a wetting agent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DELPHI experiment**
DELPHI experiment:
DELPHI (DEtector with Lepton, Photon and Hadron Identification) was one of the four main detectors of the Large Electron–Positron Collider (LEP) at CERN, one of the largest particle accelerators ever made. Like the other three detectors, it recorded and analyzed the result of the collision between LEP's colliding particle beams. The specific focus of DELPHI was on particle identification, three-dimensional information, high granularity (detail), and precise vertex determination.
Construction:
The construction of DELPHI started in 1983 and was completed in 1988, ready for LEP starting operation in 1989. After LEP finished operating in November 2000, most of DELPHI began to be dismantled, and dismantling was complete in September 2001. The central section was kept and moved to an unused space (now the location of the LHCb experiment) where it was prepared as a 'museum' setup.
Experimental setup:
DELPHI had the shape of a cylinder over 10 metres in length and diameter, and a weight of 3500 tons. In operation, electrons and positrons from the accelerator went through a pipe going through the center of the cylinder, and collided in the middle of the detector. The collision products then travelled outwards from the pipe and were analyzed by many subdetectors designed to identify the nature and trajectories of the particles produced by the collision.
Experimental setup:
Subdetectors:
There were five tracking detectors in the barrel part of the detector: the vertex detector (VD), the inner detector (ID), the time projection chamber (TPC), the outer detector (OD), and the barrel muon chambers (MUB). The VD is an advanced silicon detector closest to the collision point, and has the purpose of providing precise tracking. Short-lived particles are found by extrapolating tracks back to an interaction point. An upgrade of the VD was completed in 1997 for it to form the barrel part of the silicon tracker. The ID, between the VD and TPC, provides intermediate position and trigger data. The two parts of the detector are the JET drift chamber and the trigger layers (TL), producing points per track and polar angle coverage. The gas used in the JET chamber is mostly CO2, with a small amount of isobutane, which allows signals caused by incoming particle tracks to arrive at the same time. The TPC is the principal tracking device for DELPHI, also measuring the particle energy loss (dE/dx). The OD provides a final direction measurement after the Barrel Ring Imaging Cherenkov detector. DELPHI is able to use the Ring Imaging Cherenkov technique to differentiate secondary charged particles produced by collisions. This was done using two RICH radiators of different refractive indices for particle identification in different ranges. The Barrel-RICH detector and the Forward-RICH detector were two independent detectors that covered different polar angles.
Experimental setup:
Tracking chambers:
There were also four different tracking chambers in the forward part of the detector: forward chambers A (FCA) and B (FCB), the very forward tracker (VFT), the forward muon chambers (MUF) and the surround muon chambers (SMC). The forward chambers covered various polar angles of the forward part of the detector. The muon chambers were furthest from the collision point, since muons can pass through the calorimeters.
Experimental setup:
Calorimeters and counters:
The electromagnetic calorimetry system consisted of two very forward calorimeters and two small angle calorimeters. The high-density projection chamber (HPC) was a barrel electromagnetic calorimeter mounted on the inside of the solenoid, outside the OD. The forward electromagnetic calorimeter (FEMC) consisted of two 5 m diameter disks made of lead glass. Additional scintillators were installed to ensure high-energy photons did not escape undetected. The hadron calorimeter (HCAL) allows for calorimetric energy measurements of hadrons. It is a sampling gas detector which is incorporated in the magnet yoke, and covers a certain polar angle region. Luminosity is measured using the small angle tile calorimeter (STIC) and the very small angle tagger (VSAT). To measure the luminosity, the number of events of a known process must be counted, which in the DELPHI experiment was chosen to be Bhabha scattering at small angles. The STIC is a lead-scintillator calorimeter consisting of two cylindrical detectors on either side of the interaction region, which covers a large angular region. The VSAT consists of four calorimeter modules and detects electrons and positrons produced in Bhabha scattering.
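In symbols, the idea behind this luminosity measurement is simply (a standard sketch, not DELPHI-specific notation) that the number of observed small-angle Bhabha events fixes the integrated luminosity once the theoretically well-known Bhabha cross-section within the detector acceptance is known:

$$\int \mathcal{L}\,dt \;=\; \frac{N_{\mathrm{Bhabha}}}{\sigma_{\mathrm{Bhabha}}}$$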
Experimental setup:
Trigger system:
The purpose of the trigger system for DELPHI is to select all events from original electron-positron interactions. The trigger system has four levels of increasing selectivity (T1, T2, T3, T4), using data contributions from each subdetector to inform the trigger decision of the first two levels. The last two levels are software filters.
Results:
The data produced by DELPHI allowed the e+e− → W+W− reaction to be studied for the first time. This was done by running at center-of-mass energies above the threshold of WW pair production. From the data, the mass of the W boson was determined as 80.40 ± 0.45 GeV/c2, which was then combined with results from the other LEP collaborations to produce a final result compatible with other experiments. The Higgs boson was also a subject of high interest for the DELPHI experiment, as Higgs bosons could be produced in e+e− collisions. The cross section of this interaction is strongly dependent on the Higgs mass, so it can be calculated from measurements. The Higgs boson mass could not be determined using DELPHI, so only a mass exclusion limit could be given. Furthermore, during the LEP1 data-taking runs in 1989-1994, hadronic and leptonic decays of the Z boson at 91 GeV were investigated and the widths of different branches were obtained. The results were in good agreement with the standard model predictions and expectations. In 1995, the experiment ran at intermediate energies of 130 and 136 GeV where, together with other LEP experiments, the results found were in agreement with model predictions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aileen Lee**
Aileen Lee:
Aileen Lee (born 1970) is a U.S. venture capital angel investor and co-founder of Cowboy Ventures. Lee coined the often-used Silicon Valley term unicorn in a TechCrunch article "Welcome To The Unicorn Club: Learning from Billion-Dollar Startups", as profiled in The New York Times. A unicorn is generally defined as a privately held startup that has a $1 billion valuation or more – something rare (like a unicorn).
Education:
Lee earned her bachelor's degree from the MIT Sloan School of Management in 1992. After MIT, she worked as a financial analyst for two years at Morgan Stanley. She earned her MBA from Harvard Business School in 1997.
Career:
Lee joined Kleiner Perkins (KPCB) in 1999 and was the founding CEO of RMG Networks, a company backed by KPCB. Lee worked at Kleiner Perkins for 13 years; in 2012 she left KPCB to start the seed-stage venture firm Cowboy Ventures. In 2017, Lee added Ted Wang to the firm as a general partner. Cowboy Ventures is one of the first female-led venture capital firms. Over the past six years, Cowboy Ventures has raised three large funds, the most recent reaching $95 million. Through Cowboy Ventures, Lee has made investments in many early-stage companies, including August, Dollar Shave Club, Textio, Accompany and Tally Technologies. She is a public advocate of increasing the number of female founders and investors in Silicon Valley.
Philanthropy:
In 2018, Lee co-founded All Raise, a nonprofit organization which seeks to increase the amount of funding that female investors receive. The organization was founded as a collective by more than 30 venture capitalists who advocate for increasing the presence of women in venture capital. Lee described the organization's importance in saying “We believe that by improving the success of women in the venture-backed tech ecosystem, we can build a more accessible community that reflects the diversity of the world around us.”
Awards and recognition:
Lee was invited to speak at the 2018 Code Conference put on by Recode and additionally at the 2018 GeekWire Summit. She also spoke at the 2019 Silicon Slopes Tech Summit, and is recognized as a speaker for the organization Lesbians Who Tech and the Female Founders Conference. Lee has appeared on Forbes' list of The World's 100 Most Powerful Women (position #97 as of 2020) and the Midas List in 2020 (position #80), 2019 (position #82), and 2018 (position #97). She also appeared on Time's list of 100 Most Influential People in 2019.
Personal life:
Lee grew up in New Jersey and is the daughter of Chinese immigrants. Aileen Lee is married. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Histamine dihydrochloride**
Histamine dihydrochloride:
Histamine dihydrochloride (INN, trade name Ceplene) is a salt of histamine that is used as a drug for the prevention of relapse in patients diagnosed with acute myeloid leukemia (AML).
It is also an FDA-approved active ingredient for topical analgesic use for the temporary relief of minor aches and pains of muscles and joints associated with arthritis, simple backache, bruises, sprains, and strains and is available in over-the-counter (OTC) products such as Australian Dream and Golden Creme.
Use in leukemia:
Histamine dihydrochloride is administered in conjunction with low doses of the immune-activating cytokine interleukin-2 (IL-2) in the post-remission phase of AML, i.e. when patients have completed the initial chemotherapy. This combination has been reported to significantly reduce the risk of relapse in AML. The effect is particularly pronounced in patients in their first remission who are below the age of 60. The combination of histamine dihydrochloride and interleukin-2 was approved for use in AML patients within the European Union in October 2008 and will be marketed in the EU by the Swedish pharmaceutical company Meda. The drug is also available through a named patient program in several other countries (excluding the US).
Proposed mechanism of action:
Histamine dihydrochloride acts by improving the immune-enhancing properties of IL-2, and laboratory studies have shown that this combination can induce immune-mediated killing of leukemic cells. The treatment (in the form of subcutaneous injections) is given in 3-week cycles by the patients at home for 18 months, thus coinciding with the period of highest relapse risk. The side-effects include transient flush and headache, whereas IL-2 may induce low-grade fever and inflammation at the site of injection. Histamine dihydrochloride has been developed by researchers at the University of Gothenburg, Sweden. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zoo Biology**
Zoo Biology:
Zoo Biology is a peer-reviewed scientific journal "concerned with reproduction, demographics, genetics, behavior, medicine, husbandry, nutrition, conservation and all empirical aspects of the exhibition and maintenance of wild animals in wildlife parks, zoos, and aquariums." It is published by Wiley-Liss. The executive editor is Bethany L. Krebs (San Francisco Zoological Society). According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.421, ranking it 79th out of 146 journals in the category "Veterinary Sciences" and 91st out of 175 journals in the category "Zoology". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vybrid Series**
Vybrid Series:
The Vybrid Series is a low-power system on chip from Freescale Semiconductor with an ARM Cortex-A5 core and an optional Cortex-M4 core. The full-featured VF6xx provides asymmetric multiprocessing using both cores. Lower-cost alternatives such as the VF5xx and VF3xx support only the ARM Cortex-A5. The ARM Cortex-A5 core runs at 266 MHz to 500 MHz depending on package options, and the ARM Cortex-M4 core, if present, runs at 168 MHz. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zero insertion force**
Zero insertion force:
Zero insertion force (ZIF) is a type of IC socket or electrical connector that requires very little (but not literally zero) force for insertion. With a ZIF socket, before the IC is inserted, a lever or slider on the side of the socket is moved, pushing all the sprung contacts apart so that the IC can be inserted with very little force - generally the weight of the IC itself is sufficient and no external downward force is required. The lever is then moved back, allowing the contacts to close and grip the pins of the IC. ZIF sockets are much more expensive than standard IC sockets and also tend to take up a larger board area due to the space taken up by the lever mechanism. Typically, they are only used when there is a good reason to do so.
Design:
A normal integrated circuit (IC) socket requires the IC to be pushed into sprung contacts which then grip by friction. For an IC with hundreds of pins, the total insertion force can be very large (hundreds of newtons), leading to a danger of damage to the device or the circuit board. Also, even with relatively small pin counts, each pin extraction is fairly awkward and carries a significant risk of bending pins, particularly if the person performing the extraction hasn't had much practice or if the board is crowded. Low insertion force (LIF) sockets reduce the issues of insertion and extraction, but because of their lower insertion force compared with a conventional socket, they are likely to produce less reliable connections.
Design:
Large ZIF sockets are only commonly found mounted on PC motherboards, being used from about the mid 1990s forward. These CPU sockets are designed to support a particular range of CPUs, allowing computer retailers and consumers to assemble motherboard/CPU combinations based on individual budget and requirements. The rest of the electronics industry has largely abandoned sockets (of any kind) and instead moved to the use of surface mount components soldered directly to the board.
Design:
Smaller ZIF sockets are commonly used in chip-testing and programming equipment, e.g., programming and testing on EEPROMs, Microcontrollers, etc.
Universal test sockets:
Standard DIP packages come in a number of widths (measured between pin centers), with 0.3 in (7.62 mm) and 0.6 in (15.24 mm) being the most common. To allow design of programmers and similar devices that support a range of devices universal test sockets are produced. These have wide slots into which the pins drop allowing devices of differing widths to be inserted.
Ball grid array sockets:
ZIF sockets can be used for ball grid array chips, particularly during development. These sockets tend to be unreliable, failing to grab all the solder balls. Another type of BGA socket, also free of insertion force but not a "ZIF socket" in the traditional sense, does a better job by using spring pins to push up underneath the balls.
ZIF wire-to-board connectors:
ZIF wire-to-board connectors are used for attaching wires to printed circuit boards inside electronic equipment. An example would be the cable between the LCD screen and motherboard in laptops. The wires, often formed into a ribbon cable, are pre-stripped and the bare ends placed inside the connector. The two sliding parts of the connector are then pushed together, causing it to grip the wires. The most important advantage of this system is that it does not require a mating half to be fitted to the wire ends, therefore saving space and cost inside miniaturised equipment. See flexible flat cable.
Hard disk drives:
ZIF tape connections are used for connecting Parallel ATA and Serial ATA disk drives (mostly drives in the 1.8-inch form factor). PATA hard drives with ZIF-style connectors were used primarily in the design of ultra-portable notebooks. They have since been phased out, as SATA has a relatively small-form-factor connector by default. Mini-SATA (mSATA) can be used where even smaller form factors are required.
Hard disk drives:
Internally, nearly all hard drives use ZIF tape to connect their circuit board to their platter motor. ZIF tape connections are also heavily used in the design of the iPod range of portable media players, not just for the hard drive but also for other connections from the main circuit board. Three types of ZIF connectors are known to exist on 1.8 inch PATA drives. ZIF-24, ZIF-40, and ZIF-50 have 24, 40, and 50 pins respectively.
**HELRAM**
HELRAM:
The Northrop Grumman High Energy Laser for Rockets, Artillery and Mortars (HELRAM) system is a ground-based directed energy weapon intended to be used mainly against short-range ballistic targets.
It is supposed to be able to shoot down mortar bombs, rocket-propelled mortar bombs, artillery shells and artillery rockets by pointing a high-energy laser beam at them, thereby causing them to explode in the air.
Northrop Grumman unveiled the HELRAM concept in October 2004, saying that it "could be available within 18 months of a contract".
Its technology is based on that of the THEL system.
**Somitomere**
Somitomere:
In the developing vertebrate embryo, the somitomeres (or somatomeres) are collections of cells that are derived from the loose masses of paraxial mesoderm found alongside the developing neural tube. In human embryogenesis they appear towards the end of the third gestational week. The approximately 50 pairs of somitomeres in the human embryo begin developing in the cranial (head) region, continuing in a caudal (tail) direction until the end of week four.
Development:
The first seven somitomeres give rise to the striated muscles of the face, jaws, and throat. The remaining somitomeres, likely driven by periodic expression of the hairy gene, begin expressing adhesion proteins such as N-cadherin and fibronectin, compact, and bud off, forming somites. The somites give rise to the vertebral column (sclerotome), associated muscles (myotome), and overlying dermis (dermatome).
There are a total of 37 somite pairs at the end of the fifth week of development, after the first occipital somite and 5-7 coccygeal somites disappear from the original 42-44 somites.
**Praseodymium(III) acetate**
Praseodymium(III) acetate:
Praseodymium(III) acetate is an inorganic salt composed of praseodymium(III) cations and acetate anions in a 1:3 ratio. This compound commonly forms the dihydrate, Pr(O2C2H3)3·2H2O.
Preparation:
Praseodymium(III) acetate can be formed by the reaction of acetic acid with praseodymium(III) oxide on heating:

6 CH3COOH + Pr2O3 → 2 Pr(CH3COO)3 + 3 H2O

Praseodymium(III) carbonate and praseodymium(III) hydroxide can also be used:

6 CH3COOH + Pr2(CO3)3 → 2 Pr(CH3COO)3 + 3 H2O + 3 CO2↑

6 CH3COOH + 2 Pr(OH)3 → 2 Pr(CH3COO)3 + 6 H2O
Decomposition:
When the dihydrate is heated, it decomposes to the anhydrous salt, which then decomposes to praseodymium(III) oxyacetate (PrO(O2C2H3)), then to praseodymium(III) oxycarbonate, and finally to praseodymium(III) oxide.
**EF-Tu**
EF-Tu:
EF-Tu (elongation factor thermo unstable) is a prokaryotic elongation factor responsible for catalyzing the binding of an aminoacyl-tRNA (aa-tRNA) to the ribosome. It is a G-protein, and facilitates the selection and binding of an aa-tRNA to the A-site of the ribosome. As a reflection of its crucial role in translation, EF-Tu is one of the most abundant and highly conserved proteins in prokaryotes. It is found in eukaryotic mitochondria as TUFM. As a family of elongation factors, EF-Tu also includes its eukaryotic and archaeal homolog, the alpha subunit of eEF-1 (EF-1A).
Background:
Elongation factors are part of the mechanism that synthesizes new proteins through translation in the ribosome. Transfer RNAs (tRNAs) carry the individual amino acids that become integrated into a protein sequence, and have an anticodon for the specific amino acid that they are charged with. Messenger RNA (mRNA) carries the genetic information that encodes the primary structure of a protein, and contains codons that code for each amino acid. The ribosome creates the protein chain by following the mRNA code and integrating the amino acid of an aminoacyl-tRNA (also known as a charged tRNA) to the growing polypeptide chain. There are three sites on the ribosome for tRNA binding. These are the aminoacyl/acceptor site (abbreviated A), the peptidyl site (abbreviated P), and the exit site (abbreviated E). The P-site holds the tRNA connected to the polypeptide chain being synthesized, and the A-site is the binding site for a charged tRNA with an anticodon complementary to the mRNA codon associated with the site. After binding of a charged tRNA to the A-site, a peptide bond is formed between the growing polypeptide chain on the P-site tRNA and the amino acid of the A-site tRNA, and the entire polypeptide is transferred from the P-site tRNA to the A-site tRNA. Then, in a process catalyzed by the prokaryotic elongation factor EF-G (historically known as translocase), the coordinated translocation of the tRNAs and mRNA occurs, with the P-site tRNA moving to the E-site, where it dissociates from the ribosome, and the A-site tRNA moving to take its place in the P-site.
Biological functions:
Protein synthesis
EF-Tu participates in the polypeptide elongation process of protein synthesis. In prokaryotes, the primary function of EF-Tu is to transport the correct aa-tRNA to the A-site of the ribosome. As a G-protein, it uses GTP to facilitate its function. Outside of the ribosome, EF-Tu complexed with GTP (EF-Tu • GTP) complexes with aa-tRNA to form a stable EF-Tu • GTP • aa-tRNA ternary complex. EF-Tu • GTP binds all correctly-charged aa-tRNAs with approximately identical affinity, except those charged with initiation residues and selenocysteine. This can be accomplished because although different amino acid residues have varying side-chain properties, the tRNAs associated with those residues have varying structures to compensate for differences in side-chain binding affinities. The binding of an aa-tRNA to EF-Tu • GTP allows the ternary complex to be translocated to the A-site of an active ribosome, in which the anticodon of the tRNA binds to the codon of the mRNA. If the correct anticodon binds to the mRNA codon, the ribosome changes configuration and alters the geometry of the GTPase domain of EF-Tu, resulting in the hydrolysis of the GTP associated with the EF-Tu to GDP and Pi. As such, the ribosome functions as a GTPase-activating protein (GAP) for EF-Tu. Upon GTP hydrolysis, the conformation of EF-Tu changes drastically and it dissociates from the aa-tRNA and ribosome complex. The aa-tRNA then fully enters the A-site, where its amino acid is brought near the P-site's polypeptide and the ribosome catalyzes the covalent transfer of the polypeptide onto the amino acid. In the cytoplasm, the deactivated EF-Tu • GDP is acted on by the prokaryotic elongation factor EF-Ts, which causes EF-Tu to release its bound GDP. Upon dissociation of EF-Ts, EF-Tu is able to complex with a GTP due to the 5- to 10-fold higher concentration of GTP than GDP in the cytoplasm, resulting in reactivated EF-Tu • GTP, which can then associate with another aa-tRNA.
Biological functions:
Maintaining translational accuracy
EF-Tu contributes to translational accuracy in three ways. In translation, a fundamental problem is that near-cognate anticodons have similar binding affinity to a codon as cognate anticodons, such that anticodon-codon binding in the ribosome alone is not sufficient to maintain high translational fidelity. This is addressed by the ribosome not activating the GTPase activity of EF-Tu if the tRNA in the ribosome's A-site does not match the mRNA codon, thus preferentially increasing the likelihood that the incorrect tRNA will leave the ribosome. Additionally, regardless of tRNA matching, EF-Tu also induces a delay after freeing itself from the aa-tRNA, before the aa-tRNA fully enters the A-site (a process called accommodation). This delay period is a second opportunity for incorrectly charged aa-tRNAs to move out of the A-site before the incorrect amino acid is irreversibly added to the polypeptide chain. A third mechanism is the less well understood function of EF-Tu to crudely check aa-tRNA associations and reject complexes where the amino acid is not bound to the correct tRNA coding for it.
Biological functions:
Other functions
EF-Tu has been found in large quantities in the cytoskeletons of bacteria, co-localizing underneath the cell membrane with MreB, a cytoskeletal element that maintains cell shape. Defects in EF-Tu have been shown to result in defects in bacterial morphology. Additionally, EF-Tu has displayed some chaperone-like characteristics, with some experimental evidence suggesting that it promotes the refolding of a number of denatured proteins in vitro.
Structure:
EF-Tu is a monomeric protein with a molecular weight of around 43 kDa in Escherichia coli. The protein consists of three structural domains: a GTP-binding domain and two oligonucleotide-binding domains, often referred to as domain 2 and domain 3. The N-terminal domain I of EF-Tu is the GTP-binding domain. It consists of a six beta-strand core flanked by six alpha-helices. Domains II and III of EF-Tu, the oligonucleotide-binding domains, both adopt beta-barrel structures. The GTP-binding domain I undergoes a dramatic conformational change upon GTP hydrolysis to GDP, allowing EF-Tu to dissociate from aa-tRNA and leave the ribosome. Reactivation of EF-Tu is achieved by GTP binding in the cytoplasm, which leads to a significant conformational change that reactivates the tRNA-binding site of EF-Tu. In particular, GTP binding to EF-Tu results in a ~90° rotation of domain I relative to domains II and III, exposing the residues of the tRNA-binding active site. Domain 2 adopts a beta-barrel structure, and is involved in binding to charged tRNA. This domain is structurally related to the C-terminal domain of EF-2, to which it displays weak sequence similarity. This domain is also found in other proteins such as translation initiation factor IF-2 and tetracycline-resistance proteins. Domain 3 is the C-terminal domain, which adopts a beta-barrel structure and is involved in binding to both charged tRNA and to EF1B (or EF-Ts).
Structure:
Evolution
The GTP-binding domain is conserved in both EF-1alpha/EF-Tu and in EF-2/EF-G, and thus seems typical of GTP-dependent proteins which bind non-initiator tRNAs to the ribosome. The GTP-binding protein synthesis factor family also includes the eukaryotic peptide chain release factor GTP-binding subunits and prokaryotic peptide chain release factor 3 (RF-3); the prokaryotic GTP-binding protein LepA and its homologue in yeast (GUF1) and Caenorhabditis elegans (ZK1236.1); yeast HBS1; rat statin S1; and the prokaryotic selenocysteine-specific elongation factor SelB.
Disease relevance:
Along with the ribosome, EF-Tu is one of the most important targets for antibiotic-mediated inhibition of translation. Antibiotics targeting EF-Tu can be categorized into one of two groups, depending on the mechanism of action, and one of four structural families. The first group includes the antibiotics pulvomycin and GE2270A, and inhibits the formation of the ternary complex. The second group includes the antibiotics kirromycin and enacyloxin, and prevents the release of EF-Tu from the ribosome after GTP hydrolysis.
**SGMS1**
SGMS1:
Phosphatidylcholine:ceramide cholinephosphotransferase 1 is an enzyme that in humans is encoded by the SGMS1 gene.
Function:
The protein encoded by this gene is predicted to be a five-pass transmembrane protein. This gene may be predominantly expressed in brain.
Model organisms:
Model organisms have been used in the study of SGMS1 function. A conditional knockout mouse line called Sgms1tm1a(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. An additional in-depth immunological phenotyping screen was also performed.
**Virial coefficient**
Virial coefficient:
Virial coefficients Bi appear as coefficients in the virial expansion of the pressure of a many-particle system in powers of the density, providing systematic corrections to the ideal gas law. They are characteristic of the interaction potential between the particles and in general depend on the temperature. The second virial coefficient B2 depends only on the pair interaction between the particles, the third (B3) depends on 2-body and non-additive 3-body interactions, and so on.
Derivation:
The first step in obtaining a closed expression for virial coefficients is a cluster expansion of the grand canonical partition function

$$\Xi = \sum_{n} \lambda^{n} Q_{n} = e^{pV/(k_B T)}$$

Here p is the pressure, V is the volume of the vessel containing the particles, kB is Boltzmann's constant, T is the absolute temperature, and λ = exp[μ/(kB T)] is the fugacity, with μ the chemical potential. The quantity Qn is the canonical partition function of a subsystem of n particles:

$$Q_n = \operatorname{tr}\left[ e^{-H(1,2,\ldots,n)/(k_B T)} \right]$$
Derivation:
Here H(1,2,…,n) is the Hamiltonian (energy operator) of a subsystem of n particles. The Hamiltonian is a sum of the kinetic energies of the particles and the total n-particle potential energy (interaction energy). The latter includes pair interactions and possibly 3-body and higher-body interactions. The grand partition function Ξ can be expanded in a sum of contributions from one-body, two-body, etc. clusters. The virial expansion is obtained from this expansion by observing that ln Ξ equals pV/(kB T). In this manner one derives

$$B_2 = V\left(\frac{1}{2} - \frac{Q_2}{Q_1^2}\right)$$

$$B_3 = V^2\left[\frac{2Q_2}{Q_1^2}\left(\frac{2Q_2}{Q_1^2} - 1\right) - \frac{1}{3}\left(\frac{6Q_3}{Q_1^3} - 1\right)\right]$$

These are quantum-statistical expressions containing kinetic energies. Note that the one-particle partition function Q1 contains only a kinetic energy term. In the classical limit ℏ → 0 the kinetic energy operators commute with the potential operators, and the kinetic energies in numerator and denominator cancel mutually. The trace (tr) becomes an integral over the configuration space. It follows that classical virial coefficients depend on the interactions between the particles only and are given as integrals over the particle coordinates.
Derivation:
The derivation of virial coefficients beyond B3 quickly becomes a complex combinatorial problem. Making the classical approximation and neglecting non-additive interactions (if present), the combinatorics can be handled graphically, as first shown by Joseph E. Mayer and Maria Goeppert-Mayer. They introduced what is now known as the Mayer function:

$$f(1,2) = \exp\left[-\frac{u(|\vec{r}_1 - \vec{r}_2|)}{k_B T}\right] - 1$$

and wrote the cluster expansion in terms of these functions. Here u(|r⃗1 − r⃗2|) is the interaction potential between particles 1 and 2 (which are assumed to be identical particles).
Definition in terms of graphs:
The virial coefficients Bi are related to the irreducible Mayer cluster integrals βi through

$$B_{i+1} = -\frac{i}{i+1}\,\beta_i$$

The latter are concisely defined in terms of graphs.
Definition in terms of graphs:
βi is the sum of all connected, irreducible graphs with one white and i black vertices. The rule for turning these graphs into integrals is as follows. Take a graph and label its white vertex by k = 0 and the remaining black vertices with k = 1, ..., i. Associate a labelled coordinate k to each of the vertices, representing the continuous degrees of freedom associated with that particle; the coordinate 0 is reserved for the white vertex. With each bond linking two vertices, associate the Mayer f-function corresponding to the interparticle potential. Integrate over all coordinates assigned to the black vertices. Finally, multiply the end result by the symmetry number of the graph, defined as the inverse of the number of permutations of the black labelled vertices that leave the graph topologically invariant. (The graphs giving the first two cluster integrals are not reproduced here.) The expression of the second virial coefficient is thus:

$$B_2 = -2\pi \int r^2 \left(e^{-u(r)/(k_B T)} - 1\right) dr,$$

where particle 2 was assumed to define the origin (r⃗2 = 0⃗).
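As a numerical illustration of the classical formula above, the following minimal Python sketch (not part of the original article) evaluates B2(T) by direct quadrature for an assumed Lennard-Jones pair potential; the potential, the reduced units and the integration cutoffs are illustrative choices.

```python
# Minimal sketch: B2(T) = -2*pi * Integral r^2 (exp(-u(r)/(kB*T)) - 1) dr
# for an assumed Lennard-Jones potential, in reduced units (epsilon = sigma = kB = 1).
import numpy as np
from scipy.integrate import quad

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Assumed pair potential u(r); any short-ranged u(r) could be substituted."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

def second_virial(T, kB=1.0, r_min=1e-6, r_max=50.0):
    """Classical B2(T), in units of sigma^3, from the integral quoted above."""
    integrand = lambda r: r**2 * (np.exp(-lennard_jones(r) / (kB * T)) - 1.0)
    value, _ = quad(integrand, r_min, r_max, limit=200)
    return -2.0 * np.pi * value

if __name__ == "__main__":
    for T in (0.8, 1.3, 3.0):          # reduced temperatures (illustrative)
        print(f"T* = {T}:  B2* = {second_virial(T):+.3f}")
```

At low reduced temperature the attractive well dominates and B2 is negative; it changes sign near the Boyle temperature, where attraction and repulsion balance.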
Definition in terms of graphs:
This classical expression for the second virial coefficient was first derived by Leonard Ornstein in his 1908 Leiden University Ph.D. thesis.
**Hipertext.net**
Hipertext.net:
Hipertext.net is a biannual open-access, peer-reviewed academic journal covering all aspects of information, documentation and archives in the digital world and interactive communication. It is published by the Information Science Section of the Communication Department of Pompeu Fabra University and was established in 2003 by Cristòfol Rovira, Lluís Codina, and Mari-Carmen Marcos (Pompeu Fabra University).
The scope of the journal is the investigation of the connections between user experience and the semantic web, including issues of library and information science: accessibility, management tools, online cultural resources, museum websites, Science 2.0, cybermedia, Libraries 2.0, SEO, visibility, CMS, online media, information architecture, taxonomy in websites, usability, web design, etc.
**Wall (Unix)**
Wall (Unix):
wall (an abbreviation of write to all) is a Unix command-line utility that displays the contents of a computer file or standard input to all logged-in users. It is typically used by root to broadcast a shutdown message to all users just before powering off.
Invocation:
wall reads the message from standard input by default when the filename is omitted. The message can be supplied by piping the output of the echo command into wall; by typing wall, pressing ↵ Enter, entering the message, and pressing ↵ Enter followed by Ctrl+D (much the same way cat is used); by using a here-string; or by reading from a file (hedged examples of each form are sketched below). All of these should display the message on the terminals of users who allow write access (see mesg(1)).
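A minimal sketch of these invocation forms (the message text and the file path are illustrative; the exact banner printed on other users' terminals varies between implementations):

```sh
# 1) piping the output of echo into wall
echo "System going down for maintenance in 5 minutes" | wall

# 2) interactive use (typed at a terminal): run `wall`, enter the message,
#    then press Ctrl+D to send it, much the same way cat is used

# 3) here-string (bash/zsh)
wall <<< "System going down for maintenance in 5 minutes"

# 4) reading the message from a file
wall /tmp/shutdown-notice.txt
```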
**R-406A**
R-406A:
R-406A is a refrigerant invented by George H. Goble. It is a mixture of three components: chlorodifluoromethane (R-22), isobutane (R-600a), and chlorodifluoroethane (R-142b) in the ratio 55/4/41. This refrigerant was designed as a drop-in replacement for dichlorodifluoromethane (R-12) which is compatible with the typical mineral oil lubricants used in R-12 systems. Since it is a zeotropic mixture, it has a range of boiling points, which may increase the effectiveness of the heat transfer elements in refrigeration equipment.
R-406A:
Because it contains R-22, its future is limited due to eventual phase-out of this refrigerant.
**Ebselen**
Ebselen:
Ebselen (also called PZ 51, DR3305, and SPI-1005) is a synthetic organoselenium drug molecule with anti-inflammatory, anti-oxidant and cytoprotective activity. It acts as a mimic of glutathione peroxidase and can also react with peroxynitrite. It is being investigated as a possible treatment for reperfusion injury and stroke, Ménière's disease, hearing loss and tinnitus, and bipolar disorder. Additionally, ebselen may be effective against Clostridioides difficile infections and has been shown to have antifungal activity against Aspergillus fumigatus. Ebselen is a potent scavenger of hydrogen peroxide as well as hydroperoxides, including membrane-bound phospholipid and cholesterylester hydroperoxides. Several ebselen analogs have been shown to scavenge hydrogen peroxide in the presence of thiols.
Possible anti-COVID-19 activity:
Preliminary studies demonstrate that Ebselen exhibits promising inhibitory activity against COVID-19 in cell-based assays. The effect was attributed to irreversible inhibition of the main protease via a covalent bond formation with the thiol group of the active center's cysteine (Cys-145).
Synthesis:
Generally, synthesis of the characteristic scaffold of ebselen, the benzoisoselenazolone ring system, can be achieved through reaction of primary amines (RNH2) with 2-(chloroseleno)benzoyl chloride (Route I), by ortho-lithiation of benzanilides followed by oxidative cyclization mediated by cupric bromide (CuBr2) (Route II), or through the efficient Cu-catalyzed selenation/heterocyclization of o-halobenzamides, a methodology developed by Kumar et al. (Route III).
History:
The first patent for 2-Phenyl-1,2-benzoselenazol-3(2H)-one was filed in 1980 and granted in 1982.
**Canadian Journal of Biochemistry and Physiology**
Canadian Journal of Biochemistry and Physiology:
The Canadian Journal of Biochemistry and Physiology is a defunct peer-reviewed scientific journal of biochemistry and physiology, established in 1954 as the continuation of the Canadian Journal of Medical Sciences and published by NRC Research Press. In 1964 it split into two journals: the Canadian Journal of Biochemistry and the Canadian Journal of Physiology and Pharmacology.
Canadian Journal of Biochemistry and Physiology:
During its life the Canadian Journal of Biochemistry and Physiology published almost 2000 papers, of which one alone had been cited 48,000 times by early 2023, accounting for half of the total citations for the journal. The journal was particularly strong in early enzymology, including, for example, a study of the spectrophotometric determination of various proteolytic enzymes, and one of the first systematic treatments of the kinetics of two-substrate reactions, antedating the better known work of W. Wallace Cleland.
**VideoThang**
VideoThang:
VideoThang was free video editing software for Windows 2000, XP, and Vista. The software has three parts: My Stuff, Edit My Stuff, and My Mix. It accepts the MOV, AVI, MPG, MP4, PNG, WMV, FLV, and MP3 formats. Its official website is no longer available.
Reception:
Jan Ozer, of PCMag, said that the software "suffers from several unfortunate design and implementation flaws that dramatically limit output quality and overall utility." Jon L. Jacobi, of PC World, said that the software "may not be the most flexible multimedia editor in the world, but the trim/zoom basics are there, it's free, and it's so simple to use that just about anyone in the world should be able figure it out." Amit Agarwal, of Digital Inspiration, said that the software "doesn’t offer loads of features like other video editors but is perfect for making quick video slideshows of your pictures that you can upload on the web or share via email."
**Lindbladian**
Lindbladian:
In quantum mechanics, the Gorini–Kossakowski–Sudarshan–Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad), master equation in Lindblad form, quantum Liouvillian, or Lindbladian is one of the general forms of Markovian master equations describing open quantum systems. It generalizes the Schrödinger equation to open quantum systems; that is, systems in contact with their surroundings. The resulting dynamics is no longer unitary, but still satisfies the property of being trace-preserving and completely positive for any initial condition. The Schrödinger equation or, more precisely, the von Neumann equation, is a special case of the GKSL equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation. The Schrödinger equation deals with state vectors, which can only describe pure quantum states and are thus less general than density matrices, which can describe mixed states as well.
Motivation:
In the canonical formulation of quantum mechanics, a system's time evolution is governed by unitary dynamics. This implies that there is no decay and that phase coherence is maintained throughout the process, and is a consequence of the fact that all participating degrees of freedom are considered. However, any real physical system is not absolutely isolated, and will interact with its environment. This interaction with degrees of freedom external to the system results in dissipation of energy into the surroundings, causing decay and randomization of phase. Moreover, understanding the interaction of a quantum system with its environment is necessary for understanding many commonly observed phenomena, such as the spontaneous emission of light from excited atoms, or the performance of many quantum technological devices, like the laser.
Motivation:
Certain mathematical techniques have been introduced to treat the interaction of a quantum system with its environment. One of these is the use of the density matrix, and its associated master equation. While in principle this approach to solving quantum dynamics is equivalent to the Schrödinger picture or Heisenberg picture, it allows more easily for the inclusion of incoherent processes, which represent environmental interactions. The density operator has the property that it can represent a classical mixture of quantum states, and is thus vital to accurately describe the dynamics of so-called open quantum systems.
Definition:
The Lindblad master equation for the system's density matrix ρ can be written as

$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_i \gamma_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{ L_i^\dagger L_i, \rho \right\} \right)$$

where {a,b} = ab + ba is the anticommutator, H is the system Hamiltonian, describing the unitary aspects of the dynamics, and the Li are a set of jump operators describing the dissipative part of the dynamics. The form of the jump operators describes how the environment acts on the system, and must ultimately be determined from microscopic models of the system-environment dynamics. Finally, the γi ≥ 0 are a set of non-negative coefficients called damping rates. If all γi = 0, one recovers the von Neumann equation ρ̇ = −(i/ℏ)[H,ρ] describing unitary dynamics, which is the quantum analog of the classical Liouville equation.
Definition:
More generally, the GKSL equation has the form

$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_{n,m} h_{nm} \left( A_n \rho A_m^\dagger - \frac{1}{2}\left\{ A_m^\dagger A_n, \rho \right\} \right)$$

where the Am are arbitrary operators and h is a positive semidefinite matrix. The latter is a strict requirement to ensure the dynamics is trace-preserving and completely positive. The number of Am operators is arbitrary, and they do not have to satisfy any special properties. But if the system is N-dimensional, it can be shown that the master equation can be fully described by a set of N² − 1 operators, provided they form a basis for the space of operators. Since the matrix h is positive semidefinite, it can be diagonalized with a unitary transformation u:

$$u^\dagger h u = \operatorname{diag}(\gamma_1, \gamma_2, \ldots, \gamma_{N^2-1})$$

where the eigenvalues γi are non-negative. If we define another orthonormal operator basis

$$L_i = \sum_j u_{ji} A_j$$

this reduces the master equation to the same form as before:

$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_i \gamma_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{ L_i^\dagger L_i, \rho \right\} \right)$$

Quantum dynamical semigroup
The maps generated by a Lindbladian for various times are collectively referred to as a quantum dynamical semigroup, a family of quantum dynamical maps ϕt on the space of density matrices indexed by a single time parameter t ≥ 0 that obey the semigroup property

$$\phi_{t+s} = \phi_t \circ \phi_s, \qquad t, s \geq 0.$$
Definition:
The Lindblad equation can be obtained by

$$\mathcal{L}(\rho) = \lim_{\Delta t \to 0} \frac{\phi_{\Delta t}(\rho) - \phi_0(\rho)}{\Delta t}$$

which, by the linearity of ϕt, is a linear superoperator. The semigroup can be recovered as

$$\phi_{t+s}(\rho) = e^{\mathcal{L}s}\,\phi_t(\rho).$$
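As a concrete illustration of the diagonalization step described above, the following short numerical sketch (not from the source) takes an assumed positive semidefinite coefficient matrix h and arbitrary operators A_m, forms L_i = Σ_j u_{ji} A_j from the eigendecomposition of h, and checks that the dissipative part is unchanged. Both h and the A_m below are illustrative assumptions.

```python
# Minimal sketch of reducing the general GKSL form to diagonal form.
import numpy as np

# two arbitrary operators on a qubit (illustrative choice)
A = [np.array([[0, 1], [0, 0]], dtype=complex),   # lowering-type operator
     np.array([[0, 0], [1, 0]], dtype=complex)]   # raising-type operator

# an assumed positive semidefinite coefficient matrix h_{nm}
h = np.array([[0.5, 0.2],
              [0.2, 0.3]], dtype=complex)

# eigendecomposition h = u diag(gamma) u†  (so u† h u is diagonal, as in the text)
gamma, u = np.linalg.eigh(h)
assert np.all(gamma >= -1e-12), "h must be positive semidefinite"

# L_i = sum_j u_{ji} A_j  (note the index order used in the text)
L = [sum(u[j, i] * A[j] for j in range(len(A))) for i in range(len(A))]

# check: sum_{nm} h_{nm} A_n rho A_m† equals sum_i gamma_i L_i rho L_i† for a test rho
rho = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)
lhs = sum(h[n, m] * A[n] @ rho @ A[m].conj().T
          for n in range(2) for m in range(2))
rhs = sum(gamma[i] * L[i] @ rho @ L[i].conj().T for i in range(2))
print(np.allclose(lhs, rhs))   # True: the two forms of the dissipator agree
```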
Invariance properties
The Lindblad equation is invariant under any unitary transformation v of the Lindblad operators and constants,

$$\sqrt{\gamma_i}\, L_i \to \sqrt{\gamma_i'}\, L_i' = \sum_{j} v_{ij}\, \sqrt{\gamma_j}\, L_j ,$$

and also under the inhomogeneous transformation

$$L_i \to L_i' = L_i + a_i I, \qquad H \to H' = H + \frac{1}{2i}\sum_j \gamma_j \left( a_j^* L_j - a_j L_j^\dagger \right) + b I,$$

where the ai are complex numbers and b is a real number.
However, the first transformation destroys the orthonormality of the operators Li (unless all the γi are equal) and the second transformation destroys the tracelessness. Therefore, up to degeneracies among the γi, the Li of the diagonal form of the Lindblad equation are uniquely determined by the dynamics so long as we require them to be orthonormal and traceless.
Heisenberg picture
The Lindblad-type evolution of the density matrix in the Schrödinger picture can be equivalently described in the Heisenberg picture using the following (diagonalized) equation of motion for each quantum observable X:

$$\dot X = \frac{i}{\hbar}[H,X] + \sum_i \gamma_i \left( L_i^\dagger X L_i - \frac{1}{2}\left\{ L_i^\dagger L_i, X \right\} \right).$$
A similar equation describes the time evolution of the expectation values of observables, given by the Ehrenfest theorem.
Corresponding to the trace-preserving property of the Schrödinger picture Lindblad equation, the Heisenberg picture equation is unital, i.e. it preserves the identity operator.
Physical derivation:
The Lindblad master equation describes the evolution of various types of open quantum systems, e.g. a system weakly coupled to a Markovian reservoir.
Note that the H appearing in the equation is not necessarily equal to the bare system Hamiltonian, but may also incorporate effective unitary dynamics arising from the system-environment interaction.
Physical derivation:
A heuristic derivation, e.g., in the notes by Preskill, begins with a more general form of an open quantum system and converts it into Lindblad form by making the Markovian assumption and expanding in small time. A more physically motivated standard treatment covers three common types of derivations of the Lindbladian starting from a Hamiltonian acting on both the system and environment: the weak coupling limit (described in detail below), the low density approximation, and the singular coupling limit. Each of these relies on specific physical assumptions regarding, e.g., correlation functions of the environment. For example, in the weak coupling limit derivation, one typically assumes that (a) correlations of the system with the environment develop slowly, (b) excitations of the environment caused by the system decay quickly, and (c) terms which are fast-oscillating when compared to the system timescale of interest can be neglected. These three approximations are called Born, Markov, and rotating wave, respectively.

The weak-coupling limit derivation assumes a quantum system with a finite number of degrees of freedom coupled to a bath containing an infinite number of degrees of freedom. The system and bath each possess a Hamiltonian written in terms of operators acting only on the respective subspace of the total Hilbert space. These Hamiltonians govern the internal dynamics of the uncoupled system and bath. There is a third Hamiltonian that contains products of system and bath operators, thus coupling the system and bath. The most general form of this Hamiltonian is

$$H = H_S + H_B + H_{BS}$$

The dynamics of the entire system can be described by the Liouville equation of motion, χ̇ = −i[H, χ]. This equation, containing an infinite number of degrees of freedom, is impossible to solve analytically except in very particular cases. What is more, under certain approximations, the bath degrees of freedom need not be considered, and an effective master equation can be derived in terms of the system density matrix, ρ = tr_B χ. The problem can be analyzed more easily by moving into the interaction picture, defined by the unitary transformation M̃ = U_0 M U_0†, where M is an arbitrary operator and U_0 = e^{i(H_S + H_B)t}. Also note that U(t, t_0) is the total unitary operator of the entire system. It is straightforward to confirm that the Liouville equation becomes

$$\dot{\tilde\chi} = -i[\tilde H_{BS}, \tilde\chi]$$

where the Hamiltonian $\tilde H_{BS} = e^{i(H_S + H_B)t} H_{BS} e^{-i(H_S + H_B)t}$ is explicitly time dependent. Also, according to the interaction picture, $\tilde\chi = U_{BS}(t, t_0)\, \chi\, U_{BS}^\dagger(t, t_0)$, where $U_{BS} = U_0^\dagger U(t, t_0)$. This equation can be integrated directly to give

$$\tilde\chi(t) = \tilde\chi(0) - i\int_0^t dt'\, \left[\tilde H_{BS}(t'), \tilde\chi(t')\right]$$

This implicit equation for χ̃ can be substituted back into the Liouville equation to obtain an exact integro-differential equation

$$\dot{\tilde\chi} = -i\left[\tilde H_{BS}(t), \tilde\chi(0)\right] - \int_0^t dt'\, \left[\tilde H_{BS}(t), \left[\tilde H_{BS}(t'), \tilde\chi(t')\right]\right]$$

We proceed with the derivation by assuming the interaction is initiated at t = 0, and that at that time there are no correlations between the system and the bath. This implies that the initial condition is factorable as χ(0) = ρ(0) R_0, where R_0 is the density operator of the bath initially.
Physical derivation:
Tracing over the bath degrees of freedom, tr_R χ̃ = ρ̃, the aforementioned integro-differential equation yields

$$\dot{\tilde\rho} = -\int_0^t dt'\, \operatorname{tr}_R\left\{\left[\tilde H_{BS}(t), \left[\tilde H_{BS}(t'), \tilde\chi(t')\right]\right]\right\}$$

This equation is exact for the time dynamics of the system density matrix but requires full knowledge of the dynamics of the bath degrees of freedom. A simplifying assumption called the Born approximation rests on the largeness of the bath and the relative weakness of the coupling, which is to say the coupling of the system to the bath should not significantly alter the bath eigenstates. In this case the full density matrix is factorable for all times as χ̃(t) = ρ̃(t) R_0. The master equation becomes

$$\dot{\tilde\rho} = -\int_0^t dt'\, \operatorname{tr}_R\left\{\left[\tilde H_{BS}(t), \left[\tilde H_{BS}(t'), \tilde\rho(t') R_0\right]\right]\right\}$$

The equation is now explicit in the system degrees of freedom, but is very difficult to solve. A final assumption is the Born-Markov approximation: that the time derivative of the density matrix depends only on its current state, and not on its past. This assumption is valid under fast bath dynamics, wherein correlations within the bath are lost extremely quickly, and amounts to replacing ρ̃(t') → ρ̃(t) on the right-hand side of the equation.
Physical derivation:
$$\dot{\tilde\rho} = -\int_0^t dt'\, \operatorname{tr}_R\left\{\left[\tilde H_{BS}(t), \left[\tilde H_{BS}(t'), \tilde\rho(t) R_0\right]\right]\right\}$$

If the interaction Hamiltonian is assumed to have the form

$$H_{BS} = \sum_i \alpha_i \Gamma_i$$

for system operators αi and bath operators Γi, then H̃_BS = Σ_i α̃_i Γ̃_i. The master equation becomes

$$\dot{\tilde\rho} = -\sum_{i,j}\int_0^t dt'\, \operatorname{tr}_R\left\{\left[\tilde\alpha_i(t)\tilde\Gamma_i(t), \left[\tilde\alpha_j(t')\tilde\Gamma_j(t'), \tilde\rho(t) R_0\right]\right]\right\}$$

which can be expanded as

$$\dot{\tilde\rho} = -\sum_{i,j}\int_0^t dt'\, \Big[ \big(\tilde\alpha_i(t)\tilde\alpha_j(t')\tilde\rho(t) - \tilde\alpha_i(t)\tilde\rho(t)\tilde\alpha_j(t')\big)\langle \tilde\Gamma_i(t)\tilde\Gamma_j(t')\rangle + \big(\tilde\rho(t)\tilde\alpha_j(t')\tilde\alpha_i(t) - \tilde\alpha_j(t')\tilde\rho(t)\tilde\alpha_i(t)\big)\langle \tilde\Gamma_j(t')\tilde\Gamma_i(t)\rangle \Big]$$

The expectation values ⟨Γi Γj⟩ = tr{Γi Γj R0} are taken with respect to the bath degrees of freedom.
By assuming rapid decay of these correlations (ideally ⟨Γi(t)Γj(t′)⟩ ∝ δ(t−t′)), the above form of the Lindblad superoperator L is achieved.
Examples:
For one jump operator F and no unitary evolution, the Lindblad superoperator, acting on the density matrix ρ, is

$$D[F](\rho) = F\rho F^\dagger - \frac{1}{2}\left( F^\dagger F \rho + \rho F^\dagger F \right)$$

Such a term is found regularly in the Lindblad equation as used in quantum optics, where it can express absorption or emission of photons from a reservoir. If one wants to have both absorption and emission, one needs a jump operator for each. This leads to the most common Lindblad equation describing the damping of a quantum harmonic oscillator (representing e.g. a Fabry–Perot cavity) coupled to a thermal bath, with jump operators

$$F_1 = a, \quad \gamma_1 = \frac{\gamma}{2}(\bar n + 1), \qquad F_2 = a^\dagger, \quad \gamma_2 = \frac{\gamma}{2}\bar n.$$
Examples:
Here n̄ is the mean number of excitations in the reservoir damping the oscillator and γ is the decay rate. If we also add additional unitary evolution generated by the quantum harmonic oscillator Hamiltonian with frequency ωc, we obtain

$$\dot\rho = -i[\omega_c a^\dagger a, \rho] + \gamma_1 D[F_1](\rho) + \gamma_2 D[F_2](\rho).$$
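The damped-oscillator example above can also be checked numerically. The sketch below (not from the source) integrates the Lindblad equation for an oscillator truncated to a finite Fock space, with the two jump operators and rates quoted above; the truncation, parameter values and initial state are illustrative assumptions. The mean excitation number should relax towards the reservoir occupation n̄.

```python
# Minimal sketch: integrating the Lindblad equation for a damped, thermally
# driven harmonic oscillator truncated to N Fock states (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

N = 15                                   # Fock-space truncation
wc, gamma, nbar = 1.0, 0.2, 2.0          # frequency, decay rate, bath occupation
a = np.diag(np.sqrt(np.arange(1, N)), 1) # annihilation operator
ad = a.conj().T
H = wc * ad @ a

def dissipator(L, rho):
    """D[L](rho) = L rho L† - (1/2){L†L, rho}"""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

# jump operators and rates as quoted in the text above
jump_ops = [(0.5 * gamma * (nbar + 1), a), (0.5 * gamma * nbar, ad)]

def rhs(t, y):
    rho = y.reshape(N, N)
    drho = -1j * (H @ rho - rho @ H)
    for rate, L in jump_ops:
        drho += rate * dissipator(L, rho)
    return drho.ravel()

rho0 = np.zeros((N, N), dtype=complex)
rho0[5, 5] = 1.0                          # start in the Fock state |5>
ts = np.linspace(0.0, 60.0, 200)
sol = solve_ivp(rhs, (ts[0], ts[-1]), rho0.ravel(), t_eval=ts, rtol=1e-8)

for k in (0, 99, 199):
    rho = sol.y[:, k].reshape(N, N)
    print(f"t = {ts[k]:5.1f}   <n> = {np.trace(ad @ a @ rho).real:.3f}")
```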
Additional Lindblad operators can be included to model various forms of dephasing and vibrational relaxation. These methods have been incorporated into grid-based density matrix propagation methods.
**Bacterial taxonomy**
Bacterial taxonomy:
Bacterial taxonomy is a subfield of taxonomy devoted to the classification of bacteria specimens into taxonomic ranks.
Bacterial taxonomy:
In the scientific classification established by Carl Linnaeus, each species is assigned to a genus, resulting in a two-part name. This name denotes the two lowest levels in a hierarchy of ranks, increasingly larger groupings of species based on common traits. Of these ranks, domains are the most general level of categorization. Presently, scientists classify all life into just three domains: Eukaryotes, Bacteria and Archaea. Bacterial taxonomy is the classification of strains within the domain Bacteria into hierarchies of similarity. This classification is similar to that of plants, mammals, and other taxonomies. However, biologists specializing in different areas have developed differing taxonomic conventions over time. For example, bacterial taxonomists name types based on descriptions of strains. Zoologists, among others, use a type specimen instead.
Diversity:
Bacteria (prokaryotes, together with Archaea) share many common features. These commonalities include the lack of a nuclear membrane, unicellularity, division by binary fission and generally small size. The various species can be differentiated through the comparison of several characteristics, allowing their identification and classification. Examples include:
Phylogeny: All bacteria stem from a common ancestor and have diversified since, and consequently possess different levels of evolutionary relatedness (see Bacterial phyla and Timeline of evolution)
Metabolism: Different bacteria may have different metabolic abilities (see Microbial metabolism)
Environment: Different bacteria thrive in different environments, such as high/low temperature and salt (see Extremophiles)
Morphology: There are many structural differences between bacteria, such as cell shape, Gram stain (number of lipid bilayers) or bilayer composition (see Bacterial cellular morphologies, Bacterial cell structure)
History:
First descriptions
Bacteria were first observed by Antonie van Leeuwenhoek in 1676, using a single-lens microscope of his own design. He called them "animalcules" and published his observations in a series of letters to the Royal Society. Early described genera of bacteria include Vibrio and Monas, by O. F. Müller (1773, 1786), then classified as Infusoria (however, many species previously included in those genera are regarded today as protists); Polyangium, by H. F. Link (1809), the first bacterium still recognized today; Serratia, by Bizio (1823); and Spirillum, Spirochaeta and Bacterium, by Ehrenberg (1838). The term Bacterium, introduced as a genus by Ehrenberg in 1838, became a catch-all for rod-shaped cells.
History:
Early formal classifications
Bacteria were first classified as plants constituting the class Schizomycetes, which along with the Schizophyceae (blue-green algae/Cyanobacteria) formed the phylum Schizophyta. Haeckel in 1866 placed the group in the phylum Moneres (from μονήρης: simple) in the kingdom Protista and defined them as completely structureless and homogeneous organisms, consisting only of a piece of plasma. He subdivided the phylum into two groups. The first, the Gymnomoneren (no envelope), comprised:
Protogenes – such as Protogenes primordialis, now classed as a eukaryote and not a bacterium
Protamaeba – now classed as a eukaryote and not a bacterium
Vibrio – a genus of comma-shaped bacteria (first described in 1854)
Bacterium – a genus of rod-shaped bacteria first described in 1828, that later gave its name to the members of the Monera, formerly referred to as "a moneron" (plural "monera") in English and "eine Moneren" (fem. pl. "Moneres") in German
Bacillus – a genus of spore-forming rod-shaped bacteria first described in 1835
Spirochaeta – thin spiral-shaped bacteria first described in 1835
Spirillum – spiral-shaped bacteria first described in 1832
etc.
History:
The second group, the Lepomoneren (with envelope), comprised:
Protomonas – now classed as a eukaryote and not a bacterium. The name was reused in 1984 for an unrelated genus of Bacteria
Vampyrella – now classed as a eukaryote and not a bacterium
The classification of Ferdinand Cohn (1872) was influential in the nineteenth century, and recognized six genera: Micrococcus, Bacterium, Bacillus, Vibrio, Spirillum, and Spirochaeta. The group was later reclassified as the Prokaryotes by Chatton. The placement of the Cyanobacteria (colloquially "blue-green algae") has long been disputed between the algae and the bacteria (for example, Haeckel classified Nostoc in the phylum Archephyta of the Algae).
History:
In 1905, Erwin F. Smith accepted 33 valid different names of bacterial genera and over 150 invalid names, and Vuillemin, in a 1913 study, concluded that all species of the Bacteria should fall into the genera Planococcus, Streptococcus, Klebsiella, Merista, Planomerista, Neisseria, Sarcina, Planosarcina, Metabacterium, Clostridium, Serratia, Bacterium, and Spirillum.
History:
Cohn recognized four tribes: Spherobacteria, Microbacteria, Desmobacteria, and Spirobacteria. Stanier and van Niel recognized the kingdom Monera with two phyla, Myxophyta and Schizomycetae, the latter comprising classes Eubacteriae (three orders), Myxobacteriae (one order), and Spirochetae (one order). Bisset distinguished 1 class and 4 orders: Eubacteriales, Actinomycetales, Streptomycetales, and Flexibacteriales. Walter Migula's system, which was the most widely accepted system of its time and included all then-known species but was based only on morphology, contained the three basic groups Coccaceae, Bacillaceae, and Spirillaceae, but also Trichobacterinae for filamentous bacteria. Orla-Jensen established two orders: Cephalotrichinae (seven families) and Peritrichinae (presumably with only one family). Bergey et al. presented a classification which generally followed the 1920 Final Report of the Society of American Bacteriologists Committee (Winslow et al.), which divided the class Schizomycetes into four orders: Myxobacteriales, Thiobacteriales, Chlamydobacteriales, and Eubacteriales, with a fifth group being four genera considered intermediate between bacteria and protozoans: Spirocheta, Cristospira, Saprospira, and Treponema.
History:
However, different authors often reclassified the genera due to the lack of visible traits to go by, resulting in a poor state of affairs which was summarised in 1915 by Robert Earle Buchanan. By then, the whole group had received different ranks and names from different authors, namely: Schizomycetes (Naegeli 1857), Bacteriaceae (Cohn 1872a), Bacteria (Cohn 1872b) and Schizomycetaceae (DeToni and Trevisan 1889). Furthermore, the families into which the class was subdivided changed from author to author, and for some, such as Zipf, the names were in German and not in Latin. The first edition of the Bacteriological Code in 1947 sorted out several problems. A. R. Prévot's system had four subphyla and eight classes, as follows:
Eubacteriales (classes Asporulales and Sporulales)
Mycobacteriales (classes Actinomycetales, Myxobacteriales, and Azotobacteriales)
Algobacteriales (classes Siderobacteriales and Thiobacteriales)
Protozoobacteriales (class Spirochetales)
Informal groups based on Gram staining
Despite there being little agreement on the major subgroups of the Bacteria, Gram staining results were most commonly used as a classification tool. Consequently, until the advent of molecular phylogeny, the kingdom Prokaryota was divided into four divisions, a classification scheme still formally followed by Bergey's Manual of Systematic Bacteriology:
Gracilicutes (gram-negative): the Photobacteria (photosynthetic), comprising class Oxyphotobacteriae (water as electron donor; includes the order Cyanobacteriales = blue-green algae, now phylum Cyanobacteria) and class Anoxyphotobacteriae (anaerobic phototrophs; orders Rhodospirillales and Chlorobiales); and the Scotobacteria (non-photosynthetic, now the Proteobacteria and other gram-negative non-photosynthetic phyla)
Firmacutes [sic] (gram-positive, subsequently corrected to Firmicutes): several orders such as Bacillales and Actinomycetales (now in the phylum Actinobacteria)
Mollicutes (gram variable, e.g. Mycoplasma)
Mendocutes (uneven gram stain, "methanogenic bacteria", now known as the Archaea)
Molecular era
"Archaic bacteria" and Woese's reclassification
Woese argued that the bacteria, archaea, and eukaryotes represent separate lines of descent that diverged early on from an ancestral colony of organisms. However, a few biologists argue that the Archaea and Eukaryota arose from a group of bacteria. In any case, it is thought that viruses and archaea began relationships approximately two billion years ago, and that co-evolution may have been occurring between members of these groups. It is possible that the last common ancestor of the bacteria and archaea was a thermophile, which raises the possibility that lower temperatures are "extreme environments" in archaeal terms, and that organisms living in cooler environments appeared only later. Since the Archaea and Bacteria are no more related to each other than they are to eukaryotes, the term prokaryote's only surviving meaning is "not a eukaryote", limiting its value. With improved methodologies it became clear that the methanogenic bacteria were profoundly different and were (erroneously) believed to be relics of ancient bacteria; thus Carl Woese, regarded as the forerunner of the molecular phylogeny revolution, identified three primary lines of descent: the Archaebacteria, the Eubacteria, and the Urkaryotes, the latter now represented by the nucleocytoplasmic component of the Eukaryotes. These lineages were formalised into the rank of domain (regio in Latin), dividing life into three domains: the Eukaryota, the Archaea and the Bacteria.
History:
Subdivisions
In 1987 Carl Woese divided the Eubacteria into 11 divisions based on 16S ribosomal RNA (SSU) sequences, which with several additions are still used today.
Opposition
While the three domain system is widely accepted, some authors have opposed it for various reasons.
History:
One prominent scientist who opposes the three domain system is Thomas Cavalier-Smith, who proposed that the Archaea and the Eukaryotes (the Neomura) stem from Gram-positive bacteria (Posibacteria), which in turn derive from Gram-negative bacteria (Negibacteria). These proposals are based on several logical arguments, are highly controversial, and are generally disregarded by the molecular biology community (cf. reviewers' comments; Eric Bapteste, for example, is "agnostic" regarding the conclusions); they are often not mentioned in reviews due to the subjective nature of the assumptions made. However, despite there being a wealth of statistically supported studies towards rooting the tree of life between the Bacteria and the Neomura by means of a variety of methods, including some that are impervious to accelerated evolution (which is claimed by Cavalier-Smith to be the source of the supposed fallacy in molecular methods), there are a few studies which have drawn different conclusions, some of which place the root in the phylum Firmicutes with nested archaea. Radhey Gupta's molecular taxonomy, based on conserved signature sequences of proteins, includes a monophyletic Gram-negative clade, a monophyletic Gram-positive clade, and a polyphyletic Archeota derived from Gram-positives. Hori and Osawa's molecular analysis indicated a link between Metabacteria (=Archeota) and eukaryotes. The only cladistic analyses for bacteria based on classical evidence largely corroborate Gupta's results (see comprehensive mega-taxonomy).
History:
James Lake presented a 2 primary kingdom arrangement (Parkaryotae + eukaryotes and eocytes + Karyotae) and suggested a 5 primary kingdom scheme (Eukaryota, Eocyta, Methanobacteria, Halobacteria, and Eubacteria) based on ribosomal structure and a 4 primary kingdom scheme (Eukaryota, Eocyta, Methanobacteria, and Photocyta), bacteria being classified according to 3 major biochemical innovations: photosynthesis (Photocyta), methanogenesis (Methanobacteria), and sulfur respiration (Eocyta). He has also discovered evidence that Gram-negative bacteria arose from a symbiosis between 2 Gram-positive bacteria.
Authorities:
Classification is the grouping of organisms into progressively more inclusive groups based on phylogeny and phenotype, while nomenclature is the application of formal rules for naming organisms.
Nomenclature authority
Despite there being no official and complete classification of prokaryotes, the names (nomenclature) given to prokaryotes are regulated by the International Code of Nomenclature of Bacteria (Bacteriological Code), a book which contains general considerations, principles, rules, and various notes, and advises in a similar fashion to the nomenclature codes of other groups.
Authorities:
Classification authorities
As taxa proliferated, computer-aided taxonomic systems were developed. Early non-networked identification software entering widespread use was produced by Edwards (1978); Kellogg (1979); Schindler, Duben, and Lysenko (1979); Beers and Lockhard (1962); Gyllenberg (1965); Holmes and Hill (1985); Lapage et al. (1970); and Lapage et al. (1973). Today the taxa which have been correctly described are reviewed in Bergey's Manual of Systematic Bacteriology, which aims to aid in the identification of species and is considered the highest authority. An online version of the taxonomic outline of bacteria and archaea (TOBA) is available.
Authorities:
List of Prokaryotic names with Standing in Nomenclature (LPSN) is an online database which currently contains over two thousand accepted names with their references, etymologies and various notes.
Authorities:
Description of new species
The International Journal of Systematic Bacteriology/International Journal of Systematic and Evolutionary Microbiology (IJSB/IJSEM) is a peer-reviewed journal which acts as the official international forum for the publication of new prokaryotic taxa. If a species is published in a different peer-reviewed journal, the author can submit a request to IJSEM with the appropriate description; if the description is correct, the new species will be featured in the Validation List of IJSEM.
Authorities:
Distribution
Microbial culture collections are depositories of strains whose purpose is to safeguard them and to distribute them.
Analyses:
Bacteria were at first classified based solely on their shape (vibrio, bacillus, coccus etc.), presence of endospores, Gram stain, aerobic conditions and motility. This system changed with the study of metabolic phenotypes, where metabolic characteristics were used. Recently, with the advent of molecular phylogeny, several genes are used to identify species, the most important of which is the 16S rRNA gene, followed by 23S, the ITS region, gyrB and others to confirm a better resolution. The quickest way today to match an isolated strain to a species or genus is to amplify its 16S gene with universal primers, sequence the 1.4 kb amplicon and submit it to a specialised web-based identification database: either the Ribosomal Database Project, which aligns the sequence to other 16S sequences using Infernal, a secondary-structure-based global alignment, or ARB SILVA, which aligns sequences via SINA (SILVA incremental aligner), which does a local alignment of a seed and extends it. Several identification methods exist:
Phenotypic analyses: fatty acid analyses; growth conditions (agar plates, Biolog multiwell plates)
Genetic analyses: DNA-DNA hybridization; DNA profiling; sequence GC ratios
Phylogenetic analyses: 16S-based phylogeny; phylogeny based on other genes; multi-gene sequence analysis; whole-genome sequence based analysis
New species:
The minimal standards for describing a new species depend on which group the species belongs to.
Candidatus:
Candidatus is a component of the taxonomic name for a bacterium that cannot be maintained in a Bacteriology Culture Collection. It is an interim taxonomic status for noncultivable organisms. e.g. "Candidatus Pelagibacter ubique"
Species concept:
Bacteria divide asexually and for the most part do not show regionalisms ("Everything is everywhere"), therefore the concept of species, which works best for animals, becomes entirely a matter of judgment.
Species concept:
The number of named species of bacteria and archaea (approximately 13,000) is surprisingly small considering their early evolution, genetic diversity and residence in all ecosystems. The reason for this is the differences in species concepts between the bacteria and macro-organisms, the difficulties in growing/characterising in pure culture (a prerequisite to naming new species, vide supra) and extensive horizontal gene transfer blurring the distinction of species. The most commonly accepted definition is the polyphasic species definition, which takes into account both phenotypic and genetic differences.
Species concept:
However, a quicker diagnostic ad hoc threshold to separate species is less than 70% DNA–DNA hybridisation, which corresponds to less than 97% 16S DNA sequence identity. It has been noted that if this were applied to animal classification, the order Primates would be a single species.
For this reason, more stringent species definitions based on whole genome sequences have been proposed.
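As a rough illustration of how the ad hoc 16S threshold above might be applied, here is a minimal Python sketch; the toy aligned fragments, the simple column-wise identity measure, and the helper name are illustrative assumptions rather than a standard tool.

```python
# Minimal sketch (illustrative only): percent identity between two *pre-aligned*
# 16S rRNA fragments, compared against the ~97% ad hoc species threshold.
# Real workflows align full-length sequences with dedicated tools (e.g. SINA, Infernal).

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identity over aligned columns, ignoring positions where both sequences have a gap."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must come from the same alignment")
    compared = matches = 0
    for x, y in zip(aligned_a.upper(), aligned_b.upper()):
        if x == "-" and y == "-":
            continue
        compared += 1
        if x == y and x != "-":
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# toy aligned fragments (hypothetical, far shorter than a real ~1.4 kb amplicon)
seq1 = "ACGGUAGCUAAC-GGCUUACCAAGGC"
seq2 = "ACGGUAGCUAACCGGCUUGCCAAGGC"

pid = percent_identity(seq1, seq2)
verdict = "above the 97% threshold" if pid >= 97.0 else "below the 97% threshold"
print(f"identity = {pid:.1f}% ({verdict})")
```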
Pathology vs. phylogeny:
Ideally, taxonomic classification should reflect the evolutionary history of the taxa, i.e. the phylogeny, although some exceptions are made when the phenotype differs within the group, especially from a medical standpoint. Some examples of problematic classifications follow.
Pathology vs. phylogeny:
Escherichia coli: overly large and polyphyletic
In the family Enterobacteriaceae of the class Gammaproteobacteria, the species in the genus Shigella (S. dysenteriae, S. flexneri, S. boydii, S. sonnei) from an evolutionary point of view are strains of the species Escherichia coli (making the latter polyphyletic), but due to genetic differences the pathogenic strains cause different medical conditions. Confusingly, there are also E. coli strains that produce Shiga toxin, known as STEC.
Pathology vs. phylogeny:
Escherichia coli is a badly classified species, as some strains share only 20% of their genome. Being so diverse, it should be given a higher taxonomic ranking. However, due to the medical conditions associated with the species, its classification will not be changed, in order to avoid confusion in medical contexts.
Pathology vs. phylogeny:
Bacillus cereus group: close and polyphyletic
In a similar way, the Bacillus species (phylum Firmicutes) belonging to the "B. cereus group" (B. anthracis, B. cereus, B. thuringiensis, B. mycoides, B. pseudomycoides, B. weihenstephanensis and B. medusa) have 99-100% similar 16S rRNA sequences (97% is a commonly cited adequate species cut-off) and are polyphyletic, but for medical reasons (anthrax etc.) remain separate.
Pathology vs. phylogeny:
Yersinia pestis: extremely recent species
Yersinia pestis is in effect a strain of Yersinia pseudotuberculosis, but with a pathogenicity island that confers a drastically different pathology (Black plague and tuberculosis-like symptoms, respectively), which arose 15,000 to 20,000 years ago.
Nested genera in Pseudomonas
In the gammaproteobacterial order Pseudomonadales, the genus Azotobacter and the species Azomonas macrocytogenes are actually members of the genus Pseudomonas, but were misclassified due to their nitrogen-fixing capabilities and the large size of the genus Pseudomonas, which renders classification problematic. This will probably be rectified in the near future.
Nested genera in Bacillus
Another example of a large genus with nested genera is the genus Bacillus, in which the genera Paenibacillus and Brevibacillus are nested clades. There is insufficient genomic data at present to fully and effectively correct taxonomic errors in Bacillus.
Pathology vs. phylogeny:
Agrobacterium: resistance to name change
Based on molecular data it was shown that the genus Agrobacterium is nested in Rhizobium, and the Agrobacterium species were transferred to the genus Rhizobium (resulting in the following comb. nov.: Rhizobium radiobacter (formerly known as A. tumefaciens), R. rhizogenes, R. rubi, R. undicola and R. vitis). Given the plant-pathogenic nature of Agrobacterium species, it was proposed to maintain the genus Agrobacterium, and that proposal was in turn counter-argued.
Nomenclature:
Taxonomic names are written in italics (or underlined when handwritten) with a majuscule first letter with the exception of epithets for species and subspecies. Despite it being common in zoology, tautonyms (e.g. Bison bison) are not acceptable and names of taxa used in zoology, botany or mycology cannot be reused for Bacteria (Botany and Zoology do share names).
Nomenclature:
Nomenclature is the set of rules and conventions which govern the names of taxa. The differences in nomenclature between the various kingdoms/domains have been reviewed elsewhere. For Bacteria, valid names must have a Latin or Neolatin name and can only use basic Latin letters (w and j inclusive, see History of the Latin alphabet for these); consequently, hyphens, accents and other letters are not accepted and should be transliterated correctly (e.g. ß = ss). Ancient Greek, being written in the Greek alphabet, needs to be transliterated into the Latin alphabet.
Nomenclature:
When compound words are created, a connecting vowel is needed depending on the origin of the preceding word, regardless of the word that follows, unless the latter starts with a vowel, in which case no connecting vowel is added. If the first component is Latin, the connecting vowel is -i-, whereas if the first component is Greek, the connecting vowel is -o-. For the etymologies of names, consult LPSN.
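As a purely illustrative sketch of the connecting-vowel rule just described (the stems and resulting compounds below are made up for demonstration only and are not published names), the logic can be expressed as:

```python
def join_stems(first_stem, first_origin, second_word):
    """Illustrative sketch of the connecting-vowel rule described above.

    first_stem   -- stem of the preceding word element (hypothetical)
    first_origin -- "latin" or "greek", the origin of the preceding word
    second_word  -- the following word element (hypothetical)

    If the following element starts with a vowel, no connecting vowel is
    inserted; otherwise -i- (Latin first element) or -o- (Greek first
    element) is used.
    """
    if second_word[0].lower() in "aeiou":
        return first_stem + second_word
    vowel = "i" if first_origin == "latin" else "o"
    return first_stem + vowel + second_word

# Hypothetical examples, only to illustrate the rule:
print(join_stems("therm", "greek", "philus"))  # thermophilus (Greek stem -> -o-)
print(join_stems("acid", "latin", "cola"))     # acidicola    (Latin stem -> -i-)
print(join_stems("therm", "greek", "acidum"))  # thermacidum  (vowel follows -> no connector)
```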
Nomenclature:
Rules for higher taxa For the prokaryotes (Bacteria and Archaea) the rank of kingdom is not used (although some authors refer to phyla as kingdoms). If a new or amended species is placed in new ranks, then according to Rule 9 of the Bacteriological Code the name is formed by the addition of an appropriate suffix to the stem of the name of the type genus. For subclass and class the usual recommendation is generally followed, resulting in a neuter plural; however, a few names do not follow this and instead take into account Graeco-Latin grammar (e.g. the feminine plurals Thermotogae, Aquificae and Chlamydiae, the masculine plurals Chloroflexi, Bacilli and Deinococci, and the Greek plurals Spirochaetes, Gemmatimonadetes and Chrysiogenetes).
Nomenclature:
Phyla endings Until 2021, phyla were not covered by the Bacteriological Code, so they were named informally. This resulted in a variety of approaches to naming phyla. Some phyla, like Firmicutes, were named according to features shared across the phylum. Others, like Chlamydiae, were named using a class name or genus name as the stem (e.g., Chlamydia). In 2021, the decision was made to include phylum names under the Bacteriological Code; consequently, many phylum names were updated according to the new nomenclatural rules, under which the name of a phylum is derived from the type genus. The higher taxa proposed by Cavalier-Smith are generally disregarded by the molecular phylogeny community (vide supra).
Names after people Several species are named after people, either the discoverer or a famous person in the field of microbiology; for example, Salmonella is named after D.E. Salmon, who discovered it (albeit as "Bacillus typhi"). For the generic epithet, all names derived from people must be in the feminine nominative case, either by changing the ending to -a or to the diminutive -ella, depending on the name. For the specific epithet, the names can be converted into either an adjectival form (adding -nus (m.), -na (f.) or -num (n.) according to the gender of the genus name) or the genitive of the Latinised name.
Nomenclature:
Names after places Many species (via the specific epithet) are named after the place where they are present or were first found (e.g. Thiospirillum jenense). Their names are created by forming an adjective, joining the locality's name with the ending -ensis (m. or f.) or -ense (n.) in agreement with the gender of the genus name, unless a classical Latin adjective exists for the place. However, names of places should not be used as nouns in the genitive case.
Vernacular names:
Despite the fact that some heterogeneous or homogeneous colonies or biofilms of bacteria have names in English (e.g. dental plaque or star jelly), no bacterial species has a vernacular/trivial/common name in English.
Vernacular names:
For names in the singular form, plurals cannot be made (singulare tantum), as this would imply multiple groups with the same label rather than multiple members of that group (by analogy, in English, chairs and tables are types of furniture, and "furniture" cannot be used in the plural form "furnitures" to describe both); conversely, names in the plural form are pluralia tantum. However, a partial exception to this is made by the use of vernacular names.
Vernacular names:
However, to avoid repetition of taxonomic names, which can break the flow of prose, vernacular names of members of a genus or higher taxa are often used and recommended. These are formed by writing the name of the taxon in sentence case and roman ("standard" in MS Office) type, therefore treating the proper noun as an English common noun (e.g. the salmonellas). There is some debate about the grammar of such plurals, which can be formed either as regular plurals by adding -(e)s (the salmonellas) or by using the ancient Greek or Latin plural form of the noun (the salmonellae); the latter is problematic, as the plural of -bacter would be -bacteres, while the plural of myces (N.L. masc. n., from Gr. masc. n. mukes) is mycetes. Customs exist for certain names: for example, those ending in -monas are converted into -monad (one pseudomonad, two aeromonads, and not -monades).
Vernacular names:
Bacteria which are the etiological cause of a disease are often referred to by the disease name followed by a describing noun (bacterium, bacillus, coccus, agent, or the name of their phylum), e.g. the cholera bacterium (Vibrio cholerae) or the Lyme disease spirochete (Borrelia burgdorferi); note also rickettsialpox (Rickettsia akari).
Treponema is converted into treponeme and the plural is treponemes and not treponemata.
Some unusual bacteria have special names such as Quin's oval (Quinella ovalis) and Walsby's square (Haloquadratum walsbyi).
Vernacular names:
Before the advent of molecular phylogeny, many higher taxonomic groupings had only trivial names, which are still used today, some of which are polyphyletic, such as rhizobacteria. Some higher taxonomic trivial names are:
Blue-green algae are members of the phylum "Cyanobacteria"
Green non-sulfur bacteria are members of the phylum Chloroflexota
Green sulfur bacteria are members of the phylum Chlorobiota
Purple bacteria are some, but not all, members of the phylum Pseudomonadota
Purple sulfur bacteria are members of the order Chromatiales
Low G+C Gram-positive bacteria are members of the phylum Bacillota, regardless of GC content
High G+C Gram-positive bacteria are members of the phylum Actinomycetota, regardless of GC content
Rhizobia are members of various genera of Pseudomonadota
Lactic acid bacteria are members of the order Lactobacillales
Coryneform bacteria are members of the family Corynebacteriaceae
Fruiting gliding bacteria or myxobacteria are members of the phylum Myxococcota
Enterics are members of the order Enterobacteriales (although the term is avoided if they do not live in the intestines, such as Pectobacterium)
Acetic acid bacteria are members of the family Acetobacteraceae
Terminology:
The abbreviation for species is sp. (plural spp.) and is used after a generic epithet to indicate a species of that genus. It is often used to denote a strain of a genus for which the species is not known, either because the organism has not yet been described as a species or because insufficient tests were conducted to identify it; for example, Halomonas sp. GFAJ-1. If a bacterium is known and well studied but not culturable, it is given the term Candidatus in its name. A basonym is the original name of a new combination, namely the first name given to a taxon before it was reclassified. A synonym is an alternative name for a taxon, i.e. a taxon that was erroneously described twice. When a taxon is transferred it becomes a new combination (comb. nov.) or receives a new name (nom. nov.). See also paraphyly, monophyly, and polyphyly.
**Azoxy compounds**
Azoxy compounds:
In chemistry, azoxy compounds are a group of chemical compounds sharing a common functional group with the general structure R−N=N+(−O−)−R. They are considered N-oxides of azo compounds. Azoxy compounds are 1,3-dipoles. They undergo 1,3-dipolar cycloaddition with double bonds.
Preparation:
Most azoxy-containing compounds have aryl substituents. They are typically prepared by reduction of nitro compounds, such as the reduction of nitrobenzene with arsenous oxide to azoxybenzene. Such reactions are proposed to proceed via the intermediacy of the hydroxylamine and nitroso compounds, e.g. phenylhydroxylamine and nitrosobenzene (Ph = phenyl, C6H5): PhNHOH + PhNO → Ph−N=N+(−O−)−Ph + H2O
Safety:
Alkyl azoxy compounds, e.g. azoxymethane, are suspected to be genotoxic.
**NDH-2**
NDH-2:
NDH-2, also known as type II NADH:quinone oxidoreductase or alternative NADH dehydrogenase, is an enzyme (EC 1.6.99.3) which catalyzes the electron transfer from NADH (electron donor) to a quinone (electron acceptor), and is part of the electron transport chain. NDH-2 enzymes are peripheral membrane proteins, functioning as dimers in vivo, with approximately 45 kDa per subunit and a single FAD as their cofactor. NDH-2 are the only enzymes with NADH dehydrogenase activity expressed in the respiratory chain of some pathogenic organisms (e.g. Staphylococcus aureus), and for that reason they have been proposed as new targets for rational drug design.
Structure:
The structure/fold of these proteins may be divided into three domains: the first dinucleotide binding domain (green in the figure), the second dinucleotide binding domain (orange in the figure) and the C-terminal domain (blue in the figure).
The first domain is responsible for the noncovalent binding of FAD, while the second dinucleotide binding domain binds NADH. Both of these domains are structurally organized in Rossmann folds, with the characteristic GxGxxG motif present.
The third, C-terminal, domain is responsible for the protein–membrane interaction. Upon expression of a C-terminally truncated version of NDH-2, an intracellular delocalization from the membrane to the cytoplasm was observed. The third domain, together with part of the first domain, is also partially responsible for the binding of the electron acceptor (quinone).
There are currently crystallographic structures for NDH-2 from four different organisms: Staphylococcus aureus (PDB ID: 5NA4), Caldalkalibacillus thermarum (PDB ID: 4NWZ), Saccharomyces cerevisiae (PDB ID: 4G73) and Plasmodium falciparum (PDB ID: 5JWB).
Reaction:
The enzymatic oxidoreduction reaction catalyzed by NDH-2 may be described as follows: NADH + H+ + Q → NAD+ + QH2 (Q = quinone; QH2 = quinol). In this case, the electron donor is NADH and the electron acceptor is the quinone. Depending on the organism, the quinone that is reduced may be menaquinone, ubiquinone or plastoquinone. The mechanism of the reaction may be divided into two half-reactions: the first half-reaction (1stHR) and the second half-reaction (2ndHR).
Reaction:
In the 1stHR, 2 electrons and 1 proton from NADH are transferred (simultaneously with an additional proton from the bulk) to the prosthetic group (FAD), giving rise to its protonated form, FADH2. In this phase, an enzyme–substrate complex is established, characterized by the appearance of a charge-transfer complex.
In the 2ndHR, the quinone binds and the 2 electrons and one of the FAD protons are transferred to this second substrate (again, with an additional proton from the bulk), forming the product, quinol.
It is now accepted that the overall mechanism occurs by a ternary complex (simultaneous binding of both substrates to the enzyme), instead of the previously proposed ping-pong mechanism.
Phylogenetic distribution:
The presence of NDH-2 in organisms whose genomes have already been fully sequenced has been studied by bioinformatics. In this study, NDH-2 were identified in 83% of eukaryotes, 60% of bacteria and 32% of archaea. The absence of NDH-2 was also observed in phyla composed of anaerobic organisms.
Despite NDH-2 being considered absent in humans (hence its consideration as a drug target), in this same study the presence of a gene coding for an NDH-2 homolog was observed in the human genome.
**Grammatical particle**
Grammatical particle:
In grammar, the term particle (abbreviated PTCL) has a traditional meaning, as a part of speech that cannot be inflected, and a modern meaning, as a function word (functor) associated with another word or phrase, in order to impart meaning. Although a particle may have an intrinsic meaning, and may fit into other grammatical categories, the fundamental idea of the particle is to add context to the sentence, expressing a mood or indicating a specific action. In English, for example, the phrase "oh well" has no purpose in speech other than to convey a mood. The word 'up' would be a particle in the phrase to 'look up' (as in the phrase "look up this topic"), implying that one researches something, rather than literally gazing skywards. Many languages use particles, in varying amounts and for varying reasons. In Hindi, they may be used as honorifics, or to indicate emphasis or negation. In some languages they are clearly defined, e.g. Chinese which has three types of zhùcí (助詞; particles): Structural, Aspectual, and Modal. Structural particles are used for grammatical relations. Aspectual particles signal grammatical aspects. Modal particles express linguistic modality. Polynesian languages, which are almost devoid of inflection, use particles extensively to indicate mood, tense, and case.
Modern meaning:
In modern grammar, a particle is a function word that must be associated with another word or phrase to impart meaning, i.e., it does not have its own lexical definition. According to this definition, particles are a separate part of speech and are distinct from other classes of function words, such as articles, prepositions, conjunctions and adverbs. Languages vary widely in how much they use particles, some using them extensively and others more commonly using alternative devices such as prefixes/suffixes, inflection, auxiliary verbs and word order. Particles are typically words that encode grammatical categories (such as negation, mood, tense, or case), clitics, fillers or (oral) discourse markers such as well, um, etc. Particles are never inflected.
Afrikaans:
Some commonly used particles in Afrikaans include: nie2: Afrikaans has a double negation system, as in Sy is nie1 moeg nie2 'She is not tired PTCL.NEG' (meaning 'She is not tired'). The first nie1 is analysed as an adverb, while the second nie2 as a negation particle.
te: Infinitive verbs are preceded by the complementiser om and the infinitival particle te, e.g. Jy moet onthou om te eet 'You must remember for COMP PTCL.INF eat' (meaning 'You must remember to eat').
se or van: Both se and van are genitive particles, e.g. Peter se boek 'Peter PTCL.GEN book' (meaning 'Peter's book'), or die boek van Peter 'the book PTCL.GEN Peter' (meaning 'Peter's book').
so and soos: These two particles are found in constructions like so groot soos 'n huis 'PTCL.CMPR big PTCL.CMPR a house' (meaning 'as big as a house').
Arabic:
Particles in Arabic can take the form of a single root letter before a given word, like "-و" (and), "-ف" (so) and "-ل" (to). However, other particles like "هل" (which marks a question) can be complete words as well.
Chinese:
There are three types of zhùcí (助詞; particles) in Chinese: Structural, Aspectual, and Modal. Structural particles are used for grammatical relations. Aspectual particles signal grammatical aspects. Modal particles express linguistic modality. Note that particles are different from zhùdòngcí (助動詞; modal verbs) in Chinese.
English:
Particle is a somewhat nebulous term for a variety of small words that do not conveniently fit into other classes of words. The Concise Oxford Companion to the English Language defines a particle as a "word that does not change its form through inflection and does not fit easily into the established system of parts of speech". The term includes the "adverbial particles" like up or out in verbal idioms (phrasal verbs) such as "look up" or "knock out"; it also includes the "infinitival particle" to, the "negative particle" not, the "imperative particles" do and let, and sometimes "pragmatic particles" (also called "fillers" or "discourse markers") like oh and well.
German:
A German modal particle serves no necessary syntactical function, but expresses the speaker's attitude towards the utterance. Modal particles include ja, halt, doch, aber, denn, schon and others. Some of these also appear in non-particle forms. Aber, for example, is also the conjunction but. In Er ist Amerikaner, aber er spricht gut Deutsch, "He is American, but he speaks German well," aber is a conjunction connecting two sentences. But in Er spricht aber gut Deutsch!, the aber is a particle, with the sentence perhaps best translated as "What good German he speaks!" These particles are common in speech but rarely found in written language, except that which has a spoken quality (such as online messaging).
Hindi:
There are different types of particles present in Hindi: emphatic particles, limiter particles, negation particles, affirmative particles, honorific particles, topic-marker particle and case-marking particles. Some common particles of Hindi are mentioned in the table below:
Japanese and Korean:
The term particle is often used in descriptions of Japanese and Korean, where they are used to mark nouns according to their grammatical case or thematic relation in a sentence or clause. Linguistic analyses describe them as suffixes, clitics, or postpositions. There are sentence-tagging particles such as Japanese question markers.
Polynesian languages:
Polynesian languages are almost devoid of inflection, and use particles extensively to indicate mood, tense, and case. Suggs, discussing the deciphering of the rongorongo script of Easter Island, describes them as all-important. In Māori for example, the versatile particle "e" can signal the imperative mood, the vocative case, the future tense, or the subject of a sentence formed with most passive verbs. The particle "i" signals the past imperfect tense, the object of a transitive verb or the subject of a sentence formed with "neuter verbs" (a form of passive verb), as well as the prepositions in, at and from.
Polynesian languages:
Tokelauan In Tokelauan, ia is used when describing personal names, month names, and nouns used to describe a collaborative group of people participating in something together. It can also be used when a verb does not directly precede a pronoun, to describe said pronoun; its use with pronouns is optional, but it is mostly used in this way. Ia cannot be used if the noun it is describing follows any of the prepositions e, o, a, or ko. Unrelated to the uses listed above, ia is also used when preceding a locative or place name; however, if ia is being used in this fashion, the locative or place name must be the subject of the sentence. Another particle in Tokelauan is a, or sometimes ā. This particle is used before a person's name as well as the names of months, and the particle a te is used before pronouns when these follow the prepositions i or ki. Ia te is a particle used following the preposition mai.
Turkish:
Turkish particles have no meaning alone; they take part in the sentence together with other words. In some sources, exclamations and conjunctions are also considered Turkish particles; in this article, exclamations and conjunctions are not dealt with, only Turkish particles proper. Particles can be used with the simple form of the names to which they are attached or with other cases; some particles are used with the attached form, and some particles are always used after the relevant case form, for example "-den ötürü", "-e dek", "-den öte", "-e doğru": Bu çiçekleri annem için alıyorum. ("anne" is nominative) Yarına kadar bu ödevi bitirmem lazım. (dative) Düşük notlarından ötürü çok çalışman gerekiyor. (ablative) Turkish particles can be grouped according to their functions. Başka, gayrı, özge are used for other, another, otherwise, new, diverse, either: Senden gayrı kimsem yok. No one other than you.
Turkish:
Yardım istemekten başka çaremiz kalmadı. We have no choice but to ask for help. Göre, nazaran, dâir, rağmen are used for by, in comparison, about, despite.
Çok çalışmama rağmen sınavda hedeflediğim başarıyı yakalayamadım.
Duyduğuma göre bitirme sınavları bir hafta erken gerçekleşecekmiş.
Şirketteki son değişikliklere dâir bilgi almak istiyorum. İçin, üzere, dolayı, ötürü, nâşi, diye are used for for, with, because, because of, how.
Açılış konuşmasını yapmak üzere kürsüye çıktı.
Bu raporu bitirebilmek için zamana ihtiyacım var.
Kardeşim hastalığından nâşi gelemedi.
**Sand art and play**
Sand art and play:
Sand art is the practice of modelling sand into an artistic form, such as sand brushing, sand sculpting, sand painting, or creating sand bottles. A sandcastle is a type of sand sculpture resembling a miniature building, often a castle. The drip castle variation uses wet sand that is dribbled down to form organic shapes before the sand dries.
Sand art and play:
Most sand play takes place on sandy beaches, where the two basic building ingredients, sand and water, are available in abundance. Some sand play occurs in dry sandpits and sandboxes, though mostly by children and rarely for art forms. Tidal beaches generally have sand that limits height and structure because of the shape of the sand grains. Good sculpture sand is somewhat dirty, having silt and clay that helps lock the irregular-shaped sand grains together.
Sand art and play:
Sand castles are typically made by children for fun, but there are also sand-sculpture contests for adults that involve large, complex constructions. The largest sandcastle made in a contest was 18 feet tall; its builder, Ronald Malcnujio, a five-foot-tall man, had to use several ladders, each the height of the sandcastle. The sculpture used one ton of sand and 10 litres of water.
Sand castles and sculptures:
Sand grains will always stick together unless the sand is reasonably fine. While dry sand is loose, wet sand is adherent if the proper amounts of sand and water are mixed; the reason for this is that water forms little "bridges" between the grains of sand when it is damp, due to the forces of surface tension. When the sand dries out or gets too wet, the shape of a structure may change, and "landslides" are common. A mix of fine (mostly sharper) and coarse sand granules is very important to achieve good "sand construction" results. Fine granules that have been rounded by the natural influences of seas, rivers or other fluvial action in turn negatively influence the bonding between the individual granules, as they more easily slide past each other.
Sand castles and sculptures:
Shovels and buckets are the main construction tools used in creating sand castles and sand sculptures, although some people use only their hands. A simple sand castle can be made by filling a bucket with damp sand, placing it upside-down on the beach, and removing the bucket. For larger constructions, water from the sea can be brought to the building site with a bucket or other container and mixed with the sand. Sometimes forms of other materials, such as wood or plastic, are constructed to hold piles of sand in place and in specific shapes. Tunnels large enough to enter are extremely hazardous; children and adults die every year when such underground chambers collapse under the weight and instability of the sand, or when the tide comes up or the structure is hit by a wave. Sometimes a dam can be built to hold back the water; tidal forts, which are incredibly large sandcastles with thick walls that protect the keep from the sea, can be built; or canals can be dug to contain the water.
Sand castles and sculptures:
A variant of a formed sculpture is the drip castle, made by dribbling very wet sand. Sand sculpting as an art form has become popular in coastal beach areas. Hundreds of annual competitions are held all over the world. Techniques can be quite sophisticated, and record-breaking achievements have been noted in the Guinness World Records. Sometimes contests are staged as advertising or promotional events. Most sand sculptors come from other disciplines, but a few earn their living solely from sand-related activities.
Sand castles and sculptures:
Notable sand sculpture artists include Sudarsan Pattnaik and M N Gowri who created the Mysore Sand Sculpture Museum.
Sand castles and sculptures:
Festivals and competitions From 1989 until 2009, a World Championship in Sand Sculpture was held in Harrison Hot Springs in Harrison, British Columbia, Canada, also known as "Harrisand". The competition had solo, double and team categories. The world championship was held in Ft. Myers, Florida, and other venues for a limited time. Other countries hold their own versions of the world championships as it is not possible to get all the people who may qualify in the same place at the same time due to the expense and logistics.
Sand castles and sculptures:
The world's tallest sand castle was built on Myrtle Beach in South Carolina by Team Sandtastic as part of the 2007 Sun Fun Festival. The structure was 49.55 feet (15.1 m) high; it took 10 days to construct and used 300 truckloads of sand. This record was broken in 2019, when a 58-foot-tall sand castle was unveiled at Rügen in Germany; this tallest-ever sand castle was built by a group of international artists and was constructed with 11,000 tons of sand. Since 2003, Bettystown beach in Meath, Ireland, has been home to the annual Irish National Sandcastle and Sand Sculpturing competition. In Lappeenranta, Finland, there is an annual tourist sight called the Sandcastle (Hiekkalinna), where a work of art made of sand according to a changing theme is created every year. The record for the number of individual sandcastles built in one hour was set at Scarborough, England, on 18 August 2012: four hundred people constructed 683 castles, with each being two feet wide and high, accompanied by four turrets.
Sand castles and sculptures:
Sand pagodas In southeast Asia, sand pagodas are created in order to build Buddhist merit. The tradition has been going on since the 1500s.
Play:
One of the main attractions of a sandy beach, especially for children, is playing with the sand; it presents more possibilities than an ordinary sandbox. One can make a mountain, a pit (encountering clay or the water table), canals, tunnels, bridges, a sculpture (representing a person, animal, etc., like a statue, or a scale model of a building), amongst many other things.
Play:
Burying someone up to their neck in sand or burying oneself in such a manner is another popular beach activity.
Sand angels Sand angels are made in the same manner as snow angels; a person lies on their back in the sand, extending their arms and legs and swishes them back and forth.
Fight against the tide A popular game is building a heap of sand, as high as possible, to withstand the upcoming tide.
Sand art:
Sand raking and beach murals Sand raking is performed on a sandy beach where the artist rakes the dry top layer of sand, exposing the wet underlayer to create light-and-dark contrasts. Usually the designs are quite large and are similar to man-made art crop circles. The designs are ephemeral, and wash away with the next tide. Some notable artists working in this medium include Andres Amador, Sean Corcoran and Marc Treanor.
Sand art:
Other A sand glass is a display in which there are multiple colors of sand in water between two sheets of glass. Unlike sand paintings, a sand glass is meant to be turned; the sand, traditionally in black and a light color, moves into new shapes with each turn. The term "sand glass" is a translation of the Portuguese phrase quadro de areia, literally "sand frame" or "sand picture". Unlike sand paintings, which are a traditional craft, these are found around the world in many colors and sizes.
Sand art:
Sandpainting is the art of pouring colored sands and other pigments onto a surface to make a painting.
Sand bottles are created by pouring colored sands into a bottle to make a scene.
Sand drawing is the creation of a drawing by scratching it out in a flat base of sand.
Sand animation is the making of animation by manipulating sand to build figures, textures and movement, frame by frame.
**SDET**
SDET:
SDET also stands for software development engineer in test, a type of software engineer. SDET is a benchmark used in the systems software research community for measuring the throughput of a multi-user computer operating system.
Its name stands for SPEC Software Development Environment Throughput (SDET), and is packaged along with Kenbus in the SPEC SDM91 benchmark.
A more modern benchmark that is related to SDET is the reaim package, which is itself an up-to-date implementation of the venerable AIM Multiuser Benchmark.
**Air mass (astronomy)**
Air mass (astronomy):
In astronomy, air mass or airmass is a measure of the amount of air along the line of sight when observing a star or other celestial source from below Earth's atmosphere (Green 1992). It is formulated as the integral of air density along the light ray.
As it penetrates the atmosphere, light is attenuated by scattering and absorption; the thicker the atmosphere through which it passes, the greater the attenuation. Consequently, celestial bodies appear less bright when nearer the horizon than when nearer the zenith. This attenuation, known as atmospheric extinction, is described quantitatively by the Beer–Lambert law.
Air mass (astronomy):
"Air mass" normally indicates relative air mass, the ratio of absolute air masses (as defined above) at oblique incidence relative to that at zenith. So, by definition, the relative air mass at the zenith is 1. Air mass increases as the angle between the source and the zenith increases, reaching a value of approximately 38 at the horizon. Air mass can be less than one at an elevation greater than sea level; however, most closed-form expressions for air mass do not include the effects of the observer's elevation, so adjustment must usually be accomplished by other means.
Air mass (astronomy):
Tables of air mass have been published by numerous authors, including Bemporad (1904), Allen (1973), and Kasten & Young (1989).
Definition:
The absolute air mass is defined as σ = ∫ ρ ds, the integral of the volumetric density of air ρ along the (oblique) light path. Thus σ is a type of oblique column density.
In the vertical direction, the absolute air mass at zenith is σzen = ∫ ρ dy, so σzen is a type of vertical column density.
Finally, the relative air mass is X = σ / σzen. Assuming air density is uniform allows it to be taken out of the integrals. The absolute air mass then simplifies to a product, σ = ρ̄ s and σzen = ρ̄ szen, where ρ̄ = const. is the average density and s and szen are the arc lengths of the oblique and zenith light paths (s = ∫ ds, szen = ∫ dy). In the corresponding simplified relative air mass, the average density cancels out in the fraction, leading to the ratio of path lengths X = s / szen. Further simplifications are often made, assuming straight-line propagation (neglecting ray bending), as discussed below.
Calculation:
Background The angle of a celestial body with the zenith is the zenith angle (in astronomy, commonly referred to as the zenith distance). A body's angular position can also be given in terms of altitude, the angle above the geometric horizon; the altitude h and the zenith angle z are thus related by h = 90° − z. Atmospheric refraction causes light entering the atmosphere to follow an approximately circular path that is slightly longer than the geometric path. Air mass must take into account the longer path (Young 1994). Additionally, refraction causes a celestial body to appear higher above the horizon than it actually is; at the horizon, the difference between the true zenith angle and the apparent zenith angle is approximately 34 minutes of arc. Most air mass formulas are based on the apparent zenith angle, but some are based on the true zenith angle, so it is important to ensure that the correct value is used, especially near the horizon.
Calculation:
Plane-parallel atmosphere When the zenith angle is small to moderate, a good approximation is given by assuming a homogeneous plane-parallel atmosphere (i.e., one in which density is constant and Earth's curvature is ignored). The air mass X is then simply the secant of the zenith angle z: X = sec z. At a zenith angle of 60°, the air mass is approximately 2. However, because the Earth is not flat, this formula is only usable for zenith angles up to about 60° to 75°, depending on accuracy requirements. At greater zenith angles, the accuracy degrades rapidly, with sec z becoming infinite at the horizon; the horizon air mass in the more-realistic spherical atmosphere is usually less than 40.
Calculation:
Interpolative formulas Many formulas have been developed to fit tabular values of air mass; one by Young & Irvine (1967) included a simple corrective term: X = sec zt [1 − 0.0012 (sec² zt − 1)], where zt is the true zenith angle. This gives usable results up to approximately 80°, but the accuracy degrades rapidly at greater zenith angles. The calculated air mass reaches a maximum of 11.13 at 86.6°, becomes zero at 88°, and approaches negative infinity at the horizon. The plot of this formula on the accompanying graph includes a correction for atmospheric refraction so that the calculated air mass is for apparent rather than true zenith angle.
Calculation:
Hardie (1962) introduced a polynomial in (sec z − 1) which gives usable results for zenith angles of up to perhaps 85°. As with the previous formula, the calculated air mass reaches a maximum and then approaches negative infinity at the horizon.
Rozenberg (1966) suggested X = (cos z + 0.025 e^(−11 cos z))^−1, which gives reasonable results for high zenith angles, with a horizon air mass of 40.
Kasten & Young (1989) developed X = 1 / [cos z + 0.50572 (96.07995° − z)^−1.6364], which gives reasonable results for zenith angles of up to 90°, with an air mass of approximately 38 at the horizon. Here the second z term is in degrees.
Young (1994) developed a formula in terms of the true zenith angle zt, for which he claimed a maximum error (at the horizon) of 0.0037 air mass.
Pickering (2002) developed X = 1 / sin(h + 244 / (165 + 47 h^1.1)), where h is the apparent altitude (90° − z) in degrees. Pickering claimed his equation to have a tenth the error of Schaefer (1998) near the horizon.
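For a rough sense of how these fits behave, the Python sketch below evaluates the plane-parallel secant together with the Kasten & Young, Rozenberg and Pickering expressions as quoted above; the function names and the test angles are arbitrary choices for this illustration, not code from any of the cited papers.

```python
import math

def airmass_secant(z_deg):
    """Plane-parallel approximation X = sec z (diverges at the horizon)."""
    return 1.0 / math.cos(math.radians(z_deg))

def airmass_kasten_young(z_deg):
    """Kasten & Young (1989); the offset term uses z in degrees."""
    z = math.radians(z_deg)
    return 1.0 / (math.cos(z) + 0.50572 * (96.07995 - z_deg) ** -1.6364)

def airmass_rozenberg(z_deg):
    """Rozenberg (1966); gives X = 40 at the horizon."""
    c = math.cos(math.radians(z_deg))
    return 1.0 / (c + 0.025 * math.exp(-11.0 * c))

def airmass_pickering(z_deg):
    """Pickering (2002); h is the apparent altitude in degrees."""
    h = 90.0 - z_deg
    return 1.0 / math.sin(math.radians(h + 244.0 / (165.0 + 47.0 * h ** 1.1)))

for z in (60, 75, 85, 89, 90):
    print(z,
          round(airmass_secant(z), 2) if z < 90 else "inf",
          round(airmass_kasten_young(z), 2),
          round(airmass_rozenberg(z), 2),
          round(airmass_pickering(z), 2))
```

At 60° all four agree at about 2; only near the horizon do the interpolative formulas diverge noticeably from one another and from the secant.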
Atmospheric models Interpolative formulas attempt to provide a good fit to tabular values of air mass using minimal computational overhead. The tabular values, however, must be determined from measurements or atmospheric models that derive from geometrical and physical considerations of Earth and its atmosphere.
Nonrefracting spherical atmosphere If atmospheric refraction is ignored, it can be shown from simple geometrical considerations (Schoenberg 1929, 173) that the path s of a light ray at zenith angle z through a radially symmetrical atmosphere of height yatm above the Earth is given by s = √(RE² cos²z + 2 RE yatm + yatm²) − RE cos z, or alternatively, s = √((RE + yatm)² − RE² sin²z) − RE cos z, where RE is the radius of the Earth.
Calculation:
The relative air mass is then X = s / yatm = √((RE/yatm)² cos²z + 2 RE/yatm + 1) − (RE/yatm) cos z. Homogeneous atmosphere If the atmosphere is homogeneous (i.e., density is constant), the atmospheric height yatm follows from hydrostatic considerations as yatm = k T0 / (m g), where k is Boltzmann's constant, T0 is the sea-level temperature, m is the molecular mass of air, and g is the acceleration due to gravity. Although this is the same as the pressure scale height of an isothermal atmosphere, the implication is slightly different. In an isothermal atmosphere, 37% (1/e) of the atmosphere is above the pressure scale height; in a homogeneous atmosphere, there is no atmosphere above the atmospheric height.
Calculation:
Taking T0 = 288.15 K, m = 28.9644 × 1.6605 × 10⁻²⁷ kg, and g = 9.80665 m/s² gives yatm ≈ 8435 m. Using Earth's mean radius of 6371 km, the sea-level air mass at the horizon is Xhoriz = √(2 RE/yatm + 1) ≈ 38.87. The homogeneous spherical model slightly underestimates the rate of increase in air mass near the horizon; a reasonable overall fit to values determined from more rigorous models can be had by setting the air mass to match a value at a zenith angle less than 90°. The air mass equation can be rearranged to give RE/yatm = (X² − 1) / (2 (1 − X cos z)); matching Bemporad's value of 19.787 at z = 88° gives RE/yatm ≈ 631.01 and Xhoriz ≈ 35.54. With the same value for RE as above, yatm ≈ 10,096 m.
Calculation:
While a homogeneous atmosphere isn't a physically realistic model, the approximation is reasonable as long as the scale height of the atmosphere is small compared to the radius of the planet. The model is usable (i.e., it does not diverge or go to zero) at all zenith angles, including those greater than 90° (see § Homogeneous spherical atmosphere with elevated observer). The model requires comparatively little computational overhead, and if high accuracy is not required, it gives reasonable results.
Calculation:
However, for zenith angles less than 90°, a better fit to accepted values of air mass can be had with several of the interpolative formulas.
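A minimal numerical sketch of the homogeneous spherical model follows, using the same RE and yatm values as above and reproducing the Bemporad-matching rearrangement; the function and variable names are arbitrary choices for this example.

```python
import math

R_EARTH = 6371e3   # mean Earth radius in metres
Y_ATM = 8435.0     # homogeneous atmospheric height in metres, k*T0/(m*g)

def airmass_homogeneous(z_deg, r_earth=R_EARTH, y_atm=Y_ATM):
    """Relative air mass for a homogeneous, nonrefracting spherical atmosphere:
    X = sqrt(r^2 cos^2 z + 2 r + 1) - r cos z, with r = r_earth / y_atm."""
    r = r_earth / y_atm
    c = math.cos(math.radians(z_deg))
    return math.sqrt(r * r * c * c + 2.0 * r + 1.0) - r * c

print(round(airmass_homogeneous(0), 3))    # 1.0 at the zenith
print(round(airmass_homogeneous(60), 3))   # close to sec 60 deg = 2
print(round(airmass_homogeneous(90), 2))   # about 38.87 at the horizon

# Fitting the model to Bemporad's value X = 19.787 at z = 88 deg, as in the text:
X, z = 19.787, math.radians(88.0)
r_fit = (X * X - 1.0) / (2.0 * (1.0 - X * math.cos(z)))
print(round(r_fit, 2), round(R_EARTH / r_fit))  # about 631.01 and about 10096 m
```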
Calculation:
Variable-density atmosphere In a real atmosphere, density is not constant (it decreases with elevation above mean sea level). The absolute air mass for the geometrical light path discussed above then becomes, for a sea-level observer, an integral of the density along that path. Isothermal atmosphere Several basic models for density variation with elevation are commonly used. The simplest, an isothermal atmosphere, gives ρ = ρ0 e^(−y/H), where ρ0 is the sea-level density and H is the pressure scale height. When the limits of integration are zero and infinity, the result is known as the Chapman function. An approximate result is obtained if some high-order terms are dropped (Young 1974, p. 147). An approximate correction for refraction can also be made (Young 1974, p. 147), in terms of the physical radius RE of the Earth. At the horizon, the approximate equation takes a simpler form; using a scale height of 8435 m, Earth's mean radius of 6371 km, and including the correction for refraction gives the horizon air mass for this model. Polytropic atmosphere The assumption of constant temperature is simplistic; a more realistic model is the polytropic atmosphere, for which T = T0 − αy, where T0 is the sea-level temperature and α is the temperature lapse rate. The density as a function of elevation is ρ = ρ0 (1 − αy/T0)^(1/(κ−1)), where κ is the polytropic exponent (or polytropic index). The air mass integral for the polytropic model does not lend itself to a closed-form solution except at the zenith, so the integration usually is performed numerically.
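Since the variable-density air mass integral is usually evaluated numerically, the sketch below shows one way to do so for the nonrefracting isothermal case by simple straight-line quadrature; the integration grid and the approximate output values in the comments are assumptions of this example, not tabulated reference values.

```python
import math

R_EARTH = 6371e3   # metres
H = 8435.0         # pressure scale height in metres (isothermal assumption)

def airmass_isothermal(z_deg, t_max=2.0e6, n=200_000):
    """Relative air mass for a nonrefracting isothermal atmosphere, found by
    numerically integrating exp(-y/H) along the straight line of sight and
    normalizing by the vertical column (which equals H).

    t_max and n are integration-grid choices for this sketch only.
    """
    z = math.radians(z_deg)
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        r = math.sqrt(R_EARTH**2 + t**2 + 2.0 * R_EARTH * t * math.cos(z))
        y = r - R_EARTH                      # elevation above sea level at distance t
        w = 0.5 if i in (0, n) else 1.0      # trapezoidal end-point weights
        total += w * math.exp(-y / H)
    return total * dt / H

print(round(airmass_isothermal(0), 3))    # ~1.0
print(round(airmass_isothermal(60), 3))   # ~2.0
print(round(airmass_isothermal(90), 1))   # roughly the mid-30s, below the homogeneous value
```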
Calculation:
Layered atmosphere Earth's atmosphere consists of multiple layers with different temperature and density characteristics; common atmospheric models include the International Standard Atmosphere and the US Standard Atmosphere. A good approximation for many purposes is a polytropic troposphere of 11 km height with a lapse rate of 6.5 K/km and an isothermal stratosphere of infinite height (Garfinkel 1967), which corresponds very closely to the first two layers of the International Standard Atmosphere. More layers can be used if greater accuracy is required.
Calculation:
Refracting radially symmetrical atmosphere When atmospheric refraction is considered, ray tracing becomes necessary (Kivalov 2007), and the absolute air mass integral is taken along the refracted ray, where nobs is the index of refraction of air at the observer's elevation yobs above sea level, n is the index of refraction at elevation y above sea level, robs = RE + yobs and r = RE + y are the distances from the center of the Earth to the observer and to a point at elevation y, respectively, and ratm = RE + yatm is the distance to the upper limit of the atmosphere at elevation yatm. The index of refraction in terms of density is usually given to sufficient accuracy (Garfinkel 1967) by the Gladstone–Dale relation, (n − 1)/(nobs − 1) = ρ/ρobs. Rearrangement and substitution into the absolute air mass integral gives an integral in terms of the density ratio. The quantity nobs − 1 is quite small; expanding the first term in parentheses, rearranging several times, and ignoring terms in (nobs − 1)² after each rearrangement, gives the simplified form used by Kasten & Young (1989). Homogeneous spherical atmosphere with elevated observer In the figure at right, an observer at O is at an elevation yobs above sea level in a uniform radially symmetrical atmosphere of height yatm. The path length of a light ray at zenith angle z is s; RE is the radius of the Earth. Applying the law of cosines to triangle OAC, expanding the left- and right-hand sides, eliminating the common terms, and rearranging gives a quadratic in s. Solving the quadratic for the path length s, factoring, and rearranging yields two roots; the negative sign of the radical gives a negative result, which is not physically meaningful. Using the positive sign, dividing by yatm, cancelling common terms and rearranging gives the relative air mass. With the substitutions r̂ = RE/yatm and ŷ = yobs/yatm, this can be given as X = √((r̂ + 1)² − (r̂ + ŷ)² sin²z) − (r̂ + ŷ) cos z. When the observer's elevation is zero, the air mass equation simplifies to X = √(r̂² cos²z + 2r̂ + 1) − r̂ cos z. In the limit of grazing incidence, the absolute air mass equals the distance to the horizon. Furthermore, if the observer is elevated, the horizon zenith angle can be greater than 90°.
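The elevated-observer result for the homogeneous model can be evaluated directly; the sketch below assumes the same RE and yatm values used earlier and an arbitrary observer elevation of 4200 m, chosen purely for illustration.

```python
import math

def airmass_elevated(z_deg, y_obs, r_earth=6371e3, y_atm=8435.0):
    """Relative air mass in a homogeneous spherical atmosphere for an observer
    at elevation y_obs (metres), normalized to the sea-level vertical column
    y_atm, following the geometry described above. The default r_earth and
    y_atm values match the numbers used earlier in this section.
    """
    r_hat = r_earth / y_atm
    y_hat = y_obs / y_atm
    z = math.radians(z_deg)
    s, c = math.sin(z), math.cos(z)
    inside = (r_hat + 1.0) ** 2 - (r_hat + y_hat) ** 2 * s * s
    return math.sqrt(inside) - (r_hat + y_hat) * c

print(round(airmass_elevated(90, 0.0), 2))     # sea level: ~38.9 at zenith angle 90 deg
print(round(airmass_elevated(90, 4200.0), 2))  # elevated observer: smaller value at 90 deg
print(round(airmass_elevated(91, 4200.0), 2))  # from 4200 m one can look past 90 deg
```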
Calculation:
Nonuniform distribution of attenuating species Atmospheric models that derive from hydrostatic considerations assume an atmosphere of constant composition and a single mechanism of extinction, which isn't quite correct. There are three main sources of attenuation (Hayes & Latham 1975): Rayleigh scattering by air molecules, Mie scattering by aerosols, and molecular absorption (primarily by ozone). The relative contribution of each source varies with elevation above sea level, and the concentrations of aerosols and ozone cannot be derived simply from hydrostatic considerations.
Calculation:
Rigorously, when the extinction coefficient depends on elevation, it must be determined as part of the air mass integral, as described by Thomason, Herman & Reagan (1983). A compromise approach often is possible, however. Methods for separately calculating the extinction from each species using closed-form expressions are described in Schaefer (1993) and Schaefer (1998). The latter reference includes source code for a BASIC program to perform the calculations. Reasonably accurate calculation of extinction can sometimes be done by using one of the simple air mass formulas and separately determining extinction coefficients for each of the attenuating species (Green 1992, Pickering 2002).
Implications:
Air mass and astronomy In optical astronomy, the air mass provides an indication of the deterioration of the observed image, not only as regards direct effects of spectral absorption, scattering and reduced brightness, but also an aggregation of visual aberrations, e.g. resulting from atmospheric turbulence, collectively referred to as the quality of the "seeing". On bigger telescopes, such as the WHT (Wynne & Worswick 1988) and VLT (Avila, Rupprecht & Beckers 1997), the atmospheric dispersion can be so severe that it affects the pointing of the telescope to the target. In such cases an atmospheric dispersion compensator is used, which usually consists of two prisms.
Implications:
The Greenwood frequency and Fried parameter, both relevant for adaptive optics, depend on the air mass above them (or more specifically, on the zenith angle).
Implications:
In radio astronomy the air mass (which influences the optical path length) is not relevant. The lower layers of the atmosphere, modeled by the air mass, do not significantly impede radio waves, which are of much lower frequency than optical waves. Instead, some radio waves are affected by the ionosphere in the upper atmosphere. Newer aperture synthesis radio telescopes are especially affected by this as they “see” a much larger portion of the sky and thus the ionosphere. In fact, LOFAR needs to explicitly calibrate for these distorting effects (van der Tol & van der Veen 2007; de Vos, Gunst & Nijboer 2009), but on the other hand can also study the ionosphere by instead measuring these distortions (Thidé 2007).
Implications:
Air mass and solar energy In some fields, such as solar energy and photovoltaics, air mass is indicated by the acronym AM; additionally, the value of the air mass is often given by appending its value to AM, so that AM1 indicates an air mass of 1, AM2 indicates an air mass of 2, and so on. The region above Earth's atmosphere, where there is no atmospheric attenuation of solar radiation, is considered to have "air mass zero" (AM0).
Implications:
Atmospheric attenuation of solar radiation is not the same for all wavelengths; consequently, passage through the atmosphere not only reduces intensity but also alters the spectral irradiance. Photovoltaic modules are commonly rated using spectral irradiance for an air mass of 1.5 (AM1.5); tables of these standard spectra are given in ASTM G 173-03. The extraterrestrial spectral irradiance (i.e., that for AM0) is given in ASTM E 490-00a. For many solar energy applications, when high accuracy near the horizon is not required, air mass is commonly determined using the simple secant formula described in the section Plane-parallel atmosphere.
**Valerio Meletti**
Valerio Meletti:
Valerio Meletti is a media monitoring specialist who has worked for the main Italian media monitoring agency since 2006. His main skills are web and social media monitoring.
He founded and managed three music publishing companies: Ethnoworld, Silent Revolution (London, 2005–2008) and Ignorelands (since 2016).
As a percussionist Valerio Meletti worked extensively with F.B.A. and the Celtic Harp Orchestra, appearing on some of the bands' finest works: above all "Till the sky shall fall" (F.B.A.) and "The Myst" (C.H.O.), both critically acclaimed and worldwide distributed.
As a band manager V.M. has been working since 2004 with The Afterglow, releasing three albums (including the much acclaimed 2006 "Decalogue of Modern Life", mixed by Steve Orchard), touring England and Scotland four times, and shooting videoclips in London and Liverpool.
As an event manager V. Meletti worked for Milan's Festa della Musica 2004 and organized festivals and events such as the Bocconi University Ethnic Festival ("Ethnobocconi") and "Musiche della Terra". Meletti held workshops and classes focused on record labels' management, including lessons at the Bocconi University CLEACC.
He also self-published a theatre play ("The Ninja Cricketer and Scratched"), a handbook on how to play the Celtic drum bodhràn ("Suonare il bodhràn", in Italian) and a book on Italians living in Venezuela during the '50s and '60s ("Italiani in Venezuela", in Italian, with his father Giorgio Meletti).
**Model-based design**
Model-based design:
Model-based design (MBD) is a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems. It is used in many motion control, industrial equipment, aerospace, and automotive applications. Model-based design is a methodology applied in designing embedded software.
Overview:
Model-based design provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle (V-model). In model-based design of control systems, development is manifested in these four steps: modeling a plant, analyzing and synthesizing a controller for the plant, simulating the plant and controller, and integrating all these phases by deploying the controller. Model-based design is significantly different from traditional design methodology. Rather than using complex structures and extensive software code, designers can use model-based design to define plant models with advanced functional characteristics using continuous-time and discrete-time building blocks. These models, used with simulation tools, can lead to rapid prototyping, software testing, and verification. Not only is the testing and verification process enhanced, but also, in some cases, hardware-in-the-loop simulation can be used with the new design paradigm to perform testing of dynamic effects on the system more quickly and much more efficiently than with traditional design methodology.
History:
As early as the 1920s two aspects of engineering, control theory and control systems, converged to make large-scale integrated systems possible. In those early days controls systems were commonly used in the industrial environment. Large process facilities started using process controllers for regulating continuous variables such as temperature, pressure, and flow rate. Electrical relays built into ladder-like networks were one of the first discrete control devices to automate an entire manufacturing process. Control systems gained momentum, primarily in the automotive and aerospace sectors. In the 1950s and 1960s, the push to space generated interest in embedded control systems. Engineers constructed control systems such as engine control units and flight simulators, that could be part of the end product. By the end of the twentieth century, embedded control systems were ubiquitous, as even major household consumer appliances such as washing machines and air conditioners contained complex and advanced control algorithms, making them much more "intelligent".
History:
In 1969, the first computer-based controllers were introduced. These early programmable logic controllers (PLC) mimicked the operations of already available discrete control technologies that used the out-dated relay ladders. The advent of PC technology brought a drastic shift in the process and discrete control market. An off-the-shelf desktop loaded with adequate hardware and software can run an entire process unit, and execute complex and established PID algorithms or work as a Distributed Control System (DCS).
Steps:
The main steps in the model-based design approach are: Plant modeling. Plant modeling can be data-driven or based on first principles. Data-driven plant modeling uses techniques such as system identification. With system identification, the plant model is identified by acquiring and processing raw data from a real-world system and choosing a mathematical algorithm with which to identify a mathematical model. Various kinds of analysis and simulations can be performed using the identified model before it is used to design a model-based controller. First-principles based modeling is based on creating a block diagram model that implements known differential-algebraic equations governing plant dynamics. A type of first-principles based modeling is physical modeling, where a model consists of connected blocks that represent the physical elements of the actual plant.
Steps:
Controller analysis and synthesis. The mathematical model conceived in step 1 is used to identify dynamic characteristics of the plant model. A controller can then be synthesized based on these characteristics.
Steps:
Offline simulation and real-time simulation. The time response of the dynamic system to complex, time-varying inputs is investigated. This is done by simulating a simple LTI (Linear Time-Invariant) model, or by simulating a non-linear model of the plant with the controller. Simulation allows specification, requirements, and modeling errors to be found immediately, rather than later in the design effort. Real-time simulation can be done by automatically generating code for the controller developed in step 2. This code can be deployed to a special real-time prototyping computer that can run the code and control the operation of the plant. If a plant prototype is not available, or testing on the prototype is dangerous or expensive, code can be automatically generated from the plant model. This code can be deployed to the special real-time computer that can be connected to the target processor with running controller code. Thus a controller can be tested in real-time against a real-time plant model.
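As a toy illustration of the "simulate the plant and controller" step, independent of any particular modeling tool, the Python sketch below closes the loop around a first-order plant with a discrete PI controller; the plant time constant, gains, setpoint and step size are arbitrary assumptions for this example.

```python
# Minimal closed-loop simulation sketch: a first-order plant
# dx/dt = (-x + u)/tau driven by a discrete PI controller.

def simulate(setpoint=1.0, tau=0.5, kp=2.0, ki=4.0, dt=0.01, t_end=3.0):
    x = 0.0          # plant state (e.g. a measured output)
    integral = 0.0   # accumulated error for the integral term
    history = []
    for _ in range(int(t_end / dt)):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral        # PI control law
        x += dt * (-x + u) / tau              # forward-Euler plant update
        history.append(x)
    return history

response = simulate()
print(f"final value: {response[-1]:.3f}")   # should settle near the setpoint
print(f"peak value:  {max(response):.3f}")  # overshoot depends on kp and ki
```

In a model-based design flow, this kind of offline simulation is what lets specification and modeling errors surface before any code is deployed to the target.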
Steps:
Deployment. Ideally this is done via code generation from the controller developed in step 2. It is unlikely that the controller will work on the actual system as well as it did in simulation, so an iterative debugging process is carried out by analyzing results on the actual target and updating the controller model. Model-based design tools allow all these iterative steps to be performed in a unified visual environment.
Disadvantages:
The disadvantages of model-based design are fairly well understood this late in the development lifecycle of products built with it. One major disadvantage is that the approach taken is a blanket or catch-all approach to standard embedded and systems development. Often the time it takes to port between processors and ecosystems can outweigh the time saved in simpler lab-based implementations. Much of the compilation tool chain is closed source and prone to fencepost errors and other such common compilation errors that are easily corrected in traditional systems engineering. Design and reuse patterns can lead to implementations of models that are not well suited to the task, such as implementing a controller for a conveyor-belt production facility that uses a thermal sensor, speed sensor, and current sensor; that model is generally not well suited for re-implementation in a motor controller, even though it is very easy to port such a model over and thereby introduce all the software faults therein.
Disadvantages:
While Model-based design has the ability to simulate test scenarios and interpret simulations well, in real world production environments, it is often not suitable. Over reliance on a given toolchain can lead to significant rework and possibly compromise entire engineering approaches. While it's suitable for bench work, the choice to use this for a production system should be made very carefully.
Advantages:
Some of the advantages model-based design offers in comparison to the traditional approach are: Model-based design provides a common design environment, which facilitates general communication, data analysis, and system verification between various (development) groups.
Engineers can locate and correct errors early in system design, when the time and financial impact of system modification are minimized.
Advantages:
Design reuse, for upgrades and for derivative systems with expanded capabilities, is facilitated. Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was time-consuming and highly prone to error. In addition, debugging text-based programs is a tedious process, requiring much trial and error before a final fault-free model could be created, especially since mathematical models undergo unseen changes during the translation through the various design stages.
Advantages:
Graphical modeling tools aim to improve these aspects of design. These tools provide a very generic and unified graphical modeling environment, and they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks. Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another. Graphical models also help engineers to conceptualize the entire system and simplify the process of transporting the model from one stage to another in the design process. Boeing's simulator EASY5 was among the first modeling tools to be provided with a graphical user interface, together with AMESim, a multi-domain, multi-level platform based on bond graph theory. These were soon followed by tools like 20-sim and Dymola, which allowed models to be composed of physical components like masses, springs, resistors, etc., and later by many other modern tools such as Simulink and LabVIEW.
**Control lock**
Control lock:
A control lock, guard lock or stop lock differs from a normal canal lock in that its primary purpose is controlling variances in water level rather than raising or lowering vessels. A control lock may also be known as a tide lock where it is used to control seawater entering into a body of fresh water.
Examples:
The T. J. O’Brien Lock and Dam at Chicago, Illinois is a guard lock that controls the outflow of water from Lake Michigan into the Illinois Waterway while locking vessels through between the waterway and Lake Michigan.Lock 8 near the south end of the Welland Canal at Port Colborne, Ontario, Canada is a guard lock. Due to the large expanse of shallow water in Lake Erie, changes in wind direction and force create water level changes as great as 11 feet (3.4 m) at Port Colborne. Lock 8 controls the water level in the canal, keeping it independent of the fluctuations of Lake Erie, but allows ships to enter Lake Erie regardless of its level.
**Free-electron laser**
Free-electron laser:
A free-electron laser (FEL) is a (fourth generation) light source producing extremely brilliant and short pulses of radiation. An FEL functions and behaves in many ways like a laser, but instead of using stimulated emission from atomic or molecular excitations, it employs relativistic electrons as a gain medium. Radiation is generated by a bunch of electrons passing through a magnetic structure (called undulator or wiggler). In an FEL, this radiation is further amplified as the radiation re-interacts with the electron bunch such that the electrons start to emit coherently, thus allowing an exponential increase in overall radiation intensity. As electron kinetic energy and undulator parameters can be adapted as desired, free-electron lasers are tunable and can be built for a wider frequency range than any other type of laser, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-ray.
Free-electron laser:
The first free-electron laser was developed by John Madey in 1971 at Stanford University using technology developed by Hans Motz and his coworkers, who built an undulator at Stanford in 1953, using the wiggler magnetic configuration. Madey used a 43 MeV electron beam and 5 m long wiggler to amplify a signal.
Beam creation:
To create an FEL, a beam of electrons is accelerated to almost the speed of light. The beam passes through a periodic arrangement of magnets with alternating poles across the beam path, which creates a side to side magnetic field. The direction of the beam is called the longitudinal direction, while the direction across the beam path is called transverse. This array of magnets is called an undulator or a wiggler, because the Lorentz force of the field forces the electrons in the beam to wiggle transversely, traveling along a sinusoidal path about the axis of the undulator.
Beam creation:
The transverse acceleration of the electrons across this path results in the release of photons, which are monochromatic but still incoherent, because the electromagnetic waves from randomly distributed electrons interfere constructively and destructively in time. The resulting radiation power scales linearly with the number of electrons. Mirrors at each end of the undulator create an optical cavity, causing the radiation to form standing waves; alternatively, an external excitation laser is provided.
Beam creation:
The radiation becomes sufficiently strong that the transverse electric field of the radiation beam interacts with the transverse electron current created by the sinusoidal wiggling motion, causing some electrons to gain and others to lose energy to the optical field via the ponderomotive force.
Beam creation:
This energy modulation evolves into electron density (current) modulations with a period of one optical wavelength. The electrons are thus longitudinally clumped into microbunches, separated by one optical wavelength along the axis. Whereas an undulator alone would cause the electrons to radiate independently (incoherently), the radiation emitted by the bunched electrons is in phase, and the fields add together coherently.
Beam creation:
The radiation intensity grows, causing additional microbunching of the electrons, which continue to radiate in phase with each other. This process continues until the electrons are completely microbunched and the radiation reaches a saturated power several orders of magnitude higher than that of the undulator radiation.
The wavelength of the radiation emitted can be readily tuned by adjusting the energy of the electron beam or the magnetic-field strength of the undulators.
Beam creation:
FELs are relativistic machines. The wavelength of the emitted radiation, \(\lambda_r\), is given by \(\lambda_r = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)\), or, when the wiggler strength parameter \(K\) (discussed below) is small, \(\lambda_r \propto \frac{\lambda_u}{2\gamma^2}\), where \(\lambda_u\) is the undulator wavelength (the spatial period of the magnetic field), \(\gamma\) is the relativistic Lorentz factor, and the proportionality constant depends on the undulator geometry and is of the order of 1.
Beam creation:
This formula can be understood as a combination of two relativistic effects. Imagine you are sitting on an electron passing through the undulator. Due to Lorentz contraction the undulator is shortened by a γ factor and the electron experiences much shorter undulator wavelength λu/γ . However, the radiation emitted at this wavelength is observed in the laboratory frame of reference and the relativistic Doppler effect brings the second γ factor to the above formula. In an X-ray FEL the typical undulator wavelength of 1 cm is transformed to X-ray wavelengths on the order of 1 nm by γ ≈ 2000, i.e. the electrons have to travel with the speed of 0.9999998c.
Beam creation:
Wiggler strength parameter K: The dimensionless parameter \(K\) defines the wiggler strength as the relationship between the length of a period and the radius of bend, \(K = \frac{\gamma \lambda_u}{2\pi\rho} = \frac{e B_0 \lambda_u}{2\pi m_e c}\), where \(\rho\) is the bending radius, \(B_0\) is the applied magnetic field, \(m_e\) is the electron mass, and \(e\) is the elementary charge.
Expressed in practical units, the dimensionless undulator parameter is \(K \approx 0.934 \cdot B_0[\mathrm{T}] \cdot \lambda_u[\mathrm{cm}]\).
Quantum effects:
In most cases, the theory of classical electromagnetism adequately accounts for the behavior of free-electron lasers. For sufficiently short wavelengths, quantum effects of electron recoil and shot noise may have to be considered.
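The relations above can be made concrete with a short numerical sketch. The following Python fragment (not from any FEL design code; the field strength, undulator period, and beam energy are illustrative assumptions) evaluates the practical-units undulator parameter and the on-axis resonance wavelength:

```python
def undulator_K(B0_tesla: float, lambda_u_cm: float) -> float:
    """Dimensionless undulator strength in practical units: K ~ 0.934 * B0[T] * lambda_u[cm]."""
    return 0.934 * B0_tesla * lambda_u_cm

def resonant_wavelength(lambda_u_m: float, gamma: float, K: float) -> float:
    """On-axis FEL resonance: lambda_r = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)."""
    return lambda_u_m / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)

# Illustrative numbers only (assumptions, not a specific facility):
# 1 cm undulator period, 1 T peak field, roughly 1 GeV electrons.
K = undulator_K(B0_tesla=1.0, lambda_u_cm=1.0)
gamma = 1000.0 / 0.511          # beam energy in MeV divided by the electron rest energy
lam_r = resonant_wavelength(lambda_u_m=0.01, gamma=gamma, K=K)
print(f"K = {K:.3f}, resonant wavelength = {lam_r * 1e9:.2f} nm")
```

With these assumed values (γ of roughly 2000), the sketch reproduces the transformation of a centimetre-scale undulator period into a nanometre-scale radiation wavelength described above.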
FEL construction:
Free-electron lasers require the use of an electron accelerator with its associated shielding, as accelerated electrons can be a radiation hazard if not properly contained. These accelerators are typically powered by klystrons, which require a high-voltage supply. The electron beam must be maintained in a vacuum, which requires the use of numerous vacuum pumps along the beam path. While this equipment is bulky and expensive, free-electron lasers can achieve very high peak powers, and the tunability of FELs makes them highly desirable in many disciplines, including chemistry, structure determination of molecules in biology, medical diagnosis, and nondestructive testing.
FEL construction:
Infrared and terahertz FELs: The Fritz Haber Institute in Berlin completed a mid-infrared and terahertz FEL in 2013.
FEL construction:
X-ray FELs: The lack of mirror materials that can reflect extreme ultraviolet and x-rays means that X-ray Free Electron Lasers (XFEL) need to work without a resonant cavity. Consequently, in an X-ray FEL (XFEL) the beam is produced by a single pass of radiation through the undulator. This requires that there be enough amplification over a single pass to produce an appropriate beam.
FEL construction:
Hence, XFELs use undulator sections that are tens or hundreds of meters long. This allows XFELs to produce the brightest X-ray pulses of any human-made X-ray source. The intensity of the pulses from the X-ray laser stems from the principle of self-amplified spontaneous emission (SASE), which leads to microbunching. Initially all electrons are distributed evenly and emit only incoherent spontaneous radiation. Through the interaction of this radiation with the electrons' oscillations, they drift into microbunches separated by a distance equal to one radiation wavelength. This interaction drives all electrons to begin emitting coherent radiation: the emitted radiation can reinforce itself perfectly, with wave crests and wave troughs optimally superimposed on one another. This results in an exponential increase of the emitted radiation power, leading to high beam intensities and laser-like properties. Examples of facilities operating on the SASE FEL principle include the Free electron LASer in Hamburg (FLASH), the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory, the European X-ray Free Electron Laser (EuXFEL) in Hamburg, the SPring-8 Compact SASE Source (SCSS) in Japan, the SwissFEL at the Paul Scherrer Institute (Switzerland), the SACLA at the RIKEN Harima Institute in Japan, and the PAL-XFEL (Pohang Accelerator Laboratory X-ray Free-Electron Laser) in Korea.
FEL construction:
In 2022, an upgrade to Stanford University's Linac Coherent Light Source (LCLS-II) used temperatures around −271 °C to produce 10^6 pulses/second of near light-speed electrons, using superconducting niobium cavities.
FEL construction:
Self-seeding: One problem with SASE FELs is the lack of temporal coherence due to a noisy startup process. To avoid this, one can "seed" an FEL with a laser tuned to the resonance of the FEL. Such a temporally coherent seed can be produced by more conventional means, such as by high harmonic generation (HHG) using an optical laser pulse. This results in coherent amplification of the input signal; in effect, the output laser quality is characterized by the seed. While HHG seeds are available at wavelengths down to the extreme ultraviolet, seeding is not feasible at x-ray wavelengths due to the lack of conventional x-ray lasers.
FEL construction:
In late 2010, the seeded-FEL source FERMI@Elettra started commissioning at the Trieste Synchrotron Laboratory in Italy. FERMI@Elettra is a single-pass FEL user facility covering the wavelength range from 100 nm (12 eV) to 10 nm (124 eV), located next to the third-generation synchrotron radiation facility ELETTRA in Trieste, Italy.
FEL construction:
In 2012, scientists working on the LCLS overcame the seeding limitation for x-ray wavelengths by self-seeding the laser with its own beam after being filtered through a diamond monochromator. The resulting intensity and monochromaticity of the beam were unprecedented and allowed new experiments to be conducted involving manipulating atoms and imaging molecules. Other labs around the world are incorporating the technique into their equipment.
Research:
Biomedical (basic research): Researchers have explored free-electron lasers as an alternative to the synchrotron light sources that have been the workhorses of protein crystallography and cell biology. Exceptionally bright and fast X-rays can image proteins using X-ray crystallography. This technique allows, for the first time, the imaging of proteins that do not stack in a way that permits imaging by conventional techniques, about 25% of all proteins. Resolutions of 0.8 nm have been achieved with pulse durations of 30 femtoseconds. To get a clear view, a resolution of 0.1–0.3 nm is required. The short pulse durations allow images of X-ray diffraction patterns to be recorded before the molecules are destroyed. The bright, fast X-rays were produced at the Linac Coherent Light Source at SLAC; as of 2014, LCLS was the world's most powerful X-ray FEL. Due to the increased repetition rates of next-generation X-ray FEL sources, such as the European XFEL, the number of diffraction patterns is expected to increase substantially. The increase in the number of diffraction patterns will place a large strain on existing analysis methods. To combat this, several methods have been researched to sort the huge amount of data that typical X-ray FEL experiments will generate. While the various methods have been shown to be effective, it is clear that, to pave the way towards single-particle X-ray FEL imaging at full repetition rates, several challenges have to be overcome before the next resolution revolution can be achieved. New biomarkers for metabolic diseases: taking advantage of the selectivity and sensitivity obtained when combining infrared ion spectroscopy and mass spectrometry, scientists can provide a structural fingerprint of small molecules in biological samples, such as blood or urine. This methodology is creating new possibilities for better understanding metabolic diseases and developing novel diagnostic and therapeutic strategies.
Research:
Surgery: Research by Glenn Edwards and colleagues at Vanderbilt University's FEL Center in 1994 found that soft tissues including skin, cornea, and brain tissue could be cut, or ablated, using infrared FEL wavelengths around 6.45 micrometres with minimal collateral damage to adjacent tissue. This led to surgeries on humans, the first ever using a free-electron laser. Starting in 1999, Copeland and Konrad performed three surgeries in which they resected meningioma brain tumors. Beginning in 2000, Joos and Mawn performed five surgeries that cut a window in the sheath of the optic nerve, to test the efficacy for optic nerve sheath fenestration. These eight surgeries produced results consistent with the standard of care and with the added benefit of minimal collateral damage. A review of FELs for medical uses is given in the 1st edition of Tunable Laser Applications.
Research:
Fat removal: Several small, clinical lasers tunable in the 6 to 7 micrometre range with pulse structure and energy to give minimal collateral damage in soft tissue have been created. At Vanderbilt, there exists a Raman-shifted system pumped by an Alexandrite laser. Rox Anderson proposed the medical application of the free-electron laser in melting fats without harming the overlying skin. At infrared wavelengths, water in tissue was heated by the laser, but at wavelengths corresponding to 915, 1210 and 1720 nm, subsurface lipids were differentially heated more strongly than water. The possible applications of this selective photothermolysis (heating tissues using light) include the selective destruction of sebum lipids to treat acne, as well as targeting other lipids associated with cellulite and body fat, and the fatty plaques that form in arteries, which could help treat atherosclerosis and heart disease.
Research:
Military: FEL technology is being evaluated by the US Navy as a candidate for an antiaircraft and anti-missile directed-energy weapon. The Thomas Jefferson National Accelerator Facility's FEL has demonstrated over 14 kW power output. Compact multi-megawatt class FEL weapons are undergoing research. On June 9, 2009 the Office of Naval Research announced it had awarded Raytheon a contract to develop a 100 kW experimental FEL. On March 18, 2010 Boeing Directed Energy Systems announced the completion of an initial design for U.S. Naval use. A prototype FEL system was demonstrated, with a full-power prototype scheduled by 2018.
FEL Prize Winners:
The FEL prize is given to a person who has contributed significantly to the advancement of the field of Free-Electron Lasers. In addition, it gives the international FEL community the opportunity to recognize one of its members for her or his outstanding achievements.
FEL Prize Winners:
1988 John Madey
1989 William Colson
1990 Todd Smith and Luis Elias
1991 Phillip Sprangle and Nikolai Vinokurov
1992 Robert Phillips
1993 Roger Warren
1994 Alberto Renieri and Giuseppe Dattoli
1995 Richard Pantell and George Bekefi
1996 Charles Brau
1997 Kwang-Je Kim
1998 John Walsh
1999 Claudio Pellegrini
2000 Stephen V. Benson, Eisuke J. Minehara, and George R. Neil
2001 Michel Billardon, Marie-Emmanuelle Couprie, and Jean-Michel Ortega
2002 H. Alan Schwettman and Alexander F.G. van der Meer
2003 Li-Hua Yu
2004 Vladimir Litvinenko and Hiroyuki Hama
2005 Avraham (Avi) Gover
2006 Evgueni Saldin and Jörg Rossbach
2007 Ilan Ben-Zvi and James Rosenzweig
2008 Samuel Krinsky
2009 David Dowell and Paul Emma
2010 Sven Reiche
2011 Tsumoru Shintake
2012 John Galayda
2013 Luca Giannessi and Young Uk Jeong
2014 Zhirong Huang and William Fawley
2015 Mikhail Yurkov and Evgeny Schneidmiller
2017 Bruce Carlsten, Dinh Nguyen and Richard Sheffield
2019 Enrico Allaria, Gennady Stupakov, and Alex Lumpkin
2022 Brian McNeil and Ying Wu
Young Scientist FEL Award:
The Young Scientist FEL Award (or "Young Investigator FEL Prize") is intended to honor outstanding contributions to FEL science and technology from a person who is less than 35 years of age.
FEL Prize Winners:
2008 Michael Röhrs
2009 Pavel Evtushenko
2010 Guillaume Lambert
2011 Marie Labat
2012 Daniel F. Ratner
2013 Dao Xiang
2014 Erik Hemsing
2015 Agostino Marinelli and Haixiao Deng
2017 Eugenio Ferrari and Eléonore Roussel
2019 Joe Duris and Chao Feng
2022 Zhen Zhang, Jiawei Yan, and Svitozar Serkez
**S. Simon Wong**
S. Simon Wong:
S. Simon Wong is a professor in the Stanford Department of Electrical Engineering. He is affiliated faculty in the Stanford Non-Volatile Memory Technology Research Initiative (NMTRI), System X Alliance, and Bio-X.
Education:
S. Simon Wong received two bachelor's degrees from the University of Minnesota, in electrical engineering and mechanical engineering. He completed an M.S. and a Ph.D. in electrical engineering at the University of California, Berkeley.
Academic career and research:
Wong joined the faculty of Stanford Department of Electrical Engineering in 1988.
He studies the fabrication and design of high-performance integrated circuits. His work focuses on understanding and overcoming the limitations of circuit performance imposed by device, interconnect, and on-chip components.
**Voice portal**
Voice portal:
Voice portals are the voice equivalent of web portals, giving access to information through spoken commands and voice responses. Ideally a voice portal could be an access point for any type of information, services, or transactions found on the Internet. Common uses include movie time listings and stock trading.
In telecommunications circles, voice portals may be referred to as interactive voice response (IVR) systems, but this term also includes DTMF services.
With the emergence of conversational assistants such as Apple's Siri, Amazon Alexa, Google Assistant, Microsoft Cortana, and Samsung's Bixby, voice portals can now be accessed through mobile devices and far-field voice smart speakers such as the Amazon Echo and Google Home.
Advantages:
Voice portals have no dependency on the access device; even low-end mobile handsets can access the service. Voice portals talk to users in their local language, and there is less customer learning required for using voice services compared to Internet- or SMS-based services. A complex search query that would otherwise take multiple widgets (drop-downs, check boxes, text boxes) can easily and effortlessly be formulated by anyone who can speak, without needing to be familiar with any visual interface. For instance, one can say, "Find me an eyeliner, not too thick, dark brown, from Estee Lauder MAC, that's below thirty dollars" or "What is the closest liquor store from here and what time do they close?"
Limitations:
Voice is the most natural communication medium, but the information that can be provided is limited compared to visual media. For example, most Internet users try a search term, scan the results, then adjust the search term to eliminate irrelevant results. They may take two or three quick iterations to get a list that they are confident will contain what they are looking for. The equivalent approach is not practical when results are spoken, as it would take far too long. In this case, a multimodal interaction would be preferable to a voice-only interface.
Trends:
Live-agent and Internet-based voice portals are converging, and the range of information they can provide is expanding.
Trends:
Live-agent portals are introducing greater automation through speech recognition and text-to-speech technology, in many cases providing fully automated service, while automated Internet-based portals are adding operator fallback in premium services. The live-agent portals, which used to rely entirely on pre-structured databases holding specific types of information, are expanding into more free-form Internet access, while the Internet-based portals are adding pre-structured content to improve automation of the more common types of request. Speech technology is starting to introduce artificial intelligence concepts that make it practical to recognise a much broader range of utterances, learning from experience. This promises to make it practical to greatly improve speaker recognition rates and expand the range of information that can be provided by a voice portal.
Technology providers:
A number of web-based companies are dedicated to providing voice-based access to Internet information to consumers.
Quack.com launched its service in March 2000 and later obtained the first overall voice portal patent. Quack.com was acquired by AOL in 2000 and relaunched as AOL By Phone later that year. Tellme Networks was acquired by Microsoft in 2007.
Nuance, the dominant provider of speech recognition and text-to-speech technology, is starting to deliver voice portal solutions. Other companies in this space include TelSurf Networks, FonGenie, Apptera and Call Genie.
Apart from public voice portal services, a number of technology companies, including Alcatel-Lucent, Avaya, and Cisco, offer commercial enterprise-grade voice portal products to be used by companies to serve their clients. Avaya also has a carrier-grade portfolio.
**Inferring horizontal gene transfer**
Inferring horizontal gene transfer:
Horizontal or lateral gene transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate investigations of the evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages. Computational identification of HGT events relies upon the investigation of sequence composition or the evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events.
Overview:
Horizontal gene transfer was first observed in 1928, in Frederick Griffith's experiment: showing that virulence was able to pass from virulent to non-virulent strains of Streptococcus pneumoniae, Griffith demonstrated that genetic information can be horizontally transferred between bacteria via a mechanism known as transformation. Similar observations in the 1940s and 1950s showed evidence that conjugation and transduction are additional mechanisms of horizontal gene transfer. To infer HGT events, which may not necessarily result in phenotypic changes, most contemporary methods are based on analyses of genomic sequence data. These methods can be broadly separated into two groups: parametric and phylogenetic methods. Parametric methods search for sections of a genome that significantly differ from the genomic average, such as GC content or codon usage. Phylogenetic methods examine the evolutionary histories of the genes involved and identify conflicting phylogenies. Phylogenetic methods can be further divided into those that reconstruct and compare phylogenetic trees explicitly, and those that use surrogate measures in place of the phylogenetic trees. The main feature of parametric methods is that they only rely on the genome under study to infer HGT events that may have occurred on its lineage. This was a considerable advantage in the early days of the sequencing era, when few closely related genomes were available for comparative methods. However, because they rely on the uniformity of the host's signature to infer HGT events, not accounting for the host's intra-genomic variability will result in overpredictions, flagging native segments as possible HGT events. Similarly, the transferred segments need to exhibit the donor's signature and to be significantly different from the recipient's. Furthermore, genomic segments of foreign origin are subject to the same mutational processes as the rest of the host genome, and so the difference between the two tends to vanish over time, a process referred to as amelioration. This limits the ability of parametric methods to detect ancient HGTs.
Overview:
Phylogenetic methods benefit from the recent availability of many sequenced genomes. Indeed, as for all comparative methods, phylogenetic methods can integrate information from multiple genomes, and in particular integrate them using a model of evolution. This lends them the ability to better characterize the HGT events they infer—notably by designating the donor species and time of the transfer. However, models have limits and need to be used cautiously. For instance, the conflicting phylogenies can be the result of events not accounted for by the model, such as unrecognized paralogy due to duplication followed by gene losses. Also, many approaches rely on a reference species tree that is supposed to be known, when in many instances it can be difficult to obtain a reliable tree. Finally, the computational costs of reconstructing many gene/species trees can be prohibitively expensive. Phylogenetic methods tend to be applied to genes or protein sequences as basic evolutionary units, which limits their ability to detect HGT in regions outside or across gene boundaries.
Overview:
Because of their complementary approaches—and often non-overlapping sets of HGT candidates—combining predictions from parametric and phylogenetic methods can yield a more comprehensive set of HGT candidate genes. Indeed, combining different parametric methods has been reported to significantly improve the quality of predictions. Moreover, in the absence of a comprehensive set of true horizontally transferred genes, discrepancies between different methods might be resolved through combining parametric and phylogenetic methods. However, combining inferences from multiple methods also entails a risk of an increased false-positive rate.
Parametric methods:
Parametric methods to infer HGT use characteristics of the genome sequence specific to particular species or clades, also called genomic signatures. If a fragment of the genome strongly deviates from the genomic signature, this is a sign of a potential horizontal transfer. For example, because bacterial GC content falls within a wide range, the GC content of a genome segment is a simple genomic signature. Commonly used genomic signatures include nucleotide composition, oligonucleotide frequencies, or structural features of the genome. To detect HGT using parametric methods, the host's genomic signature needs to be clearly recognizable. However, the host's genome is not always uniform with respect to the genome signature: for example, the GC content of the third codon position is lower close to the replication terminus, and GC content tends to be higher in highly expressed genes. Not accounting for such intra-genomic variability in the host can result in over-predictions, flagging native segments as HGT candidates. Larger sliding windows can account for this variability at the cost of a reduced ability to detect smaller HGT regions. Just as importantly, horizontally transferred segments need to exhibit the donor's genomic signature. This might not be the case for ancient transfers, where transferred sequences are subjected to the same mutational processes as the rest of the host genome, potentially causing their distinct signatures to "ameliorate" and become undetectable through parametric methods. For example, Bdellovibrio bacteriovorus, a predatory δ-Proteobacterium, has a homogeneous GC content, and it might be concluded that its genome is resistant to HGT. However, subsequent analysis using phylogenetic methods identified a number of ancient HGT events in the genome of B. bacteriovorus. Similarly, if the inserted segment was previously ameliorated to the host's genome, as is the case for prophage insertions, parametric methods might miss predicting these HGT events. Also, the donor's composition must significantly differ from the recipient's to be identified as abnormal, a condition that might be missed in the case of short- to medium-distance HGT, which are the most prevalent. Furthermore, it has been reported that recently acquired genes tend to be more AT-rich than the recipient's average, which indicates that differences in GC-content signature may result from unknown post-acquisition mutational processes rather than from the donor's genome.
Parametric methods:
Nucleotide composition: Bacterial GC content falls within a wide range, with Ca. Zinderia insecticola having a GC content of 13.5% and Anaeromyxobacter dehalogenans having a GC content of 75%. Even within a closely related group of α-Proteobacteria, values range from approximately 30% to 65%. These differences can be exploited when detecting HGT events, as a significantly different GC content for a genome segment can be an indication of foreign origin.
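As a rough illustration of this composition-based idea, the following Python sketch (a toy example rather than an established tool; the window size, step and threshold are arbitrary assumptions) scans a genome with a sliding window and flags windows whose GC content deviates markedly from the genome-wide average:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def flag_gc_outliers(genome: str, window: int = 5000, step: int = 500,
                     threshold: float = 0.08):
    """Return (start, window_gc) for windows whose GC content deviates from the
    genome-wide average by more than `threshold` (an absolute fraction)."""
    genome_gc = gc_content(genome)
    outliers = []
    for start in range(0, len(genome) - window + 1, step):
        window_gc = gc_content(genome[start:start + window])
        if abs(window_gc - genome_gc) > threshold:
            outliers.append((start, window_gc))
    return outliers
```

A real analysis would additionally have to account for the intra-genomic variability discussed above, for example by comparing windows against local rather than purely global averages.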
Parametric methods:
Oligonucleotide spectrum: The oligonucleotide spectrum (or k-mer frequencies) measures the frequency of all possible nucleotide sequences of a particular length in the genome. It tends to vary less within genomes than between genomes and therefore can also be used as a genomic signature. A deviation from this signature suggests that a genomic segment might have arrived through horizontal transfer.
Parametric methods:
The oligonucleotide spectrum owes much of its discriminatory power to the number of possible oligonucleotides: if n is the size of the vocabulary and w is the oligonucleotide size, the number of possible distinct oligonucleotides is n^w; for example, there are 4^5 = 1024 possible pentanucleotides. Some methods can capture the signal recorded in motifs of variable size, thus capturing both rare and discriminative motifs along with frequent, but more common ones.
Parametric methods:
Codon usage bias, a measure related to codon frequencies, was one of the first detection methods used in methodical assessments of HGT. This approach requires a host genome which contains a bias towards certain synonymous codons (different codons which code for the same amino acid) that is clearly distinct from the bias found within the donor genome. The simplest oligonucleotide used as a genomic signature is the dinucleotide; for example, the third nucleotide in a codon and the first nucleotide in the following codon represent the dinucleotide least restricted by amino acid preference and codon usage. It is important to optimise the size of the sliding window in which to count the oligonucleotide frequency: a larger sliding window will better buffer variability in the host genome at the cost of being worse at detecting smaller HGT regions. A good compromise has been reported using tetranucleotide frequencies in a sliding window of 5 kb with a step of 0.5 kb. A convenient method of modelling oligonucleotide genomic signatures is to use Markov chains. The transition probability matrix can be derived for endogenous vs. acquired genes, from which Bayesian posterior probabilities for particular stretches of DNA can be obtained.
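The sliding-window idea extends directly from GC content to the full oligonucleotide spectrum. The sketch below (again a simplified illustration rather than a published method; the 5 kb/0.5 kb window and the Euclidean distance are assumptions borrowed from the compromise mentioned above) computes the tetranucleotide signature of each window and its distance to the whole-genome signature, so that unusually distant windows can be flagged as HGT candidates:

```python
from collections import Counter
from math import sqrt

def kmer_freqs(seq: str, k: int = 4) -> dict:
    """Normalised k-mer frequency vector (the oligonucleotide spectrum)."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return {kmer: n / total for kmer, n in counts.items()}

def signature_distance(f1: dict, f2: dict) -> float:
    """Euclidean distance between two k-mer frequency vectors."""
    keys = set(f1) | set(f2)
    return sqrt(sum((f1.get(x, 0.0) - f2.get(x, 0.0)) ** 2 for x in keys))

def scan_kmer_deviation(genome: str, window: int = 5000, step: int = 500, k: int = 4):
    """Distance of each window's k-mer spectrum to the whole-genome signature."""
    genome_sig = kmer_freqs(genome, k)
    return [(start, signature_distance(kmer_freqs(genome[start:start + window], k), genome_sig))
            for start in range(0, len(genome) - window + 1, step)]
```

Windows whose distance falls far in the tail of the resulting distribution would be the candidates passed on for further inspection.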
Parametric methods:
Structural features: Just as the nucleotide composition of a DNA molecule can be represented by a sequence of letters, its structural features can be encoded in a numerical sequence. The structural features include interaction energies between neighbouring base pairs, the angle of twist that makes two bases of a pair non-coplanar, or DNA deformability induced by the proteins shaping the chromatin. The autocorrelation analysis of some of these numerical sequences shows characteristic periodicities in complete genomes. In fact, after detecting archaea-like regions in the thermophilic bacterium Thermotoga maritima, periodicity spectra of these regions were compared to the periodicity spectra of the homologous regions in the archaeon Pyrococcus horikoshii. The revealed similarities in periodicity were strong supporting evidence for a case of massive HGT between bacteria and archaea.
Parametric methods:
Genomic context: The existence of genomic islands, short (typically 10–200 kb long) regions of a genome which have been acquired horizontally, lends support to the ability to identify non-native genes by their location in a genome. For example, a gene of ambiguous origin which forms part of a non-native operon could be considered to be non-native. Alternatively, flanking repeat sequences or the presence of nearby integrases or transposases can indicate a non-native region. A machine-learning approach combining oligonucleotide frequency scans with context information was reported to be effective at identifying genomic islands. In another study, the context was used as a secondary indicator, after removal of genes which are strongly thought to be native or non-native through the use of other parametric methods.
Phylogenetic methods:
The use of phylogenetic analysis in the detection of HGT was advanced by the availability of many newly sequenced genomes. Phylogenetic methods detect inconsistencies in gene and species evolutionary history in two ways: explicitly, by reconstructing the gene tree and reconciling it with the reference species tree, or implicitly, by examining aspects that correlate with the evolutionary history of the genes in question, e.g., patterns of presence/absence across species, or unexpectedly short or distant pairwise evolutionary distances.
Phylogenetic methods:
Explicit phylogenetic methods: The aim of explicit phylogenetic methods is to compare gene trees with their associated species trees. While weakly supported differences between gene and species trees can be due to inference uncertainty, statistically significant differences can be suggestive of HGT events. For example, if two genes from different species share the most recent ancestral connecting node in the gene tree, but the respective species are spaced apart in the species tree, an HGT event can be invoked. Such an approach can produce more detailed results than parametric approaches because the involved species, time and direction of transfer can potentially be identified.
Phylogenetic methods:
As discussed in more detail below, phylogenetic methods range from simple methods merely identifying discordance between gene and species trees to mechanistic models inferring probable sequences of HGT events. An intermediate strategy entails deconstructing the gene tree into smaller parts until each matches the species tree (genome spectral approaches).
Phylogenetic methods:
Explicit phylogenetic methods rely upon the accuracy of the input rooted gene and species trees, yet these can be challenging to build. Even when there is no doubt in the input trees, the conflicting phylogenies can be the result of evolutionary processes other than HGT, such as duplications and losses, causing these methods to erroneously infer HGT events when paralogy is the correct explanation. Similarly, in the presence of incomplete lineage sorting, explicit phylogeny methods can erroneously infer HGT events. That is why some explicit model-based methods test multiple evolutionary scenarios involving different kinds of events, and compare their fit to the data given parsimonious or probabilistic criteria.
Phylogenetic methods:
Tests of topologies: To detect sets of genes that fit poorly to the reference tree, one can use statistical tests of topology, such as the Kishino–Hasegawa (KH), Shimodaira–Hasegawa (SH), and Approximately Unbiased (AU) tests. These tests assess the likelihood of the gene sequence alignment when the reference topology is given as the null hypothesis.
The rejection of the reference topology is an indication that the evolutionary history for that gene family is inconsistent with the reference tree. When these inconsistencies cannot be explained using a small number of non-horizontal events, such as gene loss and duplication, an HGT event is inferred.
Phylogenetic methods:
One such analysis checked for HGT in groups of homologs of the γ-Proteobacterial lineage. Six reference trees were reconstructed using either the highly conserved small subunit ribosomal RNA sequences, a consensus of the available gene trees or concatenated alignments of orthologs. The failure to reject the six evaluated topologies, and the rejection of seven alternative topologies, was interpreted as evidence for a small number of HGT events in the selected groups.
Phylogenetic methods:
Tests of topology identify differences in tree topology taking into account the uncertainty in tree inference but they make no attempt at inferring how the differences came about. To infer the specifics of particular events, genome spectral or subtree pruning and regraft methods are required.
Genome spectral approaches: In order to identify the location of HGT events, genome spectral approaches decompose a gene tree into substructures (such as bipartitions or quartets) and identify those that are consistent or inconsistent with the species tree.
Phylogenetic methods:
Bipartitions: Removing one edge from a reference tree produces two unconnected sub-trees, each a disjoint set of nodes, called a bipartition. If a bipartition is present in both the gene and the species trees, it is compatible; otherwise, it is conflicting. These conflicts can indicate an HGT event or may be the result of uncertainty in gene tree inference. To reduce uncertainty, bipartition analyses typically focus on strongly supported bipartitions, such as those associated with branches with bootstrap values or posterior probabilities above certain thresholds. Any gene family found to have one or several conflicting, but strongly supported, bipartitions is considered an HGT candidate.
Quartet decomposition: Quartets are trees consisting of four leaves. In bifurcating (fully resolved) trees, each internal branch induces a quartet whose leaves are either subtrees of the original tree or actual leaves of the original tree. If the topology of a quartet extracted from the reference species tree is embedded in the gene tree, the quartet is compatible with the gene tree. Conversely, incompatible strongly supported quartets indicate potential HGT events. Quartet mapping methods are much more computationally efficient and naturally handle heterogeneous representation of taxa among gene families, making them a good basis for developing large-scale scans for HGT, looking for highways of gene sharing in databases of hundreds of complete genomes.
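For a rooted toy example, a bipartition check can be reduced to comparing the leaf sets subtended by internal nodes. The following Python sketch (an illustrative simplification that ignores branch support and assumes rooted trees over the same leaf set, unlike real bipartition analyses on unrooted trees) lists gene-tree bipartitions that have no counterpart in the species tree:

```python
def clades(tree):
    """All leaf sets subtended by internal nodes of a rooted tree given as nested tuples."""
    if not isinstance(tree, tuple):          # a leaf
        return frozenset([tree]), set()
    leaves, inner = set(), set()
    for child in tree:
        child_leaves, child_inner = clades(child)
        leaves |= child_leaves
        inner |= child_inner
    leaves = frozenset(leaves)
    inner.add(leaves)
    return leaves, inner

def conflicting_bipartitions(gene_tree, species_tree):
    """Gene-tree bipartitions (as leaf sets) not found in the species tree."""
    all_leaves, gene_clades = clades(gene_tree)
    _, species_clades = clades(species_tree)
    conflicts = []
    for clade in gene_clades:
        if (clade != all_leaves and clade not in species_clades
                and (all_leaves - clade) not in species_clades):
            conflicts.append(clade)
    return conflicts

# Toy example: in the gene tree, A groups with C instead of B.
species = ((("A", "B"), "C"), "D")
gene    = ((("A", "C"), "B"), "D")
print(conflicting_bipartitions(gene, species))   # [frozenset({'A', 'C'})]
```

In the toy example, the grouping of A with C is the single conflicting bipartition reported, exactly the kind of strongly supported conflict that a genome spectral scan would flag.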
Phylogenetic methods:
Subtree pruning and regrafting: A mechanistic way of modelling an HGT event on the reference tree is to first cut an internal branch, i.e., prune the tree, and then regraft it onto another edge, an operation referred to as subtree pruning and regrafting (SPR). If the gene tree was topologically consistent with the original reference tree, the editing results in an inconsistency. Similarly, when the original gene tree is inconsistent with the reference tree, it is possible to obtain a consistent topology by a series of one or more prune and regraft operations applied to the reference tree. By interpreting the edit path of pruning and regrafting, HGT candidate nodes can be flagged and the host and donor genomes inferred. To avoid reporting false positive HGT events due to uncertain gene tree topologies, the optimal "path" of SPR operations can be chosen among multiple possible combinations by considering the branch support in the gene tree. Weakly supported gene tree edges can be ignored a priori, or the support can be used to compute an optimality criterion. Because conversion of one tree to another by a minimum number of SPR operations is NP-hard, solving the problem becomes considerably more difficult as more nodes are considered. The computational challenge lies in finding the optimal edit path, i.e., the one that requires the fewest steps, and different strategies are used in solving the problem. For example, the HorizStory algorithm reduces the problem by first eliminating the consistent nodes; recursive pruning and regrafting reconciles the reference tree with the gene tree, and optimal edits are interpreted as HGT events. The SPR methods included in the supertree reconstruction package SPRSupertrees substantially decrease the time of the search for the optimal set of SPR operations by considering multiple localised sub-problems in large trees through a clustering approach. The T-REX web server includes a number of HGT detection methods (mostly SPR-based) and allows users to calculate the bootstrap support of the inferred transfers.
Phylogenetic methods:
Model-based reconciliation methods: Reconciliation of gene and species trees entails mapping evolutionary events onto gene trees in a way that makes them concordant with the species tree. Different reconciliation models exist, differing in the types of event they consider to explain the incongruences between gene and species tree topologies. Early methods exclusively modelled horizontal transfers (T). More recent ones also account for duplication (D), loss (L), incomplete lineage sorting (ILS) or homologous recombination (HR) events. The difficulty is that by allowing for multiple types of events, the number of possible reconciliations increases rapidly. For instance, a conflicting gene tree topology might be explained in terms of a single HGT event or of multiple duplication and loss events. Both alternatives can be considered plausible reconciliations depending on the frequency of these respective events along the species tree.
Phylogenetic methods:
Reconciliation methods can rely on a parsimonious or a probabilistic framework to infer the most likely scenario(s), where the relative cost/probability of D, T, L events can be fixed a priori or estimated from the data. The space of DTL reconciliations and their parsimony costs, which can be extremely vast for large multi-copy gene family trees, can be efficiently explored through dynamic programming algorithms. In some programs, the gene tree topology can be refined where it was uncertain to fit a better evolutionary scenario, as well as the initial sequence alignment. More refined models account for the biased frequency of HGT between closely related lineages, reflecting the loss of efficiency of HR with phylogenetic distance, for ILS, or for the fact that the actual donors of most HGTs belong to extinct or unsampled lineages. Further extensions of DTL models are being developed towards an integrated description of genome evolution processes. In particular, some of them consider horizontal transfer at multiple scales, modelling independent evolution of gene fragments or recognising co-evolution of several genes (e.g., due to co-transfer) within and across genomes.
Phylogenetic methods:
Implicit phylogenetic methods: In contrast to explicit phylogenetic methods, which compare the agreement between gene and species trees, implicit phylogenetic methods compare evolutionary distances or sequence similarity. Here, an unexpectedly short or long distance from a given reference compared to the average can be suggestive of an HGT event. Because tree construction is not required, implicit approaches tend to be simpler and faster than explicit methods.
Phylogenetic methods:
However, implicit methods can be limited by disparities between the underlying correct phylogeny and the evolutionary distances considered. For instance, the most similar sequence as obtained by the highest-scoring BLAST hit is not always the evolutionarily closest one.
Phylogenetic methods:
Top sequence match in a distant species: A simple way of identifying HGT events is by looking for high-scoring sequence matches in distantly related species. For example, an analysis of the top BLAST hits of protein sequences in the bacterium Thermotoga maritima revealed that most hits were in archaea rather than closely related bacteria, suggesting extensive HGT between the two; these predictions were later supported by an analysis of the structural features of the DNA molecule. However, this method is limited to detecting relatively recent HGT events. Indeed, if the HGT occurred in the common ancestor of two or more species included in the database, the closest hit will reside within that clade and therefore the HGT will not be detected by the method. Thus, the minimum number of foreign top BLAST hits required to conclude that a gene was transferred is highly dependent on the taxonomic coverage of sequence databases, and experimental settings may need to be defined in an ad hoc way.
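In practice, such a scan amounts to taking the single best hit per query and checking whether it falls inside or outside the organism's own clade. The fragment below is a hypothetical sketch: it assumes the hit table has already been reduced to tab-separated columns of query, subject, bit score and a subject lineage string (not a standard BLAST output format as such), and simply flags queries whose best hit lies outside a stated clade:

```python
import csv

def top_hits(tsv_path: str) -> dict:
    """Best-scoring hit per query from a tab-separated table whose columns are
    (query, subject, bitscore, subject_lineage) -- a hypothetical, pre-annotated layout."""
    best = {}
    with open(tsv_path, newline="") as handle:
        for query, subject, bitscore, lineage in csv.reader(handle, delimiter="\t"):
            score = float(bitscore)
            if query not in best or score > best[query][0]:
                best[query] = (score, subject, lineage)
    return best

def flag_distant_top_hits(tsv_path: str, self_clade: str) -> dict:
    """Queries whose single best hit lies outside the organism's own clade."""
    return {query: hit for query, hit in top_hits(tsv_path).items()
            if self_clade not in hit[2]}

# Hypothetical usage: flag_distant_top_hits("annotated_hits.tsv", self_clade="Bacteria")
```

As noted above, the interpretation of a "foreign" best hit depends heavily on how densely the relevant clades are sampled in the database.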
Phylogenetic methods:
Discrepancy between gene and species distances: The molecular clock hypothesis posits that homologous genes evolve at an approximately constant rate across different species. If one only considers homologous genes related through speciation events (referred to as "orthologous" genes), their underlying tree should by definition correspond to the species tree. Therefore, assuming a molecular clock, the evolutionary distance between orthologous genes should be approximately proportional to the evolutionary distances between their respective species. If a putative group of orthologs contains xenologs (pairs of genes related through an HGT), the proportionality of evolutionary distances may only hold among the orthologs, not the xenologs. Simple approaches compare the distribution of similarity scores of particular sequences and their orthologous counterparts in other species; HGTs are inferred from outliers. The more sophisticated DLIGHT ('Distance Likelihood-based Inference of Genes Horizontally Transferred') method considers simultaneously the effect of HGT on all sequences within groups of putative orthologs: if a likelihood-ratio test of the HGT hypothesis versus a hypothesis of no HGT is significant, a putative HGT event is inferred. In addition, the method allows inference of potential donor and recipient species and provides an estimation of the time since the HGT event.
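A crude version of the outlier idea can be written in a few lines. The sketch below (a toy check, far simpler than likelihood-based methods such as DLIGHT; the three-fold cutoff is an arbitrary assumption) compares, pair by pair, a gene-level distance with the corresponding species-level distance and reports pairs that break the overall proportionality:

```python
from statistics import median

def distance_outliers(gene_dist: dict, species_dist: dict, fold: float = 3.0) -> dict:
    """Species pairs whose gene-level distance departs from the overall
    gene/species proportionality by more than `fold` in either direction.
    Both inputs map a (species_a, species_b) pair to an evolutionary distance."""
    ratios = {pair: gene_dist[pair] / species_dist[pair]
              for pair in gene_dist if species_dist.get(pair, 0) > 0}
    if not ratios:
        return {}
    typical = median(ratios.values())
    return {pair: ratio for pair, ratio in ratios.items()
            if ratio > fold * typical or ratio < typical / fold}
```

Pairs flagged this way would correspond to putative xenologs, whose distances no longer track the species divergence.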
Phylogenetic methods:
Phylogenetic profiles: A group of orthologous or homologous genes can be analysed in terms of the presence or absence of group members in the reference genomes; such patterns are called phylogenetic profiles. To find HGT events, phylogenetic profiles are scanned for an unusual distribution of genes. The absence of a homolog in some members of a group of closely related species is an indication that the examined gene might have arrived via an HGT event. For example, the three facultatively symbiotic Frankia sp. strains are of strikingly different sizes: 5.43 Mbp, 7.50 Mbp and 9.04 Mbp, depending on their range of hosts. Marked portions of strain-specific genes were found to have no significant hit in the reference database, and were possibly acquired by horizontal transfer from other bacteria. Similarly, the three phenotypically diverse Escherichia coli strains (uropathogenic, enterohemorrhagic and benign) share about 40% of the total combined gene pool, with the other 60% being strain-specific genes and consequently HGT candidates. Further evidence for these genes resulting from HGT was their strikingly different codon usage patterns from the core genes and a lack of gene order conservation (order conservation is typical of vertically evolved genes). The presence/absence of homologs (or their effective count) can thus be used by programs to reconstruct the most likely evolutionary scenario along the species tree. Just as with reconciliation methods, this can be achieved through parsimonious or probabilistic estimation of the number of gain and loss events. Models can be complexified by adding processes, like the truncation of genes, but also by modelling the heterogeneity of rates of gain and loss across lineages and/or gene families.
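A minimal profile scan can be expressed directly on presence/absence sets. The following Python sketch (a deliberately naive illustration; real tools estimate gain and loss events along the species tree rather than applying a flat present/absent rule) flags gene families that are present in a focal genome and in distant taxa but missing from all of its close relatives:

```python
def patchy_families(profiles: dict, focal: str, close_relatives: set, distant_taxa: set) -> list:
    """Gene families present in the focal genome and in distant taxa but absent
    from every close relative -- a patchy distribution suggestive of acquisition by HGT.
    `profiles` maps a gene family name to the set of genomes containing a member."""
    candidates = []
    for family, genomes in profiles.items():
        if focal in genomes and not (genomes & close_relatives) and (genomes & distant_taxa):
            candidates.append(family)
    return candidates

# Hypothetical usage, with made-up genome labels:
# patchy_families(profiles, focal="E_coli_K12",
#                 close_relatives={"E_coli_O157", "S_flexneri"},
#                 distant_taxa={"P_aeruginosa", "B_subtilis"})
```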
Phylogenetic methods:
Clusters of polymorphic sites: Genes are commonly regarded as the basic units transferred through an HGT event. However, it is also possible for HGT to occur within genes. For example, it has been shown that horizontal transfer between closely related species results in more exchange of ORF fragments, a type of transfer called gene conversion, mediated by homologous recombination. The analysis of a group of four Escherichia coli and two Shigella flexneri strains revealed that the sequence stretches common to all six strains contain polymorphic sites, consequences of homologous recombination. Clusters with an excess of polymorphic sites can thus be used to detect tracks of DNA recombined with a distant relative. This method of detection is, however, restricted to the sites in common to all analysed sequences, limiting the analysis to a group of closely related organisms.
Evaluation:
The existence of the numerous and varied methods to infer HGT raises the question of how to validate individual inferences and of how to compare the different methods.
Evaluation:
A main problem is that, as with other types of phylogenetic inferences, the actual evolutionary history cannot be established with certainty. As a result, it is difficult to obtain a representative test set of HGT events. Furthermore, HGT inference methods vary considerably in the information they consider and often identify inconsistent groups of HGT candidates: it is not clear to what extent taking the intersection, the union, or some other combination of the individual methods affects the false positive and false negative rates. Parametric and phylogenetic methods draw on different sources of information; it is therefore difficult to make general statements about their relative performance. Conceptual arguments can however be invoked. While parametric methods are limited to the analysis of single or pairs of genomes, phylogenetic methods provide a natural framework to take advantage of the information contained in multiple genomes. In many cases, segments of genomes inferred as HGT based on their anomalous composition can also be recognised as such on the basis of phylogenetic analyses or through their mere absence in genomes of related organisms. In addition, phylogenetic methods rely on explicit models of sequence evolution, which provide a well-understood framework for parameter inference, hypothesis testing, and model selection. This is reflected in the literature, which tends to favour phylogenetic methods as the standard of proof for HGT. The use of phylogenetic methods thus appears to be the preferred standard, especially given that the increase in computational power coupled with algorithmic improvements has made them more tractable, and that the ever denser sampling of genomes lends more power to these tests.
Evaluation:
Considering phylogenetic methods, several approaches to validating individual HGT inferences and benchmarking methods have been adopted, typically relying on various forms of simulation. Because the truth is known in simulation, the number of false positives and the number of false negatives are straightforward to compute. However, simulating data does not trivially resolve the problem, because the true extent of HGT in nature remains largely unknown, and specifying rates of HGT in the simulated model is always hazardous. Nonetheless, studies involving the comparison of several phylogenetic methods in a simulation framework can provide quantitative assessments of their respective performances, and thus help the biologist choose appropriate tools objectively. Standard tools to simulate sequence evolution along trees, such as INDELible or PhyloSim, can be adapted to simulate HGT. HGT events cause the relevant gene trees to conflict with the species tree. Such HGT events can be simulated through subtree pruning and regrafting rearrangements of the species tree. However, it is important to simulate data that are realistic enough to be representative of the challenge provided by real datasets, and simulations under complex models are thus preferable. A model was developed to simulate gene trees with heterogeneous substitution processes in addition to the occurrence of transfer, accounting for the fact that transfers can come from now extinct donor lineages. Alternatively, the genome evolution simulator ALF directly generates gene families subject to HGT, by accounting for a whole range of evolutionary forces at the base level, but in the context of a complete genome. Given simulated sequences which have HGT, analysis of those sequences using the methods of interest and comparison of their results with the known truth permits study of their performance. Similarly, testing the methods on sequences known not to have HGT enables the study of false positive rates.
Evaluation:
Simulation of HGT events can also be performed by manipulating the biological sequences themselves. Artificial chimeric genomes can be obtained by inserting known foreign genes into random positions of a host genome. The donor sequences are inserted into the host unchanged or can be further evolved by simulation, e.g., using the tools described above.
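The chimeric-genome strategy is easy to reproduce in a few lines. The sketch below (an illustrative toy, not one of the simulators cited above) splices donor gene sequences into random positions of a host genome and records the true insertion coordinates so that detector output can later be scored against them:

```python
import random

def make_chimeric_genome(host: str, donor_genes: list, seed: int = 42):
    """Insert donor genes at random positions of a host genome, returning the
    chimeric sequence and the true (start, end) of each insert for benchmarking."""
    rng = random.Random(seed)
    positions = sorted(rng.randrange(len(host) + 1) for _ in donor_genes)
    pieces, truth, prev, offset = [], [], 0, 0
    for pos, gene in zip(positions, donor_genes):
        pieces.append(host[prev:pos])        # host segment before this insert
        truth.append((pos + offset, pos + offset + len(gene)))
        pieces.append(gene)
        offset += len(gene)
        prev = pos
    pieces.append(host[prev:])
    return "".join(pieces), truth
```

The returned coordinate list plays the role of the known truth: any detector run on the chimeric sequence can be scored by how many of these intervals it recovers and how many extra regions it reports.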
One important caveat to simulation as a way to assess different methods is that simulation is based on strong simplifying assumptions which may favour particular methods.
**Canon EOS 6D Mark II**
Canon EOS 6D Mark II:
The Canon EOS 6D Mark II is a 26.2-megapixel full-frame digital single-lens reflex camera announced by Canon on June 29, 2017. Impressions from the Canon press event were mixed, with many saying the camera is "a sizeable upgrade, but feels dated". Critics point out that the 6D Mark II does not support 4K video shooting and that its 45 AF points are dense around the center, resulting in slower focus-and-recompose maneuvers when photographing moving subjects.
Main features:
New features over the EOS 6D are:
New 26.2-megapixel CMOS sensor with Dual Pixel CMOS AF (27.1 megapixels total), replacing the 20-megapixel CMOS sensor with contrast-detect AF.
DIGIC 7 processor; standard ISO 100–40000, expandable from L: 50 to H1: 51200 and H2: 102400 (compared with DIGIC 5+ and ISO 100–25600, H: 51200, H2: 102400).
New 7560-pixel RGB+IR metering sensor to aid the AF system.
45 cross-type AF points, compared with 11 points (of which only the center point was cross-type) on the EOS 6D.
Main features:
At f/8, autofocusing is normally only possible with the center AF point. However, 27 points will autofocus when the body is attached to one of only two lens/teleconverter combinations with a maximum aperture of f/8: the EF 100–400mm f/4.5–5.6L IS II with Extender EF 1.4x III, and the EF 200–400mm f/4L IS Extender 1.4x with Extender EF 2x III (AF at 27 focus points, with the central 9 points acting as cross-type points). The EOS 6D Mark II is the first non-professional full-frame EOS body that can autofocus in this situation; previous non-professional bodies could not autofocus if the maximum aperture of an attached lens/teleconverter combination was smaller than f/5.6. (This feature had previously been included in three non-professional APS-C bodies: first the 80D, followed by the 77D and EOS 800D/Rebel T7i.) At f/5.6, the center AF point supports focusing down to EV -2.
Main features:
With f/2.8 lens, AF sensitive down to EV -3.
High-speed Continuous Shooting at up to 6.5 fps, 4 fps in Live view mode with Servo AF.
For anti-flicker shooting: max. approx. 5.6 shots/sec.
Low-speed continuous shooting: 3 fps.
Built-in NFC and Bluetooth.
1080p video recording at 60/50 fps.
4K time-lapse movie.
Built-in HDR and time-lapse recording capability.
Anti-flicker shooting.
Flash sync speed: 1/180 s.
Battery life: 1,200 shots (1,100 at 0 °C/32 °F; 380 with live view, or 340 with live view at 0 °C/32 °F).
New intelligent viewfinder with grid, dual-axis electronic level, and warning icons.
Fully articulated touchscreen, compared to the fixed screen on the EOS 6D.
Supports Panning mode in SCN.
**GBLD-345**
GBLD-345:
GBLD-345 is an anxiolytic drug used in scientific research, which acts as a non-selective, full-efficacy positive allosteric modulator of the GABAA receptor. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic.
**SPRED3**
SPRED3:
Sprouty-related, EVH1 domain-containing protein 3, also known as Spred-3, is a protein that in humans is encoded by the SPRED3 gene. Spred-3 is a member of the Sprouty (see SPRY1/SPRED) family of proteins that regulate growth factor-induced activation of the MAP kinase cascade.
**P2i**
P2i:
P2i is a nanotechnology development company that works with manufacturers to apply liquid-repellent nano-coating protection to products in the electronics, lifestyle, life sciences, filtration and energy, and military and institutional sectors. The company was established in 2004 to commercialize technologies developed by the UK MoD's Defence Science and Technology Laboratory. In 2010 the company acquired Surface Innovations Limited, adding new technologies such as antimicrobial, super-hydrophilic and protein-resistance coatings. In addition to headquarters in the United Kingdom, P2i has a processing facility in the United States, offices in Shenzhen, China and Taipei City, Taiwan, and representation in Korea. As of February 2016 P2i had deployed hundreds of its nano-coating systems globally, including at factories in Brazil, Argentina, the USA, China, India, Japan, Switzerland, Germany and the UK.
Technology:
The treatment decreases the surface energy of objects by adding a perfluorinated carbon polymer coating onto exposed surfaces. The coating reduces the disruption of intermolecular bonds within a liquid. Liquids thus tend to bead up instead of penetrating or absorbing into the object. The treatment process coats objects using a pulsed plasma-enhanced chemical vapor deposition apparatus at room temperature. The coating is introduced as a vapor and ionized. This deposits a polymerized layer of the plasma monomers, which binds covalently and durably to the object's surface. The mild temperature and pressure conditions of this process permit a broad variety of items and materials to be treated. Because it occurs at low pressure, the coating penetrates complex three-dimensional objects, protecting them internally and externally in a single process. The coating, being very thin, does not alter the look or feel of solid objects, or the vapor porosity of fabrics.
History:
P2i's process is based on the research and development work of Stephen Coulson and Jas Pal Badyal at Durham University. Coulson's work was funded by the United Kingdom's Defence Science and Technology Laboratory (DSTL) which aimed to protect clothing from water and other liquids while maximizing comfort.
P2i Ltd was established as a stand-alone company in 2004, as the first DSTL technology spin-out managed by Ploughshare Innovations (the DSTL's technology transfer company).On 13 July 2010, P2i announced the acquisition of Surface Innovations Ltd, a UK-based technology company with several functional nano-coating patent families in areas such as anti-bacterial resistance and liquid attracting (super wettable).
Applications and brands:
According to P2i's website, current applications for its technology fall into the categories of 'Electronics, Lifestyle, Life Sciences, Filtration & Energy and Military & Institutional'. For consumer-facing products the company has created sector-specific trademarked brands.
Applications and brands:
The ion-mask brand was used in lifestyle products such as footwear, outdoor clothing and accessories (gloves and headwear). ion-mask products were on sale from several international footwear companies including Timberland, Nike, Adidas Golf, Hi-Tec, Magnum Boots, Van Dal, Teva, and K-Swiss. P2i's nano-coating is used in consumer electronics, focusing initially on the hearing aid sector, where the company claims its coating is applied to 70% of the world's hearing devices. In 2013, P2i diversified its marketing, replacing Aridion with the term Splash-proof to identify the traditional hydrophobic layer technology. P2i has partnered with Motorola and Huawei and several other top smartphone brands, as well as other leading Chinese manufacturers. P2i has the capability to treat over half a billion phones in 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZKSCAN1**
ZKSCAN1:
Zinc finger protein with KRAB and SCAN domains 1 is a protein that in humans is encoded by the ZKSCAN1 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multi-state modeling of biomolecules**
Multi-state modeling of biomolecules:
Multi-state modeling of biomolecules refers to a series of techniques used to represent and compute the behaviour of biological molecules or complexes that can adopt a large number of possible functional states.
Multi-state modeling of biomolecules:
Biological signaling systems often rely on complexes of biological macromolecules that can undergo several functionally significant modifications that are mutually compatible. Thus, they can exist in a very large number of functionally different states. Modeling such multi-state systems poses two problems: the problem of how to describe and specify a multi-state system (the "specification problem") and the problem of how to use a computer to simulate the progress of the system over time (the "computation problem"). To address the specification problem, modelers have in recent years moved away from explicit specification of all possible states and towards rule-based modeling approaches that allow for implicit model specification, including the κ-calculus, BioNetGen, the Allosteric Network Compiler and others. To tackle the computation problem, they have turned to particle-based methods that have in many cases proved more computationally efficient than population-based methods based on ordinary differential equations, partial differential equations, or the Gillespie stochastic simulation algorithm. Given current computing technology, particle-based methods are sometimes the only possible option. Particle-based simulators further fall into two categories: non-spatial simulators such as StochSim, DYNSTOC, RuleMonkey, and NFSim, and spatial simulators, including Meredys, SRSim and MCell. Modelers can thus choose from a variety of tools, with the best choice depending on the particular problem. Development of faster and more powerful methods is ongoing, promising the ability to simulate ever more complex signaling processes in the future.
Introduction:
Multi-state biomolecules in signal transduction In living cells, signals are processed by networks of proteins that can act as complex computational devices. These networks rely on the ability of single proteins to exist in a variety of functionally different states achieved through multiple mechanisms, including post-translational modifications, ligand binding, conformational change, or formation of new complexes. Similarly, nucleic acids can undergo a variety of transformations, including protein binding, binding of other nucleic acids, conformational change and DNA methylation.
Introduction:
In addition, several types of modifications can co-exist, exerting a combined influence on a biological macromolecule at any given time. Thus, a biomolecule or complex of biomolecules can often adopt a very large number of functionally distinct states. The number of states scales exponentially with the number of possible modifications, a phenomenon known as "combinatorial explosion". This is of concern for computational biologists who model or simulate such biomolecules, because it raises questions about how such large numbers of states can be represented and simulated.
Introduction:
Examples of combinatorial explosion Biological signaling networks incorporate a wide array of reversible interactions, post-translational modifications and conformational changes. Furthermore, it is common for a protein to be composed of several - identical or nonidentical - subunits, and for several proteins and/or nucleic acid species to assemble into larger complexes. A molecular species with several of those features can therefore exist in a large number of possible states.
Introduction:
For instance, it has been estimated that the yeast scaffold protein Ste5 can be a part of 25666 unique protein complexes. In E. coli, chemotaxis receptors of four different kinds interact in groups of three, and each individual receptor can exist in at least two possible conformations and has up to eight methylation sites, resulting in billions of potential states. The protein kinase CaMKII is a dodecamer of twelve catalytic subunits, arranged in two hexameric rings. Each subunit can exist in at least two distinct conformations, and each subunit features various phosphorylation and ligand binding sites. A recent model incorporated conformational states, two phosphorylation sites and two modes of binding calcium/calmodulin, for a total of around one billion possible states per hexameric ring. A model of coupling of the EGF receptor to a MAP kinase cascade presented by Danos and colleagues accounts for 10²³ distinct molecular species, yet the authors note several points at which the model could be further extended. A more recent model of ErbB receptor signalling even accounts for more than one googol (10¹⁰⁰) distinct molecular species. The problem of combinatorial explosion is also relevant to synthetic biology, with a recent model of a relatively simple synthetic eukaryotic gene circuit featuring 187 species and 1165 reactions. Of course, not all of the possible states of a multi-state molecule or complex will necessarily be populated. Indeed, in systems where the number of possible states is far greater than the number of molecules in the compartment (e.g. the cell), they cannot all be populated. In some cases, empirical information can be used to rule out certain states if, for instance, some combinations of features are incompatible. In the absence of such information, however, all possible states need to be considered a priori. In such cases, computational modeling can be used to uncover to what extent the different states are populated.
Introduction:
The existence (or potential existence) of such large numbers of molecular species is a combinatorial phenomenon: it arises from a relatively small set of features or modifications (such as post-translational modification or complex formation) that combine to dictate the state of the entire molecule or complex, in the same way that the existence of just a few choices in a coffee shop (small, medium or large, with or without milk, decaf or not, extra shot of espresso) quickly leads to a large number of possible beverages (24 in this case; each additional binary choice will double that number). Although it is difficult for us to grasp the total number of possible combinations, it is usually not conceptually difficult to understand the (much smaller) set of features or modifications and the effect each of them has on the function of the biomolecule. The rate at which a molecule undergoes a particular reaction will usually depend mainly on a single feature or a small subset of features. It is the presence or absence of those features that dictates the reaction rate: the reaction rate is the same for two molecules that differ only in features which do not affect this reaction. Thus, the number of parameters will be much smaller than the number of reactions. (In the coffee shop example, adding an extra shot of espresso will cost 40 cents, no matter what size the beverage is and whether or not it has milk in it.) It is such "local rules" that are usually discovered in laboratory experiments. Thus, a multi-state model can be conceptualised in terms of combinations of modular features and local rules. This means that even a model that can account for a vast number of molecular species and reactions is not necessarily conceptually complex.
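To make the combinatorial arithmetic concrete, the short Python sketch below (purely illustrative; the feature names are hypothetical and not taken from any particular model) enumerates the states generated by a handful of independent features and shows how each additional binary feature doubles the count.

```python
from itertools import product

# Hypothetical modular features; each contributes its possible values independently.
features = {
    "size": ["small", "medium", "large"],   # ternary feature
    "milk": [False, True],                  # binary feature
    "decaf": [False, True],
    "extra_shot": [False, True],
}

# Every combination of feature values is one distinct "state".
states = list(product(*features.values()))
print(len(states))  # 3 * 2 * 2 * 2 = 24

# Adding one more binary feature doubles the number of states.
features["syrup"] = [False, True]
print(len(list(product(*features.values()))))  # 48
```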
Introduction:
Specification vs computation The combinatorial complexity of signaling systems involving multi-state proteins poses two kinds of problems. The first problem is concerned with how such a system can be specified; i.e. how a modeler can specify all complexes, all changes those complexes undergo and all parameters and conditions governing those changes in a robust and efficient way. This problem is called the "specification problem". The second problem concerns computation. It asks questions about whether a combinatorially complex model, once specified, is computationally tractable, given the large number of states and the even larger number of possible transitions between states, whether it can be stored electronically, and whether it can be evaluated in a reasonable amount of computing time. This problem is called the "computation problem". Among the approaches that have been proposed to tackle combinatorial complexity in multi-state modeling, some are mainly concerned with addressing the specification problem, some are focused on finding effective methods of computation. Some tools address both specification and computation. The sections below discuss rule-based approaches to the specification problem and particle-based approaches to solving the computation problem. A wide range of computational tools exist for multi-state modeling.
The specification problem:
Explicit specification The most naïve way of specifying, e.g., a protein in a biological model is to specify each of its states explicitly and use each of them as a molecular species in a simulation framework that allows transitions from state to state. For instance, if a protein can be ligand-bound or not, exist in two conformational states (e.g. open or closed) and be located in two possible subcellular areas (e.g. cytosolic or membrane-bound), then the eight possible resulting states can be explicitly enumerated as: bound, open, cytosol; bound, open, membrane; bound, closed, cytosol; bound, closed, membrane; unbound, open, cytosol; unbound, open, membrane; unbound, closed, cytosol; unbound, closed, membrane. Enumerating all possible states is a lengthy and potentially error-prone process. For macromolecular complexes that can adopt multiple states, enumerating each state quickly becomes tedious, if not impossible. Moreover, the addition of a single additional modification or feature to the model of the complex under investigation will double the number of possible states (if the modification is binary), and it will more than double the number of transitions that need to be specified.
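A minimal Python sketch of such an explicit specification (the attribute names are illustrative and not tied to any particular tool) makes the scaling problem visible: every combination of attribute values becomes its own molecular species, and each added binary modification doubles the list.

```python
from itertools import product

ligand = ["bound", "unbound"]
conformation = ["open", "closed"]
location = ["cytosol", "membrane"]

# Explicit specification: every combination becomes a separate molecular species.
species = [f"{l}, {c}, {loc}" for l, c, loc in product(ligand, conformation, location)]
for s in species:
    print(s)            # eight species in total

# Each additional binary modification (e.g. phosphorylated or not)
# doubles the number of species that must be enumerated by hand.
print(len(species) * 2)  # 16
```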
The specification problem:
Rule-based model specification It is clear that an explicit description, which lists all possible molecular species (including all their possible states), all possible reactions or transitions these species can undergo, and all parameters governing these reactions, very quickly becomes unwieldy as the complexity of the biological system increases. Modelers have therefore looked for implicit, rather than explicit, ways of specifying a biological signaling system. An implicit description is one that groups reactions and parameters that apply to many types of molecular species into one reaction template. It might also add a set of conditions that govern reaction parameters, i.e. the likelihood or rate at which a reaction occurs, or whether it occurs at all. Only properties of the molecule or complex that matter to a given reaction (either affecting the reaction or being affected by it) are explicitly mentioned, and all other properties are ignored in the specification of the reaction.
The specification problem:
For instance, the rate of ligand dissociation from a protein might depend on the conformational state of the protein, but not on its subcellular localization. An implicit description would therefore list two dissociation processes (with different rates, depending on conformational state), but would ignore attributes referring to subcellular localization, because they do not affect the rate of ligand dissociation, nor are they affected by it. This specification rule has been summarized as "Don't care, don't write". Since it is not written in terms of reactions, but in terms of more general "reaction rules" encompassing sets of reactions, this kind of specification is often called "rule-based". This description of the system in terms of modular rules relies on the assumption that only a subset of features or attributes are relevant for a particular reaction rule. Where this assumption holds, a set of reactions can be coarse-grained into one reaction rule. This coarse-graining preserves the important properties of the underlying reactions. For instance, if the reactions are based on chemical kinetics, so are the rules derived from them.
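As a toy illustration of the "don't care, don't write" principle (this is not the syntax of the κ-calculus, BNGL or any other real rule language; the attribute names and rates are made up), a rule can be expressed as a pattern that constrains only the attributes it cares about, so molecules that differ only in irrelevant attributes react at the same rate.

```python
# A molecule is represented as a dictionary of attribute values.
mol_a = {"ligand": "bound", "conformation": "open", "location": "cytosol"}
mol_b = {"ligand": "bound", "conformation": "open", "location": "membrane"}
mol_c = {"ligand": "bound", "conformation": "closed", "location": "cytosol"}

# A rule mentions only the attributes that matter ("don't care, don't write").
dissociation_rules = [
    {"pattern": {"ligand": "bound", "conformation": "open"}, "rate": 2.0},
    {"pattern": {"ligand": "bound", "conformation": "closed"}, "rate": 0.1},
]

def matching_rate(molecule, rules):
    """Return the rate of the first rule whose pattern the molecule satisfies."""
    for rule in rules:
        if all(molecule.get(k) == v for k, v in rule["pattern"].items()):
            return rule["rate"]
    return None  # no rule applies

# mol_a and mol_b differ only in location, which the rules ignore,
# so they dissociate at the same rate; mol_c uses the slower rule.
print(matching_rate(mol_a, dissociation_rules))  # 2.0
print(matching_rate(mol_b, dissociation_rules))  # 2.0
print(matching_rate(mol_c, dissociation_rules))  # 0.1
```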
The specification problem:
Many rule-based specification methods exist. In general, the specification of a model is a separate task from the execution of the simulation. Therefore, among the existing rule-based model specification systems, some concentrate on model specification only, allowing the user to then export the specified model into a dedicated simulation engine. However, many solutions to the specification problem also contain a method of interpreting the specified model. This is done by providing a method to simulate the model or a method to convert it into a form that can be used for simulations in other programs.
The specification problem:
An early rule-based specification method is the κ-calculus, a process algebra that can be used to encode macromolecules with internal states and binding sites and to specify rules by which they interact. The κ-calculus is merely concerned with providing a language to encode multi-state models, not with interpreting the models themselves. A simulator compatible with Kappa is KaSim. BioNetGen is a software suite that provides both specification and simulation capacities. Rule-based models can be written down using a specified syntax, the BioNetGen language (BNGL). The underlying concept is to represent biochemical systems as graphs, where molecules are represented as nodes (or collections of nodes) and chemical bonds as edges. A reaction rule, then, corresponds to a graph rewriting rule. BNGL provides a syntax for specifying these graphs and the associated rules as structured strings. BioNetGen can then use these rules to generate ordinary differential equations (ODEs) to describe each biochemical reaction. Alternatively, it can generate a list of all possible species and reactions in SBML, which can then be exported to simulation software packages that can read SBML. One can also make use of BioNetGen's own ODE-based simulation software and its capability to generate reactions on-the-fly during a stochastic simulation. In addition, a model specified in BNGL can be read by other simulation software, such as DYNSTOC, RuleMonkey, and NFSim. Another tool that generates full reaction networks from a set of rules is the Allosteric Network Compiler (ANC). Conceptually, ANC sees molecules as allosteric devices with a Monod-Wyman-Changeux (MWC) type regulation mechanism, whose interactions are governed by their internal state, as well as by external modifications. A very useful feature of ANC is that it automatically computes dependent parameters, thereby imposing thermodynamic correctness. An extension of the κ-calculus is provided by React(C). The authors of React(C) show that it can express the stochastic π-calculus. They also provide a stochastic simulation algorithm based on the Gillespie stochastic algorithm for models specified in React(C). ML-Rules is similar to React(C), but provides the added possibility of nesting: a component species of the model, with all its attributes, can be part of a higher-order component species. This enables ML-Rules to capture multi-level models that can bridge the gap between, for instance, a series of biochemical processes and the macroscopic behaviour of a whole cell or group of cells. For instance, a proof-of-concept model of cell division in fission yeast includes cyclin/cdc2 binding and activation, pheromone secretion and diffusion, cell division and movement of cells. Models specified in ML-Rules can be simulated using the James II simulation framework. A similar nested language to represent multi-level biological systems has been proposed by Oury and Plotkin. A specification formalism based on the molecular finite automata (MFA) framework can then be used to generate and simulate a system of ODEs or for stochastic simulation using a kinetic Monte Carlo algorithm. Some rule-based specification systems and their associated network generation and simulation tools have been designed to accommodate spatial heterogeneity, in order to allow for the realistic simulation of interactions within biological compartments.
For instance, the Simmune project includes a spatial component: Users can specify their multi-state biomolecules and interactions within membranes or compartments of arbitrary shape. The reaction volume is then divided into interfacing voxels, and a separate reaction network generated for each of these subvolumes.
The specification problem:
The Stochastic Simulator Compiler (SSC) allows for rule-based, modular specification of interacting biomolecules in regions of arbitrarily complex geometries. Again, the system is represented using graphs, with chemical interactions or diffusion events formalised as graph-rewriting rules. The compiler then generates the entire reaction network before launching a stochastic reaction-diffusion algorithm. A different approach is taken by PySB, where model specification is embedded in the programming language Python. A model (or part of a model) is represented as a Python programme. This allows users to store higher-order biochemical processes such as catalysis or polymerisation as macros and re-use them as needed. The models can be simulated and analysed using Python libraries, but PySB models can also be exported into BNGL, Kappa, and SBML. Models involving multi-state and multi-component species can also be specified in Level 3 of the Systems Biology Markup Language (SBML) using the multi package; a draft specification is available. Thus, by only considering states and features important for a particular reaction, rule-based model specification eliminates the need to explicitly enumerate every possible molecular state that can undergo a similar reaction, and thereby allows for efficient specification.
The computation problem:
When running simulations on a biological model, any simulation software evaluates a set of rules, starting from a specified set of initial conditions, and usually iterating through a series of time steps until a specified end time. One way to classify simulation algorithms is by looking at the level of analysis at which the rules are applied: they can be population-based, single-particle-based or hybrid.
The computation problem:
Population-based rule evaluation In population-based rule evaluation, rules are applied to populations. All molecules of the same species in the same state are pooled together. Application of a specific rule reduces or increases the size of one of the pools, possibly at the expense of another.
Some of the best-known classes of simulation approaches in computational biology belong to the population-based family, including those based on the numerical integration of ordinary and partial differential equations and the Gillespie stochastic simulation algorithm.
Differential equations describe changes in molecular concentrations over time in a deterministic manner. Simulations based on differential equations usually do not attempt to solve those equations analytically, but employ a suitable numerical solver.
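As a concrete, deliberately simplified illustration of this deterministic, population-based approach, the sketch below integrates the mass-action ODEs of a toy reversible binding reaction A + B ⇌ AB with a naive forward-Euler step; real tools use far more sophisticated solvers, and the rate constants here are invented.

```python
# Deterministic, population-based simulation: concentrations evolve by ODEs.
k_on, k_off = 0.001, 0.1        # made-up rate constants
A, B, AB = 100.0, 100.0, 0.0    # initial concentrations (arbitrary units)
dt, t_end = 0.01, 10.0

t = 0.0
while t < t_end:
    flux = k_on * A * B - k_off * AB  # net binding flux (mass action)
    A -= flux * dt
    B -= flux * dt
    AB += flux * dt
    t += dt

print(A, B, AB)
```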
The computation problem:
The stochastic Gillespie algorithm changes the composition of pools of molecules through a progression of random reaction events, the probability of which is computed from reaction rates and from the numbers of molecules, in accordance with the stochastic master equation. In population-based approaches, one can think of the system being modeled as being in a given state at any given time point, where a state is defined according to the nature and size of the populated pools of molecules. This means that the space of all possible states can become very large. With some simulation methods implementing numerical integration of ordinary and partial differential equations or the Gillespie stochastic algorithm, all possible pools of molecules and the reactions they undergo are defined at the start of the simulation, even if they are empty. Such "generate-first" methods scale poorly with increasing numbers of molecular states. For instance, it has recently been estimated that even for a simple model of CaMKII with just 6 states per subunit and 10 subunits, it would take 290 years to generate the entire reaction network on a 2.54 GHz Intel Xeon processor. In addition, the model generation step in generate-first methods does not necessarily terminate, for instance when the model includes assembly of proteins into complexes of arbitrarily large size, such as actin filaments. In these cases, a termination condition needs to be specified by the user. Even if a large reaction system can be successfully generated, its simulation using population-based rule evaluation can run into computational limits. In a recent study, a powerful computer was shown to be unable to simulate a protein with more than 8 phosphorylation sites (2⁸ = 256 phosphorylation states) using ordinary differential equations. Methods have been proposed to reduce the size of the state space. One is to consider only the states adjacent to the present state (i.e. the states that can be reached within the next iteration) at each time point. This eliminates the need for enumerating all possible states at the beginning. Instead, reactions are generated "on-the-fly" at each iteration. These methods are available both for stochastic and deterministic algorithms. These methods still rely on the definition of an (albeit reduced) reaction network - in contrast to the "network-free" methods discussed below.
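For readers unfamiliar with the Gillespie algorithm, the following minimal Python sketch shows a generic direct-method implementation for the same toy reversible binding reaction used above (it is not the implementation used by any of the tools discussed, and the rate constants are invented): the state is just a vector of pool sizes, and each iteration draws the waiting time and the identity of the next reaction from the current propensities.

```python
import random

# Toy reversible binding reaction A + B <-> AB, tracked as pool sizes.
counts = {"A": 100, "B": 100, "AB": 0}
k_on, k_off = 0.001, 0.1  # made-up stochastic rate constants

def propensities(c):
    return [
        ("bind", k_on * c["A"] * c["B"]),   # A + B -> AB
        ("unbind", k_off * c["AB"]),        # AB -> A + B
    ]

t, t_end = 0.0, 10.0
while t < t_end:
    props = propensities(counts)
    total = sum(a for _, a in props)
    if total == 0:
        break
    # Waiting time to the next reaction is exponentially distributed.
    t += random.expovariate(total)
    # Pick which reaction fires, weighted by its propensity.
    r, acc = random.random() * total, 0.0
    for name, a in props:
        acc += a
        if r < acc:
            if name == "bind":
                counts["A"] -= 1; counts["B"] -= 1; counts["AB"] += 1
            else:
                counts["A"] += 1; counts["B"] += 1; counts["AB"] -= 1
            break

print(t, counts)
```

When molecule numbers are large, averaging many such stochastic runs approaches the deterministic ODE trajectory sketched earlier.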
The computation problem:
Even with "on-the-fly" network generation, networks generated for population-based rule evaluation can become quite large, and thus difficult - if not impossible - to handle computationally. An alternative approach is provided by particle-based rule evaluation.
The computation problem:
Particle-based rule evaluation In particle-based (sometimes called "agent-based") simulations, proteins, nucleic acids, macromolecular complexes or small molecules are represented as individual software objects, and their progress is tracked through the course of the entire simulation. Because particle-based rule evaluation keeps track of individual particles rather than populations, it comes at a higher computational cost when modeling systems with a high total number of particles, but a small number of kinds (or pools) of particles. In cases of combinatorial complexity, however, the modeling of individual particles is an advantage because, at any given point in the simulation, only existing molecules, their states and the reactions they can undergo need to be considered. Particle-based rule evaluation does not require the generation of complete or partial reaction networks at the start of the simulation or at any other point in the simulation and is therefore called "network-free".
The computation problem:
This method reduces the complexity of the model at the simulation stage, and thereby saves time and computational power. The simulation follows each particle, and at each simulation step, a particle only "sees" the reactions (or rules) that apply to it. This depends on the state of the particle and, in some implementations, on the states of its neighbours in a holoenzyme or complex. As the simulation proceeds, the states of particles are updated according to the rules that are fired. Some particle-based simulation packages use an ad-hoc formalism for specification of reactants, parameters and rules. Others can read files in a recognised rule-based specification format such as BNGL.
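A deliberately simplified Python sketch of this particle-based, network-free scheme (ad hoc and illustrative, not the algorithm of any specific simulator; the attributes, rules and probabilities are made up): each particle is an individual object carrying its own attributes, and at every step only the rules whose patterns match a given particle are considered, so no reaction network ever has to be generated.

```python
import random

# Each particle is an individual software object (here, a dict of attributes).
particles = [
    {"ligand": "bound", "conformation": "open"},
    {"ligand": "unbound", "conformation": "open"},
    {"ligand": "bound", "conformation": "closed"},
]

# Rules constrain only the attributes they care about and describe an effect.
rules = [
    # Ligand dissociation, faster from the open conformation.
    {"pattern": {"ligand": "bound", "conformation": "open"}, "prob": 0.20,
     "effect": {"ligand": "unbound"}},
    {"pattern": {"ligand": "bound", "conformation": "closed"}, "prob": 0.02,
     "effect": {"ligand": "unbound"}},
    # Spontaneous conformational opening.
    {"pattern": {"conformation": "closed"}, "prob": 0.05,
     "effect": {"conformation": "open"}},
]

def step(particles, rules):
    """One simulation step: each particle 'sees' only the rules that match it."""
    for p in particles:
        for rule in rules:
            if all(p.get(k) == v for k, v in rule["pattern"].items()):
                if random.random() < rule["prob"]:
                    p.update(rule["effect"])

for _ in range(100):
    step(particles, rules)
print(particles)
```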
The computation problem:
Non-spatial particle-based methods StochSim is a particle-based stochastic simulator used mainly to model chemical reactions and other molecular transitions. The algorithm used in StochSim is different from the more widely known Gillespie stochastic algorithm in that it operates on individual entities, not entity pools, making it particle-based rather than population-based.
The computation problem:
In StochSim, each molecular species can be equipped with a number of binary state flags representing a particular modification. Reactions can be made dependent on a set of state flags set to particular values. In addition, the outcome of a reaction can include a state flag being changed. Moreover, entities can be arranged in geometric arrays (for instance, for holoenzymes consisting of several subunits), and reactions can be "neighbor-sensitive", i.e. the probability of a reaction for a given entity is affected by the value of a state flag on a neighboring entity. These properties make StochSim ideally suited to modeling multi-state molecules arranged in holoenzymes or complexes of specified size. Indeed, StochSim has been used to model clusters of bacterial chemotactic receptors and CaMKII holoenzymes. An extension of StochSim is the particle-based simulator DYNSTOC, which uses a StochSim-like algorithm to simulate models specified in the BioNetGen language (BNGL), and improves the handling of molecules within macromolecular complexes. Another particle-based stochastic simulator that can read BNGL input files is RuleMonkey. Its simulation algorithm differs from the algorithms underlying both StochSim and DYNSTOC in that the simulation time step is variable.
The computation problem:
The Network-Free Stochastic Simulator (NFSim) differs from those described above by allowing for the definition of reaction rates as arbitrary mathematical or conditional expressions, and thereby facilitates selective coarse-graining of models. RuleMonkey and NFSim implement distinct but related simulation algorithms; a detailed review and comparison of both tools is given by Yang and Hlavacek. It is easy to imagine a biological system where some components are complex multi-state molecules, whereas others have few possible states (or even just one) and exist in large numbers. A hybrid approach has been proposed to model such systems: within the Hybrid Particle/Population (HPP) framework, the user can specify a rule-based model, but can designate some species to be treated as populations (rather than particles) in the subsequent simulation. This method combines the computational advantages of particle-based modeling for multi-state systems with relatively low molecule numbers and of population-based modeling for systems with high molecule numbers and a small number of possible states. Specification of HPP models is supported by BioNetGen, and simulations can be performed with NFSim.
The computation problem:
Spatial particle-based methods Spatial particle-based methods differ from the methods described above by their explicit representation of space.
The computation problem:
One example of a particle-based simulator that allows for a representation of cellular compartments is SRSim. SRSim is integrated in the LAMMPS molecular dynamics simulator and allows the user to specify the model in BNGL. SRSim allows users to specify the geometry of the particles in the simulation, as well as interaction sites. It is therefore especially good at simulating the assembly and structure of complex biomolecular complexes, as evidenced by a recent model of the inner kinetochore. MCell allows individual molecules to be traced in arbitrarily complex geometric environments which are defined by the user. This allows for simulations of biomolecules in realistic reconstructions of living cells, including cells with complex geometries like those of neurons; in one such model, the reaction compartment was a reconstruction of a dendritic spine. MCell uses an ad-hoc formalism within MCell itself to specify a multi-state model: in MCell, it is possible to assign "slots" to any molecular species. Each slot stands for a particular modification, and any number of slots can be assigned to a molecule. Each slot can be occupied by a particular state. The states are not necessarily binary. For instance, a slot describing binding of a particular ligand to a protein of interest could take the states "unbound", "partially bound", and "fully bound".
The computation problem:
The slot-and-state syntax in MCell can also be used to model multimeric proteins or macromolecular complexes. When used in this way, a slot is a placeholder for a subunit or a molecular component of a complex, and the state of the slot will indicate whether a specific protein component is absent or present in the complex. A way to think about this is that MCell macromolecules can have several dimensions: A "state dimension" and one or more "spatial dimensions". The "state dimension" is used to describe the multiple possible states making up a multi-state protein, while the spatial dimension(s) describe topological relationships between neighboring subunits or members of a macromolecular complex. One drawback of this method for representing protein complexes, compared to Meredys, is that MCell does not allow for the diffusion of complexes, and hence, of multi-state molecules. This can in some cases be circumvented by adjusting the diffusion constants of ligands that interact with the complex, by using checkpointing functions or by combining simulations at different levels.
Examples of multi-state models in biology:
A (by no means exhaustive) selection of models of biological systems involving multi-state molecules and using some of the tools discussed here is given in the table below. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Progesterone receptor C**
Progesterone receptor C:
The progesterone receptor C (PR-C) is one of three known isoforms of the progesterone receptor (PR), the main biological target of the endogenous progestogen sex hormone progesterone. The other isoforms of the PR include the PR-A and PR-B. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Laparotomy**
Laparotomy:
A laparotomy is a surgical procedure involving a surgical incision through the abdominal wall to gain access into the abdominal cavity. It is also known as a celiotomy.
Origins and history:
The first successful laparotomy was performed without anesthesia by Ephraim McDowell in 1809 in Danville, Kentucky. On July 13, 1881, George E. Goodfellow treated a miner outside Tombstone, Arizona Territory, who had been shot in the abdomen with a .32-caliber Colt revolver. Goodfellow was able to operate on the man nine days after he was shot, when he performed the first laparotomy to treat a bullet wound.
Terminology:
The term comes from the Greek word λᾰπάρᾱ (lapara) 'the soft part of the body between the ribs and hip, flank' and the suffix -tomy, from the Greek word τομή (tome) '(surgical) cut'.
In diagnostic laparotomy (most often referred to as an exploratory laparotomy and abbreviated ex-lap), the nature of the disease is unknown, and laparotomy is deemed the best way to identify the cause.
In therapeutic laparotomy, a cause has been identified (e.g. colon cancer) and the operation is required for its therapy.
Usually, only exploratory laparotomy is considered a stand-alone surgical operation. When a specific operation is already planned, laparotomy is considered merely the first step of the procedure.
Spaces accessed:
Depending on incision placement, laparotomy may give access to any abdominal organ or space, and is the first step in any major diagnostic or therapeutic surgical procedure of these organs, which include: the digestive tract (the stomach, duodenum, jejunum, ileum and colon) the liver, pancreas, gallbladder, and spleen the bladder the male prostate the female reproductive organs (the uterus and ovaries) the retroperitoneum (the kidneys, the aorta, abdominal lymph nodes)
Types of incisions:
Midline The most common incision for laparotomy is a vertical incision in the middle of the abdomen which follows the linea alba.
The upper midline incision usually extends from the xiphoid process to the umbilicus.
A typical lower midline incision is limited by the umbilicus superiorly and by the pubic symphysis inferiorly.
Sometimes a single incision extending from the xiphoid process to the pubic symphysis is employed, especially in trauma surgery. Midline incisions are particularly favoured in diagnostic laparotomy, as they allow wide access to most of the abdominal cavity.
Types of incisions:
Midline incision: incise the skin in the midline; incise the subcutaneous tissue; divide the linea alba (white line of the abdomen); pick up the peritoneum and confirm that there is no bowel (intestinal) adhesion; nick the peritoneum; insert a finger beneath the wound to make sure that there is no adhesion; cut the peritoneum with scissors. Other: other common laparotomy incisions include the Kocher (right subcostal) incision (after Emil Theodor Kocher), appropriate for certain operations on the liver, gallbladder and biliary tract. This shares a name with the Kocher incision used for thyroid surgery: a transverse, slightly curved incision about 2 cm above the sternoclavicular joints.
Types of incisions:
Davis or Rockey–Davis "muscle-splitting" right lower quadrant incision for appendectomy, named for the Oregon surgeon Alpha Eugene Rockey (1857–1927) and the Philadelphia surgeon Gwilym George Davis (1857–1918), who devised such incision style in 1905.
Types of incisions:
Pfannenstiel incision, a transverse incision below the umbilicus and just above the pubic symphysis. In the classic Pfannenstiel incision, the skin and subcutaneous tissue are incised transversally, but the linea alba is opened vertically. It is the incision of choice for Cesarean section and for abdominal hysterectomy for benign disease. A variation of this incision is the Maylard incision in which the rectus abdominis muscles are sectioned transversally to permit wider access to the pelvis. This was pioneered by the Scottish surgeon Alfred Ernest Maylard (1855–1947) in 1920.
Types of incisions:
Lumbotomy consists of a lumbar incision which permits access to the kidneys (which are retroperitoneal) without entering the peritoneal cavity. It is typically used only for benign renal lesions. It has also been proposed for surgery of the upper urological tract.
Cherney Incision – developed in 1941 by the American uro-gynecologic surgeon Leonid Sergius Cherney (1908–1963).
Complications following laparotomy:
Globally, there are few studies comparing perioperative mortality following laparotomy across different health systems. A study in the UK with more than 180,000 patients aimed to define a timeframe for quantitative futility in emergency laparotomy and investigate predictors of futility using the United Kingdom National Emergency Laparotomy Audit (NELA) database. A two-stage methodology was used: stage one defined a timeframe for futility using an online survey and steering group discussion; stage two applied this definition to patients enrolled in NELA between December 2013 and December 2020 for analysis. Futility was defined as all-cause mortality within 3 days of emergency laparotomy. Results showed that quantitative futility occurred in 4% of patients (7,442/180,987) and median age was 74 years. Significant predictors of futility included age, arterial lactate and cardiorespiratory co-morbidity. Frailty was associated with a 38% increased risk of early mortality, and surgery for intestinal ischaemia was associated with a two times greater chance of futile surgery. These findings suggest that quantitative futility after emergency laparotomy is associated with quantifiable risk factors available to decision-makers preoperatively and should be incorporated into shared decision-making discussions with extremely high-risk patients. There are also several national studies looking at 30-day mortality in various health systems, including the United Kingdom (the National Emergency Laparotomy Audit, NELA) and Australia and New Zealand (ANZELA). One major prospective study of 10,745 adult patients undergoing emergency laparotomy from 357 centres in 58 high-, middle- and low-income countries found that mortality is three times higher in low- compared with high-HDI countries, even when adjusted for prognostic factors. In this study the overall global mortality rate was 1.6 percent at 24 hours (high 1.1 percent, middle 1.9 percent, low 3.4 percent; P < 0.001), increasing to 5.4 percent by 30 days (high 4.5 percent, middle 6.0 percent, low 8.6 percent; P < 0.001). Of the 578 patients who died, 404 (69.9 percent) did so between 24 hours and 30 days following surgery (high 74.2 percent, middle 68.8 percent, low 60.5 percent). Patient safety factors were suggested to play an important role, with use of the WHO Surgical Safety Checklist associated with reduced mortality at 30 days. Taking a similar approach, a unique global study of 1,409 children undergoing emergency laparotomy from 253 centres in 43 countries showed that adjusted mortality in children following surgery may be as high as 7 times greater in low-HDI and middle-HDI countries compared with high-HDI countries, translating to 40 excess deaths per 1,000 procedures performed in these settings. Internationally, the most common operations performed were appendectomy, small bowel resection, pyloromyotomy and correction of intussusception. After adjustment for patient and hospital risk factors, child mortality at 30 days was significantly higher in low-HDI (adjusted OR 7.14 (95% CI 2.52 to 20.23), p<0.001) and middle-HDI (4.42 (1.44 to 13.56), p=0.009) countries compared with high-HDI countries. Absorption of drugs administered orally was shown to be significantly affected following abdominal surgery.
Related procedures:
A related procedure is laparoscopy, where cameras and other instruments are inserted into the peritoneal cavity via small holes in the abdomen. For example, an appendectomy can be done either by a laparotomy or by a laparoscopic approach.
There is no evidence of short-term or long-term advantages for peritoneal closure during laparotomy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**6N3P**
6N3P:
The 6N3P (Russian: 6Н3П) is a Russian-made direct equivalent of the 2C51 medium-gain dual triode vacuum tube. It may be used as an amplifier, mixer, oscillator or multivibrator over a frequency range from AF through VHF. The Russian tube is slightly larger in size than the American tube.
Basic data:
(per each triode) Uf = 6.3 V, If = 350 mA, μ = 36, Ia = 7.7 mA, S = 4.9 mA/V, Pa = 1.5 W
History of use:
6N3P was widely used for FM band radio input unit stages (nearly all 1960s Soviet radios with FM band employed the same input unit on a separate sub-chassis). Currently it has found use in DIY preamps. A ruggedized/industrial version of the tube is designated 6N3P-EV (Russian: 6Н3П-ЕВ).
Chinese 6N3:
Pre-amps apparently from Hong Kong, largely populated with the 6N3 (said to be the Chinese version of the 6N3P), have proliferated on eBay. The 6N3s are newly made (unlike the Soviet "new old stock"), and the pre-amps appear to be the product of a cottage hi-fi industry. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Network simulation**
Network simulation:
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation, in which the state variables of the modeled system change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network/protocols would behave under different conditions.
Network simulator:
A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today such as 5G, Internet of Things (IoT), Wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, LTE etc.
Simulations:
Most of the commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results include network-level metrics, link metrics, device metrics, etc. Further drill-down is available through simulation trace files, which log every packet and every event that occurred in the simulation and are used for analysis. Most network simulators use discrete event simulation, in which a list of pending "events" is stored, and those events are processed in order, with some events triggering future events, such as the event of the arrival of a packet at one node triggering the event of the arrival of that packet at a downstream node.
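A minimal Python sketch of the event-list mechanism described above (the topology, delays and packet names are invented for illustration): pending events sit in a priority queue ordered by time, and processing a packet arrival at one node schedules a future arrival event at the downstream node.

```python
import heapq

# Hypothetical topology: per-hop delay in seconds between directly linked nodes.
links = {("A", "B"): 0.002, ("B", "C"): 0.005}
routes = {"A": "B", "B": "C"}  # next hop for each node

events = []  # priority queue of (time, sequence_no, node, packet)
seq = 0

def schedule(time, node, packet):
    global seq
    heapq.heappush(events, (time, seq, node, packet))
    seq += 1

# Initial event: a packet arrives at node A at t = 0.
schedule(0.0, "A", "pkt-1")

while events:
    t, _, node, packet = heapq.heappop(events)
    print(f"t={t:.3f}s  {packet} arrives at {node}")
    nxt = routes.get(node)
    if nxt:  # forwarding the packet schedules its arrival downstream as a future event
        schedule(t + links[(node, nxt)], nxt, packet)
```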
Network emulation:
Network emulation allows users to introduce real devices and applications into a test network (simulated) that alters packet flow in such a way as to mimic the behavior of a live network. Live traffic can pass through the simulator and be affected by objects within the simulation.
Network emulation:
The typical methodology is that real packets from a live application are sent to the emulation server (where the virtual network is simulated). The real packet gets 'modulated' into a simulation packet. The simulation packet gets demodulated back into a real packet after experiencing the effects of loss, errors, delay, jitter, etc., thereby transferring these network effects into the real packet. Thus it is as if the real packet had flowed through a real network, when in reality it flowed through the simulated network.
Network emulation:
Emulation is widely used in the design stage for validating communication networks prior to deployment.
List of network simulators:
There are both free/open-source and proprietary network simulators available. Examples of notable network simulators/emulators include: the ns simulator, OPNET (Riverbed), NetSim (Tetcos) and GloMoSim. Some of these are open source with editable code, while others are commercial.
Uses of network simulators:
Network simulators provide a cost-effective method for 5G-NR capacity, throughput and latency analysis; for network R&D (more than 70% of all network research papers reference a network simulator); and for defense applications such as HF/UHF/VHF radio-based MANET radios, tactical data links, etc.
Uses of network simulators:
IoT and VANET simulations; UAV network/drone swarm communication simulation; machine learning (testing ML algorithms for optimizing network parameters, generating synthetic data, and training ML algorithms on networks); and education (online courses, lab experimentation, and R&D; most universities use a network simulator for teaching and R&D since it is too expensive to buy hardware equipment). There is a wide variety of network simulators, ranging from the very simple to the very complex. Minimally, a network simulator must enable a user to: model the network topology, specifying the nodes on the network and the links between those nodes; model the application flow (traffic) between the nodes; obtain network performance metrics as output; visualize the packet flow; evaluate technologies/protocols and device designs; and log packets/events for drill-down analyses and debugging. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Criticism of Windows Vista**
Criticism of Windows Vista:
Windows Vista, an operating system released by Microsoft for consumers on January 30, 2007, has been widely criticized by reviewers and users. Due to issues with new security features, performance, driver support and product activation, Windows Vista has been the subject of a number of negative assessments by various groups.
Security:
Driver signing requirement For security reasons, 64-bit versions of Windows Vista allow only signed drivers to be installed in kernel mode. Because code executing in kernel mode enjoys wide privileges on the system, the signing requirement aims to ensure that only code with a known origin executes at this level. In order for a driver to be signed, a developer/software vendor has to obtain an Authenticode certificate with which to sign the driver. Authenticode certificates can be obtained from certificate authorities trusted by Microsoft. Microsoft trusts the certificate authority to verify the applicant's identity before issuing a certificate. If a driver is not signed using a valid certificate, or if the driver was signed using a certificate which has been revoked by Microsoft or the certificate authority, Windows will refuse to load the driver.
Security:
The following criticisms/claims have been made regarding this requirement: it disallows experimentation by the hobbyist community, and the required Authenticode certificates for signing Vista drivers are expensive and out of reach for small developers, usually about $400–$500 per year (from Verisign). Microsoft allows developers to temporarily or locally disable the signing requirement on systems they control (by hitting F8 during boot), by signing the drivers with self-issued certificates, or by running a kernel debugger. At one time, a third-party tool called Atsiv existed that would allow any driver, unsigned or signed, to be loaded. Atsiv worked by installing a signed "surrogate" driver which could be directed to load any other driver, thus circumventing the driver signing requirement. Since this was in violation of the driver signing requirement, Microsoft closed this workaround with hotfix KB932596, by revoking the certificate with which the surrogate driver was signed.
Security:
Flaws in memory protection features Security researchers Alexander Sotirov and Mark Dowd have developed a technique that bypasses many of the new memory-protection safeguards in Windows Vista, such as address space layout randomization (ASLR). The result of this is that any already existing buffer overflow bugs that, in Vista, were previously not exploitable due to such features, may now be exploitable. This is not in itself a vulnerability: as Sotirov notes, "What we presented is weaknesses in the protection mechanism. It still requires the system under attack to have a vulnerability. Without the presence of a vulnerability these techniques don't really [accomplish] anything." The vulnerability Sotirov and Dowd used in their paper as an example was the 2007 animated cursor bug, CVE-2007-0038.
Security:
One security researcher (Dino Dai Zovi) claimed that this means that it is "completely game over" for Vista security though Sotirov refuted this, saying that "The articles that describe Vista security as 'broken' or 'done for,' with 'unfixable vulnerabilities' are completely inaccurate. One of the suggestions I saw in many of the discussions was that people should just use Windows XP. In fact, in XP a lot of those protections we're bypassing [such as ASLR] don't even exist."
Digital rights management:
Another common criticism concerns the integration of a new form of digital rights management (DRM) into the operating system, specifically the Protected Video Path (PVP), which involves technologies such as High-bandwidth Digital Content Protection (HDCP) and the Image Constraint Token (ICT). These features were added to Vista due to licensing restrictions from the HD DVD consortium and the Blu-ray association. This would have concerned only the playback resolution of protected content on HD DVD and Blu-ray discs, but it had not been enabled as of 2017; the lack of a protected channel did not stop playback. Audio plays back as normal, but high-definition video on Blu-ray and HD DVD is downsampled to slightly-better-than-DVD-quality video.
Digital rights management:
The Protected Video Path mandates that encryption must be used whenever content marked as "protected" will travel over a link where it might be intercepted. This is called a User-Accessible Bus (UAB). Additionally, all devices that come into contact with premium content (such as graphics cards) have to be certified by Microsoft. Before playback starts, all the devices involved are checked using a hardware functionality scan (HFS) to verify if they are genuine and have not been tampered with. Devices are required to lower the resolution (from 1920×1080 to 960×540) of video signal outputs that are not protected by HDCP. Additionally, Microsoft maintains a global revocation list for devices that have been compromised. This list is distributed to PCs over the Internet using normal update mechanisms. The only effect on a revoked driver's functionality is that high-level protected content will not play; all other functionality, including low-definition playback, is retained.
Digital rights management:
Notable critics Peter Gutmann, a computer security expert from the University of Auckland, New Zealand, released a whitepaper in which he raises the following concerns against these mechanisms: Adding encryption facilities to devices makes them more expensive, a cost that is passed on to the user.
If outputs are not deemed sufficiently protected by the media industry, then even very expensive equipment can be required to be switched off (for example, S/PDIF-based, high-end audio cards).
Some newer high-definition monitors are not HDCP-enabled, even though the manufacturer may claim otherwise.
The added complexity makes systems less reliable.
Since non-protected media are not subject to the new restrictions, users may be encouraged to remove the protection in order to view them without restrictions, thus defeating the content protection scheme's initial purpose.
Protection mechanisms, such as disabling or degrading outputs, may be triggered erroneously or maliciously, motivating denial-of-service attacks.
Revoking the driver of a device that is in wide use is such a drastic measure that Gutmann doubts Microsoft will ever actually do so. On the other hand, they may be forced to because of their legal obligations to the movie studios. The Free Software Foundation conducted a campaign against Vista, called "BadVista", on these grounds.
Reaction to criticism Ed Bott, author of Windows Vista Inside Out, published a three-part blog post which rebuts many of Gutmann's claims. Bott's criticisms can be summarized as follows: Gutmann based his paper on outdated documentation from Microsoft and second-hand web sources.
Gutmann quotes selectively from the Microsoft specifications.
Gutmann did no experimental work with Vista to prove his theories. Rather, he makes mistaken assumptions and then speculates wildly on their implications.
Digital rights management:
Gutmann's paper, while presented as serious research, is really just an opinion piece. Technology writer George Ou stated that Gutmann's paper relies on unreliable sources and that Gutmann has never used Windows Vista to test his theories. Gutmann responded to both Bott and Ou in a further article, which stated that the central thesis of Gutmann's article has not been refuted and that Bott's response is "disinformation".
Digital rights management:
Microsoft published a blog entry with "Twenty Questions (and Answers)" on Windows Vista Content Protection which refutes some of Gutmann's arguments. Microsoft MVP Paul Smith has written a response to Gutmann's paper in which he counters some of its arguments. Specifically, he says: Microsoft is not to blame for these measures. The company offered this solution as an alternative to not being able to play back the content at all.
Digital rights management:
The Protected Video Path will not be used for quite a while. There is said to be an agreement between Microsoft and Sony that Blu-ray discs will not mandate protection until at least 2010, possibly even 2012.
Vista does not degrade or refuse to play any existing media, CDs or DVDs. The protected data paths are only activated if protected content requires it.
Digital rights management:
Users of other operating systems such as Linux or Mac OS X will not have official access to this premium content. Microsoft also noted that content protection mechanisms have existed in Windows as far back as Windows ME. Since mainstream and extended support for Windows Vista ended on April 10, 2012, and April 11, 2017, respectively, enabling the Protected Video Path for Windows Vista is now very unlikely.
Hardware requirements and performance:
Around the time of its release, Microsoft stated that "nearly all PCs on the market today will run Windows Vista," and most PCs sold after 2005 are capable of running Vista. However, some hardware that worked in Windows XP does not work, or works poorly, in Vista because no Vista-compatible drivers are available, due to companies going out of business or their lack of interest in supporting old hardware.
Hardware requirements and performance:
Speed Tom's Hardware published benchmarks in January 2007 that showed that Windows Vista executed typical applications more slowly than Windows XP with the same hardware configuration. A subset of the benchmarks used were provided by the Standard Performance Evaluation Corporation (SPEC), which later stated that such "results should not be compared to those generated while running Windows XP, even if testing is done with the same hardware configuration." SPEC acknowledges that an apples-to-apples comparison cannot be made in cases such as the one done by Tom's Hardware, calling such studies "invalid comparisons." However, the Tom's Hardware report conceded that the SPECviewperf tests "suffered heavily from the lack of support for the OpenGL graphics library under Windows Vista". For this reason the report recommended against replacing Windows XP with Vista until manufacturers made these drivers available. The report also concluded that in tests involving real-world applications Vista performed considerably slower, noting "We are disappointed that CPU-intensive applications such as video transcoding with XviD (DVD to XviD MPEG4) or the MainConcept H.264 Encoder performed 18% to nearly 24% slower in our standard benchmark scenarios". Other commonly used applications, including Photoshop and WinRAR, also performed worse under Vista. Many low-to-mid-end machines that come with Windows Vista pre-installed suffer from exceptionally slow performance with the default Vista settings that come pre-loaded, and laptop manufacturers have offered to "downgrade" laptops to Windows XP, for a price. However, this "price" is unnecessary, as Microsoft allows users of Windows Vista and Windows 7 to freely "downgrade" their software by installing XP and then phoning a Microsoft representative for a new product key.
File operation performance:
When first released in November 2006, Vista performed file operations such as copying and deletion more slowly than other operating systems. Large copies, such as those required when migrating from one computer to another, seemed difficult or impossible without workarounds such as using the command line. This inability to perform basic file operations efficiently attracted strong criticism. After six months, Microsoft acknowledged these problems by releasing a special performance and reliability update, which was later disseminated through Windows Update and is included in Service Pack 1. Nonetheless, one benchmark was reported to show that, while Service Pack 1 improved performance compared to Vista's original release, it did not raise performance to the level of Windows XP. However, that benchmark was questioned by others within ZDNet; Ed Bott both questioned his colleagues' methods and provided benchmarks that refuted the results.
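As a concrete illustration of the kind of command-line workaround mentioned above, the sketch below drives Robocopy (bundled with Vista and later) from Python to perform a large, restartable copy. The source and destination paths, log file location, and retry settings are illustrative assumptions, not values from the original reports.

```python
import subprocess

# Hypothetical source/destination paths for a bulk migration copy (assumptions).
SRC = r"D:\OldMachine\Users\alice"
DST = r"C:\Users\alice"

# Robocopy ships with Windows Vista and later.
# /E    copy subdirectories, including empty ones
# /R:1 /W:1  retry once, waiting 1 second between retries
# /LOG: write progress to a log file instead of flooding the console
cmd = [
    "robocopy", SRC, DST,
    "/E", "/R:1", "/W:1",
    r"/LOG:C:\Temp\migration.log",
]

result = subprocess.run(cmd)
# Robocopy exit codes below 8 indicate the copy succeeded (possibly with skips).
if result.returncode < 8:
    print("Copy completed; see the log for details.")
else:
    print(f"Copy failed with Robocopy exit code {result.returncode}.")
```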
Game performance:
Early in Vista's lifecycle, many games showed a drop in frame rate compared with Windows XP. These results were largely a consequence of immature graphics card drivers for Vista and of Vista's higher system requirements.
By the time Service Pack 1 was released in mid-2008, gaming benchmarks showed that Vista was on par with Windows XP. However, the published system requirements for games such as Devil May Cry 4, Crysis, and Left 4 Dead listed memory requirements on Vista that were 1.5x–2x higher than on XP.
Software bloat:
Concerns were expressed that Windows Vista might suffer from software bloat. Speaking in 2007 at the University of Illinois, Microsoft distinguished engineer Eric Traut said, "A lot of people think of Windows as this large, bloated operating system, and that's maybe a fair characterization, I have to admit." He went on to say that "at its core, the kernel, and the components that make up the very core of the operating system, is actually pretty streamlined." Former PC World editor Ed Bott expressed skepticism about the claims of bloat, noting that almost every operating system Microsoft has ever sold was criticized as "bloated" when it first came out, even those now regarded as the exact opposite, such as MS-DOS.
Vista Capable lawsuit:
Two consumers sued Microsoft in United States federal court, alleging that the "Windows Vista Capable" marketing campaign was a bait-and-switch tactic because some computers sold with Windows XP pre-installed under that label could only run Vista Home Basic, sometimes not even at a user-acceptable speed. In February 2008, a Seattle judge granted the suit class action status, permitting all purchasers in the class to participate in the case.
Documents released in the case, as well as a Dell presentation from March 2007, discussed late changes to Vista's hardware certification that permitted hardware to be certified even though it would require upgrading in order to run Vista adequately, and noted that a lack of compatible drivers forced hardware vendors to "limp out with issues" when Vista launched. This was one of several Vista launch appraisals included in 158 pages of unsealed documents.
Laptop battery life:
Criticism also surfaced over the effect of Vista's new features on battery use in laptops, which can drain the battery considerably faster than Windows XP and thus shorten battery life. With the Windows Aero visual effects turned off, battery life is equal to or better than that of Windows XP systems. "With the release of a new operating system and its new features and higher requirements, higher power consumption is normal," noted Richard Shim, an analyst with IDC. "When Windows XP came out, that was true, and when Windows 2000 came out, that was true."
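For readers who want to compare drain rates themselves (for example, with Aero enabled versus disabled), the sketch below logs battery charge over time using the third-party psutil package. The sampling interval and log format are arbitrary choices for illustration, not part of any cited test, and psutil availability on a given system is an assumption.

```python
import time

import psutil  # third-party: pip install psutil

SAMPLE_SECONDS = 60  # how often to record a reading (arbitrary choice)


def log_battery(samples: int) -> None:
    """Print battery percentage at a fixed interval so two runs
    (e.g. Aero on vs. Aero off) can be compared side by side."""
    for i in range(samples):
        batt = psutil.sensors_battery()
        if batt is None:
            print("No battery detected; run this on a laptop.")
            return
        state = "plugged in" if batt.power_plugged else "on battery"
        print(f"{i * SAMPLE_SECONDS:>5}s  {batt.percent:5.1f}%  ({state})")
        time.sleep(SAMPLE_SECONDS)


if __name__ == "__main__":
    log_battery(samples=30)  # roughly a 30-minute run
```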
Software compatibility:
According to Gartner, "Vista has been dogged by fears, in some cases proven, that many existing applications have to be re-written to operate on the new system." Cisco has been reported as saying, "Vista will solve a lot of problems, but for every action, there's a reaction, and unforeseen side-effects and mutations. Networks can become more brittle." According to PC World, "software compatibility issues, bug worries keep businesses from moving to Microsoft's new OS." Citing "concerns over cost and compatibility," the United States Department of Transportation prohibited workers from upgrading to Vista. The University of Pittsburgh Medical Center said that its rollout of Vista was significantly behind schedule because "several key programs still aren't compatible, including patient scheduling software." As of July 2007, there were over 2,000 tested applications that were compatible with Vista. Microsoft published a list of legacy applications that meet its "Works with Windows Vista" software standards as well as a list of applications that meet its more stringent "Certified for Windows Vista" standards. Microsoft also released the Application Compatibility Toolkit 5.0 to help identify and mitigate problems with Vista-incompatible applications, while virtualization solutions such as VirtualBox, Virtual PC 2007, or those from VMware can be used as a last resort to continue running Vista-incompatible applications under legacy versions of Windows.
Microsoft also provided an Upgrade Advisor tool (which requires the .NET Framework and an Internet connection) that can be run on existing XP systems to flag driver and application compatibility issues before upgrading to Vista.
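Tools of this kind work, in part, by taking an inventory of installed software and checking it against a compatibility list. The sketch below is not Microsoft's Upgrade Advisor; it is a minimal, hypothetical illustration of the inventory step, reading installed-application names from the Windows registry with Python's standard winreg module. The "known incompatible" list is made up for demonstration.

```python
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"


def installed_applications():
    """Yield display names of applications registered under the Uninstall key."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]
        for i in range(subkey_count):
            subkey_name = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, subkey_name) as sub:
                try:
                    name, _ = winreg.QueryValueEx(sub, "DisplayName")
                    yield name
                except FileNotFoundError:
                    continue  # entry has no display name; skip it


# A purely hypothetical "known incompatible" list, for illustration only.
KNOWN_INCOMPATIBLE = {"Example Legacy CD Burner 2.0"}

for app in installed_applications():
    flag = "  <-- check before upgrading" if app in KNOWN_INCOMPATIBLE else ""
    print(app + flag)
```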
Removal of announced features:
Microsoft has also been criticized for removing some heavily discussed features, such as the Next-Generation Secure Computing Base in May 2004, WinFS in August 2004, Windows PowerShell in August 2005 (though PowerShell was released separately before Vista shipped and is included in Vista's successor, Windows 7), SecurID support in May 2006, and PC-to-PC synchronization in June 2006. The initial "three pillars" of Vista were all radically altered in order to reach a release date.
Pricing:
Microsoft's international pricing of Vista was criticized by many as too expensive. Prices differed significantly from one country to another, especially considering that copies of Vista could be ordered and shipped worldwide from the United States, which could save between $42 (€26) and $314 (€200). In many cases, the difference in price was significantly greater than it had been for Windows XP. In Malaysia, Vista was priced at around RM799 ($244/€155). At the 2007 exchange rate, United Kingdom consumers paid almost double what their United States counterparts paid for the same software.
Microsoft also came under fire from British consumers over the price it charged for Vista, then the latest version of Windows.
British (and French) customers paid roughly double the US price: the upgrade from Windows XP to Vista Home Basic cost £100 (€126), while American users paid the equivalent of only £51 ($100, €64).
Since the release of Windows Vista in January 2007, Microsoft has reduced the retail and upgrade prices of Vista. Originally, Vista Ultimate full retail was priced at $399 and the upgrade at $259; these prices have since been reduced to $319 and $219 respectively.
Software Protection Platform:
Vista includes an enhanced set of anti-copying technologies, based on Windows XP's Windows Genuine Advantage, called the Software Protection Platform (SPP). In the initial release of Windows Vista (without Service Pack 1), SPP included a reduced-functionality mode that the system enters when it detects that the user has "failed product activation" or that the copy of Vista is "identified as counterfeit or non-genuine". A Microsoft white paper described the technology as follows: "The default Web browser will be started and the user will be presented with an option to purchase a new product key. There is no start menu, no desktop icons, and the desktop background is changed to black. [...] After one hour, the system will log the user out without warning."
Some analysts questioned this behavior, especially given the imperfect false-positive record of SPP's predecessor and at least one temporary validation server outage that reportedly flagged many legitimate copies of Vista and XP as "non-genuine" when Windows Update "checked in" and failed the "validation" challenge. Microsoft altered SPP significantly in Windows Vista Service Pack 1. Instead of entering reduced-functionality mode, installations of Vista left unactivated for 30 days present users with a nag screen prompting them to activate the operating system when they log in, change the desktop background to solid black every hour, and periodically display notification balloons warning about software counterfeiting. In addition, updates classified as optional are not available to unactivated copies of Vista.
Microsoft maintains a technical bulletin providing further details on product activation for Vista.
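On Vista and later, the built-in slmgr.vbs script reports the machine's license and activation state; the short sketch below simply shells out to it from Python. The use of Python and the crude string check are conveniences for illustration, not anything Microsoft's bulletin prescribes.

```python
import subprocess

# slmgr.vbs ships with Windows Vista and later; /dli shows basic license info.
cmd = [
    "cscript", "//nologo",
    r"C:\Windows\System32\slmgr.vbs", "/dli",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)

# A crude check: the /dli output includes a "License Status" line,
# e.g. "License Status: Licensed" on an activated machine.
if "Licensed" in result.stdout:
    print("This copy of Windows reports itself as activated.")
else:
    print("Activation state unclear; review the output above.")
```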
Windows Ultimate Extras:
Windows Vista Ultimate users can download exclusive Windows Ultimate Extras. These extras were released much more slowly than expected, with only four available as of August 2009, almost three years after Vista's release, which angered some users who had paid extra mainly for the promised add-ons. Barry Goffe, Microsoft's Director of Windows Vista Ultimate, stated that the company had been unexpectedly delayed in releasing several of the extras, but that "Microsoft plans to ship a collection of additional Windows Ultimate Extras that it is confident will delight its passionate Windows Vista Ultimate customers."
Vistaster:
This term was coined as a disparaging substitute for the proper name of the Vista operating system. Its use was popularized by The Secret Diary of Steve Jobs, a technology and pop-culture comedy blog in which author Daniel Lyons writes in the persona of then-Apple CEO Steve Jobs. The term refers to Vista's failure to meet sales and customer satisfaction expectations. Lyons published an article in Forbes using the term, and it was soon picked up by international media outlets including Jornal de Notícias, Rádio e Televisão de Portugal, La Nación, The Chosun Ilbo, and 163.com.
Retrospective analysis:
Keith Ward of Lifewire said that "Windows Vista was not Microsoft's most-loved release. People look at Windows 7 with nostalgia, but you don't hear much love for Vista. Microsoft has mostly forgotten it, but Vista was a good, solid operating system with many things going for it."