**Pavement dwellers**
Pavement dwellers:
Pavement dwellers are people who live in informal housing built on the footpaths or pavements of city streets. The structures use the walls or fences which separate properties from the pavement and street outside. Materials include cloth, corrugated iron, cardboard, wood, plastic, and sometimes bricks or cement.
Mumbai:
According to Sheela Patel of the Society for the Promotion of Area Resource Centers (SPARC), pavement dwellers are primarily first-generation migrants who moved to Mumbai as early as the 1940s, and who have lived on the pavement of public roadways ever since. They are completely invisible as far as local, state, and national policies are concerned. People who sleep on or near pavements often pay to keep their belongings in shops, kiosks, or other buildings. SPARC conducted a study in 1985 called We the Invisible based on a census of about 6,000 households. It showed approximately half of the pavement dwellers to be from the poorest districts in the state of Maharashtra, with the other half from the poorest parts of wider India. Many came as victims of drought, famine, earthquakes, religious persecution or riots. Others came as a result of a complete breakdown in their livelihoods where they had been living. Pavement dwellers migrate to Mumbai hoping to capitalize on the wealth and job opportunities that the city offers. They are typically forward-thinking, seeking to build lives in the city that give the next generation better opportunities than would have been possible in the village.
Mumbai:
1985 eviction crisis:
In 1985, the Supreme Court of India granted the Municipal Corporation of Greater Mumbai authority to demolish household structures on the sidewalks of Mumbai. With the aid of SPARC, the rights of Mumbai's pavement dwellers were recognised and coexistence was successfully negotiated.
South Africa:
The Symphony Way Pavement Dwellers were a group of evicted families who lived on a main road in Delft, South Africa, from February 2008 until late 2009, when they were evicted by court order to the Blikkiesdorp resettlement zone.
Paris:
Following the eviction of the Calais Jungle in 2016, African migrants began living on the pavements of Paris. The encampments would grow over a period of weeks and the police would evict them.
**Extended Boolean model**
Extended Boolean model:
The Extended Boolean model was described in a Communications of the ACM article appearing in 1983, by Gerard Salton, Edward A. Fox, and Harry Wu. The goal of the Extended Boolean model is to overcome the drawbacks of the Boolean model that has been used in information retrieval. The Boolean model doesn't consider term weights in queries, and the result set of a Boolean query is often either too small or too big. The idea of the extended model is to make use of partial matching and term weights as in the vector space model. It combines the characteristics of the vector space model with the properties of Boolean algebra and ranks the similarity between queries and documents. This way a document may be somewhat relevant if it matches some of the queried terms and will be returned as a result, whereas in the Standard Boolean model it would not be. Thus, the extended Boolean model can be considered a generalization of both the Boolean and vector space models; those two are special cases if suitable settings and definitions are employed. Further, research has shown that retrieval effectiveness improves relative to standard Boolean query processing. Other research has shown that relevance feedback and query expansion can be integrated with extended Boolean query processing.
Definitions:
In the Extended Boolean model, a document is represented as a vector, similarly to the vector space model. Each dimension i corresponds to a separate index term associated with the document.
The weight of term $K_x$ associated with document $d_j$ is measured by its normalized term frequency and can be defined as:
$$w_{x,j} = f_{x,j} \cdot \frac{\mathrm{Idf}_x}{\max_i \mathrm{Idf}_i}$$
where $\mathrm{Idf}_x$ is the inverse document frequency and $f_{x,j}$ the term frequency of term $x$ in document $j$.
The weight vector associated with document $d_j$ can be represented as:
$$\mathbf{v}_{d_j} = [w_{1,j}, w_{2,j}, \ldots, w_{i,j}]$$
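As a small illustration of the weight definition above, the following Python sketch computes normalized term weights from raw term frequencies and inverse document frequencies; the term names and numeric values are hypothetical, chosen only to mirror the formula.

```python
# Toy illustration of w_{x,j} = f_{x,j} * Idf_x / max_i Idf_i (hypothetical values)
f = {"k1": 0.5, "k2": 1.0, "k3": 0.0}    # term frequencies in document d_j (hypothetical)
idf = {"k1": 1.2, "k2": 0.3, "k3": 2.0}  # inverse document frequencies (hypothetical)

max_idf = max(idf.values())
w = {t: f[t] * idf[t] / max_idf for t in f}  # weight vector for document d_j
print(w)  # roughly {'k1': 0.3, 'k2': 0.15, 'k3': 0.0}
```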
The 2 Dimensions Example:
Considering the space composed of two terms $K_x$ and $K_y$ only, with corresponding term weights $w_1$ and $w_2$, the similarity for the query $q_{or} = (K_x \vee K_y)$ can be calculated with the following formula:
$$sim(q_{or}, d) = \sqrt{\frac{w_1^2 + w_2^2}{2}}$$
For the query $q_{and} = (K_x \wedge K_y)$, we can use:
$$sim(q_{and}, d) = 1 - \sqrt{\frac{(1-w_1)^2 + (1-w_2)^2}{2}}$$
Generalizing the idea and P-norms:
We can generalize the previous 2D extended Boolean model example to higher t-dimensional space using Euclidean distances.
This can be done using p-norms, which extend the notion of distance to include p-distances, where 1 ≤ p ≤ ∞ is a new parameter.
A generalized disjunctive query is given by:
$$q_{or} = k_1 \vee^{p} k_2 \vee^{p} \cdots \vee^{p} k_t$$
The similarity of $q_{or}$ and $d_j$ can be defined as:
$$sim(q_{or}, d_j) = \left( \frac{w_1^p + w_2^p + \cdots + w_t^p}{t} \right)^{1/p}$$
A generalized conjunctive query is given by:
$$q_{and} = k_1 \wedge^{p} k_2 \wedge^{p} \cdots \wedge^{p} k_t$$
The similarity of $q_{and}$ and $d_j$ can be defined as:
$$sim(q_{and}, d_j) = 1 - \left( \frac{(1-w_1)^p + (1-w_2)^p + \cdots + (1-w_t)^p}{t} \right)^{1/p}$$
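A minimal sketch of the p-norm similarities above, in Python; the helper names `sim_or` and `sim_and` are hypothetical, and with p = 2 and two weights they reduce to the 2D formulas given earlier.

```python
# Sketch: p-norm similarities of the extended Boolean model.
def sim_or(weights, p=2.0):
    """Similarity of a disjunctive query k1 OR^p ... OR^p kt to a document."""
    t = len(weights)
    return (sum(w ** p for w in weights) / t) ** (1.0 / p)

def sim_and(weights, p=2.0):
    """Similarity of a conjunctive query k1 AND^p ... AND^p kt to a document."""
    t = len(weights)
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / t) ** (1.0 / p)

# With p = 2 and two terms these reduce to the 2D formulas above.
print(sim_or([0.5, 1.0]))   # sqrt((0.25 + 1) / 2) ~ 0.79
print(sim_and([0.5, 1.0]))  # 1 - sqrt((0.25 + 0) / 2) ~ 0.65
```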
Examples:
Consider the query q = (K1 ∧ K2) ∨ K3. The similarity between query q and document d can be computed using the formula:
$$sim(q,d) = \left( \frac{\left(1 - \left(\frac{(1-w_1)^p + (1-w_2)^p}{2}\right)^{1/p}\right)^{p} + w_3^{p}}{2} \right)^{1/p}$$
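The same result can be obtained mechanically by evaluating the inner conjunction first and feeding its score into the outer disjunction. The sketch below uses a small, hypothetical query-tree representation to make that nesting explicit; the function name, tuple format, and weight values are illustrative assumptions, not a standard API.

```python
# Sketch: recursive evaluation of a nested extended-Boolean query such as
# q = (K1 AND K2) OR K3. A query is either a term name or a tuple
# ("and"/"or", [subqueries]); the representation is illustrative only.
def sim(query, weights, p=2.0):
    if isinstance(query, str):                     # a bare term: its weight is its score
        return weights[query]
    op, parts = query
    scores = [sim(part, weights, p) for part in parts]
    t = len(scores)
    if op == "or":
        return (sum(s ** p for s in scores) / t) ** (1.0 / p)
    else:  # "and"
        return 1.0 - (sum((1.0 - s) ** p for s in scores) / t) ** (1.0 / p)

w = {"k1": 0.8, "k2": 0.4, "k3": 0.1}              # hypothetical document weights
q = ("or", [("and", ["k1", "k2"]), "k3"])          # (K1 AND K2) OR K3
print(sim(q, w, p=2.0))                            # matches the closed-form expression above
```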
Improvements over the Standard Boolean Model:
Lee and Fox compared the Standard and Extended Boolean models with three test collections, CISI, CACM and INSPEC.
Using P-norms they obtained an average precision improvement of 79%, 106% and 210% over the Standard model, for the CISI, CACM and INSPEC collections, respectively.
The P-norm model is computationally expensive because of the number of exponentiation operations that it requires, but it achieves much better results than the Standard model and even Fuzzy retrieval techniques. The Standard Boolean model is still the most efficient.
**Cheoptics360**
Cheoptics360:
Pepper's ghost is an illusion technique used in the theatre, cinema, amusement parks, museums, television, and concerts. The illusion is performed by reflecting an image of an off-stage object so that it appears to be in front of the audience. It is named after the English scientist John Henry Pepper (1821–1900) who began popularising the effect with a theatre demonstration in 1862. This launched an international vogue for ghost-themed plays which used this novel stage effect during the 1860s and subsequent decades.
Cheoptics360:
The illusion is widely used for entertainment and publicity purposes. These include the Girl-to-Gorilla trick found in old carnival sideshows and the appearance of "ghosts" at the Haunted Mansion and the "Blue Fairy" in Pinocchio's Daring Journey, both at Disneyland in California. Teleprompters are a modern implementation of Pepper's ghost. The technique was used to display a life-size illusion of Kate Moss at the 2006 runway show for the Alexander McQueen collection The Widows of Culloden. In the 2010s the technique was used to make virtual artists appear onstage in apparent 'live' concerts, with examples including Elvis Presley, Tupac Shakur and Michael Jackson. It is often wrongly described as "holographic". Such setups can involve custom projection media server software and specialized stretched films. The installation may be a site-specific one-off, or a use of a commercial system such as the Cheoptics360 or Musion Eyeliner.
Cheoptics360:
Products have been designed using a clear plastic pyramid and a smartphone screen to generate the illusion of a 3D object.
Effect:
The core illusion involves a stage specially arranged into two rooms or areas, one into which audience members can see, and a second (sometimes referred to as the "blue room") that is hidden to the side. A plate of glass (or Plexiglas or plastic film) is placed somewhere in the main room at an angle that reflects the view of the blue room towards the audience. Generally, this is arranged with the blue room to one side of the stage, and the plate on the stage rotated around its vertical axis at 45 degrees. Care must be taken to make the glass as invisible as possible, normally hiding the lower edge in patterning on the floor and ensuring lights do not reflect off it. The plate catches a reflection from a brightly lit actor in an area hidden from the audience. Not noticing the glass screen, the audience mistakenly perceive this reflection as a ghostly figure located among the actors on the main stage. The lighting of the actor in the hidden area can be gradually brightened or dimmed to make the ghost image fade in and out of visibility.
Effect:
When the lights are bright in the main room and dark in the blue room, the reflected image cannot be seen. When the lighting in the blue room is increased, often with the main room lights dimming to make the effect more pronounced, the reflection becomes visible and the objects within the blue/hidden room seem to appear, from thin air, in the space visible to the audience. A common variation uses two blue/hidden rooms, one behind the glass in the main room, and one to the side, the contents of which can be switched between 'visible' and 'invisible' states by manipulating the lighting therein. The hidden room may be an identical mirror-image of the main room, so that its reflected image exactly matches the layout of the main room; this approach is useful in making objects seem to appear or disappear. This illusion can also be used to make an object, or person—reflected in, say, a mirror—appear to morph into another (or vice versa). This is the principle behind the Girl-to-Gorilla trick found in old carnival sideshows. Another variation: the hidden room may itself be painted black, with only light-coloured objects in it. In this case, when light is cast on the room, only the light objects strongly reflect that light, and therefore appear as ghostly, translucent images on the (invisible) pane of glass in the room visible to the audience. This can be used to make objects appear to float in space.
Effect:
The theatrical use of the illusion which John Henry Pepper pioneered and repeatedly staged in the 1860s took the form of short plays featuring a ghostly apparition which interacts with other actors. An early favourite showed an actor attempting to use a sword against an ethereal ghost, as in the illustration. To choreograph other actors' dealings with the ghost, Pepper used concealed markings on the stage floor for where they should place their feet, since they could not see the ghost image's apparent location. Pepper's 1890 book includes a detailed explanation of his stagecraft secrets, disclosed in his 1863 joint application with co-inventor Henry Dircks to patent this ghost illusion technique. The hidden area is typically below the visible stage but in other Pepper's Ghost set-ups it can be above or, quite commonly, adjacent to the area visible to the viewers. The scale can be very much smaller, for instance small peepshows, even hand-held toys. The illustration shows Pepper's initial arrangement for making a ghost image visible anywhere throughout a theatre.
Effect:
Many effects can be produced via Pepper's Ghost. Since glass screens are less reflective than mirrors, they do not reflect matte black objects in the area hidden from the audience. Thus Pepper's Ghost showmen sometimes used an invisible black-clad actor in the hidden area to manipulate brightly lit, light-coloured objects, which can thus appear to float in air. Pepper's very first public ghost show used a seated skeleton in a white shroud which was being manipulated by an unseen actor in black velvet robes. Hidden actors, whose heads were powdered white for reflection but whose clothes were matte black, could appear as disembodied heads when strongly lit and reflected by the angled glass screen. Pepper's Ghost can be adapted to make performers apparently materialise from nowhere or disappear into empty space. Pepper would sometimes greet an audience by suddenly materialising in the middle of the stage. The illusion can also apparently transform one object or person into another. For instance, Pepper sometimes suspended on stage a basket of oranges which then 'transformed' into jars of marmalade. Another 19th century Pepper's Ghost entertainment featured a figure flying around a theatre backcloth painted as the sky. The hidden actor, lying under bright lights on a rotating, matte black table, wore a costume with metallic spangles to maximise reflection on the hidden glass screen. This foreshadows some 20th century cinema special effects.
History:
Precursors:
Giambattista della Porta was a 16th-century Neapolitan scientist and scholar who is credited with a number of scientific innovations. His 1589 work Magia Naturalis (Natural Magic) includes a description of an illusion, titled "How we may see in a Chamber things that are not", that is the first known description of the Pepper's ghost effect. Porta's description, from the 1658 English language translation (page 370), is as follows.
History:
Let there be a chamber wherein no other light comes, unless by the door or window where the spectator looks in. Let the whole window or part of it be of glass, as we use to do to keep out the cold. But let one part be polished, that there may be a Looking-glass on bothe sides, whence the spectator must look in. For the rest do nothing. Let pictures be set over against this window, marble statues and suchlike. For what is without will seem to be within, and what is behind the spectator's back, he will think to be in the middle of the house, as far from the glass inward, as they stand from it outwardly, and clearly and certainly, that he will think he sees nothing but truth. But lest the skill should be known, let the part be made so where the ornament is, that the spectator may not see it, as above his head, that a pavement may come between above his head. And if an ingenious man do this, it is impossible that he should suppose that he is deceived.
History:
From the mid-19th century, the illusion, today known as Pepper's Ghost, became widely developed for money-making stage entertainments, amid bitter argument, patent disputes and legal action concerning the technique's authorship. A popular genre of entertainment was stage demonstrations of scientific novelties. Simulations of ghostly phenomena through innovative optical technology fitted these well. 'Phantasmagoria' shows, which simulated supernatural effects, were also familiar public entertainments. Previously, these had made much use of complex magic lantern techniques, like the multiple projectors, mobile projectors and projection on mirrors and smoke, which had been perfected by Étienne-Gaspard Robert/Robertson in Paris early in the century. The new illusion, soon to be labeled Pepper's Ghost, offered a completely different and more convincing way to produce ghost effects, using reflections not projection.
History:
A claim to be the first user of the new illusion in theatres came from the Dutch-born stage magician Henrik Joseph Donckel, who became famous in France under the stage name Henri Robin. Robin said he had spent two years developing the illusion before trying it in 1847 during his regular shows of stage magic and the supernatural in Lyons. However, he found this early rendering of the ghost effect made little impression on the audience. He wrote: "The ghosts failed to achieve the full illusory effect which I have subsequently perfected." The shortcomings of his original techniques "caused me great embarrassment, I found myself forced to put them aside for a while." While Robin later became famous for many effective, imaginative, and complex applications of 'Pepper's Ghost' at Robin's own theatre in Paris, such shows only began mid-1863 after John Henry Pepper had demonstrated his own method for staging the illusion at the London Polytechnic in December 1862. Jean-Eugene Robert-Houdin, contemporary French grand master of stage magic, regarded Robin's performances and other 1863 ghost shows in Paris as "plagiarists" of Pepper's innovation. Jim Steinmeyer, a modern technical and historical authority on Pepper's Ghost, has expressed doubts as to the reliability of Robin's claims for his 1847 performances. Whatever Robin did in 1847, by his own account it produced nothing like the stage effect whereby Pepper, and later Robin himself, astonished and thrilled audiences during 1863.
History:
In October 1852 Pierre Séguin, an artist, patented in France a portable peepshow-like toy for children, which he named the 'polyoscope'. This used the very same illusion, based on reflection, which ten years later Pepper and Dircks would patent in Britain under their own names. Although creating illusory images within a small box is appreciably different from delivering an illusion on stage, Séguin's 1852 patent was eventually to lead to the defeat of Pepper's 1863 attempt to control and license the 'Pepper's Ghost' technique in France as well as in Britain. Pepper described Séguin's polyoscope: "It consisted of a box with a small sheet of glass, placed at an angle of forty-five degrees, and it reflected a concealed table, with plastic figures, the spectre of which appeared behind the glass, and which young people who possessed the toy invited their companions to take out of the box, when it melted away, as it were, in their hands and disappeared." In 1863 Henri Robin maintained that Séguin's polyoscope had been inspired by his own original version of the stage illusion, which Séguin had witnessed while painting magic lantern slides for another part of Robin's show.
History:
Dircks and Pepper:
Henry Dircks was an English engineer and practical inventor who from 1858 strove to find theatres which would implement his vision of a sensational new genre of drama featuring apparitions which interacted with actors on stage. He constructed a peepshow-like model which demonstrated how reflections on a glass screen could produce convincing illusions. He also outlined a series of plays featuring ghost effects, which his apparatus could enable, and worked out how complex illusions, like image transformations, could be achieved through the technique. But in terms of applying the effect in theatres, Dircks seemed unable to think beyond remodelling theatres to resemble his peepshow model. He produced a design for theatres which required costly, impractical rebuilding of an auditorium to host the illusion. The theatres which he approached were not interested. In another bid to attract interest, he advertised his models for sale and in late 1862 the models' manufacturer invited John Henry Pepper to view one. John Henry Pepper was a scientific all-rounder who was both an effective public educator in science and an astute, publicity-conscious, commercial showman. In 1854, he became the director and sole lessee of the Royal Polytechnic where he held the title of Professor. The Polytechnic ran a mix of science education courses and eye-catching public displays of scientific innovations. After seeing Dircks' peepshow model in 1862, Pepper quickly devised an ingenious twist whereby, through adding an angled sheet of glass and a screened-off orchestra pit, almost any theatre or hall could make the illusion visible to a large audience. Its first public performance in December 1862—a scene from Charles Dickens's The Haunted Man—produced rapturous responses from audience and journalists. A deal was struck between Pepper and Dircks whereby they jointly patented the illusion. Dircks agreed to waive any share of profits for the satisfaction of seeing his idea implemented so effectively. Their joint patent was obtained provisionally in February 1863 and ratified in October 1863.
History:
Before Dircks' partnership with Pepper was a full year old, Dircks published a book which accused Pepper of plotting to systematically stamp Pepper's name alone on their joint creation. According to Dircks, while Pepper took care to credit Dircks in any communications to the scientific community, everything which reached the general public—like newspaper reports, advertisements and theatre posters—mentioned Pepper alone. Whenever Dircks complained, he said, Pepper would blame careless journalists or theatre managers. However, the omission had occurred so repeatedly that Dircks believed that Pepper was deliberately striving to fix his name alone in the minds of the general public. A good half of Dircks' 106-page book, The Ghost, comprises such recriminations with detailed examples of how Pepper hid Dircks' name. An earlier 1863 Spectator article had presented the Dircks/Pepper partnership thus: "This admirable ghost is the offspring of two fathers…. To Mr. Dircks belongs the honour of having invented him…. and Professor Pepper has the merit of having improved him considerably, fitting him for the intercourse of mundane society, and even educating him for the stage."
Popularity:
Short plays using the new ghost illusion swiftly became sensationally popular. Pepper staged many dramatic and profitable demonstrations, notably in the lecture theatre of London's Royal Polytechnic. By late 1863, the illusion's fame had spread extensively with ghost-centred plays performed at multiple London venues, in Manchester and Glasgow, Paris and New York. Royalty attended. There was even a shortage of plate glass because of demand from theatres for glass screens. A popular song from 1863 celebrated the 'Patent Ghost'. By his own account, Pepper, who was entitled to all profits, made considerable earnings from the patent. He ran his own performances and licensed other operators for money. In Britain, he was initially successful in suing some unlicensed imitators, deterring others by legal threats, and defeating a September 1863 court action by music-hall proprietors who challenged the patent. However, while in Paris in summer 1863 to assist a licensed performance, Pepper had proved unable to stop Henri Robin and several others who were already performing unlicensed versions there. Robin successfully cited Séguin's pre-existing patent of the polyoscope, of which Pepper had been ignorant. During the next four years Robin developed spectacular and original applications of the illusion in Paris. One famous Robin show depicted the great violinist Paganini being troubled in his sleep by a demon violinist, who repeatedly appeared and disappeared. During the next two decades, performances using the illusion spread to several countries. In 1877 a patent was registered for the United States. In Britain, theatre productions using Pepper's Ghost toured far outside major cities. The performers travelled with their own glass screens and became known as "spectral opera companies". Around a dozen such specialist theatre companies existed in Britain. A typical performance would comprise a substantial play where apparitions were central to the plot, like an adaptation of Dickens' A Christmas Carol, followed by a short comic piece which also used ghost effects. One company, for instance, 'The Original Pepper's Ghost and Spectral Opera Company' had 11 ghost-themed plays in its repertoire.
Another such company during a single year, 1877, performed at 30 different places in Britain, usually for a week but sometimes for as long as six weeks. By the 1890s, however, novelty had faded and the vogue for such theatre was in steep decline. Pepper's Ghost remained in use, however, at sensational entertainments comparable to 'dark rides' or 'ghost trains' at modern funfairs and amusement parks: a detailed account survives of audience participation in two macabre entertainments, which both used Pepper's Ghost, within a 'Tavern of the Dead' show which visited Paris and New York in the 1890s. Since the 1860s, "Pepper's Ghost" has become a universal term for any illusion produced via a reflection on an unnoticed glass screen. It is routinely applied to all versions of the illusion, which are now quite common in 21st century displays, peepshows, and installations in museums and amusement parks. However, the specific optics in these modern displays often follow Séguin's or Dircks' earlier designs rather than the modification for theatres which first brought Pepper's name into enduring usage.
Modern uses:
Systems:
Several proprietary systems produce modern Pepper's ghost effects. The "Musion Eyeliner" uses a thin metalized film placed across the front of the stage at an angle of 45 degrees towards the audience; recessed below the screen is a bright image supplied by an LED screen or powerful projector. When viewed from the audience's perspective, the reflected images appear to be on the stage. The "Cheoptics360", developed in partnership with the engineering consultancy Ramboll, displays revolving 3D animations or special video sequences inside a four-sided transparent pyramid. This system is often used for retail environments and exhibitions.
Modern uses:
Amusement parks:
The world's largest implementation of this illusion can be found at The Haunted Mansion and Phantom Manor attractions at several Walt Disney Parks and Resorts. There, a 90-foot (27 m)-long scene brings together multiple Pepper's ghost effects in a single view. Guests travel along an elevated mezzanine, looking through a 30-foot (9.1 m)-tall pane of glass into an empty ballroom. Animatronic ghosts move in hidden black rooms beneath and above the mezzanine. A more advanced variation of the Pepper's Ghost effect is also used at The Twilight Zone Tower of Terror.
Modern uses:
The walk-through attraction Turbidite Manor in Nashville, Tennessee, employs variations of the classic technique, enabling guests to see various spirits that also interact with the physical environment, viewable at a much closer proximity. The House at Haunted Hill, a Halloween attraction in Woodland Hills, California, employs a similar variation in its front window to display characters from its storyline.
Modern uses:
Examples that combine the Pepper's ghost effect with a live actor and film projection can be seen in the Mystery Lodge exhibit at the Knott's Berry Farm theme park in Buena Park, California, the Ghosts of the Library exhibit at the Abraham Lincoln Presidential Library and Museum in Springfield, Illinois, and the depiction of Maori legends called A Millennium Ago at the Museum of Wellington City & Sea in New Zealand. The Hogwarts Express attraction at Universal Studios Florida uses the Pepper's ghost effect, such that guests entering "Platform 9+3⁄4" seem to disappear into a brick wall when viewed from those further behind in the queue.
Modern uses:
Museums:
Museums increasingly use Pepper's ghost exhibits to create attractions that appeal to visitors. In the mid-1970s James Gardener designed the Changing Office installation in the London Science Museum, consisting of a 1970s-style office that transforms into an 1870s-style office as the audience watches. It was designed and built by Will Wilson and Simon Beer of Integrated Circles. Another particularly intricate Pepper's ghost display is the Eight Stage Ghost built for the British Telecom Showcase Exhibition in London in 1978. This display follows the history of electronics in a number of discrete transitions. More modern examples of Pepper's ghost effects can be found in various museums in the United Kingdom and Europe. Examples of these in the United Kingdom are the ghost of Annie McLeod at the New Lanark World Heritage Site, the ghost of John McEnroe at the Wimbledon Lawn Tennis Museum, which reopened in new premises in 2006, and one of Sir Alex Ferguson, which opened at the Manchester United Museum in 2007. Other examples include the ghost of Sarah (who picks up a candle and walks through the wall) and also the ghost of the Eighth Duke at Blenheim Palace.
Modern uses:
In October 2008 a life-sized Pepper's ghost of Shane Warne was opened at the National Sports Museum in Melbourne, Australia. The effect was also used at the Dickens World attraction at Chatham Maritime, Kent, United Kingdom. Both the York Dungeon and the Edinburgh Dungeon use the effect in the context of their 'Ghosts' shows.
Another example can be found at Our Planet Centre in Castries, St Lucia, which opened in May 2011, where a life-size Prince Charles (now Charles III) and the Governor-General of the island appear on stage talking about climate change. The German company Musion installed a holostage in the German Football Museum in Dortmund in 2016.
Television, film and video:
Teleprompters are a modern implementation of Pepper's ghost used by the television industry. They reflect a speech or script and are commonly used for live broadcasts such as news programmes.
Modern uses:
A demonstration of Pepper's ghost was shown in a segment of Mr. Wizard's World. On 1 June 2013, ITV broadcast Les Dawson: An Audience With That Never Was. The program featured a Pepper's ghost projection of Les Dawson, presenting content for a 1993 edition of An Audience with... to be hosted by Dawson but unused due to his death two weeks before recording. In the 1990 movie Home Alone, the technique is used to show Harry with his head in flames, as a result of a blowtorch during a home invasion gone bad; CGI was not able to produce the desired results. The James Bond movie Diamonds Are Forever features the girl-to-gorilla trick in one scene. Early electro-mechanical arcade machines, such as Midway's "Stunt Pilot" and Bally's "Road Runner," both made in 1971, use the effect to allow player-controlled moving vehicles to appear to share the same space as various obstacles within a diorama. Electrical contacts, connected to the control linkages, sense the position of the vehicle and obstacles, simulating collisions in the games' logic circuits without the models physically touching each other. Various arcade games, most notably Taito's 1978 video game Space Invaders and SEGA's 1991 video game Time Traveler, used a mirror-based variation of the illusion to make the game's graphics appear against an illuminated backdrop.
Modern uses:
Concerts:
An illusion based on Pepper's ghost involving projected images has been featured at music concerts (often erroneously marketed as "holographic"). At the 2006 Grammy Awards, the Pepper's ghost technique was used to project Madonna with the virtual members of the band Gorillaz onto the stage in a "live" performance. This type of system consists of a projector (usually DLP) or LED screen, with a resolution of 1280×1024 or higher and brightness of at least 5,000 lumens, a high-definition video player, a stretched film between the audience and the acting area, a 3D set/drawing that encloses three sides, plus lighting, audio, and show control. During Dr. Dre and Snoop Dogg's performance at the 2012 Coachella Valley Music and Arts Festival, a projection of deceased rapper Tupac Shakur appeared and performed "Hail Mary" and "2 of Amerikaz Most Wanted". The use of this approach was repeated in 2013 at west coast Rock the Bells dates, featuring projections of Eazy-E and Ol' Dirty Bastard.
Modern uses:
On 18 May 2014, during the Billboard Music Awards, an illusion of deceased pop star Michael Jackson, other dancers, and the entire stage set was projected onto the stage for a performance of the song "Slave to the Rhythm" from the posthumous Xscape album. On 21 September 2017, the Frank Zappa estate announced plans to conduct a reunion tour with the Mothers of Invention that would make use of Pepper's ghosts of Frank Zappa and the settings from his studio albums. Initially scheduled to run through 2018, the tour was later pushed back to 2019. A projection of Ronnie James Dio performed at the Wacken Open Air festival in 2016. Swedish supergroup ABBA returned to the stage in May 2022, as digital avatars performing much-loved hits. The technology used for this show is wrongly assumed to be a Pepper's ghost effect; it is in fact a huge opaque, seamless, flat LED screen, with careful integration of show effects inside the theatre to create a layered, dynamic experience and a realistic illusion of depth, as described by the show's director Baillie Walsh.
Modern uses:
Political speeches:
NChant 3D telecast live a 55-minute speech by Narendra Modi, Chief Minister of Gujarat, to 53 locations across Gujarat on 10 December 2012 during the assembly elections. In April 2014, they projected Narendra Modi again at 88 locations across India. In 2014, Turkish Prime Minister Recep Tayyip Erdoğan delivered a speech via Pepper's ghost in Izmir. In 2017, French presidential candidate Jean-Luc Mélenchon gave a speech using Pepper's ghost at a campaign event in Aubervilliers.
**Pipobroman**
Pipobroman:
Pipobroman (trade names Vercite, Vercyte) is an anti-cancer drug that probably acts as an alkylating agent. It is marketed in France and Italy.
**Code page 770**
Code page 770:
Code page 770 (also known as CP 770) is a code page used under DOS to write the Estonian, Lithuanian and Latvian languages.
Character set:
The following table shows code page 770. Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as code page 437.
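In principle, a decoder for such a single-byte DOS code page only needs the upper-half mapping, since bytes 0–127 coincide with code page 437 (and hence with ASCII). The Python sketch below illustrates that structure; the two sample mapping entries are hypothetical placeholders, not the actual code page 770 assignments.

```python
# Sketch of decoding a single-byte DOS code page like CP 770:
# bytes 0-127 follow code page 437 (ASCII range), bytes 128-255 come from
# the code page's own table. The entries below are HYPOTHETICAL placeholders,
# not the real CP 770 mapping.
UPPER_HALF = {
    0x80: "\u010C",   # hypothetical entry (Č), for illustration only
    0x81: "\u0101",   # hypothetical entry (ā), for illustration only
    # ... 0x82-0xFF would follow from the real table ...
}

def decode_cp770(data: bytes) -> str:
    out = []
    for b in data:
        if b < 0x80:
            out.append(bytes([b]).decode("cp437"))   # same as CP 437 / ASCII here
        else:
            out.append(UPPER_HALF.get(b, "\uFFFD"))  # replacement char if unmapped
    return "".join(out)

print(decode_cp770(b"Hello \x80\x81"))
```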
**CUL5**
CUL5:
Cullin-5 is a protein that in humans is encoded by the CUL5 gene.
Discovery:
The mammalian gene product was originally discovered by expression cloning, due to the protein's ability to mobilize intracellular calcium in response to the peptide hormone arginine vasopressin. It was first titled VACM-1, for vasopressin-activated, calcium-mobilizing receptor. Since then, VACM-1 has been shown to be homologous to the Cullin family of proteins, and was subsequently dubbed cul5.
Tissue distribution:
Studies have shown that the cul5 protein is expressed at its highest levels in heart and skeletal tissue, and is specifically expressed in vascular endothelium and renal collecting tubules.
Function:
Cul5 inhibits cellular proliferation, potentially through its involvement in the SOCS/BC-box/eloBC/cul5/RING E3 ligase complex, which functions as part of the ubiquitin system for protein degradation. One study has shown that Cul5 plays a role in the Reelin signaling cascade, participating in DAB1 degradation and thus ensuring the negative feedback mechanism of Reelin signaling during corticogenesis.
Interactions:
CUL5 has been shown to interact with RBX1.
**Plastic.com**
Plastic.com:
Plastic.com (2001–2011) was a general-interest internet forum running under the motto 'Recycling the Web in Real Time'.
The website was community-driven, with readers moderating discussions, submitting stories, and participating in their selection.
The site:
Plastic was launched in January 2001 by Automatic Media, a conglomerate that included the pioneering webzines Feed and Suck.com. In keeping with Automatic's model of small, low-cost websites, Plastic launched with a staff of only four, amongst them Suck co-founder Joey Anuff as editor-in-chief. When Automatic Media folded in June of that year, several of the editors stayed on, working for free. On November 2, 2001, the site was sold for $30,000 to Suck co-founder Carl Steadman, who became its sole owner. Steadman took the site offline for two weeks and relaunched it on December 16, 2001, after falling out with Anuff, who no longer contributed to Plastic after the relaunch. Plastic did not feature any advertising, and was supported entirely by user donations. The site used a modified version of Slash, the content management system developed and distributed by Slashdot, and it was almost entirely member-driven. As of November 11, 2008, there were 50,218 accounts, with several thousand being active members. For a while, Plastic offered email accounts.
The site:
Plastic closed permanently in February 2011, about a month after its ten-year anniversary.
Content:
The site's topics included "etcetera", "filmtv", "media", "music", "politics", "scitech", and "work". Topics covered on the board were primarily related to current events. Plastic's content was entirely driven by user-submitted stories. A typical Plastic story selected a topic based around a story found on an external link, with the Plastic user providing a larger context for that article with supporting links and some editorial comment. The stories were often written in a way that framed a discussion for the other readers to post comments within. Readers were invited to post their comments in the stories, which could be moderated by other users. New submissions appeared in the Submissions Queue (subQ), which was visible to all users, and could be voted on by users with 50 "karma" or higher. Each voter could give the sub 0, 0.5, or 1 points, and high-ranking subs would eventually become full-fledged stories that could be commented upon. In addition to voting on the submissions, users were given a 255-character text field to suggest changes to the story, post helpful links to exterior sites or previous Plastic stories, or suggest alternate headlines for the story itself. One of Plastic's volunteer editors would then properly format the story for running on "the front page".
Content:
Karma and moderation:
Plastic's moderation system was modeled on the one established by Slashdot, and very similar to it. Commenting Plastic members were randomly awarded moderation points which could be given out as they saw fit. In any discussion thread, a person with moderation points was able to moderate a post up or down, based on the content, with a descriptive tag, such as 'compelling', 'scholarly', 'astute', 'disingenuous', 'obnoxious', etc. It cost one point to moderate a post up, and two to mod a post down. A given comment could have a score between –1 and 5 inclusive at half-point intervals, and Plastic users were able to set a personal threshold below which comments were not displayed. (For example, a person with a score threshold of 1 would not see comments with a score of –1 or 0 but would see all others.) Non-registered users were allowed to post anonymously. Plastic members had the option to block anonymous postings. Additionally, anonymous posts started with a karma score of 0, below the default moderation threshold of 1. Registered Plastic members also had the option to post anonymously, which allowed them to make controversial or offensive comments without fear of losing karma. This practice was generally frowned upon.
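As a rough sketch of the comment-scoring rules just described (point costs, the score range of –1 to 5 in half-point steps, and reader thresholds), the following Python is illustrative only; the function and constant names are hypothetical, not Plastic's actual Slash-based implementation, and the assumption that a single moderation moves a score by half a point is the sketch's own.

```python
# Hedged sketch of Slash-style comment moderation as described above (hypothetical names).
UP_COST, DOWN_COST = 1, 2          # moderating up costs 1 point, down costs 2
MIN_SCORE, MAX_SCORE = -1.0, 5.0   # comment scores range from -1 to 5

def moderate(comment_score, moderator_points, direction, step=0.5):
    """Apply one up/down moderation; step=0.5 is an assumption of this sketch."""
    cost = UP_COST if direction == "up" else DOWN_COST
    if moderator_points < cost:
        return comment_score, moderator_points        # not enough points to moderate
    delta = step if direction == "up" else -step
    new_score = min(MAX_SCORE, max(MIN_SCORE, comment_score + delta))
    return new_score, moderator_points - cost

def visible(comment_score, threshold=1.0):
    """A reader with this threshold does not see comments scoring below it."""
    return comment_score >= threshold

score, points = moderate(1.0, moderator_points=3, direction="down")
print(score, points, visible(score))   # 0.5 1 False
```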
Content:
A karma of 0 or higher was required to post comments in stories and to submit stories to the subQ; 5 or higher was required to post QuickLinks; and 50 or higher was required to vote in the subQ. The members with the highest karma had access to other tools, including the list of all other users in the "Top Karma" group and a list of all members currently logged in. The "Top Karma" page stated that this list was limited to the top 250 members, "give or take," but in reality the list contained 566 members as of 12 August 2005. The minimum karma required to be on this list was 120.5. At 120.5, members were able to see the list of all members currently logged in, and at 121, members had access to the "Top Karma" list. Previously the list showed the karma totals for all members on the list, but at some point this was changed so that karma details were accessible only in "bands"—higher-karma members could see details of all lower-scoring members' karma scores, but lower-scoring members were able to see higher rankings only in bands such as +1500, +2500, +5000, etc. As with the story submission queue, some Plastic members complained that many comments were moderated (especially down) based on political motivations, usually to aid liberal posts or downgrade conservative posts. As with most member-moderated sites, many downmods were motivated by personal feuds. The moderation system did not encourage these practices, and the top karma bands typically included several self-described conservative members.
Content:
QuickLinks:
A member with at least 5 karma points could also post QuickLinks (a "QL"), which appeared on the sidebar. No designated place existed on the site for discussion of these links, but users frequently responded directly to the poster of the QL through private messages. As with comments, QuickLinks could be modded up or down, although the system was different—any registered user with a minimum karma score of 1 could vote a QuickLink up or down. If a link received at least six more up votes than down votes, the submitter would automatically receive 1.5 karma. If a link was modded down with six more down votes than up votes, the submitter would be docked 1.5 karma and the QL would disappear from the site (but still be available in the archive). QuickLinks were used for breaking news, follow-ups to previously discussed items, or humorous stories with little discussion potential.
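The net-vote rule for QuickLinks is simple enough to state as code. The sketch below is a hedged illustration with hypothetical names, not the site's actual logic.

```python
# Sketch of the QuickLink karma rule described above (hypothetical names):
# +6 net up-votes earns the submitter 1.5 karma; -6 net docks 1.5 karma and
# removes the link from the site (it stays in the archive).
def settle_quicklink(up_votes, down_votes, submitter_karma):
    net = up_votes - down_votes
    visible_on_site = True
    if net >= 6:
        submitter_karma += 1.5
    elif net <= -6:
        submitter_karma -= 1.5
        visible_on_site = False   # removed from the site, still archived
    return submitter_karma, visible_on_site

print(settle_quicklink(10, 2, submitter_karma=50.0))   # (51.5, True)
```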
Plastic Chat:
Plastic used to have an online chat server at irc://chat.plastic.com, port 6667. It also was previously available through a modified CGI:IRC client at https://web.archive.org/web/20050406191329/http://chat.plastic.com/. The chat server has been inexplicably down for an extended period of time.
Awards:
In 2001, Plastic won a Webby award in the category of "Print + Zines", beating out its parent Feed Magazine, as well as Mother Jones Magazine, Nerve, and The Position.com.
**Tank Connectors**
Tank Connectors:
Tank connectors are a type of tank fitting also known as a tank inlet, tank outlet, or tank nipple. The fitting must be leak-proof, as the water supply (inward and outward) depends on it. Many different varieties of tank connectors exist. Tank connectors are commonly made of plastic (PVC) or brass. They have a flange either on the edge of one side or in the center, and are fitted with a rubber or plastic washer and one or two hexagonal flange nuts to tighten the connector to the tank wall. Those with two nuts usually require some silicone or other sealant to prevent fluid passing along the threads.
Tank Connectors:
The size of a connector varies from 1/2" to 4".
**Foster cage**
Foster cage:
In the mathematical field of graph theory, the Foster cage is a 5-regular undirected graph with 30 vertices and 75 edges. It is one of the four (5,5)-cage graphs, the others being the Meringer graph, the Robertson–Wegner graph, and the Wong graph.
Like the unrelated Foster graph, it is named after R. M. Foster.
It has chromatic number 4, diameter 3, and is 5-vertex-connected.
Algebraic properties:
The characteristic polynomial of the Foster cage is 11 )4.
**Melt sandwich**
Melt sandwich:
A melt sandwich is a type of hot sandwich containing a suitable meltable cheese (sometimes grated) and a filling of meat or fish. The sandwich is grilled on the stovetop until the cheese melts (hence the name) and the bread is toasted, or heated in an oven. One common type is the tuna melt, a melt sandwich filled with canned tuna that has been mixed with mayonnaise (tuna salad) and other ingredients such as pickles, tomato, and onion. Other popular choices are ham, roast beef, chicken, turkey, or a ground beef patty (for a patty melt). Both patty melts and tuna melts are staples of the traditional American diner; patty melts were commonly found on menus by the 1940s, and tuna melts by the 1960s.
**Fear-potentiated startle**
Fear-potentiated startle:
Fear-potentiated startle (FPS) is a reflexive physiological reaction to a presented stimulus, and is an indicator of the fear reaction in an organism. The FPS response can be elicited in the face of any threatening stimulus (e.g., any object, person or situation that would cause someone to experience feelings of fear), but it can also be elicited by a neutral stimulus as a result of fear conditioning, a process that occurs when a benign stimulus comes to evoke fear and anxiety upon being paired with a traumatic or fear-provoking event. The stimulus in question is usually of auditory (e.g., loud noise) or visual (e.g., bright light) nature, and startle response measures include eyeblink rates and pulse/heart rate. The negative impact of heightened FPS in the face of neutral stimuli can be treated pharmacologically, using psychotropic medications that are typically used to reduce anxiety in humans. Recent literature, moreover, has implicated increased FPS responses as a correlate in posttraumatic stress disorder (PTSD) and other anxiety disorders.
Neurobiology of FPS:
The central brain structure through which fear-associated responses are mediated has been determined to be the amygdala, which is located in the brain's temporal lobe. When the central nucleus of the amygdala is stimulated, what is popularly referred to as the "fight-or-flight" response is activated: the organism in question reacts passively (is rendered frozen in its tracks, becomes hyper-vigilantly attentive, etc.), or displays a physiological reaction geared toward facilitating an aggressive reaction (e.g., increased heart rate/pulse, rapid breathing). These fear-induced reactions result from communication between the amygdala and a variety of other brain regions (such as the brain stem and hypothalamus), resulting in a variety of physiological responses in the organism. For instance, communication between an activated amygdala and the lateral hypothalamus results in increased blood pressure and dilation of the pupils; the initiation of the central grey via communication from the amygdala results in the organism's becoming frozen in its tracks; communication between the activated amygdala and the paraventricular nucleus of the hypothalamus releases hormones associated with stress. Literature has linked the FPS response to interplay between the central nucleus of the amygdala and both the central grey and the nucleus reticularis pontis caudalis. Insult (e.g., traumatic brain injury) to these brain areas inhibits any display of FPS response in humans. In addition, a distinction has been made concerning neural activity of the reflexive FPS response, and that which occurs in the face of exposure to a fear-inducing stimulus over a long period of time, such as abuse or combat, or to a place or situation. Literature suggests that, in such situations, FPS is caused by activation of the bed nucleus of the stria terminalis, with insult to this brain region inhibiting FPS response in the face of longitudinally conditioned or situation/location-related threatening stimuli in rats. The extinction of heightened FPS response to stimuli previously conditioned to be threatening has been linked to activity in the medial prefrontal cortex.
Measuring the startle response and utilization of FPS data:
The most common physiological response measured to gauge FPS response in humans is eyeblink, or the reflexive act of blinking. Currently, the most widely accepted and used means of measuring the eyeblink reflex is a technology called electromyographic recording (EMG). EMG provides eyeblink rate data by measuring and recording activity of the eyelid muscles using two electrodes. In order to obtain an optimal reading, the person's skin must be cleaned, dried, and covered with a thin layer of electrode gel in only the spots where the measures will be taken; one electrode is placed in the center of the person's forehead above the nose, and two recording electrodes are placed directly underneath the eye, approximately two centimeters apart. The participant should be looking forward for the duration of data collection. If noises are used as the catalyst for the FPS response in a study (acoustic startle), the volume must be both controlled and reported, as noises around 50/60 Hz can compromise the accuracy of the recordings taken by the EMG. Specifically measuring FPS response in studies of fear is highly practical, as the experimental and baseline measures of an individual's startle response can be partitioned, and variance in startle can, in turn, be attributed to fear (or lack thereof, in the case that extinction of fear is the variable of interest), allowing illusory correlates (other variables that can also appear to cause an effect on our variable of interest) to be ruled out. There are several experimental climates that can be used to examine the FPS response. Eyeblink FPS response is typically gauged by presenting participants with pleasant and unpleasant (as well as neutral) emotionally evocative stimuli, paired with a loud noise or a flash of bright lights. The presented stimuli can be replaced by having participants imagine emotionally evocative stimuli of pleasant, unpleasant, and neutral natures. FPS response is typically most exaggerated in response to emotionally unpleasant stimuli, followed by pleasant and then neutral stimuli, in members of the general population. In addition, FPS response in research concerning fear conditioning (and extinction of a conditioned aversion to a previously neutral stimulus) is also commonly examined; such studies will present noise or light startle probes with unpleasant stimuli to condition the FPS to occur in the presence of those stimuli. Measurements of FPS response to both the conditioned stimuli and the neutral stimuli (in the absence of light or sound probes) are then taken, with the measured difference in the size of startle response being the variable of interest, as this difference score indicates alteration of naturally occurring and conditioned FPS response. Resulting data from such studies can be used to examine both FPS response in light of conditioned fear, and an individual's ability to break the conditioned fear reactions (extinction).
FPS and posttraumatic stress disorder:
A heightened (or abnormally overactive) startle response in the face of benign stimuli/settings is often seen in individuals suffering from posttraumatic stress disorder (PTSD), a psychological ailment characterized by maladaptive and inappropriate affective and physiological reactions to stimuli that can be associated with a previously experienced trauma. For instance, combat veterans often experience psychological and physiological panic / anxiety / dissociative "flashbacks" to the traumatic experience that triggered the PTSD pathology in reaction to unexpected loud noises, a stimulus that can remind the individual of gunshots, bombs, or exploding grenades.
FPS and posttraumatic stress disorder:
Individuals with PTSD have been shown to have an increased FPS response, and data have also suggested that this response becomes further exaggerated when these individuals experience stress. People having been diagnosed with PTSD display similar FPS response to threatening and neutral stimuli, indicating that (unlike those not suffering from PTSD) these individuals have difficulty distinguishing whether a stimulus poses a threat or is benign. Additionally, data have displayed a significantly reduced ability for the extinction of conditioned fear responses in combat veterans with severe, chronic PTSD. The reduced ability for fear extinction over longer periods of time in combat veterans, as a result of the pathology associated with PTSD, has also been asserted. Heightened FPS response has also been implicated in the following disorders, falling under the current DSM-IV-TR classification of anxiety disorders: phobia (social and specific) and obsessive-compulsive disorder. Conversely, mood disorders such as depression have been shown to cause weakened FPS responses in diagnosed individuals.
Treatment options:
As exaggerated FPS responses can contribute to the pathology associated with PTSD and other disorders of the anxiety disorder classification, decreasing the startle response in humans may be of benefit in the treatment of these psychological disorders. Several forms of medication acting on different neurotransmitters (e.g., GABA, dopamine) in the brain have been shown to cause significant reductions in startle response; the medications that are effective in treating conditioned fear are those typically used in the treatment of anxiety.
**Offset T-intersection**
Offset T-intersection:
An offset T-intersection is an at-grade road intersection where a conventional four-leg intersection is split into two three-leg T-intersections to reduce the number of conflicts and improve traffic flow. If the offset T-intersections are built as continuous green T-intersections (also called seagull intersections), there is only a single stop on the arterial road. With a higher volume of through traffic on the cross road, or at unsignalized intersections, rebuilding as a conventional four-leg intersection may be more adequate; the same applies when the offset is only a few feet, as at staggered junctions, which slow traffic on the arterial road for a longer time. Seen with the cross road as a spur route or access road, an offset T-intersection can be regarded as an A2 or B2 type partial cloverleaf interchange with no arterial road.
**Cyclobutyrol**
Cyclobutyrol:
Cyclobutyrol is a drug used in bile therapy. Cyclobutyrol (CB) is a choleretic agent which also inhibits biliary lipid secretion.
**Fukuyama indole synthesis**
Fukuyama indole synthesis:
The Fukuyama indole synthesis is a versatile tin-mediated chemical reaction that results in the formation of 2,3-disubstituted indoles. It is a practical one-pot reaction that can be useful for the creation of disubstituted indoles. Most commonly tributyltin hydride is utilized as the reducing agent, with azobisisobutyronitrile (AIBN) as a radical initiator. Triethylborane can also be used as a radical initiator. The reaction can begin with either an ortho-isocyanostyrene or a 2-alkenylthioanilide derivative, both forming the indole through radical cyclization via an α-stannoimidoyl radical. The R group can be a range of both base- and acid-sensitive functional groups such as esters, THP ethers, and β-lactams. In addition, the reaction is not stereospecific, in that both the cis and trans isomers can be used to obtain the desired product.
Mechanism:
The reaction mechanism begins with the creation of the tributyltin radical by either AIBN or triethylborane (not shown in either step-wise mechanism). The tin radical then attacks the o-isocyano carbon, creating the α-stannoimidoyl radical. Through radical cyclization a five-membered ring is formed, followed by the propagation of a new tin radical. The final step is dependent on the desired outcome of the reaction. This reaction is a one-pot synthesis and results in yields ranging from 50% to 98% depending on the substituent.
Mechanism:
The mechanism using 2-alkenylthioanilide is very similar, also starting with the formation of a bond, now between the tin radical and the sulfur. A similar radical cyclization results in a five-membered ring, a new tin radical is produced, and the original attacking radical leaves with the sulfur substituent. This part of the step-wise mechanism has yet to be detailed. The reaction yield can range from 40% to 93% depending also on the desired substituent.
Derivatives:
The Fukuyama indole synthesis can generate a range of different substituents at the 2,3 position that were previously unattainable without a protecting group on the nitrogen in the ring. One such example is the 2-iodoindole derivative, which can then lead to a variety of N-unprotected 2,3-disubstituted indoles. Before the discovery of this compound the chemistry involving 2-stannylindoles was not developed, as there was no way to practically synthesize these N-unprotected 2,3-stannylindoles. One was limited to the production of N-protected 2-stannylindoles through metalation by a process known as Stille coupling. The N-unprotected 2-stannylindoles generated from the Fukuyama synthesis can be readily oxidized with iodine, opening up an area of chemistry that allows for the synthesis of a variety of compounds utilizing the 2-iodoindoles as a starting reagent. This iodine-substituted derivative can lead to aryl halides, vinyl iodides, vinyl triflates, and benzyl bromides.
Derivatives:
In addition, acetylenes (via Sonogashira coupling) and acrylates (via the Heck reaction) can be introduced at the second position.
Applications:
The synthesis is one of the simplest methods for creating poly-substituted indoles, and the procedure has been utilized in numerous natural product syntheses, including aspidophytine, vinblastine, and strychnine. For example, the fourth step in the synthesis of (+)-vinblastine applies the Fukuyama indole synthesis to create a disubstituted indole.
In addition, the Fukuyama reaction plays a role in the syntheses of indolocarbazoles and biindolyls, and in the total synthesis of vincadifformine and tabersonine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Role-playing**
Role-playing:
Role-playing or roleplaying is the changing of one's behaviour to assume a role, either unconsciously to fill a social role, or consciously to act out an adopted role. While the Oxford English Dictionary offers a definition of role-playing as "the changing of one's behaviour to fulfill a social role", in the field of psychology, the term is used more loosely in four senses: To refer to the playing of roles generally such as in a theatre, or educational setting; To refer to taking a role of a character or person and acting it out with a partner taking someone else's role, often involving different genres of practice; To refer to a wide range of games including role-playing video game (RPG), play-by-mail games and more; To refer specifically to role-playing games.
Amusement:
Many children participate in a form of role-playing known as make believe, wherein they adopt certain roles such as doctor and act out those roles in character. Sometimes make believe adopts an oppositional nature, resulting in games such as cops and robbers.
Entertainment:
Historical re-enactment has been practiced by adults for millennia. The ancient Romans, Han Chinese, and medieval Europeans all enjoyed occasionally organizing events in which everyone pretended to be from an earlier age, and entertainment appears to have been the primary purpose of these activities. Within the 20th century historical re-enactment has often been pursued as a hobby.
Entertainment:
Improvisational theatre dates back to the Commedia dell'Arte tradition of the 16th century. Modern improvisational theatre began in the classroom with the "theatre games" of Viola Spolin and Keith Johnstone in the 1950s. Viola Spolin, who was one of the founders of the famous comedy troupe Second City, insisted that her exercises were games, and that they involved role-playing as early as 1946. She accurately judged role-playing in the theatre as rehearsal and actor training, or the playing of the role of actor versus theatre roles, but many now use her games for fun in their own right.
Entertainment:
Role-playing games A role-playing game is a game in which the participants assume the roles of characters and collaboratively create stories. Participants determine the actions of their characters based on their characterisation, and the actions succeed or fail according to a formal system of rules and guidelines. Within the rules, they may improvise freely; their choices shape the direction and outcome of the games.
Entertainment:
Role-playing can also be done online in the form of group story creation, involving anywhere from two to several hundred people, utilizing public forums, private message boards, mailing lists, chatrooms, and instant-messaging chat services to build worlds and characters that may last a few hours, or several years. Often on forum-based roleplays, rules, and standards are set up, such as a minimum word count, character applications, and "plotting" boards to increase complexity and depth of story. There are different genres of which one can choose while role-playing, including, but not limited to, fantasy, modern, medieval, steam punk, and historical. Books, movies, or games can be, and often are, used as a basis for role-plays (which in such cases may be deemed "collaborative fan-fiction"), with players either assuming the roles of established canon characters or using those the players themselves create ("Original Characters") to replace—or exist alongside—characters from the book, movie, or game, playing through well-trodden plots as alternative characters, or expanding upon the setting and story outside of its established canon.
Psychology:
In psychology, an individual's personality can be conceptualized as a set of expectations about oneself and others, and these expectations add up to role-playing or role-taking. Here, the role is a fiction because it is not real, but it has a degree of consistency. Role-playing is also an important part of a child's psychological development. For example, the instance when a child starts to define "I" and separate him or herself from an adult is the initial condition for and the result of role play. There are also experiments that found role-playing resulted in behavioral change, such as the case of smokers who reported a negative attitude towards smoking after being asked to pretend to be a person diagnosed with lung cancer.
Training:
Role-playing may also refer to role training where people rehearse situations in preparation for a future performance and to improve their abilities within a role. The most common examples are occupational training role-plays, educational role-play exercises, and certain military wargames.
Training:
Simulation One of the first uses of computers was to simulate real-world conditions for participants role-playing the flying of aircraft. Flight simulators used computers to solve the equations of flight and train future pilots. The army began full-time role-playing simulations with soldiers using computers both within full scale training exercises and for training in numerous specific tasks under wartime conditions. Examples include weapon firing, vehicle simulators, and control station mock-ups.
Research method:
Role playing may also refer to the technique commonly used by researchers studying interpersonal behavior by assigning research participants to particular roles and instructing the participants to act as if a specific set of conditions were true. This technique of assigning and taking roles in psychological research has a long history. It has been used in the early classic social psychological experiments by Kurt Lewin (1939/1997), Stanley Milgram (1963), and Phillip Zimbardo (1971). Herbert Kelman suggested that role-playing might be "the most promising source" of research methods alternative to methods using deception (Kelman 1965). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tau function (integrable systems)**
Tau function (integrable systems):
Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form. The term tau function, or τ -function, was first used systematically by Mikio Sato and his students in the specific context of the Kadomtsev–Petviashvili (or KP) equation and related integrable hierarchies. It is a central ingredient in the theory of solitons. In this setting, given any τ -function satisfying a Hirota-type system of bilinear equations (see § Hirota bilinear residue relation for KP tau functions below), the corresponding solutions of the equations of the integrable hierarchy are explicitly expressible in terms of it and its logarithmic derivatives up to a finite order. Tau functions also appear as matrix model partition functions in the spectral theory of random matrices, and may also serve as generating functions, in the sense of combinatorics and enumerative geometry, especially in relation to moduli spaces of Riemann surfaces, and enumeration of branched coverings, or so-called Hurwitz numbers. There are two notions of τ -functions, both introduced by the Sato school. The first is isospectral τ -functions of the Sato–Segal–Wilson type for integrable hierarchies, such as the KP hierarchy, which are parametrized by linear operators satisfying isospectral deformation equations of Lax type. The second is isomonodromic τ -functions.
Tau function (integrable systems):
Depending on the specific application, a τ -function may either be: 1) an analytic function of a finite or infinite number of independent, commuting flow variables, or deformation parameters; 2) a discrete function of a finite or infinite number of denumerable variables; 3) a formal power series expansion in a finite or infinite number of expansion variables, which need have no convergence domain, but serves as generating function for certain enumerative invariants appearing as the coefficients of the series; 4) a finite or infinite (Fredholm) determinant whose entries are either specific polynomial or quasi-polynomial functions, or parametric integrals, and their derivatives; 5) the Pfaffian of a skew symmetric matrix (either finite or infinite dimensional) with entries similarly of polynomial or quasi-polynomial type. Examples of all these types are given below.
Tau function (integrable systems):
In the Hamilton–Jacobi approach to Liouville integrable Hamiltonian systems, Hamilton's principal function, evaluated on the level surfaces of a complete set of Poisson commuting invariants, plays a role similar to the τ -function, serving both as a generating function for the canonical transformation to linearizing canonical coordinates and, when evaluated on simultaneous level sets of a complete set of Poisson commuting invariants, as a complete solution of the Hamilton–Jacobi equation.
Definition of tau functions: isospectral and isomonodromic:
A τ -function of isospectral type is defined as a solution of the Hirota bilinear equations, from which the linear operator undergoing isospectral evolution can be uniquely reconstructed. Geometrically, in the Sato and Segal-Wilson sense, it is the value of the determinant of a Fredholm integral operator, interpreted as the orthogonal projection of an element of a suitably defined (infinite dimensional) Grassmann manifold onto the origin, as that element evolves under the linear exponential action of a maximal abelian subgroup of the general linear group. It typically arises as a partition function, in the sense of statistical mechanics, many-body quantum mechanics or quantum field theory, as the underlying measure undergoes a linear exponential deformation.
Definition of tau functions: isospectral and isomonodromic:
Isomonodromic τ -functions for linear systems of Fuchsian type are defined below in § Fuchsian isomonodromic systems: isomonodromic tau functions. For the more general case of linear ordinary differential equations with rational coefficients, including irregular singularities, they are developed in reference .
Definition of tau functions: isospectral and isomonodromic:
Hirota bilinear residue relation for KP τ-functions A KP (Kadomtsev–Petviashvili) τ-function τ(t) is a function of an infinite collection t=(t1,t2,…) of variables (called KP flow variables) that satisfies the bilinear formal residue equation
$$\mathrm{res}_{z=0}\Big(e^{\sum_{i=1}^{\infty}\delta t_i z^i}\,\tau(\mathbf{t}-[z^{-1}])\,\tau(\mathbf{t}+\boldsymbol{\delta t}+[z^{-1}])\Big)\equiv 0 \qquad (1)$$
identically in the δtj variables, where $\mathrm{res}_{z=0}$ is the $z^{-1}$ coefficient in the formal Laurent expansion resulting from expanding all factors as Laurent series in z, and $[z^{-1}] := \big(\tfrac{1}{z},\tfrac{1}{2z^2},\tfrac{1}{3z^3},\dots\big)$. As explained below in the section § Formal Baker-Akhiezer function and the KP hierarchy, every such τ-function determines a set of solutions to the equations of the KP hierarchy.
Definition of tau functions: isospectral and isomonodromic:
Kadomtsev–Petviashvili equation If τ(t1,t2,t3,…) is a KP τ-function satisfying the Hirota residue equation (1) and we identify the first three flow variables as t1=x, t2=y, t3=t, it follows that the function
$$u(x,y,t) := 2\,\frac{\partial^2}{\partial x^2}\log\big(\tau(x,y,t,t_4,\dots)\big)$$
satisfies the 2 (spatial) + 1 (time) dimensional nonlinear partial differential equation
$$3u_{yy} = \big(4u_t - 6uu_x - u_{xxx}\big)_x, \qquad (2)$$
known as the Kadomtsev-Petviashvili (KP) equation. This equation plays a prominent role in plasma physics and in shallow water ocean waves.
Definition of tau functions: isospectral and isomonodromic:
Taking further logarithmic derivatives of τ(t1,t2,t3,…) gives an infinite sequence of functions that satisfy further systems of nonlinear autonomous PDEs, each involving partial derivatives of finite order with respect to a finite number of the KP flow parameters t=(t1,t2,…). These are collectively known as the KP hierarchy.
Definition of tau functions: isospectral and isomonodromic:
Formal Baker–Akhiezer function and the KP hierarchy If we define the (formal) Baker-Akhiezer function ψ(z,t) by Sato's formula
$$\psi(z,\mathbf{t}) := e^{\sum_{i=1}^{\infty}t_i z^i}\,\frac{\tau(\mathbf{t}-[z^{-1}])}{\tau(\mathbf{t})}$$
and expand it as a formal series in the powers of the variable z
$$\psi(z,\mathbf{t}) = e^{\sum_{i=1}^{\infty}t_i z^i}\Big(1+\sum_{j=1}^{\infty}w_j(\mathbf{t})z^{-j}\Big),$$
this satisfies an infinite sequence of compatible evolution equations
$$\frac{\partial \psi}{\partial t_i} = \mathcal{D}_i\,\psi, \quad i=1,2,\dots, \qquad (3)$$
where $\mathcal{D}_i$ is a linear ordinary differential operator of degree i in the variable $x := t_1$, with coefficients that are functions of the flow variables t=(t1,t2,…), defined as follows:
$$\mathcal{D}_i := (\mathcal{L}^i)_+,$$
where $\mathcal{L}$ is the formal pseudo-differential operator
$$\mathcal{L} = \partial + \sum_{j=1}^{\infty}u_j(\mathbf{t})\,\partial^{-j} = \mathcal{W}\circ\partial\circ\mathcal{W}^{-1}$$
with $\partial := \frac{\partial}{\partial x}$, where $\mathcal{W} := 1+\sum_{j=1}^{\infty}w_j(\mathbf{t})\,\partial^{-j}$ is the wave operator and $(\mathcal{L}^i)_+$ denotes the projection to the part of $\mathcal{L}^i$ containing purely non-negative powers of $\partial$; i.e. the differential operator part of $\mathcal{L}^i$. The pseudodifferential operator $\mathcal{L}$ satisfies the infinite system of isospectral deformation equations
$$\frac{\partial \mathcal{L}}{\partial t_i} = [\mathcal{D}_i,\mathcal{L}], \quad i=1,2,\dots, \qquad (4)$$
and the compatibility conditions for both the system (3) and (4) are
$$\frac{\partial \mathcal{D}_i}{\partial t_j} - \frac{\partial \mathcal{D}_j}{\partial t_i} + [\mathcal{D}_i,\mathcal{D}_j] = 0. \qquad (5)$$
This is a compatible infinite system of nonlinear partial differential equations, known as the KP (Kadomtsev-Petviashvili) hierarchy, for the functions {uj(t)}j∈N, with respect to the set t=(t1,t2,…) of independent variables, each of which contains only a finite number of the uj's, and derivatives only with respect to the three independent variables (x,ti,tj). The first nontrivial case of these is the Kadomtsev-Petviashvili equation (2).
Definition of tau functions: isospectral and isomonodromic:
Thus, every KP τ -function provides a solution, at least in the formal sense, of this infinite system of nonlinear partial differential equations.
Isomonodromic systems. Isomonodromic tau functions:
Fuchsian isomonodromic systems. Schlesinger equations Consider the overdetermined system of first order matrix partial differential equations
$$\frac{\partial \Psi}{\partial z} = \sum_{i=1}^{n}\frac{N_i}{z-\alpha_i}\,\Psi, \qquad (6)$$
$$\frac{\partial \Psi}{\partial \alpha_i} = -\frac{N_i}{z-\alpha_i}\,\Psi, \quad i=1,\dots,n, \qquad (7)$$
where {Ni}i=1,…,n are a set of n r×r traceless matrices, {αi}i=1,…,n a set of n complex parameters, z a complex variable, and Ψ(z,α1,…,αn) is an invertible r×r matrix valued function of z and {αi}i=1,…,n. These are the necessary and sufficient conditions for the based monodromy representation of the fundamental group π1(P1∖{αi}i=1,…,n) of the Riemann sphere punctured at the points {αi}i=1,…,n, corresponding to the rational covariant derivative operator
$$\frac{\partial}{\partial z} - \sum_{i=1}^{n}\frac{N_i}{z-\alpha_i},$$
to be independent of the parameters {αi}i=1,…,n; i.e. that changes in these parameters induce an isomonodromic deformation. The compatibility conditions for this system are the Schlesinger equations
$$\frac{\partial N_i}{\partial \alpha_j} = \frac{[N_i,N_j]}{\alpha_i-\alpha_j} \quad \text{for } i\neq j, \qquad \frac{\partial N_i}{\partial \alpha_i} = -\sum_{1\le j\le n,\,j\neq i}\frac{[N_i,N_j]}{\alpha_i-\alpha_j}.$$
Isomonodromic systems. Isomonodromic tau functions:
Isomonodromic τ-function Defining the n functions
$$H_i := \frac{1}{2}\sum_{1\le j\le n,\,j\neq i}\frac{\mathrm{Tr}(N_i N_j)}{\alpha_i-\alpha_j}, \quad i=1,\dots,n,$$
the Schlesinger equations imply that the differential form
$$\omega := \sum_{i=1}^{n}H_i\,d\alpha_i$$
on the space of parameters is closed: $d\omega = 0$, and hence locally exact. Therefore, at least locally, there exists a function τ(α1,…,αn) of the parameters, defined within a multiplicative constant, such that $\omega = d\ln\tau$. The function τ(α1,…,αn) is called the isomonodromic τ-function associated to the fundamental solution Ψ of the system (6), (7).
Isomonodromic systems. Isomonodromic tau functions:
The simplest nontrivial case of the Schlesinger equations is when r=2 and n=3 . By applying a Möbius transformation to the variable z two of the finite poles may be chosen to be at 0 and 1 , and the third viewed as the independent variable.
Setting the sum $\sum_{i=1}^{3}N_i$ of the matrices appearing in (6), which is an invariant of the Schlesinger equations, equal to a constant, and quotienting by its stabilizer under Gl(2) conjugation, we obtain a system equivalent to the most generic case PVI of the six Painlevé transcendent equations, for which many detailed classes of explicit solutions are known.
Isomonodromic systems. Isomonodromic tau functions:
Non-Fuchsian isomonodromic systems For non-Fuchsian systems, with higher order poles, the generalized monodromy data include Stokes matrices and connection matrices, and there are further isomonodromic deformation parameters associated with the local asymptotics, but the isomonodromic τ-functions may be defined in a similar way, using differentials on the extended parameter space. Taking all possible confluences of the poles appearing in (6) for the r=2 and n=3 case, including the one at z=∞, and making the corresponding reductions, we obtain all other instances PI⋯PV of the Painlevé transcendents, for which numerous special solutions are also known.
Fermionic VEV (vacuum expectation value) representations:
The fermionic Fock space F is a semi-infinite exterior product space
$$\mathcal{F}=\Lambda^{\infty/2}\mathcal{H}=\bigoplus_{n\in\mathbf{Z}}\mathcal{F}_n$$
defined on a (separable) Hilbert space H with basis elements {ei}i∈Z and dual basis elements {e^i}i∈Z for H∗. The free fermionic creation and annihilation operators {ψj,ψj†}j∈Z act as endomorphisms on F via exterior and interior multiplication by the basis elements,
$$\psi_i := e_i\wedge, \qquad \psi_i^{\dagger} := i_{e^i}, \quad i\in\mathbf{Z},$$
and satisfy the canonical anti-commutation relations
$$[\psi_i,\psi_k]_+ = [\psi_i^{\dagger},\psi_k^{\dagger}]_+ = 0, \qquad [\psi_i,\psi_k^{\dagger}]_+ = \delta_{ik}.$$
Fermionic VEV (vacuum expectation value) representations:
These generate the standard fermionic representation of the Clifford algebra on the direct sum H+H∗ corresponding to the scalar product
$$Q(u+\mu,\,v+\nu) := \nu(u)+\mu(v), \quad u,v\in\mathcal{H},\ \mu,\nu\in\mathcal{H}^*,$$
with the Fock space F as irreducible module.
Denote the vacuum state, in the zero fermionic charge sector F0, as $|0\rangle := e_{-1}\wedge e_{-2}\wedge\cdots$, which corresponds to the Dirac sea of states along the real integer lattice in which all negative integer locations are occupied and all non-negative ones are empty.
Fermionic VEV (vacuum expectation value) representations:
This is annihilated by the following operators
$$\psi_{-j}|0\rangle = 0, \qquad \psi_{j-1}^{\dagger}|0\rangle = 0, \quad j=0,1,\dots$$
The dual fermionic Fock space vacuum state, denoted ⟨0|, is annihilated by the adjoint operators, acting to the left:
$$\langle 0|\psi_{-j}^{\dagger} = 0, \qquad \langle 0|\psi_{j-1} = 0, \quad j=0,1,\dots$$
Normal ordering $:L_1\cdots L_m:$ of a product of linear operators (i.e., finite or infinite linear combinations of creation and annihilation operators) is defined so that its vacuum expectation value (VEV) vanishes:
$$\langle 0|:L_1\cdots L_m:|0\rangle = 0.$$
Fermionic VEV (vacuum expectation value) representations:
In particular, for a product L1L2 of a pair (L1,L2) of linear operators, one has :L1L2:=L1L2−⟨0|L1L2|0⟩.
Fermionic VEV (vacuum expectation value) representations:
The fermionic charge operator C is defined as
$$C = \sum_{i\in\mathbf{Z}}:\psi_i\psi_i^{\dagger}:.$$
The subspace Fn⊂F is the eigenspace of C consisting of all eigenvectors with eigenvalue n:
$$C|v;n\rangle = n|v;n\rangle, \quad \forall\,|v;n\rangle\in\mathcal{F}_n.$$
The standard orthonormal basis {|λ⟩} for the zero fermionic charge sector F0 is labelled by integer partitions λ=(λ1,…,λℓ(λ)), where λ1≥⋯≥λℓ(λ) is a weakly decreasing sequence of ℓ(λ) positive integers, which can equivalently be represented by a Young diagram, as depicted here for the partition (5,4,1). An alternative notation for a partition λ consists of the Frobenius indices (α1,…αr|β1,…βr), where αi denotes the arm length, i.e. the number λi−i of boxes in the Young diagram to the right of the i'th diagonal box, and βi denotes the leg length, i.e. the number of boxes in the Young diagram below the i'th diagonal box, for i=1,…,r, where r is the Frobenius rank, which is the number of elements along the principal diagonal.
Fermionic VEV (vacuum expectation value) representations:
The basis element |λ⟩ is then given by acting on the vacuum with a product of r pairs of creation and annihilation operators, labelled by the Frobenius indices |λ⟩=(−1)∑j=1rβj∏k=1r(ψαkψ−βk−1†)|0⟩.
The integers {αi}i=1,…,r indicate, relative to the Dirac sea, the occupied non-negative sites on the integer lattice while {−βi−1}i=1,…,r indicate the unoccupied negative integer sites.
Fermionic VEV (vacuum expectation value) representations:
The corresponding diagram, consisting of infinitely many occupied and unoccupied sites on the integer lattice that are a finite perturbation of the Dirac sea, is referred to as a Maya diagram. The case of the null (empty) partition |∅⟩=|0⟩ gives the vacuum state, and the dual basis {⟨μ|} is defined by ⟨μ|λ⟩=δλ,μ. Then any KP τ-function can be expressed as a sum
$$\tau_w(\mathbf{t}) = \sum_{\lambda}\pi_{\lambda}(w)\,s_{\lambda}(\mathbf{t}), \qquad (8)$$
where t=(t1,t2,…) are the KP flow variables, sλ(t) is the Schur function corresponding to the partition λ, viewed as a function of the normalized power sum variables
$$t_i := [\mathbf{x}]_i := \tfrac{1}{i}\sum_{a}x_a^i, \quad i=1,2,\dots$$
in terms of an auxiliary (finite or infinite) sequence of variables $\mathbf{x} := (x_1,\dots,x_N)$, and the constant coefficients πλ(w) may be viewed as the Plücker coordinates of an element w∈GrH+(H) of the infinite dimensional Grassmannian consisting of the orbit, under the action of the general linear group Gl(H), of the subspace H+=span{e−i}i∈N⊂H of the Hilbert space H. This corresponds, under the Bose-Fermi correspondence, to a decomposable element |τw⟩=∑λπλ(w)|λ⟩ of the Fock space F0 which, up to projectivization, is the image of the Grassmannian element w∈GrH+(H) under the Plücker map Pl: span(w1,w2,…)⟶[w1∧w2∧⋯]=[|τw⟩], where (w1,w2,…) is a basis for the subspace w⊂H and [⋯] denotes projectivization of an element of F. The Plücker coordinates {πλ(w)} satisfy an infinite set of bilinear relations, the Plücker relations, defining the image of the Plücker embedding into the projectivization P(F) of the fermionic Fock space, which are equivalent to the Hirota bilinear residue relation (1).
Fermionic VEV (vacuum expectation value) representations:
If w=g(H+) for a group element g∈Gl(H) with fermionic representation ĝ, then the τ-function τw(t) can be expressed as the fermionic vacuum state expectation value (VEV):
$$\tau_w(\mathbf{t}) = \langle 0|\hat{\gamma}_+(\mathbf{t})\,\hat{g}|0\rangle,$$
where $\Gamma_+ = \{\hat{\gamma}_+(\mathbf{t}) = e^{\sum_{i=1}^{\infty}t_i J_i}\}\subset Gl(\mathcal{H})$ is the abelian subgroup of Gl(H) that generates the KP flows, and
$$J_i := \sum_{j\in\mathbf{Z}}\psi_j\psi_{j+i}^{\dagger}, \quad i=1,2,\dots$$
are the "current" components.
Examples of solutions to the equations of the KP hierarchy:
Schur functions As seen in equation (8), every KP τ-function can be represented (at least formally) as a linear combination of Schur functions, in which the coefficients πλ(w) satisfy the bilinear set of Plücker relations corresponding to an element w of an infinite (or finite) Grassmann manifold. In fact, the simplest class of (polynomial) tau functions consists of the Schur functions sλ(t) themselves, which correspond to the special element of the Grassmann manifold whose image under the Plücker map is |λ⟩. Multisoliton solutions If we choose 3N complex constants {αk,βk,γk}k=1,…,N with the αk, βk all distinct and γk≠0, and define the functions
$$y_k(\mathbf{t}) := e^{\sum_{i=1}^{\infty}t_i\alpha_k^i} + \gamma_k\,e^{\sum_{i=1}^{\infty}t_i\beta_k^i}, \quad k=1,\dots,N,$$
we arrive at the Wronskian determinant formula
$$\tau(\mathbf{t}) := \begin{vmatrix} y_1(\mathbf{t}) & y_2(\mathbf{t}) & \cdots & y_N(\mathbf{t}) \\ y_1'(\mathbf{t}) & y_2'(\mathbf{t}) & \cdots & y_N'(\mathbf{t}) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(N-1)}(\mathbf{t}) & y_2^{(N-1)}(\mathbf{t}) & \cdots & y_N^{(N-1)}(\mathbf{t}) \end{vmatrix},
Examples of solutions to the equations of the KP hierarchy:
which gives the general N -soliton τ -function.
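As a concrete illustration of the Wronskian formula above, the following is a minimal symbolic sketch in Python with SymPy, with the flow variables truncated to (t1, t2, t3), the Wronskian taken with respect to x = t1, and the parameter values chosen as arbitrary illustrative examples.

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')

def soliton_tau(alphas, betas, gammas):
    """Build the N-soliton KP tau function as the Wronskian (in x = t1) of
    y_k(t) = exp(sum_i t_i * alpha_k**i) + gamma_k * exp(sum_i t_i * beta_k**i),
    with the infinite flow variables truncated to (t1, t2, t3)."""
    t = [t1, t2, t3]
    ys = []
    for a, b, g in zip(alphas, betas, gammas):
        phase_a = sum(ti * a**(i + 1) for i, ti in enumerate(t))
        phase_b = sum(ti * b**(i + 1) for i, ti in enumerate(t))
        ys.append(sp.exp(phase_a) + g * sp.exp(phase_b))
    N = len(ys)
    # Wronskian matrix: row i holds the i-th x-derivative of each y_k
    W = sp.Matrix(N, N, lambda i, j: sp.diff(ys[j], t1, i))
    return sp.simplify(W.det())

# Example: 2-soliton tau function with sample (assumed) parameters
tau = soliton_tau(alphas=[1, 2], betas=[-1, -2], gammas=[1, 1])
print(tau)
```

One can then check, for instance symbolically, that u = 2 ∂²/∂x² log τ built from this output satisfies the KP equation (2) under the identification t1=x, t2=y, t3=t.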
Theta function solutions associated to algebraic curves Let X be a compact Riemann surface of genus g and fix a canonical homology basis a1,…,ag,b1,…,bg of H1(X,Z) with intersection numbers ai∘aj=bi∘bj=0,ai∘bj=δij,1≤i,j≤g.
Let {ωi}i=1,…,g be a basis for the space H1(X) of holomorphic differentials satisfying the standard normalization conditions
$$\oint_{a_i}\omega_j = \delta_{ij}, \qquad \oint_{b_i}\omega_j = B_{ij},$$
where B is the Riemann matrix of periods. The matrix B belongs to the Siegel upper half space
$$\mathbf{S}_g = \{B\in \mathrm{Mat}_{g\times g}(\mathbf{C})\ :\ B^{T}=B,\ \mathrm{Im}(B)\ \text{is positive definite}\}.$$
The Riemann θ function on Cg corresponding to the period matrix B is defined to be
$$\theta(Z\,|\,B) := \sum_{N\in\mathbf{Z}^g}e^{i\pi(N,BN)+2i\pi(N,Z)}.$$
Choose a point p∞∈X, a local parameter ζ in a neighbourhood of p∞ with ζ(p∞)=0, and a positive divisor of degree g,
$$\mathcal{D} := \sum_{i=1}^{g}p_i, \quad p_i\in X.$$
For any positive integer k∈N+ let Ωk be the unique meromorphic differential of the second kind characterized by the following conditions: The only singularity of Ωk is a pole of order k+1 at p=p∞ with vanishing residue.
The expansion of Ωk around p=p∞ is
$$\Omega_k = d(\zeta^{-k}) + \sum_{j=1}^{\infty}Q_{kj}\,\zeta^{j}\,d\zeta,$$
and Ωk is normalized to have vanishing a-cycles:
$$\oint_{a_i}\Omega_k = 0.$$
Denote by Uk∈Cg the vector of b-cycles of Ωk:
$$(U_k)_j := \oint_{b_j}\Omega_k.$$
Denote by E the image of D under the Abel map A: S^g(X)→C^g,
$$E_j := \mathcal{A}_j(\mathcal{D}) := \sum_{i=1}^{g}\int_{p_0}^{p_i}\omega_j,$$
with arbitrary base point p0. Then the following is a KP τ-function:
$$\tau(\mathbf{t}) := e^{-\frac{1}{2}\sum_{ij}Q_{ij}t_i t_j}\,\theta\Big(E+\sum_{k=1}^{\infty}t_k U_k\,\Big|\,B\Big).$$
Matrix model partition functions as KP τ -functions Let dμ0(M) be the Lebesgue measure on the N2 dimensional space HN×N of N×N complex Hermitian matrices.
Let ρ(M) be a conjugation invariant integrable density function ρ(UMU†)=ρ(M),U∈U(N).
Define a deformation family of measures
$$d\mu_{N,\rho}(\mathbf{t},M) := e^{\mathrm{Tr}\left(\sum_{i=1}^{\infty}t_i M^i\right)}\rho(M)\,d\mu_0(M)$$
for small t=(t1,t2,⋯) and let
$$\tau_{N,\rho}(\mathbf{t}) := \int_{\mathbf{H}^{N\times N}}d\mu_{N,\rho}(\mathbf{t},M)$$
be the partition function for this random matrix model.
Then τN,ρ(t) satisfies the bilinear Hirota residue equation (1), and hence is a τ -function of the KP hierarchy.
τ -functions of hypergeometric type. Generating function for Hurwitz numbers Let {ri}i∈Z be a (doubly) infinite sequence of complex numbers.
For any integer partition λ=(λ1,…,λℓ(λ)) define the content product coefficient
$$r_{\lambda} := \prod_{(i,j)\in\lambda}r_{j-i},$$
where the product is over all pairs (i,j) of positive integers that correspond to boxes of the Young diagram of the partition λ, viewed as positions of matrix elements of the corresponding ℓ(λ)×λ1 matrix.
Examples of solutions to the equations of the KP hierarchy:
Then, for every pair of infinite sequences t=(t1,t2,…) and s=(s1,s2,…) of complex variables, viewed as (normalized) power sums t=[x], s=[y] of the infinite sequences of auxiliary variables x=(x1,x2,…) and y=(y1,y2,…), defined by
$$t_j := [\mathbf{x}]_j := \tfrac{1}{j}\sum_{a}x_a^j, \qquad s_j := [\mathbf{y}]_j := \tfrac{1}{j}\sum_{a}y_a^j,$$
the function
$$\tau^{(r)}(\mathbf{t},\mathbf{s}) := \sum_{\lambda}r_{\lambda}\,s_{\lambda}(\mathbf{t})\,s_{\lambda}(\mathbf{s}) \qquad (9)$$
is a double KP τ-function, both in the t and the s variables, known as a τ-function of hypergeometric type. In particular, choosing $r_j := e^{j\beta}$ for some small parameter β, denoting the corresponding content product coefficient as rλβ and setting the second set of flow variables equal to the special point t0, the resulting τ-function can be equivalently expanded as a generating series in the power sum symmetric functions $p_i := \sum_{a}x_a^i = i\,t_i$, whose coefficients {Hd(λ)} are the simple Hurwitz numbers: Hd(λ) is 1/n! times the number of ways in which an element kλ∈Sn of the symmetric group Sn in n=|λ| elements, with cycle lengths equal to the parts of the partition λ, can be factorized as a product of d 2-cycles kλ=(a1b1)…(adbd). Equation (8) thus shows that the (formal) KP hypergeometric τ-function (9) corresponding to the content product coefficients rλβ is a generating function, in the combinatorial sense, for simple Hurwitz numbers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neural Impulse Actuator**
Neural Impulse Actuator:
The Neural Impulse Actuator (NIA) is a brain–computer interface (BCI) device developed by OCZ Technology. BCI devices attempt to move away from the classic input devices like keyboard and mouse and instead read electrical activity from the head, preferably the EEG. The name Neural Impulse Actuator implies that the signals originate from some neuronal activity; however, what is actually captured is a mixture of muscle, skin and nerve activity including sympathetic and parasympathetic components that have to be summarized as biopotentials rather than pure neural signals. As of May 27, 2011, the OCZ website says that the NIA is no longer being manufactured and has been end-of-lifed. On June 1, 2012, a post was made on the official forums asking about the NIA's future, the reply being, "It [the NIA] was spun out into a different company as a side-effect of OCZ's IPO and that company is BCInet."
Name:
The name Neural Impulse Actuator is still justifiable since the secondary signals are also under neuronal control. The biopotentials are decompiled into different frequency spectra to allow the separation into different groups of electrical signals. Individual signals that are isolated comprise alpha and beta brain waves, electromyograms and electrooculograms. The current version of the NIA uses carbon fibers injected into soft plastic as a substrate for the headband and for the sensors, and achieves sensitivity much greater than the original silver chloride-based sensors using a clip-on interface to the wire harness.
Shortkeys system:
Control over the computer in either desktop or gaming environments is done by binding keys to different zones within as many as three vertical joysticks. Each joystick can be divided into several zones based on thresholds and each zone within each joystick can be bound to a keyboard key. Each keystroke can further be assigned to several modes, including single keystroke, hold, repeat and dwell, which allows full plasticity with respect to configuration of the NIA for any application. Moreover, the same "vertical joysticks" can be used in more than one instance to enable simultaneous pressing of multiple keys at any given time like "W" and "Spacebar" for jumping forward or toggling between left and right strafing for running in a zigzag pattern.
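To illustrate the threshold-zone idea described above, the following is a minimal sketch with hypothetical zone boundaries, keys, and modes; it is not OCZ's actual software, only an assumed example of how a normalised joystick level could be mapped to key bindings.

```python
# Hypothetical zone table for one "vertical joystick":
# (lower_threshold, upper_threshold, bound_key, mode)
ZONES = [
    (0.00, 0.33, None, None),        # dead zone: no key pressed
    (0.33, 0.66, 'w', 'hold'),       # mid zone: hold forward
    (0.66, 1.01, 'space', 'single'), # top zone: single jump keystroke
]

def keys_for_level(level):
    """Return the (key, mode) bindings active for a normalised signal level."""
    return [(key, mode) for lo, hi, key, mode in ZONES
            if lo <= level < hi and key is not None]

print(keys_for_level(0.5))   # [('w', 'hold')]
print(keys_for_level(0.8))   # [('space', 'single')]
```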
Software support:
The only software available officially is proprietary to 32- and 64-bit versions of Microsoft Windows (7, Vista, and XP). No specifications have been published. People who are trying to make use of the device on Unix-like platforms, or create their own software for it for other reasons, say it may be a HID device providing raw data from its sensors to the software. There is no support for Linux. The third-party input remapping applications GlovePIE and PPJoy accept input from the NIA according to GlovePIE.org forums. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Silanization**
Silanization:
Silanization is the attachment of an organosilyl group to some chemical species. Almost always, silanization refers to the conversion of a silanol-terminated surface to an alkylsiloxy-terminated surface. This conversion confers hydrophobicity to a previously hydrophilic surface. The process is often used to modify the surface properties of glass, silicon, alumina, quartz, and metal oxide substrates, which all have an abundance of hydroxyl groups. Silanization differs from silylation, which usually refers to attachment of organosilicon groups to molecular substrates.
Mechanism:
Silanization mechanisms vary with substrate and with silanization reagent. In the usual circumstance, surface M−OH groups react as nucleophiles with silyl chlorides or silyl alkoxides. The stoichiometry for these reactions is shown: M−OH + R3SiCl → M−OSiR3 + HCl M−OH + R3SiOCH3 → M−OSiR3 + CH3OH. M is typically Si, but could be many other elements.
The process is assumed to follow the pathways that apply to silylation of molecular substrates, such as alcohols.
Properties of organofunctional alkoxysilanes:
Silanizing reagents often have the formula (RO)3-n(CH3)nSiR' or R'SiCl3, where R = methyl or ethyl and R' is a long alkyl chain or a functionalized alkyl group.
Properties of organofunctional alkoxysilanes:
The nature of the organic group on the silanization agent strongly influences the properties of the surface. Simple alkyl-containing silanizing groups confer hydrophobicity. When side chains of the silane compound are amines or thiols, the surfaces assume the properties of those functional groups. These surfaces are susceptible to further reactions characteristic of the appended functional groups. Grafting can be performed, for example.
Applications:
Reversed-phase chromatography A popular stationary phase is an octadecyl carbon chain (C18)-bonded silica (USP classification L1). Such materials are produced by reaction of silica gel with trimethoxyoctadecylsilane. The individual links involve silanol groups displacing the methoxy groups, forming Si−O−Si bonds. This reaction changes the hydrophilic silanol groups into hydrophobic coatings. Other silanizing groups install C8-bonded silica (L7), pure silica (L3), cyano-bonded silica (L10) and phenyl-bonded silica (L11). Note that C18, C8 and phenyl are dedicated reversed-phase resins, while cyano columns can be used in a reversed-phase mode depending on analyte and mobile phase conditions.
Applications:
Glassware Silanization (or siliconization) of glassware is a common application that increases the hydrophobicity of a glass container. Thus treated, the glassware produces a flat meniscus, allowing for more complete transfer of aqueous solutions. Silanization of glassware is used in cell culturing to minimize adherence of cells to flask walls. Additionally, the silanization process is used in biomedical fields for a wide variety of purposes, including anchoring DNA to substrates. Silanization of glassware can be achieved by dipping it into a solution of 5-10% dimethyldiethoxysilane followed by heating. Silanization is also used for DNA chips. Nucleic acids do not bond to untreated glass surfaces; silanization can provide a better bonding site for the nucleic acids on the chip. A common silane used to treat glass surfaces for this application is (3-mercaptopropyl)trimethoxysilane, which increases the number of reactive thiol groups on the surface. The nucleic acids can bond to these available thiol groups on the surface of the glass DNA chip after silanization occurs.
Applications:
Dental Silanization is often used to treat ceramics used for dental restorations and repairs. Applying silane coupling agents after grit blasting the ceramic material has been shown to produce durable resin bonding. Additionally, for titanium and other metal implant features in wires and crowns, application of silane coupling agents followed by resin composite cement has produced durable bonding in a clinical application. While silane coupling agents are widely used in dental practices, they are subject to bond degradation due to the environment of the oral cavity, weakening the adhesion between the surfaces that they are used to connect.
Additional reading:
Zelzer, M. (2015), "Peptide-based switchable and responsive surfaces", Switchable and Responsive Surfaces and Materials for Biomedical Applications, Elsevier, pp. 65–92, doi:10.1016/b978-0-85709-713-2.00003-1, ISBN 978-0-85709-713-2, retrieved 2023-04-30 Schwartz, Jeffrey; Avaltroni, Michael J; Danahy, Michael P; Silverman, Brett M; Hanson, Eric L; Schwarzbauer, Jean E; Midwood, Kim S; Gawalt, Ellen S (2003). "Cell Attachment and Spreading on Metal Implant Materials". Materials Science and Engineering: C. 23 (3): 395–400. doi:10.1016/S0928-4931(02)00310-7. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prim's algorithm**
Prim's algorithm:
In computer science, Prim's algorithm (also known as Jarník's algorithm) is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest possible connection from the tree to another vertex.
Prim's algorithm:
The algorithm was developed in 1930 by Czech mathematician Vojtěch Jarník and later rediscovered and republished by computer scientists Robert C. Prim in 1957 and Edsger W. Dijkstra in 1959. Therefore, it is also sometimes called Jarník's algorithm, the Prim–Jarník algorithm, the Prim–Dijkstra algorithm or the DJP algorithm. Other well-known algorithms for this problem include Kruskal's algorithm and Borůvka's algorithm. These algorithms find the minimum spanning forest in a possibly disconnected graph; in contrast, the most basic form of Prim's algorithm only finds minimum spanning trees in connected graphs. However, by running Prim's algorithm separately for each connected component of the graph, it can also be used to find the minimum spanning forest. In terms of their asymptotic time complexity, these three algorithms are equally fast for sparse graphs, but slower than other more sophisticated algorithms.
Prim's algorithm:
However, for graphs that are sufficiently dense, Prim's algorithm can be made to run in linear time, meeting or improving the time bounds for other algorithms.
Description:
The algorithm may informally be described as performing the following steps: first, initialize a tree with a single vertex, chosen arbitrarily from the graph; then, grow the tree by one edge, adding the minimum-weight edge that connects the tree to a vertex not yet in the tree; finally, repeat the previous step until all vertices are in the tree. In more detail, it may be implemented following pseudocode; a minimal implementation sketch is given at the end of this description.
Description:
As described above, the starting vertex for the algorithm will be chosen arbitrarily, because the first iteration of the main loop of the algorithm will have a set of vertices in Q that all have equal weights, and the algorithm will automatically start a new tree in F when it completes a spanning tree of each connected component of the input graph. The algorithm may be modified to start with any particular vertex s by setting C[s] to be a number smaller than the other values of C (for instance, zero), and it may be modified to only find a single spanning tree rather than an entire spanning forest (matching more closely the informal description) by stopping whenever it encounters another vertex flagged as having no associated edge.
Description:
Different variations of the algorithm differ from each other in how the set Q is implemented: as a simple linked list or array of vertices, or as a more complicated priority queue data structure. This choice leads to differences in the time complexity of the algorithm. In general, a priority queue will be quicker at finding the vertex v with minimum cost, but will entail more expensive updates when the value of C[w] changes.
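The article's original pseudocode is not reproduced here; as a stand-in, the following minimal Python sketch implements the binary-heap variant described above, using lazy deletion in place of an explicit decrease-key operation. The adjacency-list graph representation and the names C, E and Q are illustrative assumptions chosen to echo the description above.

```python
import heapq

def prim_mst(graph, start=None):
    """Minimal sketch of Prim's algorithm with a binary heap.
    `graph` is assumed to be an adjacency list: {vertex: [(weight, neighbour), ...]}."""
    if start is None:
        start = next(iter(graph))          # arbitrary starting vertex
    C = {v: float('inf') for v in graph}   # cheapest known connection cost per vertex
    E = {v: None for v in graph}           # edge (tree endpoint) realising that cost
    in_tree = set()
    C[start] = 0
    Q = [(0, start)]                       # priority queue keyed by C[v]
    mst_edges = []
    while Q:
        c, v = heapq.heappop(Q)
        if v in in_tree:                   # stale entry: skip (lazy deletion)
            continue
        in_tree.add(v)
        if E[v] is not None:
            mst_edges.append((E[v], v, c))
        for w_cost, w in graph[v]:
            if w not in in_tree and w_cost < C[w]:
                C[w] = w_cost              # cheaper connection found: update key
                E[w] = v
                heapq.heappush(Q, (w_cost, w))
    return mst_edges

# Example usage on a small weighted undirected graph
g = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c'), (7, 'd')],
    'c': [(4, 'a'), (2, 'b'), (3, 'd')],
    'd': [(7, 'b'), (3, 'c')],
}
print(prim_mst(g))   # [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 3)]
```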
Time complexity:
The time complexity of Prim's algorithm depends on the data structures used for the graph and for ordering the edges by weight, which can be done using a priority queue. The typical choices are: an adjacency matrix with linear searching, giving O(|V|2) total running time; a binary heap with an adjacency list, giving O((|V| + |E|) log |V|) = O(|E| log |V|); and a Fibonacci heap with an adjacency list, giving O(|E| + |V| log |V|). A simple implementation of Prim's, using an adjacency matrix or an adjacency list graph representation and linearly searching an array of weights to find the minimum weight edge to add, requires O(|V|2) running time. However, this running time can be greatly improved further by using heaps to implement finding minimum weight edges in the algorithm's inner loop.
Time complexity:
A first improved version uses a heap to store all edges of the input graph, ordered by their weight. This leads to an O(|E| log |E|) worst-case running time. But storing vertices instead of edges can improve it still further. The heap should order the vertices by the smallest edge-weight that connects them to any vertex in the partially constructed minimum spanning tree (MST) (or infinity if no such edge exists). Every time a vertex v is chosen and added to the MST, a decrease-key operation is performed on all vertices w outside the partial MST such that v is connected to w, setting the key to the minimum of its previous value and the edge cost of (v,w).
Time complexity:
Using a simple binary heap data structure, Prim's algorithm can now be shown to run in time O(|E| log |V|) where |E| is the number of edges and |V| is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(|E| + |V| log |V|), which is asymptotically faster when the graph is dense enough that |E| is ω(|V|), and linear time when |E| is at least |V| log |V|. For graphs of even greater density (having at least |V|c edges for some c > 1), Prim's algorithm can be made to run in linear time even more simply, by using a d-ary heap in place of a Fibonacci heap.
Proof of correctness:
Let P be a connected, weighted graph. At every iteration of Prim's algorithm, an edge must be found that connects a vertex in a subgraph to a vertex outside the subgraph. Since P is connected, there will always be a path to every vertex. The output Y of Prim's algorithm is a tree, because the edge and vertex added to tree Y are connected. Let Y1 be a minimum spanning tree of graph P. If Y1=Y then Y is a minimum spanning tree. Otherwise, let e be the first edge added during the construction of tree Y that is not in tree Y1, and V be the set of vertices connected by the edges added before edge e. Then one endpoint of edge e is in set V and the other is not. Since tree Y1 is a spanning tree of graph P, there is a path in tree Y1 joining the two endpoints. As one travels along the path, one must encounter an edge f joining a vertex in set V to one that is not in set V. Now, at the iteration when edge e was added to tree Y, edge f could also have been added, and it would have been added instead of edge e if its weight was less than that of e; since edge f was not added, we conclude that w(f)≥w(e).
Proof of correctness:
Let tree Y2 be the graph obtained by removing edge f from and adding edge e to tree Y1. It is easy to show that tree Y2 is connected, has the same number of edges as tree Y1, and the total weight of its edges is not larger than that of tree Y1; therefore it is also a minimum spanning tree of graph P and it contains edge e and all the edges added before it during the construction of set V. Repeat the steps above and we will eventually obtain a minimum spanning tree of graph P that is identical to tree Y. This shows Y is a minimum spanning tree.
Parallel algorithm:
The main loop of Prim's algorithm is inherently sequential and thus not parallelizable. However, the inner loop, which determines the next edge of minimum weight that does not form a cycle, can be parallelized by dividing the vertices and edges between the available processors. The following pseudocode demonstrates this.
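The pseudocode itself is not reproduced here; the following is a minimal Python sketch under stated assumptions: a dense adjacency-matrix graph, a thread pool standing in for the available processors, and the per-iteration minimum search split across workers followed by a reduction of the local minima.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_prim(weights, n_workers=4):
    """Sketch of the parallel inner loop: each worker scans its slice of the
    cost array C for the cheapest vertex not yet in the tree, and the local
    minima are then reduced to a global minimum.  `weights` is an assumed
    dense adjacency matrix with float('inf') for missing edges."""
    n = len(weights)
    INF = float('inf')
    C = [INF] * n
    parent = [None] * n
    in_tree = [False] * n
    C[0] = 0
    chunks = [range(i, n, n_workers) for i in range(n_workers)]  # vertex partition

    def local_min(chunk):
        best, best_v = INF, -1
        for v in chunk:
            if not in_tree[v] and C[v] < best:
                best, best_v = C[v], v
        return best, best_v

    edges = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n):
            best, v = min(pool.map(local_min, chunks))   # reduction step
            if v == -1:
                break                                     # remaining vertices unreachable
            in_tree[v] = True
            if parent[v] is not None:
                edges.append((parent[v], v, C[v]))
            for w in range(n):                            # cost updates (could also be split)
                if not in_tree[w] and weights[v][w] < C[w]:
                    C[w] = weights[v][w]
                    parent[w] = v
    return edges
```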
Parallel algorithm:
This algorithm can generally be implemented on distributed machines as well as on shared memory machines. The running time is O(|V|2/|P| + |V| log |P|), assuming that the reduce and broadcast operations can be performed in O(log |P|). A variant of Prim's algorithm for shared memory machines, in which Prim's sequential algorithm is being run in parallel, starting from different vertices, has also been explored. It should, however, be noted that more sophisticated algorithms exist to solve the distributed minimum spanning tree problem in a more efficient manner. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IPod Touch (2nd generation)**
IPod Touch (2nd generation):
The second-generation iPod Touch (marketed as "the new iPod touch", and colloquially known as the iPod Touch 2G, iPod Touch 2, or iPod 2) is a multi-touch mobile device designed and marketed by Apple Inc. with a touchscreen-based user interface. The successor to the 1st-generation iPod Touch, it was unveiled and released at Apple's media event on September 9, 2008. It is compatible with up to iOS 4.2.1, which was released on November 22, 2010.
History:
The second-generation iPod Touch was only sold in 8GB, 16GB and 32GB models. Two revisions of the device exist, one with an old BootROM exploitable with 24kPwn and a larger device capacity label on the back. In late 2009, Apple introduced a revised version of the second-gen iPod touch under the MC model name, which was only available in an 8GB variant. It featured a newer BootROM version which patched the 24kPwn BootROM exploit and has a smaller device capacity label similar to that of the iPod touch 3rd gen.
Features:
Software It fully supports iPhone OS 3 but has limited support for iOS 4 and did not get support for home screen wallpapers, multitasking, or Game Center, and it never received iOS 4.3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2-Methyl-6-nitrobenzoic anhydride**
2-Methyl-6-nitrobenzoic anhydride:
2-Methyl-6-nitrobenzoic anhydride is an organic acid anhydride, also known as the Shiina reagent, used to effect the intermolecular dehydration condensation of carboxylic acids.
It was developed in 2002 by Prof. Isamu Shiina (Tokyo University of Science, Japan). The compound is often abbreviated MNBA.
Abstract:
The reagent is used for synthetic reactions wherein medium- and large-sized lactones are formed from hydroxycarboxylic acids via intramolecular ring closure (Shiina macrolactonization). The reaction proceeds at room temperature under basic or neutral conditions. This reagent can be used not only for macrolactonization but also for esterification, amidation, and peptide coupling. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hybrizyme**
Hybrizyme:
Hybrizyme is a term coined to indicate novel or normally rare gene variants (or alleles) that are associated with hybrid zones, geographic areas where two related taxa (e.g. species or subspecies) meet, mate, and produce hybrid offspring. The hybrizyme phenomenon is widespread and these alleles occur commonly, if not in all hybrid zones. Although initially considered to be caused by elevated rates of mutation in hybrids, the most probable hypothesis is that they are the result of negative (purifying) selection. Namely, in the center of the hybrid zone, negative selection purges alleles responsible for hybrid disadvantage (e.g. hybrid inviability or infertility). Stated differently, any allele that will decrease reproductive isolation is favored and any linked alleles (genetic markers) also increase their frequency by genetic hitchhiking. If the linked alleles used to be rare variants in the parental taxa, they will become more common in the area where the hybrids are formed.
Etymology:
Originally hybrizymes were defined as "unexpected allelic electromorphs associated with hybrid zones", a formal term proposed by renowned conservation geneticist and biogeographer David S. Woodruff in 1988. By suggesting a new definition for a phenomenon that had been previously widely observed, Woodruff's interpretation bypasses the etiological connotation of alternative terms and avoids inappropriate context. Namely, previous studies referred to allozymes that were observed at high frequency in hybrid zones, but are absent or rare in parental taxa, as "the rare allele phenomenon". These alleles can have increased frequencies up to a point of the allele becoming the most common one in the hybrid zone, rendering the term "the rare allele phenomenon" deceptive. Despite this, these two terms have been used interchangeably in literature.
Widespread phenomenon:
Hybrid populations display the hybrizyme phenomenon by having increased frequencies of certain alleles that are rare or non-existent outside of the hybrid zone. The hybrizyme phenomenon is widespread in hybrid zones of species of snails, crickets, lizards, salamanders, rodents, fish and birds. Intriguingly, the increased frequency of some of these alleles can have a pronounced effect making them 3-20 times more common in hybrids than in non-hybrid populations.Early studies focused on detecting electromorphs for loci that code regulatory and non-regulatory enzymes from several functional classes using allozyme electrophoresis and usually involved loci that were polymorphic in parental populations. The phenomenon has also been detected in a broad range of genetic markers such as intron haplotypes, microsatellites, ribosomal DNA spacer variants, and anonymous SNPs.
Mutational origin:
Multiple hypotheses have been proposed to explain the mutational (molecular) origin of hybrizymes. They include gene conversion, transposable element activity, post-translational modification, mutations, and intragenic recombination. Some of these hypotheses have been rejected by research in the past couple of years, but there is no unambiguous explanation for the mutational origin of hybrizymes. The two hypotheses most often discussed are increased mutation rates and intragenic recombination.
Mutational origin:
Mutation Under the mutational hypothesis, hybrizymes likely arise due to simple point mutations. Sequencing data have indicated this and imply a low likelihood that hybrizymes arise as a result of transposition or recombination. Research on pocket gophers and Japanese freshwater crabs confirms that the phenomenon is possibly caused by simple nucleotide substitutions. However, the hypothesis has several weaknesses. It does not explain why normally rare alleles are restricted to a hybrid zone or why polymorphic loci are affected more, nor does it offer a mechanism that explains the high frequency of even the rarest variants.
Mutational origin:
Intragenic recombination Intragenic recombination, under certain circumstances, might create new allelic variants at rates higher than the ones associated with regular mutational processes. Under this hypothesis the variant allele would be a mosaic of the parental alleles. The likelihood of this hypothesis has been disputed through sequencing studies. Although there is yet no specific explanation for hybrizymes, it is not excluded that hybrizymes are generated by the combined effect of recombination and mutation events, with any recombination trace concealed by succeeding mutations. However, research on Acer species implies that high recombination rates are possible due to acceleration of genetic variation after hybridization. Furthermore, results have been found that indicate that recurrent mutation is unlikely and that support the hypothesis of recombination.
Cause of maintenance:
Several hypotheses have been proposed to account for the high frequency of hybrizymes in hybrid zones, such as genetic drift, elevated rates of nucleotide substitutions, or positive selection on alleles which are mildly deleterious in parental taxa. Still, some face a certain degree of unpredictability; specifically, under the mutational hypothesis the overall substitution rates are elevated and many variants are expected, rather than only one allele reaching high frequency, and, at the same time, positive selection on deleterious alleles seems ambiguous.
Cause of maintenance:
Selection does not need to be directed to the hybrizyme, but to other genes with which the hybrizyme is linked, placing genetic hitchhiking in perspective. In other words, hybrid zones are maintained primarily by a balance between gene flow and hybrid inferiority. In the centre of hybrid zones, the process of constant creation of low-fitness recombinant genotypes will favor any allele that will decrease reproductive isolation, consequently elevating the hybrid fitness. So, a likely mechanism would be negative or purifying selection against poorly fit multilocus genotypes. Therefore, the hybrizymes that increase in frequency could be modifier alleles or genetic markers that increase via hitchhiking. It is not excluded that the targets of selection are the barrier loci, loci that resist homogenization with the other genome during gene flow among diverging species, making them the most different parts of the genome between divergent populations. If allelic variation at these loci is considered, there might be alleles that have a differential effect on reproductive isolation or hybrid disadvantage, leading to selection of those that have lower severity. The exact origin and mechanism that maintain these alleles at a high frequency are still a subject of debate, and additional studies, such as next-generation sequencing analysis of the genomic regions involved in the phenomenon, offer a more trustworthy pathway to identify genes that impact the level of reproductive isolation.
Adaptive novelty:
Hybridization might expand the prospect of adaptive radiation to the point where positive selection on recombinant hybrid genotypes surpasses the intrinsic selection against them. Therefore, the selection schemes in hybrid swarms ensures that relatively strong endogenous selection would not quench such potential. Additionally, partial postzygotic reproductive isolation usually involves multiple genes and segregation and recombination of genes creates broadly varying reproductive compatibility in hybrid populations. Consequently, there will be recurrent removal of disadvantageous alleles for reproductive isolation and relative stabilization of hybrid zones, possibly slowing down the path of complete speciation by reinforcement.With the continual selection against hybrid disadvantage, crossing-over might, over time, interrupt existing linkages and establish new. This generates a shift in selection pressure on loci which are in linkage with these genes and will contribute to further changes in allele frequencies on a genome scale. The "rare allele phenomenon" might be an indication of this process. Even with the continuous effect of relatively strong endogenous selection against hybrids, a hybrid population might be an example where selection against reproductive isolation results in creating variable recombinant genotypes. Sometimes, this phenomenon might assist in creating a complex of adaptive traits that lead to adaptive novelty. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hydroxytyrosol**
Hydroxytyrosol:
Hydroxytyrosol is an organic compound with the formula (HO)2C6H3CH2CH2OH. It is classified as a phenylethanoid, i.e. a relative of phenethyl alcohol. Its derivatives are found in a variety of natural sources, notably olive oils and wines. Hydroxytyrosol is a colorless solid, although samples often turn beige during storage. It is, formally speaking, a derivative of catechol.
It or its derivatives occur in olives and in wines.
Occurrence:
Olives Olives, olive leaves, and olive pulp contain large amounts of the hydroxytyrosol derivative oleuropein (more so than olive oil). Unprocessed green (unripe) olives contain between 4.3 and 116 mg of hydroxytyrosol per 100 g of olives, while unprocessed black (ripe) olives contain up to 413.3 mg per 100 g. The ripening of an olive substantially increases the amount of hydroxytyrosol. Processed olives, such as the common canned variety containing iron(II) gluconate, contain little hydroxytyrosol, as iron salts are catalysts for its oxidation.
Occurrence:
Food safety Hydroxytyrosol is considered safe as a novel food for human consumption, with a no-observed-adverse-effect level of 50 mg/kg body weight per day, as evaluated by the European Food Safety Authority (EFSA).In the United States, hydroxytyrosol is considered to be a safe ingredient (GRAS) in processed foods at levels of 5 mg per serving.
Function and production In nature, hydroxytyrosol is generated by the hydrolysis of oleuropein that occurs during olive ripening. Oleuropein accumulates in olive leaves and fruit as a defense mechanism against pathogens and herbivores. During olive ripening or when the olive tissue is damaged by pathogens, herbivores, or mechanical damage, the enzyme β-glucosidase catalyzes hydroxytyrosol synthesis via hydrolysis from oleuropein.
Metabolism Shortly after olive oil consumption, 98% of hydroxytyrosol in plasma and urine appears in conjugated forms (65% glucuronoconjugates), suggesting extensive first-pass metabolism and a half-life of 2.43 hours.
Mediterranean diet:
Mediterranean diets, characterized by regular intake of olive oil, have been shown to positively affect human health, including reduced rates of cardiovascular diseases. Research on consumption of olive oil and its components includes hydroxytyrosol and oleuropein, which may inhibit oxidation of LDL cholesterol – a risk factor for atherosclerosis, heart attack or stroke. The daily intake of hydroxytyrosol within the Mediterranean diet is estimated to be between 0.15 and 30 mg.
Regulation:
Europe The EFSA has issued a scientific opinion on health claims in relation to dietary consumption of hydroxytyrosol and related polyphenol compounds from olive fruit and oil, and protection of blood lipids from potential oxidative damage.EFSA concluded that a cause-and-effect relationship existed between the consumption of hydroxytyrosol and related compounds from olives and olive oil and protection of blood lipids from oxidative damage, providing a health claim for consumption of olive oil polyphenols containing at least 5 mg of hydroxytyrosol and its derivatives (oleuropein complex and tyrosol) per 20 g of olive oil. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bryant's triangle**
Bryant's triangle:
A surface marking of clinical importance is Bryant's triangle (or iliofemoral triangle), which is mapped out thus: the hypotenuse of the right-angled triangle is a line from the anterior superior iliac spine to the top of the greater trochanter.
Its sides are formed respectively by a vertical line from the anterior superior iliac spine and a perpendicular line from the top of the greater trochanter. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Combustion models for CFD**
Combustion models for CFD:
Combustion models for CFD refers to combustion models for computational fluid dynamics. Combustion is defined as a chemical reaction in which a hydrocarbon fuel reacts with an oxidant to form products, accompanied by the release of energy in the form of heat. Being an integral part of various engineering applications (internal combustion engines, aircraft engines, rocket engines, furnaces, and power station combustors), combustion manifests itself as a wide domain during the design, analysis and performance characterisation stages of the above-mentioned applications. With the added complexity of chemical kinetics and of achieving a reacting flow mixture environment, proper modeling physics has to be incorporated during computational fluid dynamic (CFD) simulations of combustion. Hence the following discussion presents a general outline of the various adequate models incorporated with the computational fluid dynamic code to model the process of combustion.
Overview:
Computational fluid dynamics modeling of combustion calls upon the proper selection and implementation of a model suitable to faithfully represent the complex physical and chemical phenomena associated with any combustion process. The model should be competent enough to deliver information on the species concentrations, their volumetric generation or destruction rates, and changes in the parameters of the system such as enthalpy, temperature and mixture density. The model should be capable of solving the general transport equations for fluid flow and heat transfer as well as the additional equations of combustion chemistry and chemical kinetics appropriate to the simulated environment.
Critical considerations in combustion phenomenon:
The major considerations during any general combustion process include the mixing time scale and the reaction time scale of the process. The flame type and the type of mixing of the flow streams of the constituents also have to be taken into account. Apart from that, as far as the kinetic complexity of the reaction is concerned, the reaction proceeds in multiple steps, and what appears as a simple one-line reaction actually completes only after a series of elementary reactions. Also, the transport equations for the mass fractions of all the species, as well as for the enthalpy generated during the reaction, have to be solved. Hence even the simplest combustion reaction involves very tedious and rigorous calculation if all the intermediate steps of the combustion process, all transport equations and all flow equations have to be satisfied simultaneously. All these factors have a significant effect on the computational speed and time of the simulation. But with proper simplifying assumptions, CFD modeling of a combustion reaction can be done without substantially compromising the accuracy and convergence of the solution. The basic models used for this are covered in the following paragraphs.
Simple chemical reacting system model:
This model takes into consideration only the final concentrations of species and only the global nature of the combustion process, in which the reaction proceeds infinitely fast as a single-step process without much attention to the detailed kinetics involved. The reactants are assumed to react in stoichiometric proportions. The model also deduces a linear relationship between the mass fractions of fuel and oxidant and the non-dimensional mixture-fraction variable. It makes the additional assumption that the mass diffusion coefficients of all species are equal. Owing to this additional assumption, the model solves only one extra partial differential equation, for the mixture fraction; after solving the transport equation for the mixture fraction, the corresponding mass fractions of fuel and oxidant are calculated.
Simple chemical reacting system model:
This model can very well be applied to a combustion environment where laminar diffusion effects are dominant and the combustion proceeds via non-premixed fuel and oxidant streams diffusing into each other, giving rise to a laminar flame; a numerical sketch of the mixture-fraction relations follows below.
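To make the fast-chemistry relations above concrete, here is a minimal Python sketch, under the assumptions of a single global reaction, pure fuel and oxidant streams, and a stoichiometric oxidant-to-fuel mass ratio s; the numerical values used are illustrative only, not taken from the text.

```python
def scrs_mass_fractions(f, s, y_fuel_stream=1.0, y_ox_stream=0.232):
    """Fast-chemistry (SCRS-style) relations: fuel and oxidant cannot coexist,
    so their mass fractions are piecewise-linear functions of the mixture
    fraction f. s is the stoichiometric mass of oxidant per unit mass of fuel."""
    # Stoichiometric mixture fraction: the value of f at which fuel and
    # oxidant are consumed exactly.
    f_st = 1.0 / (1.0 + s * y_fuel_stream / y_ox_stream)
    if f >= f_st:   # fuel-rich side: all oxidant consumed
        return y_fuel_stream * (f - f_st) / (1.0 - f_st), 0.0
    else:           # fuel-lean side: all fuel consumed
        return 0.0, y_ox_stream * (1.0 - f / f_st)

if __name__ == "__main__":
    for f in (0.0, 0.02, 0.055, 0.2, 1.0):
        y_fu, y_ox = scrs_mass_fractions(f, s=4.0)
        print(f"f = {f:5.3f}  Y_fuel = {y_fu:5.3f}  Y_ox = {y_ox:5.3f}")
```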
Eddy break–up model:
This model is used when turbulent mixing of the constituents has to be taken into consideration. The k/ε turbulence time scale is used to calculate the reaction rate. A comparison between the turbulent dissipation rates of the fuel, oxidant and products is made, and the minimum amongst them is taken as the rate of the reaction. The transport equations for the mass fractions of the constituents are solved using this rate of reaction. Apart from this, a mean enthalpy equation is also solved, and temperature, density and viscosity are calculated accordingly. The model can also be implemented when a finite-rate, kinetically controlled reaction is to be simulated. In such a situation, the Arrhenius kinetic rate expression is also taken into account, and the rate of reaction is taken as the minimum of the turbulent dissipation rates of all the constituents and the Arrhenius kinetic rate. Since turbulent mixing governs the characteristics of this model, there is a limit to the quality of the combustion simulation depending on the type of turbulence model used to represent the flow. The model can also be modified to account for the mixing of fine structures during the turbulent reaction. This modification results in the eddy dissipation model, which considers the mass fraction of fine structures in its calculations.
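A minimal sketch of the rate selection described above, assuming a commonly used eddy break-up form with illustrative model constants A and B; the optional kinetic rate stands in for the Arrhenius expression of a finite-rate variant.

```python
def ebu_fuel_consumption_rate(rho, k, eps, y_fuel, y_ox, y_prod, s,
                              A=4.0, B=0.5, kinetic_rate=None):
    """Fuel consumption rate limited by turbulent mixing (eddy break-up):
    the smallest of the fuel, oxidant and product dissipation terms controls.
    If a kinetic (Arrhenius-type) rate is supplied, the overall rate is the
    minimum of the mixing rate and the kinetic rate."""
    mixing_rate = A * rho * (eps / k) * min(y_fuel, y_ox / s, B * y_prod / (1.0 + s))
    return mixing_rate if kinetic_rate is None else min(mixing_rate, kinetic_rate)

if __name__ == "__main__":
    # Example cell: the oxidant term (y_ox / s) is the limiting one here.
    print(ebu_fuel_consumption_rate(rho=1.1, k=2.0, eps=15.0,
                                    y_fuel=0.05, y_ox=0.10, y_prod=0.30, s=4.0))
```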
Laminar flamelet model:
This model approximates the turbulent flame as a series of laminar flamelet regions concentrated just around the stoichiometric surfaces of the reacting mixture. It exploits experimental data to determine relations between the variables considered, such as mass fraction and temperature. The nature of the dependence between the variables is obtained from laminar diffusion flame experiments, and a laminar flamelet relationship is deduced from it. These relationships are then used to solve the transport equations for species mass fractions and mixture composition. The model can very well be implemented for situations where the concentration of minor species is to be computed, such as when quantifying the generation of pollutants. A simple enhancement of the model results in the flamelet time scale model, which takes finite-rate kinetic effects into consideration. The flamelet time scale model reproduces the steady laminar flamelet solution when the reaction proceeds very fast, and captures finite-rate effects when reaction chemistry is dominant.
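In practice the flamelet relationship is typically stored as a lookup table; the sketch below interpolates a hypothetical temperature-versus-mixture-fraction table (the tabulated numbers are invented for illustration, not experimental data).

```python
import numpy as np

# Hypothetical flamelet table: temperature (K) versus mixture fraction, as
# would be deduced from a laminar counterflow diffusion flame.
F_TABLE = np.array([0.00, 0.03, 0.055, 0.10, 0.30, 1.00])
T_TABLE = np.array([300.0, 1400.0, 2200.0, 1800.0, 900.0, 300.0])

def flamelet_temperature(f):
    """Evaluate the flamelet relationship T(f) by linear interpolation."""
    return np.interp(f, F_TABLE, T_TABLE)

if __name__ == "__main__":
    print(flamelet_temperature(0.07))   # local mean mixture fraction of 0.07
```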
Presumed probability distribution function model:
This model takes a statistical approach to calculating variables such as species mass fractions, temperature and density, while the mixture composition is calculated at the grid points. All of these variables are then evaluated as functions of the mixture fraction, weighted by a presumed probability distribution function. The model can produce satisfactory results for turbulent reactive flows where convection effects due to the mean and fluctuating components of velocity are dominant. The model can be extended to adiabatic as well as non-adiabatic conditions.
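A minimal sketch of the averaging step, assuming the presumed PDF is a beta distribution parameterized by the local mean and variance of the mixture fraction (a common but not universal choice); the temperature relation used here is a made-up placeholder.

```python
import numpy as np
from scipy.stats import beta

def presumed_pdf_mean(phi_of_f, f_mean, f_var, n=2001):
    """Mean of a scalar phi(f) over a presumed beta PDF of the mixture fraction."""
    # Beta-distribution shape parameters from the mean and variance of f.
    gamma = f_mean * (1.0 - f_mean) / f_var - 1.0
    a, b = f_mean * gamma, (1.0 - f_mean) * gamma
    f = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = beta.pdf(f, a, b)
    pdf /= np.trapz(pdf, f)                      # renormalize the discretized PDF
    return np.trapz(phi_of_f(f) * pdf, f)

if __name__ == "__main__":
    t_of_f = lambda f: 300.0 + 1900.0 * 4.0 * f * (1.0 - f)   # placeholder T(f) relation
    print(presumed_pdf_mean(t_of_f, f_mean=0.06, f_var=0.002))
```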
Conditional moment closure:
Conditional moment closure (CMC) is an advanced combustion model. The basic idea is to model the chemical source based on conditional averages. The model was first introduced for non-premixed flows and hence the conditioning is done in the mixture fraction.
Other models:
The following are some of the other relevant models used for computational fluid dynamic modeling of combustion.
Other models:
The chemical equilibrium model, the flamelet-generated manifold model, the flame surface density model and the large eddy simulation model. The chemical equilibrium model considers the effect of intermediate reactions during turbulent combustion. The concentration of species is calculated when the combustion reaction reaches the equilibrium state. The species concentrations are calculated as functions of the mixture fraction using equilibrium calculation programs available for the purpose. The conditional closure model solves the transport equations for the mean components of the flow properties without considering the fluctuating composition of the reaction mixture. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Honey super**
Honey super:
A honey super is a part of a commercial or other managed (such as by a hobbyist) beehive that is used to collect honey. The most common variety is the "Illinois" or "medium" super with a depth of 6 5⁄8 inches, in the length and width dimensions of a Langstroth hive.
A honey super consists of a box in which 8–10 frames are hung. Western honeybees collect nectar and store the processed nectar in honeycomb, which they build on the frames. When the honeycomb is full, the bees will reduce the moisture content of the honey to 17–18% before capping the comb with beeswax.
Beekeepers will take the full honey supers and extract the honey. Periods when an abundant nectar source is available and bees are quickly bringing back nectar are called a honey flow. During a honey flow, beekeepers may put several honey supers onto a hive so the bees have enough storage space.
Honey supers are removed in the fall when the honey is extracted, and before the hive is winterized, but enough honey is left for the bees to consume during winter.
Langstroth hive dimensions:
Using 3⁄4-inch wood, the outside dimensions are 19 7⁄8″ × 16 1⁄4″ × height. In the metric system, 25 mm wood may be used, which makes the outside dimensions 515 mm × 425 mm × height. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spectral regrowth**
Spectral regrowth:
Spectral regrowth refers to the intermodulation products generated when a digital transmitter is added to an analog communication system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polycon**
Polycon:
In geometry, a polycon is a kind of developable roller. It is made of identical pieces of a cone whose apex angle equals the angle of an even-sided regular polygon. In principle, there are infinitely many polycons, as many as there are even-sided regular polygons. Most members of the family have elongated, spindle-like shapes. The polycon family generalizes the sphericon. It was discovered by the Israeli inventor David Hirsch in 2017.
Construction:
Two adjacent edges of an even sided regular polygon are extended till they reach the polygon's axis of symmetry that is furthest from the edges' common vertex.
By rotating the two resulting line segments around the polygon's axis of symmetry that passes through the common vertex, a right circular cone is created.
Two planes are passed such that each one of them contains the normal to the polygon at its center point and one of the two distanced vertices of the two edges.
The cone part that lies between the two planes is replicated n/2 − 1 times, where n is the number of the polygon's edges. All n/2 parts are joined at their planar surfaces to create a spindle-shaped object. It has n curved edges which pass through alternating vertices of the polygon.
The obtained object is cut in half at its plane of symmetry (the polygon's plane).
The two identical halves are reunited after being rotated at an offset angle of 2π/n.
Edges and vertices:
A polycon based on a regular polygon with n edges has n + 2 vertices, n of which coincide with the polygon's vertices, with the remaining two lying at the extreme ends of the solid. It has n edges, each one being half of the conic section created where the cone's surface intersects one of the two cutting planes. On each side of the polygonal cross-section, n/2 edges of the polycon run (from every second vertex of the polygon) to one of the solid's extreme ends. The edges on one side are offset by an angle of 2π/n from those on the other side. The edges of the sphericon (n = 4) are circular. The edges of the hexacon (n = 6) are parabolic. All other polycons' edges are hyperbolic.
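A small sketch that simply encodes the counts and edge types stated above for a polycon built on an even-sided n-gon; it adds nothing beyond the figures in this paragraph.

```python
def polycon_properties(n):
    """Vertex/edge counts and edge type for the polycon built on a regular
    n-sided polygon (n must be even), as stated in the text above."""
    if n < 4 or n % 2:
        raise ValueError("a polycon is built on an even-sided polygon, n >= 4")
    vertices = n + 2                      # polygon vertices plus the two spindle tips
    edges = n                             # each is half of a conic section
    edge_type = {4: "circular", 6: "parabolic"}.get(n, "hyperbolic")
    return vertices, edges, edge_type

if __name__ == "__main__":
    for n in (4, 6, 8, 10):
        print(n, polycon_properties(n))
```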
The sphericon as a polycon:
The sphericon is the first member of the polycon family. It is also a member of the poly-sphericon and the convex hull of the two disc roller (TDR convex hull) families. In each of the families, it is constructed differently. As a poly-sphericon, it is constructed by cutting a bicone with an apex angle of π/2 at its plane of symmetry and reuniting the two obtained parts after rotating them at an offset angle of π/2. As a TDR convex hull it is the convex hull of two perpendicular 180° circular sectors joined at their centers. As a polycon, the starting point is a cone created by rotating two adjacent edges of a square around its axis of symmetry that passes through their common vertex. In this specific case there is no need to extend the edges because their ends reach the square's other axis of symmetry. Since, in this specific case, the two cutting planes coincide with the plane of the cone's base, nothing is discarded and the cone remains intact. By creating another identical cone and joining the two cones together at their flat surfaces, a bicone is created. From here the construction continues in the same way described for the construction of the sphericon as a poly-sphericon. The only difference between the sphericon as a poly-sphericon and the sphericon as a polycon is that as a poly-sphericon it has four vertices and as a polycon it is considered to have six. The additional vertices are not noticeable because they are located in the middle of the circular edges, and merge with them completely.
Rolling properties:
The surface of each polycon is a single developable face. Thus the entire family has rolling properties that are related to the meander motion of the sphericon, as do some members of the poly-sphericon family. Because the poly-sphericons' surfaces consist of conical surfaces and various kinds of frustum surfaces (conical and/or cylindrical), their rolling properties change whenever each of the surfaces touches the rolling plane. This is not the case with the polycons. Because each one of them is made of only one kind of conical surface, the rolling properties remain uniform throughout the entire rolling motion. The instantaneous motion of the polycon is identical to a cone rolling motion around one of its n central vertices. The motion, as a whole, is a combination of these motions, with each of the vertices serving in turn as an instant center of rotation around which the solid rotates during 1/n of the rotation cycle. Once another vertex comes into contact with the rolling surface it becomes the new temporary center of rotation, and the rotation vector flips to the opposite direction. The resulting overall motion is a meander that is linear on average. Each of the two extreme vertices touches the rolling plane, instantaneously, n/2 times in one rotation cycle. The instantaneous line of contact between the polycon and the surface it is rolling on is a segment of one of the generating lines of a cone, and everywhere along this line the tangent plane to the polycon is the same. When n/2 is an odd number this tangent plane is a constant distance from the tangent plane to the generating line on the polycon surface which is instantaneously uppermost. Thus the polycons, for n/2 odd, are constant-height rollers (as is a right circular bicone, a cylinder or a prism with Reuleaux triangle cross-section). Polycons, for n/2 even, do not possess this feature.
History:
The sphericon was first introduced by David Hirsch in 1980 in a patent he named 'A Device for Generating a Meander Motion'. The principle according to which it was constructed, as described in the patent, is consistent with the principle according to which poly-sphericons are constructed. Only more than 25 years later, following Ian Stewart's article about the sphericon in Scientific American, was it realized both by members of the woodturning [17, 26] and mathematical [16, 20] communities that the same construction method could be generalized to a series of axial-symmetric objects that have regular polygon cross sections other than the square. The surfaces of the bodies obtained by this method (not including the sphericon itself) consist of one kind of conic surface and one, or more, cylindrical or conical frustum surfaces. In 2017 Hirsch began exploring a different method of generalizing the sphericon, one based on a single surface without the use of frustum surfaces. The result of this research was the discovery of the polycon family. The new family was first introduced at the 2019 Bridges Conference in Linz, Austria, both at the art works gallery and at the film festival. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SplitsTree**
SplitsTree:
SplitsTree is a popular freeware program for inferring phylogenetic trees, phylogenetic networks, or, more generally, splits graphs, from various types of data such as a sequence alignment, a distance matrix or a set of trees. SplitsTree implements published methods such as split decomposition, neighbor-net, consensus networks, super networks methods or methods for computing hybridization or simple recombination networks. It uses the NEXUS file format. The splits graph is defined using a special data block (SPLITS block). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Phrase chunking**
Phrase chunking:
Phrase chunking is a phase of natural language processing that separates and segments a sentence into its subconstituents, such as noun, verb, and prepositional phrases, abbreviated as NP, VP, and PP, respectively. Typically, each subconstituent or chunk is denoted by brackets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
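As an illustrative example, here is a minimal NLTK sketch of noun-phrase and prepositional-phrase chunking with a hand-written regular-expression grammar; it assumes the NLTK tokenizer and tagger models have already been downloaded, and the grammar is deliberately simplistic.

```python
import nltk

# Assumed one-off prerequisites:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

sentence = "The quick brown fox jumped over the lazy dog near the old barn."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# An NP chunk is an optional determiner, any adjectives, then one or more nouns;
# a PP chunk is a preposition followed by an NP.
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}
  PP: {<IN><NP>}
"""
chunker = nltk.RegexpParser(grammar)
print(chunker.parse(tagged))   # prints a bracketed chunk tree, e.g. (NP The/DT quick/JJ ...)
```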
**Creational pattern**
Creational pattern:
In software engineering, creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or in added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.
Overview:
Creational design patterns are composed of two dominant ideas. One is encapsulating knowledge about which concrete classes the system uses. Another is hiding how instances of these concrete classes are created and combined. Creational design patterns are further categorized into object-creational patterns and class-creational patterns, where object-creational patterns deal with object creation and class-creational patterns deal with class instantiation. In greater detail, object-creational patterns defer part of their object creation to another object, while class-creational patterns defer their object creation to subclasses. Five well-known design patterns that are part of the creational patterns are the abstract factory pattern, which provides an interface for creating related or dependent objects without specifying the objects' concrete classes.
Overview:
builder pattern, which separates the construction of a complex object from its representation so that the same construction process can create different representations.
factory method pattern, which allows a class to defer instantiation to subclasses.
prototype pattern, which specifies the kind of object to create using a prototypical instance, and creates new objects by cloning this prototype.
singleton pattern, which ensures that a class only has one instance, and provides a global point of access to it.
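To make the distinction concrete, here is a minimal sketch of one of the five patterns listed above, the factory method, in Python; the Transport/Logistics names are purely illustrative and not taken from the text.

```python
from abc import ABC, abstractmethod

# Product hierarchy: client code only ever sees the abstract interface.
class Transport(ABC):
    @abstractmethod
    def deliver(self) -> str: ...

class Truck(Transport):
    def deliver(self) -> str:
        return "delivering by road"

class Ship(Transport):
    def deliver(self) -> str:
        return "delivering by sea"

# Creator declares the factory method; each subclass decides which concrete
# product to instantiate, so the client never names Truck or Ship directly.
class Logistics(ABC):
    @abstractmethod
    def create_transport(self) -> Transport: ...

    def plan_delivery(self) -> str:
        return self.create_transport().deliver()

class RoadLogistics(Logistics):
    def create_transport(self) -> Transport:
        return Truck()

class SeaLogistics(Logistics):
    def create_transport(self) -> Transport:
        return Ship()

if __name__ == "__main__":
    for logistics in (RoadLogistics(), SeaLogistics()):
        print(logistics.plan_delivery())
```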
Definition:
The creational patterns aim to separate a system from how its objects are created, composed, and represented. They increase the system's flexibility in terms of the what, who, how, and when of object creation.
Usage:
As modern software engineering depends more on object composition than on class inheritance, emphasis shifts away from hard-coding behaviors toward defining a smaller set of basic behaviors that can be composed into more complex ones. Hard-coded behaviors are inflexible because changing part of the design requires overriding or re-implementing the whole behavior. Additionally, hard-coding does not promote reuse and makes it difficult to keep track of errors. For these reasons, creational patterns are more useful than hard-coded behaviors. Creational patterns make a design more flexible. They provide different ways to remove explicit references to concrete classes from the code that needs to instantiate them. In other words, they create independence between objects and classes.
Usage:
Consider applying creational patterns when: A system should be independent of how its objects and products are created.
A set of related objects is designed to be used together.
Hiding the implementations of a class library or product, revealing only their interfaces.
Constructing different representations of independent complex objects.
A class wants its subclass to implement the object it creates.
The class instantiations are specified at run-time.
There must be a single instance, and clients can access this instance at all times.
Instances should be extensible without being modified.
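The single-instance consideration above maps directly onto the singleton pattern combined with lazy initialization; a minimal Python sketch follows, where the Configuration name and its settings are illustrative only.

```python
class Configuration:
    """Singleton with lazy initialization: the one instance is created on
    first use, and every later construction returns that same object."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Any expensive, one-off setup would go here.
            cls._instance.settings = {"log_level": "INFO"}
        return cls._instance

if __name__ == "__main__":
    a, b = Configuration(), Configuration()
    assert a is b            # both names refer to the single, globally accessible instance
    print(a.settings)
```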
Structure:
Below is a simple class diagram that most creational patterns have in common. Note that different creational patterns require additional and different participating classes.
Participants: Creator: Declares object interface. Returns object.
ConcreteCreator: Implements object's interface.
Examples:
Some examples of creational design patterns include: the Abstract Factory pattern, in which a class requests the objects it requires from a factory object instead of creating them directly; the Factory Method pattern, which centralizes creation of an object of a specific type by choosing one of several implementations; the Builder pattern, which separates the construction of a complex object from its representation so that the same construction process can create different representations; the Dependency Injection pattern, in which a class accepts the objects it requires from an injector instead of creating them directly; the Lazy Initialization pattern, the tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed; the Object Pool pattern, which avoids expensive acquisition and release of resources by recycling objects that are no longer in use; the Prototype pattern, used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects; and the Singleton pattern, which restricts instantiation of a class to one object. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shovel test pit**
Shovel test pit:
A shovel test pit (STP) is a standard method for Phase I of an archaeological survey. It is usually a part of the Cultural Resources Management (CRM) methodology and a popular form of rapid archaeological survey in the United States of America and Canada. It designates a series of small test holes (c. 0.50 m across or less), usually dug with a shovel (hence the name), in order to determine whether the soil contains any cultural remains that are not visible on the surface. The soil is sifted or screened through 1/4" or 6 mm wire mesh to recover the artifacts.
Shovel test pit:
STPs will often be laid out over the project area in a grid-like fashion or in a consistently spaced line, creating a fairly systematic survey. Therefore, after the holes have been dug, one may map artifact densities over the project area, pinpointing the locations of possible sites where further investigation may be necessary. The interval at which the STPs are placed varies considerably and, in CRM at least, is sometimes prescribed by state regulations (in the U.S.) or is determined by the conditions in the field. The usual space between two STPs is 10 m or more, but it can be considerably less (e.g., 1 m). The current standard in the United States is 30 m or less. The depth of an STP depends on the depth at which either the bedrock or the sterile subsoil is found. The form of STP may vary from region to region and even within regions by company/organization. Common forms include circular and square shapes. Circular STPs often have cylindrical to "bullet"-shaped profiles (i.e., a cylinder with a short inverted conical base) and range from 30 cm to 50 cm in diameter. Square STPs are typically about 50 cm, but some locations prefer other sizes (e.g., 40 cm). Unusual and unusually ineffective variants include circular (30 to 50 cm diameter) STPs with truncated conical profiles.
Shovel test pit:
Depth of STP excavation also varies widely and is often dependent upon local soil types and the expected maximum depth of sites. Typically this ranges from 30 cm to 1.0 m. A second factor is mechanical, in that excavation is limited by the tools and techniques used (i.e., shovel versus trowel). Typically STPs are excavated to a maximum average of 1.0 m, although it is possible to excavate somewhat deeper (1.25 to 1.5 m) depending upon the excavator and the tools available. STPs can be combined with other techniques and tools (augers, corers) to extend the maximum depth of effective testing beyond 1.5 m. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aureole effect**
Aureole effect:
The aureole effect or water aureole is an optical phenomenon similar to Heiligenschein, creating sparkling light and dark rays radiating from the shadow of the viewer's head. This effect is seen only over a rippling water surface. The waves act as lenses to focus and defocus sunlight: focused sunlight produces the lighter rays, while defocused sunlight produces the darker rays. Suspended particles in the water help make the aureole effect more pronounced. The effect extends a greater angular distance from the viewer's shadow when the viewer is higher above the water, and can sometimes be seen from a plane.Although the focused (light) ray cones are actually more or less parallel to each other, the rays from the aureole effect appear to be radiating from the shadow of the viewer’s head due to perspective effects. The viewer's line of sight is parallel and lies within the cones, so from the viewer's perspective the rays seem to be radiating from the antisolar point, within the viewer's shadow.As in similar antisolar optical effects (such as a glory or Heiligenschein), each observer will see an aureole effect radiating only from their own head’s shadow. Similarly, if a photographer holds their camera at arm's length, the aureole effect appearing in the picture will be seen radiating from the shadow of the camera, although the photographer would still see it around their head's shadow while taking the picture. This happens because the aureole effect always appears directly opposite the sun, centered at the antisolar point. The antisolar point itself is located within the shadow of the viewer, whatever this is: the eyes of the viewer or the camera's lens. As a matter of fact, when aureole effects are photographed from a plane, it is possible to tell where the photographer was seated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Biogas**
Biogas:
Biogas is a gaseous renewable energy source produced from raw materials such as agricultural waste, manure, municipal waste, plant material, sewage, green waste, wastewater, and food waste. Biogas is produced by anaerobic digestion with anaerobic organisms or methanogens inside an anaerobic digester, biodigester or a bioreactor.
Biogas:
The gas composition is primarily methane (CH4) and carbon dioxide (CO2) and may have small amounts of hydrogen sulfide (H2S), moisture and siloxanes. The gases methane and hydrogen can be combusted or oxidized with oxygen. This energy release allows biogas to be used as a fuel; it can be used in fuel cells and for heating purposes, such as in cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat. After removal of carbon dioxide and hydrogen sulphide it can be compressed in the same way as natural gas and used to power motor vehicles. In the United Kingdom, for example, biogas is estimated to have the potential to replace around 17% of vehicle fuel. It qualifies for renewable energy subsidies in some parts of the world. Biogas can be cleaned and upgraded to natural gas standards, when it becomes bio-methane. Biogas is considered to be a renewable resource because its production-and-use cycle is continuous, and it generates no net carbon dioxide. From a carbon perspective, as much carbon dioxide is absorbed from the atmosphere in the growth of the primary bio-resource as is released when the material is ultimately converted to energy.
Production:
Biogas is produced by microorganisms, such as methanogens and sulfate-reducing bacteria, performing anaerobic respiration. Biogas can refer to gas produced naturally and industrially.
Natural In soil, methane is produced in anaerobic environments by methanogens, but is mostly consumed in aerobic zones by methanotrophs. Methane emissions result when the balance favors methanogens. Wetland soils are the main natural source of methane. Other sources include oceans, forest soils, termites, and wild ruminants.
Industrial The purpose of industrial biogas production is the collection of biomethane, usually for fuel. Industrial biogas is produced either as landfill gas (LFG), which is produced by the decomposition of biodegradable waste inside a landfill due to chemical reactions and microbes, or as digested gas, produced inside an anaerobic digester.
Production:
Bio-gas Plants A biogas plant is the name often given to an anaerobic digester that treats farm wastes or energy crops. It can be produced using anaerobic digesters (air-tight tanks with different configurations). These plants can be fed with energy crops such as maize silage or biodegradable wastes including sewage sludge and food waste. During the process, the micro-organisms transform biomass waste into biogas (mainly methane and carbon dioxide) and digestate. Higher quantities of biogas can be produced when the wastewater is co-digested with other residuals from the dairy industry, sugar industry, or brewery industry. For example, when mixing 90% of wastewater from a beer factory with 10% cow whey, the production of biogas was increased by 2.5 times compared to the biogas produced from the brewery wastewater only. Manufacturing of biogas from intentionally planted maize has been described as being unsustainable and harmful due to the very concentrated, intense and soil-eroding character of these plantations.
Production:
Key processes There are two key processes, mesophilic and thermophilic digestion, which depend on temperature. In experimental work at the University of Alaska Fairbanks, a 1000-litre digester using psychrophiles harvested from "mud from a frozen lake in Alaska" has produced 200–300 liters of methane per day, about 20–30% of the output from digesters in warmer climates.
Production:
Dangers The air pollution produced by biogas is similar to that of natural gas: when methane (a major constituent of biogas) is ignited for its use as an energy source, carbon dioxide, a greenhouse gas, is produced (as described by the equation CH4 + 2O2 → CO2 + 2H2O). The content of toxic hydrogen sulfide presents additional risks and has been responsible for serious accidents. Leaks of unburned methane are an additional risk, because methane is a potent greenhouse gas. A facility may leak 2% of the methane. Biogas can be explosive when mixed in the ratio of one part biogas to 8–20 parts air. Special safety precautions have to be taken for entering an empty biogas digester for maintenance work. It is important that a biogas system never has negative pressure, as this could cause an explosion. Negative gas pressure can occur if too much gas is removed or leaked; because of this, biogas should not be used at pressures below one column inch of water, as measured by a pressure gauge.
Production:
Frequent smell checks must be performed on a biogas system. If biogas is smelled anywhere windows and doors should be opened immediately. If there is a fire the gas should be shut off at the gate valve of the biogas system.
Landfill gas:
Landfill gas is produced by wet organic waste decomposing under anaerobic conditions in a similar way to biogas. The waste is covered and mechanically compressed by the weight of the material that is deposited above. This material prevents oxygen exposure, thus allowing anaerobic microbes to thrive. Biogas builds up and is slowly released into the atmosphere if the site has not been engineered to capture the gas. Landfill gas released in an uncontrolled way can be hazardous since it can become explosive when it escapes from the landfill and mixes with oxygen. The lower explosive limit is 5% methane and the upper is 15% methane. The methane in biogas is 28 times more potent a greenhouse gas than carbon dioxide. Therefore, uncontained landfill gas which escapes into the atmosphere may significantly contribute to the effects of global warming. In addition, volatile organic compounds (VOCs) in landfill gas contribute to the formation of photochemical smog.
Landfill gas:
Technical Biochemical oxygen demand (BOD) is a measure of the amount of oxygen required by aerobic micro-organisms to decompose the organic matter in a sample. Measuring the BOD of the material fed into the biodigester, as well as the BOD of the liquid discharge, allows the calculation of the daily energy output from a biodigester.
Landfill gas:
Another term related to biodigesters is effluent dirtiness, which tells how much organic material there is per unit of biogas source. Typical units for this measure are mg BOD/litre. As an example, effluent dirtiness can range between 800 and 1200 mg BOD/litre in Panama. From 1 kg of decommissioned kitchen bio-waste, 0.45 m3 of biogas can be obtained. The price for collecting biological waste from households is approximately €70 per ton.
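A rough worked example of the BOD-based energy estimate mentioned above; the methane yield per kilogram of BOD removed and the energy content of methane are typical literature values assumed here purely for illustration.

```python
def daily_energy_from_bod(flow_m3_per_day, bod_in_mg_l, bod_out_mg_l,
                          ch4_yield_m3_per_kg=0.35, ch4_energy_mj_per_m3=36.0):
    """Rough daily energy output (MJ/day) of a biodigester estimated from the
    influent and effluent BOD. Yield and energy density are assumed values."""
    bod_removed_kg = flow_m3_per_day * (bod_in_mg_l - bod_out_mg_l) / 1000.0  # mg/L == g/m^3
    methane_m3 = bod_removed_kg * ch4_yield_m3_per_kg
    return methane_m3 * ch4_energy_mj_per_m3

if __name__ == "__main__":
    # e.g. 20 m^3/day of wastewater at 1200 mg BOD/L treated down to 200 mg BOD/L
    print(daily_energy_from_bod(20.0, 1200.0, 200.0))   # about 252 MJ per day
```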
Composition:
The composition of biogas varies depending upon the substrate composition, as well as the conditions within the anaerobic reactor (temperature, pH, and substrate concentration). Landfill gas typically has methane concentrations around 50%. Advanced waste treatment technologies can produce biogas with 55–75% methane, which for reactors with free liquids can be increased to 80–90% methane using in-situ gas purification techniques. As produced, biogas contains water vapor. The fractional volume of water vapor is a function of biogas temperature; correction of measured gas volume for water vapour content and thermal expansion is easily done via simple mathematics which yields the standardized volume of dry biogas.
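The volume correction mentioned above follows directly from the ideal gas law; a minimal sketch, where the water vapour partial pressure at the measurement temperature must be supplied (the example value of about 5.6 kPa for saturation at 35 °C is an assumption for illustration).

```python
def standardized_dry_volume(v_measured_m3, t_measured_c, p_measured_kpa,
                            p_water_vapour_kpa, t_std_c=0.0, p_std_kpa=101.325):
    """Correct a measured (wet) biogas volume for water-vapour content and
    thermal expansion, returning the standardized dry-gas volume
    (ideal-gas behaviour assumed)."""
    t_meas_k = t_measured_c + 273.15
    t_std_k = t_std_c + 273.15
    dry_partial_pressure_kpa = p_measured_kpa - p_water_vapour_kpa
    return v_measured_m3 * (dry_partial_pressure_kpa / p_std_kpa) * (t_std_k / t_meas_k)

if __name__ == "__main__":
    # 10 m^3 of saturated biogas measured at 35 °C and 101.3 kPa
    print(standardized_dry_volume(10.0, 35.0, 101.3, 5.6))
```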
Composition:
For 1000 kg (wet weight) of input to a typical biodigester, total solids may be 30% of the wet weight while volatile suspended solids may be 90% of the total solids. Protein would be 20% of the volatile solids, carbohydrates would be 70% of the volatile solids, and finally fats would be 10% of the volatile solids.
Contaminants Sulfur compounds: Toxic and foul-smelling hydrogen sulfide (H2S) is the most common contaminant in biogas, but other sulfur-containing compounds, such as thiols, may be present. Left in the biogas stream, hydrogen sulfide is corrosive, and when combusted it yields sulfur dioxide (SO2) and sulfuric acid (H2SO4), also corrosive and environmentally hazardous compounds.
Ammonia Ammonia (NH3) is produced from organic compounds containing nitrogen, such as the amino acids in proteins. If not separated from the biogas, combustion results in NOx emissions.
Composition:
Siloxanes In some cases, biogas contains siloxanes. They are formed from the anaerobic decomposition of materials commonly found in soaps and detergents. During combustion of biogas containing siloxanes, silicon is released and can combine with free oxygen or other elements in the combustion gas. Deposits are formed containing mostly silica (SiO2) or silicates (SixOy) and can contain calcium, sulfur, zinc, phosphorus. Such white mineral deposits accumulate to a surface thickness of several millimeters and must be removed by chemical or mechanical means.
Composition:
Practical and cost-effective technologies to remove siloxanes and other biogas contaminants are available.
Benefits of manure derived biogas:
High levels of methane are produced when manure is stored under anaerobic conditions. During storage and when manure has been applied to the land, nitrous oxide is also produced as a byproduct of the denitrification process. Nitrous oxide (N2O) is 320 times more potent a greenhouse gas than carbon dioxide, and methane 25 times more potent than carbon dioxide. By converting cow manure into methane biogas via anaerobic digestion, the millions of cattle in the United States would be able to produce 100 billion kilowatt-hours of electricity, enough to power millions of homes across the United States. In fact, one cow can produce enough manure in one day to generate 3 kilowatt-hours of electricity; only 2.4 kilowatt-hours of electricity are needed to power a single 100-watt light bulb for one day. Furthermore, by converting cattle manure into methane biogas instead of letting it decompose, global warming gases could be reduced by 99 million metric tons, or 4%.
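A quick arithmetic check of the figures quoted above; all inputs are the article's own numbers, and the implied herd size is simply what the stated per-cow output requires.

```python
# Figures taken from the paragraph above.
kwh_per_cow_per_day = 3.0
bulb_kwh_per_day = 0.100 * 24            # a 100-watt bulb running for one day
target_kwh_per_year = 100e9              # "100 billion kilowatt-hours"

cows_needed = target_kwh_per_year / (kwh_per_cow_per_day * 365)
print(bulb_kwh_per_day)                  # 2.4 kWh, matching the text
print(f"{cows_needed:.2e} cows needed")  # roughly 9.1e7 cows
```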
Applications:
Biogas can be used for electricity production at sewage works, in a CHP gas engine, where the waste heat from the engine is conveniently used for heating the digester; for cooking; space heating; water heating; and process heating. If compressed, it can replace compressed natural gas for use in vehicles, where it can fuel an internal combustion engine or fuel cells and is a much more effective displacer of carbon dioxide than the normal use in on-site CHP plants.
Applications:
Biogas upgrading Raw biogas produced from digestion is roughly 60% methane and 39% CO2 with traces of H2S; this is inadequate for use in machinery. The corrosive nature of H2S alone is enough to destroy the mechanisms. Methane in biogas can be concentrated via a biogas upgrader to the same standards as fossil natural gas, which itself has to go through a cleaning process, and becomes biomethane. If the local gas network allows, the producer of the biogas may use their distribution networks. Gas must be very clean to reach pipeline quality and must be of the correct composition for the distribution network to accept. Carbon dioxide, water, hydrogen sulfide, and particulates must be removed if present. There are four main methods of upgrading: water washing, pressure swing adsorption, Selexol absorption, and amine gas treating. In addition to these, the use of membrane separation technology for biogas upgrading is increasing, and there are already several plants operating in Europe and the USA. The most prevalent method is water washing, where high-pressure gas flows into a column in which the carbon dioxide and other trace elements are scrubbed by cascading water running counter-flow to the gas. This arrangement can deliver 98% methane, with manufacturers guaranteeing a maximum 2% methane loss in the system. It takes roughly between 3% and 6% of the total energy output in gas to run a biogas upgrading system.
Applications:
Biogas gas-grid injection Gas-grid injection is the injection of biogas into the methane grid (natural gas grid). Until the breakthrough of micro combined heat and power, two-thirds of all the energy produced by biogas power plants was lost (as heat). Using the grid to transport the gas to consumers, the energy can be used for on-site generation, resulting in a reduction of losses in the transportation of energy. Typical energy losses in natural gas transmission systems range from 1% to 2%; in electricity transmission they range from 5% to 8%. Before being injected into the gas grid, biogas passes through a cleaning process, during which it is upgraded to natural gas quality. During the cleaning process, trace components harmful to the gas grid and the final users are removed.
Applications:
Biogas in transport If concentrated and compressed, it can be used in vehicle transportation. Compressed biogas is becoming widely used in Sweden, Switzerland, and Germany. A biogas-powered train, named Biogaståget Amanda (The Biogas Train Amanda), has been in service in Sweden since 2005. Biogas also powers automobiles. In 1974, a British documentary film titled Sweet as a Nut detailed the biogas production process from pig manure and showed how it fueled a custom-adapted combustion engine. In 2007, an estimated 12,000 vehicles were being fueled with upgraded biogas worldwide, mostly in Europe. Biogas is part of the wet gas and condensing gas (or air) category that includes mist or fog in the gas stream. The mist or fog is predominately water vapor that condenses on the sides of pipes or stacks throughout the gas flow. Biogas environments include wastewater digesters, landfills, and animal feeding operations (covered livestock lagoons).
Applications:
Ultrasonic flow meters are one of the few devices capable of measuring in a biogas atmosphere. Most thermal flow meters are unable to provide reliable data because the moisture causes steady high flow readings and continuous flow spiking, although there are single-point insertion thermal mass flow meters capable of accurately monitoring biogas flows with minimal pressure drop. They can handle moisture variations that occur in the flow stream because of daily and seasonal temperature fluctuations, and account for the moisture in the flow stream to produce a dry gas value.
Applications:
Biogas generated heat/electricity Biogas can be used in different types of internal combustion engines, such as the Jenbacher or Caterpillar gas engines. Other internal combustion engines such as gas turbines are suitable for the conversion of biogas into both electricity and heat. The digestate is the remaining inorganic matter that was not transformed into biogas. It can be used as an agricultural fertiliser.
Applications:
Biogas can be used as the fuel in the system of producing biogas from agricultural wastes and co-generating heat and electricity in a combined heat and power (CHP) plant. Unlike other green energy sources such as wind and solar, biogas can be quickly accessed on demand. The global warming potential can also be greatly reduced when using biogas as the fuel instead of fossil fuel. However, the acidification and eutrophication potentials produced by biogas are 25 and 12 times higher, respectively, than fossil fuel alternatives. This impact can be reduced by using the correct combination of feedstocks, covered storage for digesters and improved techniques for retrieving escaped material. Overall, the results still suggest that using biogas can lead to significant reduction in most impacts compared to the fossil fuel alternative. The balance between environmental damage and greenhouse gas emissions should still be considered while implementing the system.
Technological advancements:
Projects such as NANOCLEAN are nowadays developing new ways to produce biogas more efficiently, using iron oxide nanoparticles in the processes of organic waste treatment. This process can triple the production of biogas.
Technological advancements:
Biogas and Sanitation Faecal sludge is a product of onsite sanitation systems. After collection and transportation, faecal sludge can be treated with sewage in a conventional treatment plant, or it can be treated independently in a faecal sludge treatment plant. Faecal sludge can also be co-treated with organic solid waste in composting or in an anaerobic digestion system. Biogas can be generated through anaerobic digestion in the treatment of faecal sludge.
Technological advancements:
The appropriate management of excreta and its valorisation through the production of biogas from faecal sludge helps mitigate the effects of poorly managed excreta such as waterborne diseases and water and environmental pollution.
Legislation:
European Union The European Union has legislation regarding waste management and landfill sites called the Landfill Directive.
Legislation:
Countries such as the United Kingdom and Germany now have legislation in force that provides farmers with long-term revenue and energy security. The EU mandates that internal combustion engines running on biogas have ample gas pressure to optimize combustion, and within the European Union ATEX centrifugal fan units built in accordance with the European directive 2014/34/EU (previously 94/9/EG) are obligatory. These centrifugal fan units, for example Combimac, Meidinger AG or Witt & Sohn AG, are suitable for use in Zones 1 and 2.
Legislation:
United States The United States legislates against landfill gas as it contains VOCs. The United States Clean Air Act and Title 40 of the Code of Federal Regulations (CFR) require landfill owners to estimate the quantity of non-methane organic compounds (NMOCs) emitted. If the estimated NMOC emissions exceed 50 tonnes per year, the landfill owner is required to collect the gas and treat it to remove the entrained NMOCs. That usually means burning it.
Legislation:
Because of the remoteness of landfill sites, it is sometimes not economically feasible to produce electricity from the gas. There are a variety of grants and loans that support the development of anaerobic digester systems. The Rural Energy for America Program provides loan financing and grant funding for biogas systems, as do the Environmental Quality Incentives Program, the Conservation Stewardship Program, and the Conservation Loan Program.
Global developments:
United States With its many benefits, biogas is starting to become a popular source of energy and is starting to be used more in the United States. In 2003, the United States consumed 43 TWh (147 trillion BTU) of energy from "landfill gas", about 0.6% of the total U.S. natural gas consumption. Methane biogas derived from cow manure is being tested in the U.S. According to a 2008 study collected by the Science and Children magazine, methane biogas from cow manure would be sufficient to produce 100 billion kilowatt-hours, enough to power millions of homes across America. Furthermore, methane biogas has been tested to prove that it can reduce 99 million metric tons of greenhouse gas emissions, or about 4% of the greenhouse gases produced by the United States. The number of farm-based digesters increased by 21% in 2021 according to the American Biogas Council. In Vermont, biogas generated on dairy farms was included in the CVPS Cow Power program. The program was originally offered by Central Vermont Public Service Corporation as a voluntary tariff and, following a recent merger with Green Mountain Power, is now the GMP Cow Power Program. Customers can elect to pay a premium on their electric bill, and that premium is passed directly to the farms in the program. In Sheldon, Vermont, Green Mountain Dairy has provided renewable energy as part of the Cow Power program. It started when the brothers who own the farm, Bill and Brian Rowell, wanted to address some of the manure management challenges faced by dairy farms, including manure odor and nutrient availability for the crops they need to grow to feed the animals. They installed an anaerobic digester to process the cow and milking center waste from their 950 cows to produce renewable energy, a bedding to replace sawdust, and a plant-friendly fertilizer. The energy and environmental attributes are sold to the GMP Cow Power program. On average, the system run by the Rowells produces enough electricity to power 300 to 350 other homes. The generator capacity is about 300 kilowatts. In Hereford, Texas, cow manure is being used to power an ethanol power plant. By switching to methane biogas, the ethanol power plant has saved 1000 barrels of oil a day. Overall, the power plant has reduced transportation costs and will open up many more jobs for future power plants that will rely on biogas. In Oakley, Kansas, an ethanol plant considered to be one of the largest biogas facilities in North America is using an integrated manure utilization system (IMUS) to produce heat for its boilers by utilizing feedlot manure, municipal organics and ethanol plant waste. At full capacity the plant is expected to replace 90% of the fossil fuel used in the manufacturing process of ethanol and methanol. In California, the Southern California Gas Company has advocated for mixing biogas into existing natural gas pipelines. However, California state officials have taken the position that biogas is "better used in hard-to-electrify sectors of the economy, like aviation, heavy industry and long-haul trucking." Europe The level of development varies greatly in Europe. While countries such as Germany, Austria and Sweden are fairly advanced in their use of biogas, there is a vast potential for this renewable energy source in the rest of the continent, especially in Eastern Europe. MT-Energie is a German biogas technology company operating in the field of renewable energies.
Different legal frameworks, education schemes and the availability of technology are among the prime reasons behind this untapped potential. Another challenge for the further progression of biogas has been negative public perception. In February 2009, the European Biogas Association (EBA) was founded in Brussels as a non-profit organisation to promote the deployment of sustainable biogas production and use in Europe. EBA's strategy defines three priorities: establish biogas as an important part of Europe's energy mix, promote source separation of household waste to increase the gas potential, and support the production of biomethane as vehicle fuel. In July 2013, it had 60 members from 24 countries across Europe.
Global developments:
UK As of September 2013, there were about 130 non-sewage biogas plants in the UK. Most are on-farm, and some larger facilities exist off-farm, taking food and consumer wastes. On 5 October 2010, biogas was injected into the UK gas grid for the first time. Sewage from over 30,000 Oxfordshire homes is sent to Didcot sewage treatment works, where it is treated in an anaerobic digester to produce biogas, which is then cleaned to provide gas for approximately 200 homes. In 2015 the green energy company Ecotricity announced their plans to build three grid-injecting digesters.
Global developments:
Italy In Italy the biogas industry started in 2008, thanks to the introduction of advantageous feed-in tariffs. They were later replaced by feed-in premiums, and preference was given to by-products and farming waste, leading to stagnation in biogas production and derived heat and electricity since 2012. As of September 2018, there are more than 200 biogas plants in Italy with a production of about 1.2 GW. Germany Germany is Europe's biggest biogas producer and the market leader in biogas technology. In 2010 there were 5,905 biogas plants operating throughout the country: Lower Saxony, Bavaria, and the eastern federal states are the main regions. Most of these plants are employed as power plants. Usually the biogas plants are directly connected with a CHP unit which produces electric power by burning the biomethane. The electrical power is then fed into the public power grid. In 2010, the total installed electrical capacity of these power plants was 2,291 MW. The electricity supply was approximately 12.8 TWh, which is 12.6% of the total generated renewable electricity. Biogas in Germany is primarily extracted by the co-fermentation of energy crops (called 'NawaRo', an abbreviation of nachwachsende Rohstoffe, German for renewable resources) mixed with manure. The main crop used is corn. Organic waste and industrial and agricultural residues such as waste from the food industry are also used for biogas generation. In this respect, biogas production in Germany differs significantly from the UK, where biogas generated from landfill sites is most common. Biogas production in Germany has developed rapidly over the last 20 years. The main reason is the legally created framework. Government support of renewable energy started in 1991 with the Electricity Feed-in Act (StrEG). This law guaranteed producers of energy from renewable sources feed-in to the public power grid, so the power companies were obliged to take all produced energy from independent private producers of green energy. In 2000 the Electricity Feed-in Act was replaced by the Renewable Energy Sources Act (EEG). This law even guaranteed a fixed compensation for the produced electric power over 20 years. The amount of around 8 ¢/kWh gave farmers the opportunity to become energy suppliers and gain a further source of income. German agricultural biogas production was given a further push in 2004 by the implementation of the so-called NawaRo-Bonus. This is a special payment given for the use of renewable resources, that is, energy crops. In 2007 the German government stressed its intention to invest further effort and support in improving the renewable energy supply to provide an answer to growing climate challenges and increasing oil prices through the 'Integrated Climate and Energy Programme'.
Global developments:
This continual trend of renewable energy promotion induces a number of challenges for the management and organisation of the renewable energy supply, which also has several impacts on biogas production. The first challenge is the high land consumption of biogas-based electric power generation. In 2011 energy crops for biogas production occupied an area of circa 800,000 ha in Germany. This high demand for agricultural land generates new competition with the food industry that did not exist hitherto. Moreover, new industries and markets were created in predominately rural regions, involving different new players with economic, political and civil backgrounds. Their influence and actions have to be governed to gain all the advantages this new source of energy offers. Finally, biogas will continue to play an important role in the German renewable energy supply if good governance is maintained.
Global developments:
Developing countries Domestic biogas plants convert livestock manure and night soil into biogas and slurry, the fermented manure. This technology is feasible for small-holders with livestock producing 50 kg of manure per day, the equivalent of about 6 pigs or 3 cows. This manure has to be collectable so it can be mixed with water and fed into the plant. Toilets can be connected. Another precondition is the temperature, which affects the fermentation process. With an optimum at 36 °C, the technology especially applies to those living in a (sub)tropical climate. This often makes the technology suitable for smallholders in developing countries.
Global developments:
Depending on size and location, a typical brick-made fixed-dome biogas plant can be installed in the yard of a rural household with an investment of between US$300 and US$500 in Asian countries and up to US$1400 in the African context. A high-quality biogas plant needs minimal maintenance and can produce gas for at least 15–20 years without major problems and re-investments. For the user, biogas provides clean cooking energy, reduces indoor air pollution, and reduces the time needed for traditional biomass collection, especially for women and children. The slurry is a clean organic fertilizer that potentially increases agricultural productivity. Energy is an important part of modern society and can serve as one of the most important indicators of socio-economic development. Despite advancements in technology, some three billion people, primarily in the rural areas of developing countries, continue to meet their energy needs for cooking through traditional means, by burning biomass resources such as firewood, crop residues and animal dung in crude traditional stoves. Domestic biogas technology is a proven and established technology in many parts of the world, especially Asia. Several countries in this region have embarked on large-scale programmes on domestic biogas, such as China and India.
Global developments:
The Netherlands Development Organisation, SNV, supports national programmes on domestic biogas that aim to establish commercially viable domestic biogas sectors in which local companies market, install and service biogas plants for households. In Asia, SNV is working in Nepal, Vietnam, Bangladesh, Bhutan, Cambodia, Lao PDR, Pakistan and Indonesia, and in Africa: Rwanda, Senegal, Burkina Faso, Ethiopia, Tanzania, Uganda, Kenya, Benin and Cameroon.
Global developments:
In South Africa a prebuilt biogas system is manufactured and sold. One key feature is that installation requires less skill and is quicker, because the digester tank is premade plastic.
Global developments:
India Biogas in India has traditionally been based on dairy manure as feedstock, and these "gobar" gas plants have been in operation for a long period of time, especially in rural India. In the last 2–3 decades, research organisations with a focus on rural energy security have enhanced the design of the systems, resulting in newer, efficient, low-cost designs such as the Deenabandhu model.
Global developments:
The Deenabandhu Model is a new biogas-production model popular in India. (Deenabandhu means "friend of the helpless".) The unit usually has a capacity of 2 to 3 cubic metres. It is constructed using bricks or a ferrocement mixture. In India, the brick model costs slightly more than the ferrocement model; however, India's Ministry of New and Renewable Energy offers some subsidy per model constructed.
Global developments:
Biogas, which is mainly methane (natural gas), can also be used to generate protein-rich cattle, poultry and fish feed economically in villages by cultivating Methylococcus capsulatus bacterial culture with a tiny land and water footprint. The carbon dioxide produced as a by-product from these plants can be put to use in cheaper production of algae oil or spirulina from algaculture, particularly in tropical countries like India, which could displace the prime position of crude oil in the near future. The Union government of India is implementing many schemes to productively utilise agro-waste or biomass in rural areas to uplift the rural economy and job potential. With these plants, non-edible biomass or the waste of edible biomass is converted into high-value products without any water pollution or greenhouse gas (GHG) emissions. LPG (liquefied petroleum gas) is a key source of cooking fuel in urban India and its price has been increasing along with global fuel prices. Also, the heavy subsidies provided by successive governments in promoting LPG as a domestic cooking fuel have become a financial burden, renewing the focus on biogas as a cooking-fuel alternative in urban establishments. This has led to the development of prefabricated digesters for modular deployments, as compared to RCC and cement structures which take longer to construct. Renewed focus on process technology like the Biourja process model has enhanced the stature of medium and large scale anaerobic digesters in India as a potential alternative to LPG as a primary cooking fuel.
Global developments:
In India, Nepal, Pakistan and Bangladesh, biogas produced from the anaerobic digestion of manure in small-scale digestion facilities is called gobar gas; it is estimated that such facilities exist in over 2 million households in India, 50,000 in Bangladesh and thousands in Pakistan, particularly North Punjab, owing to the thriving population of livestock. The digester is an airtight circular pit made of concrete with a pipe connection. The manure is directed to the pit, usually straight from the cattle shed. The pit is filled with the required quantity of wastewater. The gas pipe is connected to the kitchen fireplace through control valves. The combustion of this biogas has very little odour or smoke. Owing to its simplicity of implementation and its use of cheap raw materials in villages, it is one of the most environmentally sound energy sources for rural needs. One such system is the Sintex digester. Some designs use vermiculture to further enhance the slurry produced by the biogas plant for use as compost. In Pakistan, the Rural Support Programmes Network runs the Pakistan Domestic Biogas Programme, which has installed 5,360 biogas plants, has trained over 200 masons in the technology and aims to develop the biogas sector in Pakistan.
Global developments:
In Nepal, the government provides subsidies for building biogas plants at home.
Global developments:
China The Chinese have experimented with applications of biogas since 1958. Around 1970, China had installed 6,000,000 digesters in an effort to make agriculture more efficient; this appears to be the earliest development in generating biogas from agricultural waste, and in recent years the technology has seen high growth rates. Rural biogas construction in China has shown an increasing development trend. The exponential growth in energy demand driven by rapid economic development, together with severe haze conditions in China, has led biogas to become a preferred eco-friendly energy source for rural areas. In Qing County, Hebei Province, the technology of using crop straw as the main material to generate biogas is currently being developed. By 2007 China had 26.5 million biogas plants, with an output of 10.5 billion cubic metres of biogas; annual biogas output increased to 248 billion cubic metres in 2010. The Chinese government has supported and funded rural biogas projects, but only about 60% of plants were operating normally. During the winter, biogas production in the northern regions of China is lower. This is caused by the lack of heat-control technology for digesters, so the co-digestion of different feedstocks fails to complete in the cold environment.
Global developments:
Zambia Lusaka, the capital city of Zambia, has two million inhabitants, with over half of the population residing in peri-urban areas. The majority of this population use pit latrines as toilets, generating approximately 22,680 tons of faecal sludge per annum. This sludge is inadequately managed: over 60% of the generated faecal sludge remains within the residential environment, compromising both the environment and public health.
Global developments:
Although research on and implementation of biogas started as early as the 1980s, Zambia lags behind the rest of sub-Saharan Africa in the adoption and use of biogas. Animal manure and crop residues are required for the provision of energy for cooking and lighting. Inadequate funding; the absence of policy, regulatory frameworks and strategies on biogas; unfavourable investor monetary policy; inadequate expertise; lack of awareness of the benefits of biogas technology among leaders, financial institutions and locals; resistance to change due to local culture and traditions; high installation and maintenance costs of biogas digesters; inadequate research and development; improper management and lack of monitoring of installed digesters; the complexity of the carbon market; and a lack of incentives and social equity are among the challenges that have impeded the acquisition and sustainable implementation of domestic biogas production in Zambia.
Associations:
World Biogas Association (https://www.worldbiogasassociation.org/), American Biogas Council (https://americanbiogascouncil.org/), Canadian Biogas Association (https://www.biogasassociation.ca/), European Biogas Association, German Biogas Association, Indian Biogas Association
Society and culture:
In the 1985 Australian film Mad Max Beyond Thunderdome, the post-apocalyptic settlement Bartertown is powered by a central biogas system based upon a piggery. As well as providing electricity, methane is used to power Bartertown's vehicles.
Society and culture:
"Cow Town", written in the early 1940s, discusses the travails of a city vastly built on cow manure and the hardships brought upon by the resulting methane biogas. Carter McCormick, an engineer from a town outside the city, is sent in to figure out a way to utilize this gas to help power, rather than suffocate, the city.Contemporary biogas production provides new opportunities for skilled employment, drawing on the development of new technologies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Camel loin**
Camel loin:
Camel loin is a cut of meat from a camel, created from the tissue along the dorsal side of the rib cage.
The brisket, ribs and loin are among the preferred parts. The method of cooking varies from country to country; Saudis prefer to cook kabsa using pressure cooking. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cold-cranking simulator**
Cold-cranking simulator:
The cold-cranking simulator (CCS) is a device used to determine the low temperature performance of lubricants, when starting a cold engine (i.e. cold-cranking). In this condition, the only energy available to turn the engine comes from the starter motor and the battery, and it has been widely assumed that the system acts as a constant power viscometer. The use of this device for this purpose is standardized as ASTM D5293.
Test development:
The cold-cranking simulator was invented and developed by Dr. Dae Sik Kim of the Esso Research and Engineering Company in 1964. The first prototype was built on his apartment kitchen table with a Unimat, a miniature lathe/milling machine, to avoid formal company procedures. He reported the results of his developmental work, titled "Results of Cold Cranking Simulator and a Comment", at the SAE Fuels and Lubricants Meeting at the Palmer House, Chicago, on May 18, 1965. Although the device was initially called the "Kimometer", he refused to put his name on it and named it for what it was intended to do.
Purpose of this test:
The cold-cranking simulator reproduces the rheological conditions of "an average engine" during cold starting. The engine's starter motor is replaced with a small series-wound universal motor (a typical sewing-machine motor), and the engine with a specially designed cold cylinder and an insulated cylindrical rotor having a pair of parallel flats. The sample oil is continuously sheared under a periodically varying shear rate, lower when the flats pass. Oils in real engines are similarly sheared: high in the journal bearings, oscillatory on the piston rings and low in the galley. Most developmental work went into proper sizing of the flats to simulate the relative shear-rate distribution in an "average engine". Both an engine and the simulator are calibrated with a set of Newtonian standard crankcase oils of known viscosities. When SAE and ASTM decided to use the simulator for their future standard instrument, Esso R & E Company gave a free exclusive license to the Cannon Instrument Co of State College, PA, to avoid conflict of interest. Over the past four decades many marginal improvements have been made, but the basic design and idea remain. Various generations of the CCS have been made over the years, with the latest Cannon CCS-2100 utilising Peltier cooling and an associated chiller to operate essentially the same instrument as the original 1960s design. In the late 1980s Ravenfield Designs, Heywood, England, redesigned the entire system from the ground up, utilising a novel system to accurately model the old instruments, and created a new machine offering higher repeatability and reproducibility than former methods. The Ravenfield apparatus, designated Model CS, is markedly smaller than the Cannon apparatus, incorporating the cooler, the PC, the instrument and sample pumping in a 600 mm square footprint. The Society of Automotive Engineers adopted the CCS test as part of the J300 specification, and it is the subject of ASTM test method D5293. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Improvisational Team Synchronization**
Improvisational Team Synchronization:
Improvisational Team Synchronization, Improv Team Sync, or ITS (formerly Improvisational Tribal Style) belly dance is a style of group dance improvisation, often associated with Tribal Fusion and belly dance. ITS is performed by a group of dancers consisting of one or more leaders and followers. The dance relies on a shared vocabulary of movements, each initiated by the leader using a distinct cue movement. After the cue, a short choreographed movement sequence, or combo, is performed by the group. The leader chooses the combo based on their interpretation of the music, often spontaneously during the performance. The result is a dance that appears choreographed but is in fact structured improvisation. This format of structured group improvisation was first developed by Carolena Nericcio-Bohlman, the founder of American Tribal Style.
History:
ITS was coined in 2006 by Amy Sigil of UNMATA to describe her improv vocabulary, as it evolved away from American Tribal Style to include street dance and hip-hop dance movements, music and aesthetics. Sigil first studied with Antara Nepa (Julia Carrol), director of the Ottoman Traders located in Folsom, California. Later she opened her own dance studio in Sacramento, California, and was introduced to American Tribal Style by Shawna Rai. Sigil used the basic improvisational format of American Tribal Style to develop a new vocabulary of movements, which evolved into ITS. Other improvisational belly dance styles include: American Tribal Style, Synchronized Group Improv, Tribal Group Improv, American Improv Tribal, Group Improv Tribal. Although the various styles of dance improvisation associated with Tribal Fusion dance are rooted in the United States, Improvisational Tribal Style is taught and performed internationally.
Characteristics:
The ITS format is composed of three major elements: stall movements, combinations, and concepts.
Stall movements An ITS stall movement has a minimum number of counts and can be repeated indefinitely by the leader. A stall movement has no cue (but it does have a specific starting position, or "ground zero") and can usually be used to travel (rotating leaders, changing formations, etc.).
Combinations An ITS combination (or combo) is a sequence of movements that must be initiated by a unique cue and that has a defined beginning and end.
Concepts ITS concepts are ideas that can be applied to stall moves. Concepts include formations, fadebacks, directional and timed turns.
Costuming and aesthetics:
Unlike American Tribal Style, which emphasizes elaborate multi-cultural costuming and jewelry, the costuming associated with ITS tends to be simple; often consisting of black pants, a metal belt and a tank top. Elements associated with Tribal Fusion costuming are also commonly incorporated into ITS, such as antique multi-cultural jewelry, and permanent body adornment such as tattoos and piercings. Costuming can vary widely depending on the performance context, however. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Principles of Compiler Design**
Principles of Compiler Design:
Principles of Compiler Design, by Alfred Aho and Jeffrey Ullman, is a classic textbook on compilers for computer programming languages. Both authors won the 2020 Turing Award for their work on compilers.
Principles of Compiler Design:
It is often called the "green dragon book" and its cover depicts a knight and a dragon in battle; the dragon is green, and labeled "Complexity of Compiler Design", while the knight wields a lance and a shield labeled "LALR parser generator" and "Syntax Directed Translation" respectively, and rides a horse labeled "Data Flow Analysis". The book may be called the "green dragon book" to distinguish it from its successor, Aho, Sethi & Ullman's Compilers: Principles, Techniques, and Tools, which is the "red dragon book". The second edition of Compilers: Principles, Techniques, and Tools added a fourth author, Monica S. Lam, and the dragon became purple; hence becoming the "purple dragon book." The book also contains the entire code for making a compiler. The back cover offers the original inspiration of the cover design: The dragon is replaced by windmills, and the knight is Don Quixote.
Principles of Compiler Design:
The book was published by Addison-Wesley, ISBN 0-201-00022-9. The acknowledgments mention that the book was entirely typeset at Bell Labs using troff on the Unix operating system, little of which had, at that time, been seen outside the Laboratories. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ultraviolet fixed point**
Ultraviolet fixed point:
In a quantum field theory, one may calculate an effective or running coupling constant that defines the coupling of the theory measured at a given momentum scale. One example of such a coupling constant is the electric charge.
In approximate calculations in several quantum field theories, notably quantum electrodynamics and theories of the Higgs particle, the running coupling appears to become infinite at a finite momentum scale. This is sometimes called the Landau pole problem.
Ultraviolet fixed point:
It is not known whether the appearance of these inconsistencies is an artifact of the approximation, or a real fundamental problem in the theory. However, the problem can be avoided if an ultraviolet or UV fixed point appears in the theory. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. This is related to zeroes of the beta-function appearing in the Callan–Symanzik equation. The large length scale/small energy limit counterpart is the infrared fixed point.
Specific cases and details:
Among other things, this means that a theory possessing a UV fixed point need not be an effective field theory, because it is well-defined at arbitrarily small distance scales. At the UV fixed point itself, the theory can behave as a conformal field theory.
The converse statement, that any QFT which is valid at all distance scales (i.e. isn't an effective field theory) has a UV fixed point is false. See, for example, cascading gauge theory.
Noncommutative quantum field theories have a UV cutoff even though they are not effective field theories.
Specific cases and details:
Physicists distinguish between trivial and nontrivial fixed points. If a UV fixed point is trivial (generally known as Gaussian fixed point), the theory is said to be asymptotically free. On the other hand, a scenario, where a non-Gaussian (i.e. nontrivial) fixed point is approached in the UV limit, is referred to as asymptotic safety. Asymptotically safe theories may be well defined at all scales despite being nonrenormalizable in perturbative sense (according to the classical scaling dimensions).
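To make the distinction between a trivial (Gaussian) and a nontrivial UV fixed point concrete, the following is a minimal numerical sketch. It assumes a toy one-coupling beta function β(g) = b0·g³ − b1·g⁵ whose coefficients are invented for illustration, not taken from any real theory; integrating the flow toward the ultraviolet shows the running coupling approaching the non-Gaussian zero g* = √(b0/b1), the behaviour referred to above as asymptotic safety.

```python
import numpy as np

# Toy one-coupling beta function with a Gaussian fixed point at g = 0
# and a nontrivial zero at g* = sqrt(b0 / b1).  Both coefficients are
# illustrative numbers only.
b0, b1 = 1.0, 2.0

def beta(g):
    return b0 * g**3 - b1 * g**5

g_star = np.sqrt(b0 / b1)

def run_coupling(g0, t_max=100.0, dt=1e-3):
    """Integrate dg/d(ln mu) = beta(g) toward the UV with a simple Euler step."""
    g = g0
    for _ in range(int(t_max / dt)):
        g += beta(g) * dt
    return g

for g0 in (0.1, 0.5, 1.2):  # start below and above g*
    print(f"g(mu_0) = {g0:4.2f}  ->  g(UV) ~ {run_coupling(g0):.4f}   (g* = {g_star:.4f})")
```

Starting values below and above g* both flow toward the same nontrivial fixed point, which is the sense in which such a toy theory remains well defined at arbitrarily high energies.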
Asymptotic safety scenario in quantum gravity:
Steven Weinberg has proposed that the problematic UV divergences appearing in quantum theories of gravity may be cured by means of a nontrivial UV fixed point. Such an asymptotically safe theory is renormalizable in a nonperturbative sense, and due to the fixed point physical quantities are free from divergences. As yet, a general proof for the existence of the fixed point is still lacking, but there is mounting evidence for this scenario. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quantum signal processing**
Quantum signal processing:
Quantum signal processing is a Hamiltonian simulation algorithm whose query complexity matches known lower bounds. It linearizes the operator of a quantum walk using an eigenvalue transformation. The quantum walk takes a constant number of queries, so quantum signal processing's cost depends on the constant number of calls to the quantum walk operator, the number of single-qubit quantum gates that aid in the eigenvalue transformation, and one ancilla qubit.
Eigenvalue transformation:
Given a unitary $W|u_i\rangle = e^{i\theta_i}|u_i\rangle$, calculate $A = f(W) = \sum_i e^{i f(\theta_i)} |u_i\rangle\langle u_i|$. For example, if $W = \operatorname{diag}(e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3})$, then $A = \operatorname{diag}(e^{i f(\theta_1)}, e^{i f(\theta_2)}, e^{i f(\theta_3)})$.
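As a purely classical illustration of the eigenvalue transformation above (not of the quantum circuit that realises it), the following NumPy sketch diagonalises a small unitary, applies f to the eigenphases, and rebuilds A = f(W); the example unitary and the choice f(θ) = 2θ are arbitrary.

```python
import numpy as np

def eigenvalue_transform(W, f):
    """Return A = f(W) = sum_i exp(i f(theta_i)) |u_i><u_i| for a unitary W."""
    eigvals, eigvecs = np.linalg.eig(W)      # W |u_i> = exp(i theta_i) |u_i>
    thetas = np.angle(eigvals)               # recover the eigenphases theta_i
    new_eigvals = np.exp(1j * f(thetas))     # transformed eigenvalues exp(i f(theta_i))
    return eigvecs @ np.diag(new_eigvals) @ np.linalg.inv(eigvecs)

# Example: a diagonal unitary and f(theta) = 2*theta (phase doubling).
thetas = np.array([0.3, 1.1, -0.7])
W = np.diag(np.exp(1j * thetas))
A = eigenvalue_transform(W, lambda t: 2 * t)
print(np.round(np.diag(A), 4))  # equals exp(2i * theta_i)
```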
Algorithm:
Input: Given a Hamiltonian $H$, define a quantum walk operator $W$ using two d-sparse oracles, $O_H$ and $O_F$. $O_H$ accepts inputs $j$ and $k$ ($j$ is the row of the Hamiltonian and $k$ is the column) and outputs $\langle j|H|k\rangle$, so that querying gives $O_H|j\rangle|k\rangle|l\rangle = |j\rangle|k\rangle|l \oplus H_{j,k}\rangle$. $O_F$ accepts inputs $j$ and $l$ and computes the $l$-th non-zero element in the $j$-th row of $H$. Output: $e^{iHt}$. Create an input state $|\theta\rangle$. Define a controlled gate, $c\text{-}W$. Repeatedly apply single-qubit gates to the ancilla, each followed by an application of $c\text{-}W$ to the register that contains $|\theta\rangle$, on the order of $t\,d\,\lVert H\rVert_{\max} + \tfrac{\log(1/\epsilon)}{\log\log(1/\epsilon)}$ times. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Virtual black hole**
Virtual black hole:
In quantum gravity, a virtual black hole is a hypothetical micro black hole that exists temporarily as a result of a quantum fluctuation of spacetime. It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, a lifetime around the Planck time, and occur with a number density of approximately one per Planck volume. The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation $\Delta R_\mu \, \Delta x_\mu \ge \ell_P^2 = \frac{\hbar G}{c^3}$, where $R_\mu$ is the radius of curvature of a small domain of spacetime, $x_\mu$ is the coordinate of the small domain, $\ell_P$ is the Planck length, $\hbar$ is the reduced Planck constant, $G$ is the Newtonian constant of gravitation, and $c$ is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
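As a quick sanity check on the scales quoted above, the following sketch evaluates the Planck length $\ell_P = \sqrt{\hbar G/c^3}$, the Planck time and the Planck mass from standard constants; the constant values are CODATA-level approximations.

```python
import math

# Physical constants (SI units)
hbar = 1.054_571_817e-34   # reduced Planck constant, J s
G    = 6.674_30e-11        # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c    = 2.997_924_58e8      # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, m
t_planck = math.sqrt(hbar * G / c**5)   # Planck time, s
m_planck = math.sqrt(hbar * c / G)      # Planck mass, kg

print(f"Planck length ~ {l_planck:.3e} m")    # ~1.6e-35 m
print(f"Planck time   ~ {t_planck:.3e} s")    # ~5.4e-44 s
print(f"Planck mass   ~ {m_planck:.3e} kg")   # ~2.2e-8 kg
```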
Virtual black hole:
If virtual black holes exist, they provide a mechanism for proton decay. A black hole's mass increases when matter falls into it and is theorized to decrease when Hawking radiation is emitted, and the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number. The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ent-isokaurene C2-hydroxylase**
Ent-isokaurene C2-hydroxylase:
Ent-isokaurene C2-hydroxylase (EC 1.14.13.143, CYP71Z6) is an enzyme with systematic name ent-isokaurene,NADPH:oxygen oxidoreductase (hydroxylating). This enzyme catalyses the following chemical reaction: ent-isokaurene + O2 + NADPH + H+ ⇌ ent-2alpha-hydroxyisokaurene + H2O + NADP+. Ent-isokaurene C2-hydroxylase performs the initial step in the conversion of ent-isokaurene to the antibacterial oryzalides in rice, Oryza sativa. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fatigue testing**
Fatigue testing:
Fatigue testing is a specialised form of mechanical testing that is performed by applying cyclic loading to a coupon or structure. These tests are used either to generate fatigue life and crack growth data, identify critical locations or demonstrate the safety of a structure that may be susceptible to fatigue. Fatigue tests are used on a range of components from coupons through to full size test articles such as automobiles and aircraft.
Fatigue testing:
Fatigue tests on coupons are typically conducted using servo hydraulic test machines which are capable of applying large variable amplitude cyclic loads. Constant amplitude testing can also be applied by simpler oscillating machines. The fatigue life of a coupon is the number of cycles it takes to break the coupon. This data can be used for creating stress-life or strain-life curves. The rate of crack growth in a coupon can also be measured, either during the test or afterward using fractography. Testing of coupons can also be carried out inside environmental chambers where the temperature, humidity and environment that may affect the rate of crack growth can be controlled.
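As an illustration of how coupon fatigue lives are turned into a stress-life curve, the following is a minimal sketch that fits a Basquin-type relation S = A·N^b to hypothetical coupon data; the stress amplitudes and lives are invented for the example.

```python
import numpy as np

# Hypothetical coupon results: stress amplitude (MPa) and cycles to failure.
stress = np.array([400.0, 350.0, 300.0, 250.0, 200.0])
cycles = np.array([2.0e4, 8.0e4, 3.0e5, 1.5e6, 9.0e6])

# Basquin relation S = A * N**b is linear in log-log space:
# log S = log A + b * log N, so fit a straight line to the logarithms.
b, logA = np.polyfit(np.log10(cycles), np.log10(stress), 1)
A = 10 ** logA
print(f"Basquin fit: S = {A:.1f} * N^({b:.3f})")

# Predicted life at a 275 MPa stress amplitude, from inverting S = A * N**b:
N_pred = (275.0 / A) ** (1.0 / b)
print(f"Predicted life at 275 MPa: {N_pred:.2e} cycles")
```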
Fatigue testing:
Because of the size and unique shape of full size test articles, special test rigs are built to apply loads through a series of hydraulic or electric actuators.
Fatigue testing:
Actuators aim to reproduce the significant loads experienced by a structure, which in the case of aircraft may consist of manoeuvre, gust, buffet and ground-air-ground (GAG) loading. A representative sample or block of loading is applied repeatedly until the safe life of the structure has been demonstrated or failures occur which need to be repaired. Instrumentation such as load cells, strain gauges and displacement gauges are installed on the structure to ensure the correct loading has been applied. Periodic inspections of the structure around critical stress concentrations such as holes and fittings are made to determine when detectable cracks appear and to ensure any cracking that does occur does not affect other areas of the test article. Because not all loads can be applied, any unbalanced structural loads are typically reacted out to the test floor through non-critical structure such as the undercarriage.
Fatigue testing:
Airworthiness standards generally require a fatigue test to be carried out for large aircraft prior to certification to determine their safe life. Small aircraft may demonstrate safety through calculations, although typically larger scatter or safety factors are used because of the additional uncertainty involved.
Coupon tests:
Fatigue tests are used to obtain material data, such as the rate of growth of a fatigue crack, that can be used with crack growth equations to predict the fatigue life. These tests usually determine the rate of crack growth per cycle, da/dN, versus the stress intensity factor range $\Delta K = K_\text{max} - K_\text{min}$, where the minimum stress intensity factor $K_\text{min}$ corresponds to the minimum load for $R > 0$ and is taken to be zero for $R \le 0$, and $R$ is the stress ratio $K_\text{min}/K_\text{max}$. Standardised tests have been developed to ensure repeatability and to allow the stress intensity factor to be easily determined, but other shapes can be used provided the coupon is large enough to be mostly elastic.
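A minimal sketch of how such da/dN versus ΔK data is typically used: assuming illustrative Paris-law constants C and m (not values for any particular material) and a simple geometry factor, the crack length is integrated cycle by cycle from an initial flaw size, with K_min taken as zero when R ≤ 0 as described above.

```python
import math

# Illustrative Paris-law constants (not for any particular material):
# da/dN = C * (delta_K)**m, with da/dN in m/cycle and delta_K in MPa*sqrt(m).
C, m = 1.0e-11, 3.0

def delta_K(s_max, s_min, a, Y=1.0):
    """Stress intensity factor range for a crack of length a with geometry factor Y.
    K_min is taken as zero when the stress ratio R <= 0."""
    K_max = Y * s_max * math.sqrt(math.pi * a)
    K_min = Y * s_min * math.sqrt(math.pi * a) if s_min > 0 else 0.0
    return K_max - K_min

# Integrate cycle by cycle from an initial flaw to a final crack length.
a, a_final = 1.0e-3, 10.0e-3          # metres
s_max, s_min = 150.0, 15.0            # MPa, so R = 0.1
cycles = 0
while a < a_final:
    a += C * delta_K(s_max, s_min, a) ** m
    cycles += 1
print(f"Cycles to grow from 1 mm to 10 mm: {cycles}")
```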
Coupon tests:
Coupon shape A variety of coupons can be used, but some of the common ones are: compact tension coupon (CT). The compact specimen uses the least amount of material of any specimen used to measure crack growth. Compact tension specimens typically use pins that are slightly smaller than the holes in the coupon to apply the loads. This method, however, prevents the accurate application of loads near zero, and the coupon is therefore not recommended when negative loads need to be applied.
Coupon tests:
Centre Cracked Tension panel (CCT). The centre cracked tension or middle tension specimen is made from a flat sheet or bar containing two holes for attaching to grips.
Single Edge Notch Tension coupon (SENT). The single edge coupon is an elongated version of the compact tension coupon.
Instrumentation The following instrumentation is commonly used for monitoring coupon tests: Strain gauges are used to monitor the applied loading or stress fields around the crack tip. They may be placed beneath the path of the crack or on the back face of a compact tension coupon.
Coupon tests:
An extensometer or displacement gauge can be used to measure the crack tip opening displacement at the mouth of a crack. This value can be used to determine the stress intensity factor which will change with the length of the crack. Displacement gauges can also be used to measure the compliance of a coupon and the position during the loading cycle when contact between the opposite crack faces occurs in order to measure crack closure.
Coupon tests:
Applied test loads are usually monitored on the test machine with a load cell.
A travelling optical microscope can be used to measure the position of the crack tip.
Full scale fatigue tests:
Full-scale tests may be used to: Validate the proposed aircraft maintenance schedule.
Demonstrate the safety of a structure that may be susceptible to widespread fatigue damage.
Generate fatigue data. Validate expectations for crack initiation and growth pattern.
Identify critical locations. Validate software used to design and manufacture the aircraft. Fatigue tests can also be used to determine the extent to which widespread fatigue damage may be a problem.
Test article Certification requires knowing and accounting for the complete load history that has been experienced by a test article. Using test articles that have previously been used for static proof testing has caused problems, because overloads that have been applied can retard the rate of fatigue crack growth.
The test loads are typically recorded using a data acquisition system acquiring data from possibly thousands of inputs from instrumentation installed on the test article, including: strain gages, pressure gauges, load cells, LVDTs, etc.
Fatigue cracks typically initiate from high stress regions such as stress concentrations or material and manufacturing defects. It is important that the test article is representative of all of these features.
Cracks may initiate from the following sources: Fretting, typically from high cycle count dynamic loads.
Mis-drilled holes or incorrectly sized holes for interference fit fasteners.
Material treatment and defects such as broken inclusions.
Stress concentrations such as holes and fillets.
Scratches, impact damage.
Loading sequence A representative block of loading is applied repeatedly until the safe life of the structure has been demonstrated or failures occur which need to be repaired.
Full scale fatigue tests:
The size of the sequence is chosen so that the maximum loads, which may cause retardation effects, are applied sufficiently often, typically at least ten times throughout the test, so that there are no sequence effects. The loading sequence is generally filtered to eliminate small non-fatigue-damaging cycles that would take too long to apply. Two types of filtering are typically used, as sketched after this list: deadband filtering eliminates small cycles that completely fall within a certain range, such as +/-3g.
Full scale fatigue tests:
rise-fall filtering eliminates small cycles whose range is less than a certain value, such as 1g. The testing rate of large structures is typically limited to a few Hz and needs to avoid the resonance frequency of the structure.
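A minimal sketch of the two filtering ideas described above, applied to a hypothetical sequence of load turning points; the ±3 g deadband and 1 g rise-fall threshold are just the example values mentioned in the text, and real filtering implementations are considerably more careful about preserving cycle pairs.

```python
def deadband_filter(sequence, low, high):
    """Remove turning points that lie entirely inside the deadband [low, high],
    e.g. low, high = -3.0, 3.0 for a +/-3 g deadband."""
    return [p for p in sequence if not (low <= p <= high)]

def rise_fall_filter(sequence, min_range):
    """Remove successive turning points whose rise or fall is smaller than
    min_range (e.g. 1 g), keeping the larger excursions."""
    if not sequence:
        return []
    kept = [sequence[0]]
    for p in sequence[1:]:
        if abs(p - kept[-1]) >= min_range:
            kept.append(p)
    return kept

# Hypothetical turning-point sequence in g:
loads = [0.2, 4.5, -0.5, 5.0, 4.6, -2.0, 6.0, 0.1]
print(deadband_filter(loads, -3.0, 3.0))   # drops the small mid-band points
print(rise_fall_filter(loads, 1.0))        # drops the 5.0 -> 4.6 small excursion
```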
Full scale fatigue tests:
Test rig All components that are not part of the test article or instrumentation are termed the test rig. The following components are typically found in full-scale fatigue tests: Whiffletrees. In order to apply the correct loads to various parts of the structure, a mechanism known as a whiffletree is used to distribute the loads from a loading actuator to the test article. Loads applied to a central point are distributed through a series of pin-jointed connected beams to produce known loads at the end connections. Each end connection is typically attached to a pad which is bonded onto the structure, such as an aircraft wing. Hundreds of pads are usually applied to reproduce the aerodynamic and inertial loads seen on a wing. Because a whiffletree consists of tension linkages, it is unable to apply compressive loads, and therefore independent whiffletrees are typically used on the upper and lower sides of a wing in fatigue tests.
Full scale fatigue tests:
Hydraulic, electromagnetic or pneumatic actuators are used to apply loads to the structure, either directly or through the use of a whiffletree to distribute the loads. A load cell is placed inline with the actuator and is used by the load controller to control the loads into the actuator. When many actuators are used on a flexible test structure, there may be cross-interaction between the different actuators. The load controller must ensure that spurious loading cycles are not applied to the structure as a result of this interaction.
Full scale fatigue tests:
Reaction restraints. Many of the loads, such as aerodynamic and inertial forces, are reacted by internal forces which are not present during a fatigue test. Hence, the loads are reacted out of the structure at non-critical points such as the undercarriage or through restraints on the fuselage.
Linear variable differential transformer can be used to measure the displacement of critical locations on the structure. Limits on these displacements can be used to signal when a structure has failed and to automatically shut down the test.
Non-representative structure. Some test structure may be expensive or unavailable and is typically replaced on the test article with an equivalent structure. Structure that is close to actuator attachment points may see unrealistic loads that make these areas non-representative.
Full scale fatigue tests:
Instrumentation The following instrumentation is typically used on a fatigue test: strain gauges, accelerometers, displacement gauges, load cells, crack sensors and structural health monitoring sensors. It is important to install any strain gauges on the test article that are also used for monitoring fleet aircraft. This allows the same damage calculations to be performed on the test article that are used to track the fatigue life of fleet aircraft. This is the primary way of ensuring fleet aircraft do not exceed the safe life determined from the fatigue test.
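The damage calculations mentioned above are commonly of the Palmgren–Miner cumulative-damage type; the following is a minimal sketch with made-up cycle counts and S-N lives showing how a damage index would be computed identically from test-article or fleet strain-gauge data.

```python
# Miner's rule: damage D = sum(n_i / N_i), where n_i is the number of cycles
# experienced at stress level i and N_i is the life at that level taken from
# the S-N curve.  Failure is predicted when D reaches 1.
# The cycle counts and lives below are purely illustrative.
spectrum = [
    # (cycles experienced, cycles to failure at that stress level)
    (5_000,      50_000),
    (20_000,    400_000),
    (100_000, 5_000_000),
]

damage = sum(n / N for n, N in spectrum)
print(f"Accumulated damage index: {damage:.3f}")
print(f"Fraction of safe life consumed: {100 * damage:.1f}%")
```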
Full scale fatigue tests:
Inspections Inspections form a component of a fatigue test. It is important to know when a detectable crack occurs in order to determine the certified life of each component in addition to minimising the damage to surrounding structure and to develop repairs that have minimal impact on the certification of the adjacent structure. Non-destructive inspections may be carried out during testing and destructive tests may be used at the end of testing to ensure the structure retains its load carrying capacity.
Full scale fatigue tests:
Certification Test interpretation and certification involves using the results from the fatigue test to justify the safe life and operation of an item. The purpose of certification is to ensure the probability of failure in service is acceptably small. The following factors may need to be considered: number of tests; symmetry of the test structure and the applied loading; installation and certification of repairs; scatter factors; material and manufacturing process variability; environment; criticality. Airworthiness standards typically require that an aircraft remains safe even with the structure in a degraded state due to the presence of fatigue cracking.
Notable fatigue tests:
Cold proof loading tests of the F-111. These tests involved applying static limit loads to aircraft which had been chilled to reduce the critical fracture size. Passing the test meant that there were no large fatigue cracks present. When cracks were present, the wings failed catastrophically.
The International Follow-On Structural Fatigue Test Program (IFOSTP) was a joint venture between Australia, Canada and the U.S. to fatigue test the F/A-18 Hornet. The Australian test involved the use of electrodynamic shakers and pneumatic airbags to simulate high angle of attack buffet loads over the empennage.
de Havilland Comet suffered a series of catastrophic failures that ultimately proved to be fatigue despite being fatigue tested.
Fatigue tests on 110 Mustang wing sets were carried out to determine the scatter in fatigue life.
The novel No Highway and movie No Highway in the Sky were about the fictional fatigue test of the fuselage of a passenger aircraft.
Fatigue tests have also been used to grow fatigue cracks that are too small to be detected. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oudemansin A**
Oudemansin A:
Oudemansin A is a natural product first isolated from the basidiomycete fungus Oudemansiella mucida. Its chemical structure was determined by X-ray crystallography in 1979 and its absolute stereochemistry by total synthesis. Two closely related derivatives, oudemansin B and X, have also been isolated from other basidiomycetes. They are all biologically active against many filamentous fungi and yeasts, but with insufficient potency and stability to become useful commercial products. However, their discovery, together with the strobilurins, led to agricultural fungicides including azoxystrobin with the same mechanism of action.
Isolation and Characterization:
Oudemansin A (initially known simply as oudemansin) with R1 = R2 = H was first described in 1979, after being isolated from mycelial fermentations of the basidiomycete fungus Oudemansiella mucida. Its structure, including the relative configuration of the methoxy and adjacent methyl groups, was established by both spectroscopic methods and single crystal X-ray analysis but its absolute stereochemistry was at that time undetermined.
Isolation and Characterization:
Later it was found in cultures of the basidiomycete fungi Mycena polygramma and Xerula melanotricha. The latter fungus also produces oudemansin B, with R1 = MeO and R2 = Cl. Oudemansin X, with R1 = H and R2 = MeO was isolated from Oudemansiella radicata.
Chemical synthesis:
The oudemansins have been targets for total synthesis and in 1983, the synthesis of (-)-oudemansin A established that all three compounds have the (9S,10S)-configuration. Routes to oudemansins B and X have also been reported.
Mechanism of action as fungicides:
The fungicidal effects were shown to stem from what was then a novel mode of action, QoI inhibition. This was related to the β-methoxyacrylic acid sub-structure which this and related natural products, the strobilurins have in common. Intensive research by several agrochemical companies led to the development of useful agricultural fungicides based on the same mode of action, of which azoxystrobin is a typical example. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Locked-in syndrome**
Locked-in syndrome:
Locked-in syndrome (LIS), also known as pseudocoma, is a condition in which a patient is aware but cannot move or communicate verbally due to complete paralysis of nearly all voluntary muscles in the body except for vertical eye movements and blinking. The individual is conscious and sufficiently intact cognitively to be able to communicate with eye movements. Electroencephalography results are normal in locked-in syndrome.
Locked-in syndrome:
Total locked-in syndrome, or completely locked-in state (CLIS), is a version of locked-in syndrome wherein the eyes are paralyzed as well. Fred Plum and Jerome B. Posner coined the term for this disorder in 1966.
Signs and symptoms:
Locked-in syndrome is usually characterized by quadriplegia (loss of limb function) and the inability to speak in otherwise cognitively intact individuals. Those with locked-in syndrome may be able to communicate with others through coded messages by blinking or moving their eyes, which are often not affected by the paralysis. The symptoms are similar to those of sleep paralysis. Patients who have locked-in syndrome are conscious and aware, with no loss of cognitive function. They can sometimes retain proprioception and sensation throughout their bodies. Some patients may have the ability to move certain facial muscles, and most often some or all of the extraocular muscles. Individuals with the syndrome lack coordination between breathing and voice. This prevents them from producing voluntary sounds, though the vocal cords themselves may not be paralysed.
Causes:
Unlike persistent vegetative state, in which the upper portions of the brain are damaged and the lower portions are spared, locked-in syndrome is essentially the opposite, caused by damage to specific portions of the lower brain and brainstem, with no damage to the upper brain. Injuries to the pons are the most common cause of locked-in syndrome.
Causes:
Possible causes of locked-in syndrome include: poisoning (most frequently from a krait bite and other neurotoxic venoms, which cannot usually cross the blood–brain barrier); brainstem stroke; diseases of the circulatory system; medication overdose; damage to nerve cells, particularly destruction of the myelin sheath, caused by disease or by osmotic demyelination syndrome (formerly designated central pontine myelinolysis) secondary to excessively rapid correction of hyponatremia (>1 mEq/L/h); a stroke or brain hemorrhage, usually of the basilar artery; traumatic brain injury; and lesions of the brain stem. Curare poisoning and paralytic shellfish poisoning mimic a total locked-in syndrome by causing paralysis of all voluntarily controlled skeletal muscles. The respiratory muscles are also paralyzed, but the victim can be kept alive by artificial respiration.
Diagnosis:
Locked-in syndrome can be difficult to diagnose. In a 2002 survey of 44 people with LIS, it took almost three months to recognize and diagnose the condition after it had begun. Locked-in syndrome may mimic loss of consciousness in patients, or, in the case that respiratory control is lost, may even resemble death. People are also unable to actuate standard motor responses such as withdrawal from pain; as a result, testing often requires making requests of the patient such as blinking or vertical eye movement. Brain imaging may provide additional indicators of locked-in syndrome, as it provides clues as to whether or not brain function has been lost. Additionally, an EEG can allow the observation of sleep-wake patterns, indicating that the patient is not unconscious but simply unable to move.
Diagnosis:
Similar conditions: amyotrophic lateral sclerosis (ALS), bilateral brainstem tumors, brain death (of the whole brain or the brain stem or another part), coma (deep or irreversible), Guillain–Barré syndrome, myasthenia gravis, poliomyelitis, polyneuritis, and vegetative state (chronic or otherwise)
Treatment:
Neither a standard treatment nor a cure is available. Stimulation of muscle reflexes with electrodes (NMES) has been known to help patients regain some muscle function. Other courses of treatment are often symptomatic. Assistive computer interface technologies such as Dasher, combined with eye tracking, may be used to help people with LIS communicate with their environment.
Prognosis:
It is extremely rare for any significant motor function to return, with the majority of locked-in syndrome patients never regaining motor control. However, some people with the condition continue to live for extended periods of time, while in exceptional cases, like those of Kerry Pink, Gareth Shepherd, Jacob Haendel, Kate Allatt, and Jessica Wegbrans, a near-full recovery may be achieved with intensive physical therapy.
Research:
New brain–computer interfaces (BCIs) may provide future remedies. One effort in 2002 allowed a fully locked-in patient to answer yes-or-no questions. In 2006, researchers created and successfully tested a neural interface which allowed someone with locked-in syndrome to operate a web browser. Some scientists have reported that they have developed a technique that allows locked-in patients to communicate via sniffing.
Research:
For the first time, in 2020, a 34-year-old German patient, paralyzed since 2015 (and whose eyes later became paralyzed as well), managed to communicate through an implant capable of reading brain activity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SPG23**
SPG23:
Spastic paraplegia 23 (SPG23, autosomal recessive) is a 25 cM gene locus at 1q24-q32. A genome-wide linkage screen has associated this locus with a type of hereditary spastic paraplegia (HSP). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cysmethynil**
Cysmethynil:
Cysmethynil is a chemical compound reported to inhibit Icmt, a protein that methylates Ras proteins, which then trigger uncontrolled cell growth. If Icmt no longer activates Ras, cell growth and proliferation remain under normal control. As such, this small molecule has been investigated as a treatment for cancer. In animal models bearing multiple human tumor growths, treatment with cysmethynil causes autophagy in the cell and results in cell death and reduced tumor burden. In prostate cancer cells, cysmethynil inhibits Icmt such that the cell is arrested in the G1 phase, and this leads to autophagic cell death. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lipscombite**
Lipscombite:
Lipscombite (Fe2+,Mn2+)(Fe3+)2(PO4)2(OH)2 is a green-gray, olive green, or black phosphate mineral containing iron and manganese.
Lipscombite is often formed at meteorite impact sites where its crystals are microscopically small, because crystal-forming conditions of pressure and temperature are brief.
In the classification of non-silicate minerals, lipscombite is in the lipscombite group, which also includes zinclipscombite. This group falls within the non-silicates, category 8 (anhydrous phosphates), in the lazulite supergroup.
Discovery:
The mineral lipscombite was first made artificially and later found in nature. It was named after the chemist William Lipscomb by the mineralogist John W. Gruner, who first made it artificially. While investigating the stability relations of iron oxides, small, black, shiny crystals were obtained when a spherical iron pressure-temperature vessel was contaminated with phosphorus. The x-ray powder diffraction pattern was similar to lazulite, but unknown.
Discovery:
Gruner, a mineralogist at the University of Minnesota, gave Lipscomb, a chemistry professor there, the crystals for Lewis Katz and Lipscomb to determine the atomic structure using single-crystal x-ray diffraction. They initially called the mineral iron lazulite. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lithium niobate**
Lithium niobate:
Lithium niobate (LiNbO3) is a synthetic salt consisting of niobium, lithium, and oxygen. Its single crystals are an important material for optical waveguides, mobile phones, piezoelectric sensors, optical modulators and various other linear and non-linear optical applications. Lithium niobate is sometimes referred to by the brand name linobate.
Properties:
Lithium niobate is a colorless solid, and it is insoluble in water. It has a trigonal crystal system, which lacks inversion symmetry and displays ferroelectricity, the Pockels effect, the piezoelectric effect, photoelasticity and nonlinear optical polarizability. Lithium niobate has negative uniaxial birefringence which depends slightly on the stoichiometry of the crystal and on temperature. It is transparent for wavelengths between 350 and 5200 nanometers.
Properties:
Lithium niobate can be doped by magnesium oxide, which increases its resistance to optical damage (also known as photorefractive damage) when doped above the optical damage threshold. Other available dopants are iron, zinc, hafnium, copper, gadolinium, erbium, yttrium, manganese and boron.
Growth:
Single crystals of lithium niobate can be grown using the Czochralski process.
After a crystal is grown, it is sliced into wafers of different orientation. Common orientations are Z-cut, X-cut, Y-cut, and cuts with rotated angles of the previous axes.
Thin films Thin-film lithium niobate (e.g. for optical waveguides) can be transferred to or grown on sapphire and other substrates, using the Smart Cut (ion slicing) process or the MOCVD process. The technology is known as lithium niobate-on-insulator (LNOI).
Nanoparticles:
Nanoparticles of lithium niobate and niobium pentoxide can be produced at low temperature. The complete protocol involves a LiH-induced reduction of NbCl5 followed by in situ spontaneous oxidation into low-valence niobium nano-oxides. These niobium oxides are exposed to air, resulting in pure Nb2O5. Finally, the stable Nb2O5 is converted into lithium niobate LiNbO3 nanoparticles during the controlled hydrolysis of the LiH excess. Spherical nanoparticles of lithium niobate with a diameter of approximately 10 nm can be prepared by impregnating a mesoporous silica matrix with a mixture of an aqueous solution of LiNO3 and NH4NbO(C2O4)2, followed by 10 min of heating in an infrared furnace.
Applications:
Lithium niobate is used extensively in the telecommunications market, e.g. in mobile telephones and optical modulators. Owing to its large electro-mechanical coupling, it is the material of choice for surface acoustic wave devices. For some uses it can be replaced by lithium tantalate, LiTaO3. Other uses are in laser frequency doubling, nonlinear optics, Pockels cells, optical parametric oscillators, Q-switching devices for lasers, other acousto-optic devices, optical switches for gigahertz frequencies, etc. It is an excellent material for the manufacture of optical waveguides. It is also used in the making of optical spatial low-pass (anti-aliasing) filters.
Applications:
In the past few years lithium niobate has been finding application as a kind of electrostatic tweezers, an approach known as optoelectronic tweezers, as the effect requires light excitation to take place. This effect allows fine manipulation of micrometer-scale particles with high flexibility, since the tweezing action is constrained to the illuminated area. The effect is based on the very high electric fields generated during light exposure (1–100 kV/cm) within the illuminated spot. These intense fields are also finding applications in biophysics and biotechnology, as they can influence living organisms in a variety of ways. For example, iron-doped lithium niobate excited with visible light has been shown to produce cell death in tumoral cell cultures.
Periodically-poled lithium niobate (PPLN):
Periodically poled lithium niobate (PPLN) is a domain-engineered lithium niobate crystal, used mainly for achieving quasi-phase-matching in nonlinear optics. The ferroelectric domains point alternatively to the +c and the −c direction, with a period of typically between 5 and 35 µm. The shorter periods of this range are used for second harmonic generation, while the longer ones for optical parametric oscillation. Periodic poling can be achieved by electrical poling with periodically structured electrode. Controlled heating of the crystal can be used to fine-tune phase matching in the medium due to a slight variation of the dispersion with temperature.
Periodically-poled lithium niobate (PPLN):
Periodic poling uses the largest value of lithium niobate's nonlinear tensor, d33 = 27 pm/V. Quasi-phase matching gives maximum efficiencies that are 2/π (64%) of the full d33, about 17 pm/V. Other materials used for periodic poling are wide band gap inorganic crystals like KTP (resulting in periodically poled KTP, PPKTP), lithium tantalate, and some organic materials.
The periodic poling technique can also be used to form surface nanostructures. However, due to its low photorefractive damage threshold, PPLN finds only limited application, at very low power levels. MgO-doped lithium niobate can also be fabricated by the periodic poling method. Periodically poled MgO-doped lithium niobate (PPMgOLN) therefore extends the application range to medium power levels.
Periodically-poled lithium niobate (PPLN):
Sellmeier equations The Sellmeier equations for the extraordinary index are used to find the poling period and approximate temperature for quasi-phase matching. Jundt gives
$$n_\text{e}^2 = 5.35583 + 4.629\times10^{-7}f + \frac{0.100473 + 3.862\times10^{-8}f}{\lambda^2 - (0.20692 - 0.89\times10^{-8}f)^2} + \frac{100 + 2.657\times10^{-5}f}{\lambda^2 - 11.34927^2} - 1.5334\times10^{-2}\lambda^2$$
valid from 20 to 250 °C for wavelengths from 0.4 to 5 micrometers, whereas for longer wavelengths,
$$n_\text{e}^2 = 5.39121 + 4.968\times10^{-7}f + \frac{0.100473 + 3.862\times10^{-8}f}{\lambda^2 - (0.20692 - 0.89\times10^{-8}f)^2} + \frac{100 + 2.657\times10^{-5}f}{\lambda^2 - 11.34927^2} - \left(1.544\times10^{-2} + 9.62119\times10^{-10}\lambda\right)\lambda^2$$
which is valid for T = 25 to 180 °C, for wavelengths λ between 2.8 and 4.8 micrometers.
Periodically-poled lithium niobate (PPLN):
In these equations f = (T − 24.5)(T + 570.82), λ is in micrometers, and T is in °C.
More generally, for the ordinary and extraordinary index of MgO-doped LiNbO3:
$$n^2 \approx a_1 + b_1 f + \frac{a_2 + b_2 f}{\lambda^2 - (a_3 + b_3 f)^2} + \frac{a_4 + b_4 f}{\lambda^2 - a_5^2} - a_6\lambda^2$$
with coefficients $a_i$, $b_i$ tabulated separately for congruent LiNbO3 (CLN) and stoichiometric LiNbO3 (SLN).
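A minimal sketch that evaluates Jundt's Sellmeier equation above for the extraordinary index and then estimates the first-order quasi-phase-matching period Λ = λ / (2(n_e(λ/2) − n_e(λ))) for second harmonic generation; the 1.55 µm pump wavelength and 100 °C operating temperature are arbitrary example values.

```python
import math

def n_e(lmbda_um, T_celsius):
    """Extraordinary index of congruent LiNbO3 from the Jundt Sellmeier equation
    (valid roughly 0.4-5 um, 20-250 C)."""
    f = (T_celsius - 24.5) * (T_celsius + 570.82)
    L2 = lmbda_um ** 2
    n2 = (5.35583 + 4.629e-7 * f
          + (0.100473 + 3.862e-8 * f) / (L2 - (0.20692 - 0.89e-8 * f) ** 2)
          + (100.0 + 2.657e-5 * f) / (L2 - 11.34927 ** 2)
          - 1.5334e-2 * L2)
    return math.sqrt(n2)

# First-order QPM period for second harmonic generation of a 1.55 um pump at 100 C.
pump, T = 1.55, 100.0
period = pump / (2.0 * (n_e(pump / 2.0, T) - n_e(pump, T)))
print(f"n_e({pump} um) = {n_e(pump, T):.4f}, n_e({pump/2} um) = {n_e(pump/2, T):.4f}")
print(f"First-order QPM period: {period:.2f} um")   # on the order of 19 um
```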
Cited sources:
Haynes, William M., ed. (2016). CRC Handbook of Chemistry and Physics (97th ed.). CRC Press. ISBN 9781498754293. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Controlled language in machine translation**
Controlled language in machine translation:
Using controlled language in machine translation poses several problems.
In automated translation, the first step toward making use of a controlled language is to know what it is and to distinguish between natural language and controlled language.
The main problem in machine translation is a linguistic one. Language is ambiguous, and the system tries to model language lexically and grammatically. There are many alternatives for addressing this problem, e.g. a glossary related to the text's topic can be used.
Benefits of using a controlled language:
Using controlled language makes it possible to produce texts that are easier to read, more comprehensible and easier to retain, as well as having better vocabulary and style. Reasons for introducing a controlled language include: Documents that are more readable and comprehensible improve the usability of a product.
Controlled language guarantees objective and structured support in a typically rather subjective and unstructured environment.
Tools-driven controlled language environments enable the automation of many editing tasks and provide objective quality metrics for the authoring process.
The more restrictive and controlled the language, the more uniform and standardized the resulting source document, the higher the match rate in a translation memory system, and the lower the translation cost.
A controlled language designed for machine translation will significantly improve the quality of machine-generated translation proposals and will reduce the time and cost of editing by human translators.
Controlled language and translation:
One of the biggest challenges facing organizations that wish to reduce the cost and time involved in their translations is the fact that, even in environments that combine content management systems with translation memory technology, the percentage of un-translated segments per new document remains fairly high. While it is certainly possible to manage content at the sentence/segment level, the current best practice seems to be to chunk at the topic level. This means that reuse occurs at a fairly high level of granularity.
Sources:
AMORES CARREDANO, Jose Javier. Automatic translation systems [online]. Available at: http://quark.prbb.org/19/019046.htm [Accessed: 29 May 2011] AECMA: AECMA Simplified English: A Guide for the Preparation of Aircraft Maintenance Documentation in the International Aerospace Maintenance Language, Brussels, 1995.
Grimaila, A.; Chandioux, J.: "Made to measure solutions". In: John Newton, ed.: Computers in Translation: A Practical Appraisal, London, Routledge, 1992: 33-45.
Hartley, A.F.; Paris, C.L.: "Multi-lingual document production: from support for translating to support for authoring", Machine Translation (Special Issue on new tools for human translators) 1997; 12 (12): 109-129.
Ide, I.; Véronis, J.: "Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art", Computational Linguistics 1998; 24 (1): 1-40.
Lehrberger, L.; Bourbeau, L.: Machine Translation: Linguistic Characteristics of Machine Translation Systems and General Methodology of Evaluation, Amsterdam/Philadelphia, John Benjamins, 1988. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alpha Delta Theta (professional)**
Alpha Delta Theta (professional):
Alpha Delta Theta (ΑΔΘ) is a professional fraternity in the field of medical technology, originally for women.
History:
Alpha Delta Theta was established on February 1, 1944 by two local sororities, Alpha Delta Tau of the University of Minnesota, formed in 1926, and Tau Sigma of Marquette University, formed in 1942. It was founded to unite all women entering into or engaging in the field of medical technology, to promote social and intellectual fellowship among its members, and to raise the prestige of medical technologists by inspiring the members to greater group and individual effort.
History:
Though the Minnesota group was sixteen years older, the Marquette chapter was designated as Alpha chapter and the Minnesota group as the Beta chapter.
Alpha Delta Theta joined the Professional Panhellenic Association in 1952.
Some professional fraternities became co-educational as a result of Title IX; it is unknown whether Alpha Delta Theta followed this course, or whether it remains or remained a women's-only fraternity.
As of 2020, Alpha Iota chapter at University of the Sciences in Philadelphia is still active; others may also be active. It is listed there as a women's fraternity.
Traditions and insignia:
The colors of Alpha Delta Theta are described as the "green (of medicine) and gold (of science)." The fraternity flower is the daffodil.
The official pin is described as six-sided with a black background, and bears the Greek letters of ΑΔΘ.
The biannual publication is The Scope.
Both collegiate and graduate/alumni chapters have been created.
Chapters:
Chapter information is from Baird's Manual (20th edition); however, this record was reprinted from the 19th edition. Chapters in bold are active; chapters in italics are assumed or known to be dormant. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Syncoilin**
Syncoilin:
Discovery Syncoilin is a muscle-specific atypical type III intermediate filament protein encoded in humans by the gene SYNC. It was first isolated as a binding partner of α-dystrobrevin, as determined by a yeast two-hybrid assay. Later, a yeast two-hybrid method was used to demonstrate that syncoilin is also a binding partner of desmin. These binding partners suggest that syncoilin acts as a mechanical "linker" between the sarcomere Z-disk (where desmin is localized) and the dystrophin-associated protein complex (where α-dystrobrevin is localized). However, the specific in vivo functions of syncoilin have not yet been determined.
Syncoilin:
Through the use of Western blotting techniques, a second species of syncoilin was found. This species was 55 kDa in size, whereas the original species of syncoilin was 64 kDa. This discovery inspired scientists to use gene splicing to identify two new isoforms called SYNC2 and SYNC3. Abnormally high levels of syncoilin have been shown to be a characteristic of neuromuscular wasting diseases such as desminopathy and muscular dystrophy. Therefore, syncoilin is being explored as a promising marker of neuromuscular disease.
Structure:
Syncoilin is characterized as an intermediate filament and contains the key structural features that make up intermediate filaments, such as a head region, linker regions, alpha helices, and a tail region. Each protein classified as an intermediate filament varies in the size and shape of its head and tail regions. More specifically, syncoilin is structurally defined by its central rod domain that forms a coil made up of four subunits, an α-helical region separated by flexible linkers, an N-terminal head domain, and a C-terminal tail domain. The syncoilin isoform SYNC3 has a very different structure from the original protein filament: it has a truncated rod domain and lacks a C-terminal tail region. Because the tail of syncoilin is so short, it is hypothesized that this affects the ability of syncoilin to form other filaments. Syncoilin differs from other type III intermediate filaments in that it has a unique N-terminus unlike that of any other protein. Syncoilin is not capable of forming dimers spontaneously like other filaments.
Function:
The most important job of syncoilin is to provide linkage between DAPC proteins and α-dystrobrevin. Studies have yet to determine whether the binding of syncoilin to DAPC proteins and α-dystrobrevin occurs simultaneously. Syncoilin, like other intermediate filaments, is also necessary for supporting the structure of the muscle fiber. However, syncoilin does not serve the same function as most other intermediate filaments. It can be used in an attempt to repair muscle that has been damaged, through up-regulation. Studies have shown that up-regulation of syncoilin is not only harmful to muscle fibers; it has also been shown to help with muscle membrane stability. Hepatic stellate cells are a specialized cell type in the body that require syncoilin intermediate filaments. When an injury occurs to the liver, expression of intermediate filaments such as syncoilin increases, along with an increase in α-smooth muscle actin (α-SMA). Syncoilin is now used to help mark activated hepatic stellate cells, after being identified in an experiment done on primary liver cells in mice. In this study, the syncoilin isoforms SYNC1 and SYNC2 were highly expressed during in vivo activation of hepatic stellate cells.
Location:
Syncoilin is found in skeletal and cardiac muscle, like its binding partner desmin. The region of skeletal muscle that houses most syncoilin is the sarcolemma. When muscle tissue is dissected further into individual muscle fibers, syncoilin can be found at neuromuscular junctions. It is also enriched in areas such as the perinuclear space and the myotendinous junction. When either α-dystrobrevin or desmin is lacking, the expression of syncoilin changes in order to compensate for the loss of one or both proteins. Along with another intermediate filament called peripherin, syncoilin can also be found in the central nervous system and the peripheral nervous system. The spinal cord expresses variants of the SYNC gene as two alternate isoforms, SYNC1 and SYNC2; however, the two isoforms predominate in different parts of the nervous system. SYNC1 is more typically found in the brain, and SYNC2 is typically found in the peripheral nervous system and spinal cord.
Clinical significance:
Muscular diseases When skeletal and cardiac muscle contain increased levels of syncoilin, this can often lead to disease in the muscle tissue.
Clinical significance:
Examples of diseases that syncoilin has been linked to include Duchenne muscular dystrophy, Becker muscular dystrophy, central core disease, congenital muscular dystrophies, and neurogenic disorders. Syncoilin strongly interacts with the filament desmin, which suggests that a mutation in syncoilin might affect the bond between desmin and the sarcolemma. This may result in desmin-related myopathy. Another cause of muscular disease is a mutation in the SYNC gene.
Clinical significance:
Gastric diseases Mutations that affect syncoilin, or a lack of syncoilin, can be contributing factors leading to cellular necrosis. The SYNC gene, which encodes syncoilin, was found to be expressed at higher levels in gastric cancer tissue than in regular gastric tissue. Within the gastric cancer tissues, syncoilin was found primarily in the cytoplasm and at the cell membrane. A recent study shows that gastric cancer tissues with high SYNC expression reveal a strong correlation with low survival rates for the individual. More specifically, higher expression of SYNC in gastric cancer tissue suggests that the individual is at a more advanced stage of gastric cancer and has a potentially more aggressive subtype of the disease.
**IBM 1401 Symbolic Programming System**
IBM 1401 Symbolic Programming System:
The IBM 1401 Symbolic Programming System (SPS) was an assembler developed by Gary Mokotoff of the IBM Applied Programming Department for the IBM 1401 computer, the first of the IBM 1400 series. One source indicates that "This programming system was announced by IBM with the machine." SPS-1 could run on a low-end machine with 1.4K of memory; SPS-2 required at least 4K of memory.
IBM 1401 Symbolic Programming System:
SPS-1 punched one card for each input instruction in its first pass, and this deck had to be read during pass 2. At the University of Chicago and many other locations, SPS-1 was replaced by assemblers that took advantage of the commonly available 4K memory configuration to pack the output of pass one into several instructions per card. Other assemblers were written that placed the pass-one output into memory for small programs. As the 1400 series matured, additional assemblers, programming languages and report generators became available, replacing SPS at most sites.
**Furoshiki**
Furoshiki:
Furoshiki (風呂敷) are traditional Japanese wrapping cloths used to wrap and transport goods. Consideration is placed on the aesthetics of furoshiki, which may feature hemmed edges, thicker and more expensive materials, and hand-painted designs; however, furoshiki are much less formal than fukusa, and are not generally used to present formal gifts.
While they come in a variety of sizes, they are typically almost square: the height is slightly greater than the width. Traditional materials include silk or cotton, but modern furoshiki are available in synthetic materials like rayon, nylon, or polyester.
History:
The first furoshiki cloths were tsutsumi ("wrapping"), used during the Nara period as protection for precious temple objects. By the Heian period, cloths called hiratsusumi (平裏/平包), meaning "flat wrap", were used to wrap clothes. These cloths came to be known as furoshiki during the Muromachi period; the term furoshiki (literally "bath spread", from furo (風呂, "bath") and shiki (敷, "spread")) is said to have come about after high-ranking visitors to bathhouses packed their belongings in cloth decorated with their family crest. They became popular in the Edo period with increased access to bathhouses by the general public; moreover, cloths with family crests grew in demand as common people gained the right to have family crests during the Meiji period. Modern furoshiki may be made from fabrics of various thicknesses and price points, including silk, chirimen, cotton, rayon, and nylon. The cloth is typically square, and while sizes vary, the most common are 45 by 45 centimetres (18 in × 18 in) and 70 by 70 centimetres (28 in × 28 in). Furoshiki usage declined in the post-war period, in large part due to the proliferation of paper and plastic bags available to shoppers. In recent years, however, furoshiki have seen renewed interest as environmental protection has become a greater concern. In 2006, the Japanese Minister of the Environment, Yuriko Koike, showcased a specially designed furoshiki cloth to promote environmental awareness. In 2020, The Observer reported a growing interest in furoshiki in the UK, in part as a response to its perceived greater environmental sustainability compared to traditional single-use wrapping paper.
**Gazumping**
Gazumping:
Gazumping occurs when a seller (especially of property) accepts a verbal offer (a promise to purchase) on the property from one potential buyer, but then accepts a higher offer from someone else. It can also refer to the seller raising the asking price or asking for more money at the last minute, after previously verbally agreeing to a lower one. In either case, the original buyer is left in a bad situation, and either has to offer a higher price or lose the purchase. The term gazumping is most commonly used in the United Kingdom and Ireland, although similar practices can be found in some other jurisdictions.
England and Wales:
With buoyant property prices in the British residential property market of the late 1980s, gazumping became commonplace in England and Wales, because a buyer's offer is not legally binding even after acceptance of the offer by the vendor. A contract for the sale of land must be in writing, a requirement of English law that dates back to the Statute of Frauds of 1677 and is restated by s.2 of the Law of Property (Miscellaneous Provisions) Act 1989. This requirement was originally intended to promote good faith and certainty in land transactions and to prevent dishonesty.
England and Wales:
When the owner accepts the offer on a property, the buyer will usually not yet have commissioned a building survey, nor will the buyer have had the opportunity to perform recommended legal checks. The offer to purchase is made "subject to contract" and thus, until written contracts are exchanged, either party can pull out at any time. It can take as long as 10–12 weeks for formalities to be completed, and if the seller is tempted by a higher offer during this period, it leaves the buyer disappointed and out-of-pocket. Asking price has no impact on whether a property will be "gazumped", but location does: it is more common in London and the North East. Accepting a higher offer after having already accepted another is what is known as gazumping.
England and Wales:
When property prices are in decline, the practice of gazumping becomes rare. The term 'gazundering' has been coined for the opposite practice, whereby the buyer waits until everybody is poised to exchange contracts before lowering the offer on the property, threatening the collapse of a whole chain of house sales waiting for the deal to go through. 'Gazanging' describes a similar situation, wherein a seller pulls out of a sale entirely, expecting to get a better asking price or offer once the market improves.
England and Wales:
The term may be derived from the Yiddish word 'gezumph', meaning to overcharge or cheat.
Scotland:
Scots law and practice makes the problem of gazumping a rarity in Scotland. In the Scottish system of conveyancing, buyers either obtain a survey prior to making a bid to the seller's solicitor or make an offer "subject to survey". Sellers normally set a closing date for written offers, then provide written acceptance of the chosen bid. The agreement becomes binding when a seller's solicitor delivers a signed written acceptance of a buyer's offer. Should the seller attempt to accept a higher bid after the contracts have been legally finalised by a written offer and acceptance, their solicitor will refuse to act for them, as this, according to the Law Society of Scotland code of practice, would be professional misconduct. As in England, all contracts for the sale of land must be evidenced in writing, signed by or on behalf of each party. In Scotland, the parties' solicitors sign on their behalf, unlike in England, where buyer and seller both sign a contract which has been produced in duplicate form, with the duplicates then being exchanged to effect a binding contract. It is often wrongly claimed that gazumping is a rarity in Scotland because it is said that an oral agreement on a property deal is legally binding; while the law on contract differs from the law in England, the rarity is due to the different system of conveyancing.
Scotland:
In Scotland, however, an estate agent, acting on behalf of the seller, can initiate instances of another form of gazumping. Once a closing date for written offers has been reached and an estate agent has given an oral acceptance of the chosen bid, the estate agent can then attempt to induce a bidding war between the successful buyer and a rival, who may be fictional, in an attempt to increase the offer made by each party. In such circumstance, there is little recourse for a successful buyer who, despite having been informed orally that their offer has been accepted, is then informed orally that their offer has been rejected in favour of a higher bid. Such situations only occur at an early stage of the conveyancing process, prior to any written acceptance of an offer being given by the seller's solicitor. Often they result from the legal requirement on the part of estate agents to advise a seller of any higher offer received prior to written confirmation of an orally accepted offer being given, including those received after a closing date.
Scotland:
In Scotland, gazundering is possible where the buyer has insufficient assets to be worth suing, but is not common.
United States:
The term gazumping is not used in the United States. Every state has different laws and traditions, but buyers typically make a written offer that, when accepted (signed) by the seller, is in most localities binding on the seller. This is known as a "purchase and sale" contract, which may have conditions. U.S. residential purchase contracts typically contain an inspection clause, a short period during which the buyer can inspect the property and back out of the contract with the full return of the earnest money, if the property does not pass the buyer's inspections. The seller, however, cannot, except in some states, back out during the inspection period. New Jersey is one state where the seller has a "legal review" period, during which they can back out of an accepted contract.
Gazundering:
"Gazundering" is where a prospective buyer (especially of a property) offers to buy the property at an agreed price but subsequently threatens to withdraw the offer unless the seller agrees to a lower price. There are two circumstances where this happens: In a falling market, where the value of the property has fallen significantly in the period between the initial agreement and the expected sale date.
Gazundering:
Just before formal contracts are signed, when the buyer believes the seller is in a weak position (a practice generally perceived as unethical). The seller may be in a weak position, having entered into other commitments such as a chain, and would be very reluctant to pull out.
**Christmas tree stand**
Christmas tree stand:
A Christmas tree stand is an object designed to support a cut natural tree or an artificial Christmas tree. Christmas tree stands appeared as early as 1876 and have had various designs over the years. Stands designed for natural trees have a water-well which, in many cases, may not hold enough water to adequately supply the cut tree. Some specialty Christmas tree stands have value on the secondary antiques market.
History:
Christmas tree stands have been around at least since 1876, when Arthur's Illustrated Home Magazine suggested converting a Christmas tree stand into a stand for flowers. In that same year, Hermann Albrecht of Philadelphia, Pennsylvania received U.S. Patent 183,100 and U.S. Patent 183,194 as two of the first Christmas tree stand patents issued in the United States. In 1892, carpenter Edward Smith suggested a homemade Christmas tree stand, noting, "As Christmas is near at hand, I will tell how I made a pretty stand for a Christmas tree: I took a board 14x14 inches, and one inch thick; around this I made a tiny paling fence — there is a post at each corner set firmly into a 1/4-inch hole, and a gate at the middle of one side with little posts, the same as at the corner. The palings are about 1/8-inch thick, and 1/2 inch wide, and the cross pieces are just a little thicker. The best tacks I could find for tacking the palings to the cross-pieces were pins cut in two, using only the head ends. I then painted the fence white, and the board grass-green. In the center of this is a hole into which to fasten the tree." In 1919, the American monthly magazine Popular Science touted a new type of Christmas-tree stand. The stand featured a broad, cone-shaped base that included an inlet for water and the Christmas tree trunk. Water placed in the galvanized iron shell would give considerable weight to the stand to steady the tree. Once the tree trunk was inserted into the water inlet, the tree would be kept fresh and green much longer than without the water supply.
Design:
Christmas tree stands designed for natural Christmas trees often have a water-well in them; natural trees require water so that they do not dry out. In fact, growers state that the secret to a long-lived natural Christmas tree is a lot of water, so they often recommend a tree stand that has a large water reservoir. Washington State University plant pathologist Gary Chastagner conducted research into various models of Christmas tree stands and found that just six of 22 different stands tested had adequate water capacity for Christmas trees larger than 4 inches (10 cm) in diameter. Not all Christmas tree stands are manufactured for the specific purpose; one example would be a makeshift Christmas tree stand made from an old two-gallon (~7.5 l) bucket or can. Another example of a homemade-type Christmas tree stand is a converted cast iron garden urn.
Types of tree stands:
Christmas tree stands sometimes had a more specialized role when it came to aluminum Christmas trees. The common method of illumination for these artificial Christmas trees was a floor-based "color wheel" which was placed under the tree. The color wheel featured varyingly colored segments on a clear plastic wheel; when switched on, the wheel rotated and a light shone through the clear plastic, casting an array of colors throughout the tree's metallic branches. Sometimes this spectacle was enhanced by a rotating Christmas tree stand.
Secondary value:
Some Christmas tree stands were uniquely designed and have value on the secondary antiques market. One example is a 1950s decorative Christmas tree stand designed by the National Outfit Manufacturers Association, made of lithographed tin and featuring a holiday design. A Christmas tree stand such as the lithographed tin design could fetch up to $250 on the open market.
**Fitoterapia**
Fitoterapia:
Fitoterapia is a peer-reviewed medical journal covering research on the use of medicinal plants and bioactive natural products of plant origin in pharmacotherapy.
According to the Journal Citation Reports, Fitoterapia has a 2022 impact factor of 3.4. Since 2023, the editor-in-chief has been Orazio Taglialatela-Scafati.
**Three thirteen**
Three thirteen:
Three thirteen is a variation of the card game Rummy. It is an eleven-round game played with two or more players. It requires two decks of cards with the jokers removed. Like other Rummy games, once the hands are dealt, the remainder of the cards are placed face down on the table. The top card from the deck is flipped face up and put beside the deck to start the discard pile.
Gameplay:
Each player attempts to meld all of the cards in their hand into sets.
A set may be either: three or more cards of the same rank, such as 7♥ 7♠ 7♣, or a pair (or more) of one rank plus a wild card.
Gameplay:
A sequence of three or more cards of the same suit, such as 4♥ 5♥ 6♥ 7♥, using a wild card to fill in where necessary to form the sequence; the wild card does not need to be of the suit. (Cards of the same suit but not in sequence do not count as a set.) Sets can contain more than three cards; however, the same card cannot be included in multiple sets.
Gameplay:
Once a player has melded all of their cards into sets, they "go out". They must still discard when "going out", and the remaining players are given one more draw to better their hands. The winner of a game of "Three thirteen" is the player who, at the end of the final round, has accumulated the fewest points.
Gameplay:
Dealing The first dealer, chosen at random, deals three cards to each player. In each successive round, the deal passes to the left. In the second round, the dealer deals four cards to each player. With each successive round, the number of cards dealt to start the round increases until the eleventh and final round in which thirteen cards each are dealt.
Gameplay:
Playing The player to the dealer's left is the first to play, and play moves clockwise. When it is a player's turn, they choose to draw either the top card from the discard pile or the top card from the deck. The player must then discard one card from their hand, placing it on top of the discard pile to conclude their turn. Play must go around the table once before a player may go out.
Gameplay:
Wild cards In each round there is a designated wild card. The wild card is the card equal to the number of cards dealt. In the first round, three cards are dealt, so Threes are wild cards. In the second round four cards are dealt, so Fours are wild. When 11, 12, and 13 cards are dealt, the J, Q, and K are the respective wild cards. Wild cards can be used in place of any other card in making a group or sequence. A player may use more than one wild card in any set including a set made up of only wild cards.
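The mechanics above (cards dealt per round, the matching wild rank, and set validity) are simple enough to express in a short sketch. The following Python is purely illustrative and not taken from any official rule set; in particular, the point values assigned to leftover cards (rank value, with A = 1 and J/Q/K = 11/12/13) are an assumption, since the base rules leave them implicit and the variations below adjust them.

```python
# Minimal sketch of Three Thirteen mechanics (illustrative only, not official rules).
# Assumption: unmelded cards score their rank value (A=1 ... K=13); the published
# variations listed below assign different values to aces and face cards.

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
VALUE = {r: i + 1 for i, r in enumerate(RANKS)}

def cards_dealt(round_no: int) -> int:
    """Rounds 1..11 deal 3..13 cards respectively."""
    return round_no + 2

def wild_rank(round_no: int) -> str:
    """The wild rank equals the number of cards dealt (3s in round 1 ... Kings in round 11)."""
    return RANKS[cards_dealt(round_no) - 1]

def is_group(cards: list[tuple[str, str]], wild: str) -> bool:
    """Three or more cards of one rank; wild cards may stand in for any card."""
    natural_ranks = [rank for rank, _suit in cards if rank != wild]
    return len(cards) >= 3 and len(set(natural_ranks)) <= 1

def is_run(cards: list[tuple[str, str]], wild: str) -> bool:
    """Three or more consecutive ranks of one suit; wilds fill gaps and may be any suit."""
    naturals = [(VALUE[r], s) for r, s in cards if r != wild]
    wilds = len(cards) - len(naturals)
    if len(cards) < 3 or len({s for _, s in naturals}) > 1:
        return False
    values = sorted(v for v, _ in naturals)
    gaps = sum(hi - lo - 1 for lo, hi in zip(values, values[1:]))
    return len(values) == len(set(values)) and gaps <= wilds

def leftover_points(cards: list[tuple[str, str]]) -> int:
    """Points counted against a player for cards not placed into any set."""
    return sum(VALUE[r] for r, _ in cards)

if __name__ == "__main__":
    wild = wild_rank(2)                                         # round 2: four cards dealt, 4s wild
    print(wild)                                                 # -> "4"
    print(is_run([("5", "H"), ("4", "D"), ("7", "H")], wild))   # 5H - wild - 7H -> True
    print(leftover_points([("9", "S"), ("J", "C")]))            # 9 + 11 = 20
```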
Gameplay:
Scoring At the end of a given round, each of a player's cards that cannot be placed into a set counts towards their score.
Variations Some rules designate Jokers as additional wild cards. In that case, a joker left in a player's hand at the conclusion of a round counts as 20 points.
Some versions make the ace 13, 15, or 20 points.
According to some rules, Aces can be used as high or low in a sequence. In this case an Ace remaining in your hand at the end costs 15 points, rather than one.
Some rules score only 10 points for Jacks, Queens and Kings.
An extra round (a twelfth) or two (a thirteenth) in which 14 and 15 cards are dealt to each player, with aces and 2s wild respectively. Some call these rounds "Fourteens" or "Fifteens".
In other variations, Jokers as wild cards can be discarded onto any pile of any other player and count for no points.
Another variation plays 22 rounds, starting from 3 up to 13 and then back down from 13 to 3. A further rule variation is that each player places their discard in a pile in front of themselves; other players then have the choice to pick from the various discard piles. This makes the game move much faster.
In games with a "Redemption round", after a player goes out, the other players get one last play and can lay down any melds on their own table or deadwood cards on other players' tables. If it is possible to get rid of all one's cards in the redemption round, the player will receive 0 points.
Points are doubled on the 11th (Jacks), 12th (Queens), and 13th (Kings) round.
Some rules state that a player can make a set that consists only of wild cards.
Some play that if a player goes out incorrectly, this is counted as +20 points.
Double-naughts: A rule where if each player goes out with zero points on a given hand, the hand is replayed as if it never happened.
Some variations allow for a fixed negative score for the first player to go out each round. Common point values include -5, -10, or a negative value equal to the wild card. (-8 points if the 8 card is wild)
**Frameless construction**
Frameless construction:
In cabinetmaking, frameless construction of cabinets uses flat panels of engineered wood — usually particle board, plywood or medium-density fibreboard — rather than the older frame and panel construction.
Frameless construction:
A common construction method for frameless cabinets originated in Europe after World War II and is known as the 32-mm system or European system. The name comes from the 32-millimetre spacing between the system holes used for construction and installation of hardware typically used for doors, drawers and shelves. There are numerous 32mm-based cabinet systems; one such system is Hettich's System 32. In North America, it is also often referred to as "European Cabinetry".
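As a rough illustration of what the 32-mm spacing implies for layout, the sketch below generates a column of system-hole positions along a cabinet side panel. The panel height and the offset of the first hole from the edge are example parameters chosen only for illustration; they are not dimensions prescribed by System 32 or any other published 32-mm standard.

```python
# Illustrative sketch: laying out a column of system holes at 32 mm spacing
# along a cabinet side panel. All dimensions below are example values, not
# figures taken from any specific 32-mm (System 32) specification.

PITCH_MM = 32  # the defining spacing between consecutive system holes

def system_hole_positions(panel_height_mm: float, first_hole_offset_mm: float) -> list[float]:
    """Return hole-centre heights (mm from the bottom edge) at 32 mm increments."""
    positions = []
    y = first_hole_offset_mm
    while y <= panel_height_mm - first_hole_offset_mm:
        positions.append(y)
        y += PITCH_MM
    return positions

if __name__ == "__main__":
    # Example: a 720 mm tall side panel with the first hole 9.5 mm from the edge.
    holes = system_hole_positions(720, 9.5)
    print(len(holes), "holes:", holes[:3], "...", holes[-1])
```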
Frameless construction:
With frameless or full access cabinets, thicker sides (boxes) keep the cabinet more stable while avoiding the use of the front frame found in face-frame cabinets. Frameless cabinets are usually edgebanded to finish the front faces. By eliminating the front frame, there is more room to place large objects inside, and more usable space.
**Inclusion (taxonomy)**
Inclusion (taxonomy):
In taxonomy, inclusion is the process whereby two species that were believed to be distinct are found in fact to be the same and are thus combined as one species. Which name is kept for this unified species is sometimes a cause of debate, but generally it is the earlier-named one, and the other species is said to be "included" within this one.
Inclusion (taxonomy):
Inclusion is far more common in paleontology than in the biology of living species, although it is not unheard of in the latter. When it occurs with recent or modern species, it is usually the result of a species having wide geographical dispersion.
**TIP TIG**
TIP TIG:
TIP TIG is a subset of gas tungsten arc welding (GTAW) that uses a mechanism called filler wire agitation to enhance molten weld pool dynamics. This agitation has been found to enhance weld puddle fluidity and release evolving gases, reducing the chances of inclusions and porosity, and also to separate out impurities. Welding systems that employ the TIP TIG method are in fact wire-fed GTAW systems that additionally induce a vibratory effect on the wire and apply a hot-wire current to the filler metal before it even enters the weld pool. The vibration comes from a linear forward-and-backward motion applied mechanically by a custom wire feeder system, while a secondary power source supplies the hot-wire current. The TIP TIG process was invented and patented in 1999 by the Austrian engineer Siegfried Plasch, with the aim of achieving a higher deposition rate than the regular GTAW process.
**Perceptual learning**
Perceptual learning:
Perceptual learning is the learning of improved perceptual skills, such as differentiating two musical tones from one another, or of categorizations of spatial and temporal patterns relevant to real-world expertise. Examples of this may include reading, seeing relations among chess pieces, and knowing whether or not an X-ray image shows a tumor.
Sensory modalities may include visual, auditory, tactile, olfactory, and taste. Perceptual learning forms important foundations of complex cognitive processes (i.e., language) and interacts with other kinds of learning to produce perceptual expertise. Underlying perceptual learning are changes in the neural circuitry. The ability for perceptual learning is retained throughout life.
Category learning vs. perceptual learning:
It can be fairly easy to confuse category learning with perceptual learning. Category learning assumes "a fixed, pre-established perceptual representation to describe the objects to be categorized," and it builds upon perceptual learning because the learner must first be able to distinguish what the objects are. Perceptual learning, by contrast, is defined as a "change in perception as a product of experience": for example, listeners exposed to words that sound similar in their native language come to hear the difference between them, whereas in category learning the task is to sort the two into separate categories.
Examples:
Basic sensory discrimination Laboratory studies reported many examples of dramatic improvements in sensitivities from appropriately structured perceptual learning tasks. In visual Vernier acuity tasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good at this task, but after training, observers' thresholds have been shown to improve by as much as 6-fold. Similar improvements have been found for visual motion discrimination and orientation sensitivity.
Examples:
In visual search tasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptual learning with visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi, the time it took for subjects to search for an oblique line among a field of horizontal lines was found to improve dramatically, from about 200ms in one session to about 50ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present on the search field. Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers. Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals.
Examples:
Neuropsychology of perceptual category learning Multiple different category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum." The studies showed that there was significant involvement of the striatum and less involvement of the medial temporal lobes in category learning. In people who have striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit. In contrast, the complexity of the rule is predictive of an information-integration category learning deficit.
Examples:
In the natural world Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear." Experience provides the sensory input to our perceptions as well as knowledge about identities. People who are less knowledgeable about different races and cultures tend to develop stereotypes precisely because of that lack of knowledge. Perceptual learning describes a deeper relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This raises important issues about the ontology of sensory experience and the relationship between cognition and perception.
Examples:
Money offers an example: we look at money every day and recognize it at a glance, but when asked to pick out the correct coin from among similar coins with only slight differences, we may struggle to find the difference. This is because we see money every day without ever directly trying to find such differences. Perceptual learning is learning to perceive differences and similarities among stimuli based on exposure to the stimuli. A study conducted by Gibson in 1955 illustrates how exposure to stimuli can affect how well we learn details of different stimuli.
Examples:
As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category. These effects are described as the result of categorical perception. Categorical perception effects do not transfer across domains.
Examples:
By 10 months of age, infants tend to lose sensitivity to differences between speech sounds that belong to the same phonetic category in their native language. They learn to pay attention to salient differences between native phonetic categories, and to ignore the less language-relevant ones. In chess, expert players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess. Similarly, shortly after her baby's birth, a mother becomes able to decipher differences in the baby's cries: because she has become more sensitive to the differences, she can tell which cry means the baby is hungry, which means it needs to be changed, and so on.
Examples:
Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. The word superiority effect demonstrates this—people are often much faster at recognizing words than individual letters. In speech phonemes, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belonged to different phonemic categories than when they were two variants of the same phoneme, even when physical differences were equated between each pair of syllables. Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music, identify tumors in x-rays, sort day-old chicks by gender, taste the subtle differences between beers or wines, identify faces as belonging to different races, detect the features that distinguish familiar faces, discriminate between two bird species ("great blue crown heron" and "chipping sparrow"), and attend selectively to the hue, saturation and brightness values that comprise a color definition.
Brief history:
The prevalent idiom that “practice makes perfect” captures the essence of the ability to reach impressive perceptual expertise. This has been demonstrated for centuries and through extensive amounts of practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, is the earliest example of tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (JND, Just Noticeable Difference) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, this improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. fingertips). William James devoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice". He noted examples and emphasized the importance of perceptual learning for expertise. In 1918, Clark L. Hull, a noted learning theorist, trained human participants to categorize deformed Chinese characters. For each category, he used 6 instances that shared some invariant structural property. People learned to associate a sound as the name of each category, and more importantly, they were able to classify novel characters accurately. This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment. It was not until 1969, however, that Eleanor Gibson published her seminal book Principles of Perceptual Learning and Development and defined the modern field of perceptual learning. She established the study of perceptual learning as an inquiry into the behavior and mechanism of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms. Thus, most of this research focused on characterizing basic perceptual capacities of young infants rather than on perceptual learning processes.
Brief history:
Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest sensory levels of sensory systems. Our increased understanding of the physiology and anatomy of our cortical systems has been used to connect the behavioral improvement to the underlying cortical areas. This trend began with earlier findings of Hubel and Wiesel that perceptual representations at sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that though neuroplasticity is diminished, it is not eliminated when the critical period ends. Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g. primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.
Brief history:
A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications compared with improvement in higher-level readout stages. Early interpretations, such as that suggested by William James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes at low levels of the sensory nervous system (i.e., primary sensory cortices). More recently, research suggests that perceptual learning processes are multilevel and flexible. This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may not involve only low-level sensory coding but also apprehension of relatively abstract structure and relations in time and space.
Brief history:
Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.
Characteristics:
Discovery and fluency effects Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects. Discovery effects involve some change in the bases of response, such as selecting new information relevant for the task, amplifying relevant information or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction. Not only can experts process high-order information, they do so with great speed and low attentional load. Discovery and fluency effects work together so that as the discovered structures become more automatic, attentional resources are conserved for discovery of new relations and for high-level thinking and problem-solving.
Characteristics:
The role of attention William James (Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos." His view was extreme, yet its gist was largely supported by subsequent behavioral and physiological studies. Mere exposure does not seem to suffice for acquiring expertise.
Characteristics:
Indeed, a relevant signal in a given behavioral condition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting. However, recent studies suggest that perceptual learning occurs without selective attention. Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures. TIPL for a stimulus depends on the relationship between that stimulus and important task events or upon stimulus reward contingencies. It has thus been suggested that learning (of task-irrelevant stimuli) is contingent upon spatially diffusive learning signals. Similar effects, but upon a shorter time scale, have been found for memory processes and are in some cases called attentional boosting. Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli.
Characteristics:
Time course of perceptual learning The time course of perceptual learning varies from one participant to another. Perceptual learning occurs not only within the first training session but also between sessions. Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involve different changes in the human adult brain. While the fast learning effects can only be retained for a short term of several days, the slow learning effects can be preserved for a long term over several months.
Explanations and models:
Receptive field modification Research on basic sensory discriminations often shows that perceptual learning effects are specific to the trained task or stimulus. Many researchers take this to suggest that perceptual learning may work by modifying the receptive fields of the cells (e.g., V1 and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose, making some cells more specifically tuned for the task at hand. Evidence for receptive field change has been found using single-cell recording techniques in primates in both tactile and auditory domains. However, not all perceptual learning tasks are specific to the trained stimuli or tasks. Sireteanu and Rettenback discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein used visual search to show that learning to detect a single line element hidden in an array of differently-oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning. Training that produces large behavioral changes such as improvements in discrimination does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain changes in behavior.
Explanations and models:
Reverse hierarchy theory The Reverse Hierarchy Theory (RHT), proposed by Ahissar & Hochstein, aims to link learning dynamics and specificity to the underlying neuronal sites. RHT proposes that naïve performance is based on responses at high-level cortical areas, where crude, categorical level representations of the environment are represented. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of lower levels. RHT thus proposes that initial performance is limited by the high-level resolution whereas post-training performance is limited by the resolution at low levels. Since high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas whereas subsequent performance is more correlated with activity at lower-level areas. RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search will "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative during several trials. Recent studies found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different, or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.
Explanations and models:
Enrichment versus differentiation In some complex perceptual tasks, all humans are experts. We are all very sophisticated, but not infallible at scene identification, face identification and speech perception. Traditional explanations attribute this expertise to some holistic, somewhat specialized, mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors which gradually "chunk" (i.e., unitize) features that tend to concur, making it easier to pull a whole set of information. Whether any concurrence of features can gradually be chunked with practice or chunking can only be obtained with some pre-disposition (e.g. faces, phonological categories) is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, special experts in a given domain have larger cortical areas involved in that domain. Thus, expert musicians have larger auditory areas. These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas. Physiological evidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas.
Explanations and models:
Selective reweighting In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They explain that the parts of the neural system responsible for particular decisions have specificity, while low-level perceptual units do not. In their model, encodings at the lowest level do not change. Rather, changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels.
The impact of training protocol and the dynamics of learning:
Ivan Pavlov discovered conditioning. He found that when a stimulus (e.g. sound) is immediately followed by food several times, the mere presentation of this stimulus would subsequently elicit saliva in a dog's mouth. He further found that when he used a differential protocol, by consistently presenting food after one stimulus while not presenting food after another stimulus, dogs were quickly conditioned to selectively salivate in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones with similar frequency). However, he found that differential conditioning was not effective.
The impact of training protocol and the dynamics of learning:
Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".
These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.
The impact of training protocol and the dynamics of learning:
Consolidation and sleep Several studies asked whether learning takes place during practice sessions or in between, for example, during subsequent sleep. The dynamics of learning are hard to evaluate since the directly measured parameter is performance, which is affected both by learning, which induces improvement, and by fatigue, which hampers it. Current studies suggest that sleep contributes to improved and durable learning effects, by further strengthening connections in the absence of continued practice. Both slow-wave and REM (rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.
The impact of training protocol and the dynamics of learning:
Comparison and contrast Practice with comparison and contrast of instances that belong to the same or different categories allows for the pick-up of the distinguishing features—features that are important for the classification task—and the filtering out of the irrelevant features.
The impact of training protocol and the dynamics of learning:
Task difficulty Learning easy examples first may lead to better transfer and better learning of more difficult cases. By recording ERPs from human adults, Ding and colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. Results showed that difficult task training affected an earlier visual processing stage and broader visual cortical regions than easy task training.
The impact of training protocol and the dynamics of learning:
Active classification and attention Active classification effort and attention are often necessary to produce perceptual learning effects. However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.
Feedback In many cases, perceptual learning does not require feedback (whether or not the classification is correct). Other studies suggest that block feedback (feedback only after a block of trials) produces more learning effects than no feedback at all.
The impact of training protocol and the dynamics of learning:
Limits Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain unsurpassable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which may constrain the underlying density of mechanoreceptors.
Relations to other forms of learning:
Declarative & procedural learning In many domains of expertise in the real world, perceptual learning interacts with other forms of learning. Declarative knowledge tends to occur with perceptual learning. As we learn to distinguish between an array of wine flavors, we also develop a wide range of vocabularies to describe the intricacy of each flavor.
Relations to other forms of learning:
Similarly, perceptual learning also interacts flexibly with procedural knowledge. For example, the perceptual expertise of a baseball player at bat can detect early in the ball's flight whether the pitcher threw a curveball. However, the perceptual differentiation of the feel of swinging the bat in various ways may also have been involved in learning the motor commands that produce the required swing.
Relations to other forms of learning:
Implicit learning Perceptual learning is often said to be implicit, such that learning occurs without awareness. It is not at all clear whether perceptual learning is always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses. In complex perceptual learning tasks (e.g., sorting of newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptual learning tasks, people can point out what information they're using to make classifications.
Applications:
Improving perceptual skills An important potential application of perceptual learning is the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade which transfers to other environmental contexts, or results from mechanisms which are context specific. Improving complex skills is typically gained by training under complex simulation conditions rather than one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifies visual skills in a general way, which transfers to new visual contexts. In 2010, Achtman, Green, and Bavelier reviewed the research on video games to train visual skills. They cite a previous review by Green & Bavelier (2006) on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination, increased processing in the periphery, enhanced mental rotation skills, greater divided attention abilities, and faster reaction times, to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in separation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.
Applications:
Like experimental procedures, other attempts to apply perceptual learning methods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties. They reported improvements in language learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but speech and language comprehension as well.
Applications:
Technologies for perceptual learning In educational domains, recent efforts by Philip Kellman and colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and their abilities to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in their structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving. These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.
Applications:
Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training, reading instrumental flight displays, and apprehending molecular structures in chemistry.
**Triazane**
Triazane:
Triazane is an inorganic compound with the chemical formula NH2NHNH2 or N3H5. Triazane is the third simplest acyclic azane after ammonia and hydrazine. It can be synthesized from hydrazine but is unstable and cannot be isolated in the free base form, only as salt forms such as triazanium sulfate. Attempts to convert triazanium salts to the free base release only diazene and ammonia. Triazane was first synthesized as a ligand of the silver complex ion: tris(μ2-triazane-κ2N1,N3)disilver(2+). Triazane has also been synthesized in electron-irradiated ammonia ices and detected as a stable gas-phase product after sublimation.
Compounds containing the triazane skeleton:
Several compounds containing the triazane skeleton are known, including 1-methyl-1-nitrosohydrazine (NH2−N(CH3)−N=O), produced from the solventless reaction of methylhydrazine (CH3NHNH2) and an alkyl nitrite (R–O–N=O): CH3NHNH2 + RONO → NH2N(CH3)NO + ROH. 1-Methyl-1-nitrosohydrazine is a colorless solid, sensitive to impact, but not to friction. It melts at 45°C and decomposes at 121°C.
**Achondrogenesis type 1B**
Achondrogenesis type 1B:
Achondrogenesis, type 1B is a severe autosomal recessive skeletal disorder, invariably fatal in the perinatal period. It is characterized by extremely short limbs, a narrow chest and a prominent, rounded abdomen. The fingers and toes are short and the feet may be rotated inward. Affected infants frequently have a soft out-pouching around the belly-button (an umbilical hernia) or near the groin (an inguinal hernia). Achondrogenesis, type 1B is a rare genetic disorder; its incidence is unknown. Achondrogenesis, type 1B is the most severe condition in a spectrum of skeletal disorders caused by mutations in the SLC26A2 gene. This gene provides instructions for making a protein that is essential for the normal development of cartilage and for its conversion to bone. Mutations in the SLC26A2 gene disrupt the structure of developing cartilage, preventing bones from forming properly and resulting in the skeletal problems characteristic of achondrogenesis, type 1B. Achondrogenesis, type 1B is inherited in an autosomal recessive pattern, which means two copies of the gene in each cell are altered. Most often, the parents of an individual with an autosomal recessive disorder are carriers of one copy of the altered gene but do not show signs and symptoms of the disorder.
**Exponential decay**
Exponential decay:
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant, disintegration constant, rate constant, or transformation constant: dN/dt = −λN.
The solution to this equation (see derivation below) is: N(t) = N0 e^(−λt), where N(t) is the quantity at time t, and N0 = N(0) is the initial quantity, that is, the quantity at time t = 0.
Measuring rates of decay:
Mean lifetime If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate constant, λ, in the following way: τ = 1/λ.
Measuring rates of decay:
The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ: N(t) = N0 e^(−t/τ), and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value.
For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368.
A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".
Measuring rates of decay:
Half-life A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as: t1/2 = ln(2)/λ = τ ln(2).
Measuring rates of decay:
When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes: N(t) = N0 2^(−t/t1/2).
Thus, the amount of material left is 2^(−1) = 1/2 raised to the (whole or fractional) number of half-lives that have passed. Thus, after 3 half-lives there will be 1/2^3 = 1/8 of the original material left.
Therefore, the mean lifetime τ is equal to the half-life divided by the natural log of 2, or: τ = t1/2/ln(2) ≈ 1.44 · t1/2.
For example, polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days.
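These conversions are easy to check numerically. Below is a minimal Python sketch (the 138-day figure is the polonium-210 half-life from the example above; the function and variable names are illustrative choices of ours):

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of an exponentially decaying quantity left after time t."""
    decay_constant = math.log(2) / half_life        # lambda = ln(2) / t_half
    return math.exp(-decay_constant * t)            # N(t) / N0 = e^(-lambda * t)

half_life = 138.0                                   # days (polonium-210, from the text)
mean_lifetime = half_life / math.log(2)             # tau = t_half / ln(2) ~ 1.44 * t_half

print(round(mean_lifetime))                         # 199, i.e. roughly 200 days
print(remaining_fraction(half_life, half_life))     # 0.5 after one half-life
print(remaining_fraction(mean_lifetime, half_life)) # ~0.368 = 1/e after one mean lifetime
```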
Solution of the differential equation:
The equation that describes exponential decay is dN/dt = −λN or, by rearranging (applying the technique called separation of variables), dN/N = −λ dt.
Integrating, we have ln N = −λt + C where C is the constant of integration, and hence N(t) = e^C e^(−λt) = N0 e^(−λt), where the final substitution, N0 = e^C, is obtained by evaluating the equation at t = 0, as N0 is defined as being the quantity at t = 0.
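As a sanity check on this closed-form solution, the following minimal Python sketch (with arbitrary illustrative values for λ, N0, and the step size) integrates the differential equation numerically and compares the result with N0 e^(−λt):

```python
import math

lam, N0, dt, steps = 0.5, 1000.0, 0.001, 4000   # arbitrary values; integrate up to t = 4

# Forward-Euler integration of dN/dt = -lambda * N
N = N0
for _ in range(steps):
    N += -lam * N * dt

t = steps * dt
exact = N0 * math.exp(-lam * t)                  # closed-form solution derived above
print(N, exact)                                  # the two values agree to within about 0.1%
```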
Solution of the differential equation:
This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s−1.
Solution of the differential equation:
Derivation of the mean lifetime Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, τ , (also called simply the lifetime) is the expected value of the amount of time before an object is removed from the assembly. Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes.
Solution of the differential equation:
Starting from the population formula N = N0 e^(−λt), first let c be the normalizing factor to convert to a probability density function: 1 = ∫0∞ c·N0 e^(−λt) dt = c·N0/λ, or, on rearranging, c = λ/N0.
Exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts.
τ = ⟨t⟩ = ∫0∞ t·c·N0 e^(−λt) dt = ∫0∞ λt e^(−λt) dt = 1/λ.
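The same value can be recovered numerically. The sketch below (λ = 0.25 and the integration grid are arbitrary choices of ours) approximates the integral with a simple Riemann sum and compares it with 1/λ:

```python
import math

lam, dt, t_max = 0.25, 0.001, 200.0   # arbitrary rate; large t_max stands in for infinity

# Riemann-sum approximation of the mean of the exponential lifetime distribution
mean_lifetime = sum(i * dt * lam * math.exp(-lam * i * dt) * dt
                    for i in range(int(t_max / dt)))

print(mean_lifetime, 1.0 / lam)       # both are approximately 4.0
```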
Solution of the differential equation:
Decay by two or more processes A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes" etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes: −dN(t)/dt = Nλ1 + Nλ2 = (λ1 + λ2)N.
Solution of the differential equation:
The solution to this equation is given in the previous section, where the sum λ1 + λ2 is treated as a new total decay constant λc: N(t) = N0 e^(−(λ1+λ2)t) = N0 e^(−λc t).
The partial mean life associated with an individual process is by definition the multiplicative inverse of the corresponding partial decay constant: τ = 1/λ. A combined τc can be given in terms of the λs: 1/τc = λc = λ1 + λ2 = 1/τ1 + 1/τ2, so that τc = τ1τ2/(τ1 + τ2).
Solution of the differential equation:
Since half-lives differ from the mean life τ by a constant factor, the same equation holds in terms of the two corresponding half-lives: T1/2 = t1t2/(t1 + t2), where T1/2 is the combined or total half-life for the process, and t1 and t2 are the so-named partial half-lives of the corresponding processes. The terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved.
Solution of the differential equation:
In terms of separate decay constants, the total half-life T1/2 can be shown to be T1/2 = ln(2)/(λ1 + λ2).
For a decay by three simultaneous exponential processes the total half-life can be computed as above: T1/2 = ln(2)/(λ1 + λ2 + λ3) = t1t2t3/((t1t2) + (t1t3) + (t2t3)).
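These relations are straightforward to evaluate. A minimal Python sketch follows (the partial half-lives are arbitrary illustrative numbers):

```python
import math

def combined_half_life(partial_half_lives):
    """Total half-life of a quantity decaying by several parallel processes,
    each described by its partial half-life."""
    total_decay_constant = sum(math.log(2) / t for t in partial_half_lives)
    return math.log(2) / total_decay_constant

print(combined_half_life([6.0, 3.0]))       # 2.0, equal to t1*t2 / (t1 + t2) = 18/9
print(combined_half_life([6.0, 3.0, 2.0]))  # 1.0, matching the three-process formula
```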
Solution of the differential equation:
Decay series / coupled decay In nuclear science and pharmacokinetics, the agent of interest might be situated in a decay chain, where the accumulation is governed by exponential decay of a source agent, while the agent of interest itself decays by means of an exponential process. These systems are solved using the Bateman equation. In the pharmacology setting, some ingested substances might be absorbed into the body by a process reasonably modeled as exponential decay, or might be deliberately formulated to have such a release profile.
Applications and examples:
Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences.
Many decay processes that are often treated as exponential, are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process.
Natural sciences Chemical reactions: The rates of certain types of chemical reactions depend on the concentration of one or another reactant. Reactions whose rate depends only on the concentration of one reactant (known as first-order reactions) consequently follow exponential decay. For instance, many enzyme-catalyzed reactions behave this way.
Applications and examples:
Electrostatics: The electric charge (or, equivalently, the potential) contained in a capacitor (capacitance C) discharges with exponential decay (when the capacitor experiences a constant external load of resistance R) and similarly charges with the mirror image of exponential decay (when the capacitor is charged from a constant voltage source through a constant resistance). The exponential time constant for the process is τ = RC, so the half-life is RC ln(2).
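As a rough illustration (the resistance, capacitance, and initial charge below are arbitrary values, not from any particular circuit), the time constant and half-life can be computed as follows:

```python
import math

R = 10_000.0        # ohms (arbitrary)
C = 100e-6          # farads (arbitrary)
Q0 = 1.0            # initial charge, arbitrary units

tau = R * C                              # time constant: 1.0 s for these values
half_life = tau * math.log(2)            # ~0.693 s

print(half_life)
print(Q0 * math.exp(-half_life / tau))   # 0.5: half the charge remains after one half-life
```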
Applications and examples:
The same equations can be applied to the dual of current in an inductor.
Furthermore, the particular case of a capacitor or inductor changing through several parallel resistors makes an interesting example of multiple decay processes, with each resistor representing a separate process. In fact, the expression for the equivalent resistance of two resistors in parallel mirrors the equation for the half-life with two decay processes.
Geophysics: Atmospheric pressure decreases approximately exponentially with increasing height above sea level, at a rate of about 12% per 1000m.
Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to "good" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling.
Luminescence: After excitation, the emission intensity – which is proportional to the number of excited atoms or molecules – of a luminescent material decays exponentially. Depending on the number of mechanisms involved, the decay can be mono- or multi-exponential.
Pharmacology and toxicology: It is found that many administered substances are distributed and metabolized (see clearance) according to exponential decay patterns. The biological half-lives "alpha half-life" and "beta half-life" of a substance measure how quickly a substance is distributed and eliminated.
Physical optics: The intensity of electromagnetic radiation such as light or X-rays or gamma rays in an absorbent medium, follows an exponential decrease with distance into the absorbing medium. This is known as the Beer-Lambert law.
Radioactivity: In a sample of a radionuclide that undergoes radioactive decay to a different state, the number of atoms in the original state follows exponential decay as long as the remaining number of atoms is large. The decay product is termed a radiogenic nuclide.
Thermoelectricity: The decline in resistance of a Negative Temperature Coefficient Thermistor as temperature is increased.
Vibrations: Some vibrations may decay exponentially; this characteristic is often found in damped mechanical oscillators, and used in creating ADSR envelopes in synthesizers. An overdamped system will simply return to equilibrium via an exponential decay.
Beer froth: Arnd Leike, of the Ludwig Maximilian University of Munich, won an Ig Nobel Prize for demonstrating that beer froth obeys the law of exponential decay.
Social sciences Finance: a retirement fund will decay exponentially when it is subject to discrete payout amounts, usually monthly, and an input subject to a continuous interest rate. A differential equation dA/dt = input − output can be written and solved to find the time to reach any amount A remaining in the fund.
In simple glottochronology, the (debatable) assumption of a constant decay rate in languages allows one to estimate the age of single languages. (To compute the time of split between two languages requires additional assumptions, independent of exponential decay).
Applications and examples:
Computer science The core routing protocol on the Internet, BGP, has to maintain a routing table in order to remember the paths a packet can be deviated to. When one of these paths repeatedly changes its state from available to not available (and vice versa), the BGP router controlling that path has to repeatedly add and remove the path record from its routing table (flaps the path), thus spending local resources such as CPU and RAM and, even more, broadcasting useless information to peer routers. To prevent this undesired behavior, an algorithm named route flapping damping assigns each route a weight that gets bigger each time the route changes its state and decays exponentially with time. When the weight reaches a certain limit, no more flapping is done, thus suppressing the route.
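A toy Python sketch of this idea follows; the penalty increment, thresholds, and half-life are made-up illustrative numbers rather than parameters of any real router implementation:

```python
import math

# Toy model of route-flap damping: each flap adds a fixed penalty, the penalty
# decays exponentially between events, and the route is suppressed while the
# penalty exceeds a threshold. All numeric values are illustrative only.
PENALTY_PER_FLAP = 1000.0
SUPPRESS_LIMIT = 2000.0
REUSE_LIMIT = 750.0
HALF_LIFE = 900.0          # seconds

class RouteState:
    def __init__(self):
        self.penalty = 0.0
        self.last_update = 0.0
        self.suppressed = False

    def _decay(self, now):
        elapsed = now - self.last_update
        self.penalty *= 0.5 ** (elapsed / HALF_LIFE)   # exponential decay of the weight
        self.last_update = now

    def flap(self, now):
        self._decay(now)
        self.penalty += PENALTY_PER_FLAP
        if self.penalty >= SUPPRESS_LIMIT:
            self.suppressed = True                     # stop advertising the route

    def usable(self, now):
        self._decay(now)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False                    # penalty decayed enough: reuse route
        return not self.suppressed

route = RouteState()
for t in (0, 10, 20):                      # three quick flaps push the penalty over the limit
    route.flap(t)
print(route.usable(30))                    # False: the route is suppressed
print(route.usable(30 + 3 * HALF_LIFE))    # True: the penalty has decayed below the reuse limit
```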
**Long intergenic non-protein coding RNA 483**
Long intergenic non-protein coding RNA 483:
Long intergenic non-protein coding RNA 483 is a long non-coding RNA that in humans is encoded by the LINC00483 gene.
**Rete pegs**
Rete pegs:
Rete pegs (also known as rete processes or rete ridges) are the epithelial extensions that project into the underlying connective tissue in both skin and mucous membranes.
In the epithelium of the mouth, the attached gingiva exhibit rete pegs, while the sulcular and junctional epithelia do not. Scar tissue lacks rete pegs and scars tend to shear off more easily than normal tissue as a result. Also known as papillae, they are downward thickenings of the epidermis between the dermal papillae.
**Calculus on Manifolds (book)**
Calculus on Manifolds (book):
Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (1965) by Michael Spivak is a brief, rigorous, and modern textbook of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates.
Description:
Calculus on Manifolds is a brief monograph on the theory of vector-valued functions of several real variables (f : Rn→Rm) and differentiable manifolds in Euclidean space. In addition to extending the concepts of differentiation (including the inverse and implicit function theorems) and Riemann integration (including Fubini's theorem) to functions of several variables, the book treats the classical theorems of vector calculus, including those of Cauchy–Green, Ostrogradsky–Gauss (divergence theorem), and Kelvin–Stokes, in the language of differential forms on differentiable manifolds embedded in Euclidean space, and as corollaries of the generalized Stokes theorem on manifolds-with-boundary. The book culminates with the statement and proof of this vast and abstract modern generalization of several classical results, the generalized Stokes' theorem: ∫M dω = ∫∂M ω for a differential form ω on a manifold-with-boundary M. The cover of Calculus on Manifolds features snippets of a July 2, 1850 letter from Lord Kelvin to Sir George Stokes containing the first disclosure of the classical Stokes' theorem (i.e., the Kelvin–Stokes theorem).
Reception:
Calculus on Manifolds aims to present the topics of multivariable and vector calculus in the manner in which they are seen by a modern working mathematician, yet simply and selectively enough to be understood by undergraduate students whose previous coursework in mathematics comprises only one-variable calculus and introductory linear algebra. While Spivak's elementary treatment of modern mathematical tools is broadly successful—and this approach has made Calculus on Manifolds a standard introduction to the rigorous theory of multivariable calculus—the text is also well known for its laconic style, lack of motivating examples, and frequent omission of non-obvious steps and arguments. For example, in order to state and prove the generalized Stokes' theorem on chains, a profusion of unfamiliar concepts and constructions (e.g., tensor products, differential forms, tangent spaces, pullbacks, exterior derivatives, cube and chains) are introduced in quick succession within the span of 25 pages. Moreover, careful readers have noted a number of nontrivial oversights throughout the text, including missing hypotheses in theorems, inaccurately stated theorems, and proofs that fail to handle all cases.
Other textbooks:
A more recent textbook which also covers these topics at an undergraduate level is the text Analysis on Manifolds by James Munkres (366 pp.). At more than twice the length of Calculus on Manifolds, Munkres's work presents a more careful and detailed treatment of the subject matter at a leisurely pace. Nevertheless, Munkres acknowledges the influence of Spivak's earlier text in the preface of Analysis on Manifolds. Spivak's five-volume textbook A Comprehensive Introduction to Differential Geometry states in its preface that Calculus on Manifolds serves as a prerequisite for a course based on this text. In fact, several of the concepts introduced in Calculus on Manifolds reappear in the first volume of this classic work in more sophisticated settings.
**Subtract a square**
Subtract a square:
Subtract-a-square (also referred to as take-a-square) is a two-player mathematical subtraction game. It is played by two people with a pile of coins (or other tokens) between them. The players take turns removing coins from the pile, always removing a non-zero square number of coins. The game is usually played as a normal play game, which means that the player who removes the last coin wins. It is an impartial game, meaning that the set of moves available from any position does not depend on whose turn it is. Solomon W. Golomb credits the invention of this game to Richard A. Epstein.
Example:
A normal play game starting with 13 coins is a win for the first player provided they start with a subtraction of 1 (player 1: 13 - 1*1 = 12). Player 2 now has three choices: subtract 1, 4 or 9. In each of these cases, player 1 can ensure that within a few moves the number 2 gets passed on to player 2: if player 2 plays 12 - 1*1 = 11, player 1 replies 11 - 3*3 = 2; if player 2 plays 12 - 3*3 = 3, player 1 replies 3 - 1*1 = 2; and if player 2 plays 12 - 2*2 = 8, player 1 replies 8 - 1*1 = 7, after which player 2 must play 7 - 1*1 = 6 or 7 - 2*2 = 3, and player 1 replies 6 - 2*2 = 2 or 3 - 1*1 = 2 respectively. Now player 2 has to subtract 1, and player 1 subsequently does the same: player 2: 2 - 1*1 = 1; player 1: 1 - 1*1 = 0. Player 2 loses.
Mathematical theory:
In the above example, the number '13' represents a winning or 'hot' position, whilst the number '2' represents a losing or 'cold' position. Given an integer list with each integer labeled 'hot' or 'cold', the strategy of the game is simple: try to pass on a 'cold' number to your opponent. This is always possible provided you are being presented a 'hot' number. Which numbers are 'hot' and which numbers are 'cold' can be determined recursively: the number 0 is 'cold', whilst 1 is 'hot'; if all numbers 1 .. N have been classified as either 'hot' or 'cold', then the number N+1 is 'cold' if only 'hot' numbers can be reached by subtracting a positive square, and 'hot' if at least one 'cold' number can be reached by subtracting a positive square. Using this algorithm, a list of cold numbers is easily derived: 0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, … (sequence A030193 in the OEIS). A faster divide-and-conquer algorithm can compute the same sequence of numbers, up to any threshold n, in time O(n log² n). There are infinitely many cold numbers. More strongly, the number of cold numbers up to some threshold n must be at least proportional to the square root of n, for otherwise there would not be enough of them to provide winning moves from all the hot numbers.
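The recursion above translates directly into a short dynamic program. The following minimal Python sketch (function and variable names are ours) reproduces the list of cold numbers:

```python
def cold_numbers(limit):
    """Classify positions 0..limit of subtract-a-square (normal play):
    a position is 'cold' when every square subtraction leads to a 'hot' position."""
    cold = []
    is_cold = [False] * (limit + 1)
    for n in range(limit + 1):
        # n is 'hot' iff some move reaches a cold position
        hot = any(is_cold[n - k * k] for k in range(1, int(n ** 0.5) + 1))
        is_cold[n] = not hot
        if is_cold[n]:
            cold.append(n)
    return cold

print(cold_numbers(50))
# [0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44], matching the list above
```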
Mathematical theory:
Cold numbers tend to end in 0, 2, 4, 5, 7, or 9. Cold values that end with other digits are quite uncommon. This holds in particular for cold numbers ending in 6. Out of all the over 180,000 cold numbers less than 40 million, only one ends in a 6: 11,356. No two cold numbers can differ by a square, because if they did then a move from the larger of the two to the smaller would be winning, contradicting the assumption that they are both cold. Therefore, by the Furstenberg–Sárközy theorem, the natural density of the cold numbers is zero. That is, for every ϵ > 0, and for all sufficiently large n, the fraction of the numbers up to n that are cold is less than ϵ. More strongly, for every n there are at most n/(log n)^(c log log log log n) cold numbers up to n, for some constant c > 0. The exact growth rate of the cold numbers remains unknown, but experimentally the number of cold positions up to any given threshold n appears to be roughly n^0.7.
Extensions:
The game subtract-a-square can also be played with multiple numbers. At each turn the player to make a move first selects one of the numbers, and then subtracts a square from it. Such a 'sum of normal games' can be analysed using the Sprague–Grundy theorem. This theorem states that each position in the game subtract-a-square may be mapped onto an equivalent nim heap size. Optimal play consists of moving to a collection of numbers such that the nim-sum of their equivalent nim heap sizes is zero, when this is possible. The equivalent nim heap size of a position may be calculated as the minimum excluded value of the equivalent sizes of the positions that can be reached by a single move.
Extensions:
For subtract-a-square positions of values 0, 1, 2, ... the equivalent nim heap sizes are 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 2, 3, 4, … (sequence A014586 in the OEIS). In particular, a position of subtract-a-square is cold if and only if its equivalent nim heap size is zero.
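A minimal Python sketch of this mex computation (an illustration of ours) reproduces the sequence of equivalent nim heap sizes:

```python
def grundy_values(limit):
    """Equivalent nim heap sizes for subtract-a-square positions 0..limit, computed
    as the minimum excluded value (mex) over the positions reachable in one move."""
    g = []
    for n in range(limit + 1):
        reachable = {g[n - k * k] for k in range(1, int(n ** 0.5) + 1)}
        mex = 0
        while mex in reachable:
            mex += 1
        g.append(mex)
    return g

print(grundy_values(28))
# [0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 2, 3, 4]
```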
Extensions:
It is also possible to play variants of this game using other allowed moves than the square numbers. For instance, Golomb defined an analogous game based on the Moser–de Bruijn sequence, a sequence that grows at a similar asymptotic rate to the squares, for which it is possible to determine more easily the set of cold positions and to define an easily computed optimal move strategy.Additional goals may also be added for the players without changing the winning conditions. For example, the winner can be given a "score" based on how many moves it took to win (the goal being to obtain the lowest possible score) and the loser given the goal to force the winner to take as long as possible to reach victory. With this additional pair of goals and an assumption of optimal play by both players, the scores for starting positions 0, 1, 2, ... are 0, 1, 2, 3, 1, 2, 3, 4, 5, 1, 4, 3, 6, 7, 3, 4, 1, 8, 3, 5, 6, 3, 8, 5, 5, 1, 5, 3, 7, … (sequence A338027 in the OEIS).
Misère game:
Subtract-a-square can also be played as a misère game, in which the player to make the last subtraction loses. The recursive algorithm to determine 'hot' and 'cold' numbers for the misère game is the same as that for the normal game, except that for the misère game the number 1 is 'cold' whilst 2 is 'hot'. It follows that the cold numbers for the misère variant are the cold numbers for the normal game shifted by 1: Misère play 'cold' numbers: 1, 3, 6, 8, 11, 13, 16, 18, 21, 23, 35, 40, 45, ...
**Earth's shadow**
Earth's shadow:
Earth's shadow (or Earth shadow) is the shadow that Earth itself casts through its atmosphere and into outer space, toward the antisolar point. During the twilight period (both early dusk and late dawn), the shadow's visible fringe – sometimes called the dark segment or twilight wedge – appears as a dark and diffuse band just above the horizon, most distinct when the sky is clear.
Earth's shadow:
Since Earth's diameter is 3.7 times the Moon's, the length of the planet's umbra is correspondingly 3.7 times that of the lunar umbra: roughly 1,400,000 km (870,000 mi).
Appearance:
Earth's shadow cast onto the atmosphere can be viewed during the "civil" stage of twilight, assuming the sky is clear and the horizon is relatively unobstructed. The shadow's fringe appears as a dark bluish to purplish band that stretches over 180° of the horizon opposite the Sun, i.e. in the eastern sky at dusk and in the western sky at dawn. Before sunrise, Earth's shadow appears to recede as the Sun rises; after sunset, the shadow appears to rise as the Sun sets. Earth's shadow is best seen when the horizon is low, such as over the sea, and when the sky conditions are clear. In addition, the higher the observer's elevation when viewing the horizon, the sharper the shadow appears.
Belt of Venus:
A related phenomenon in the same part of the sky is the Belt of Venus, or anti-twilight arch, a pinkish band visible above the bluish shade of Earth's shadow, named after the planet Venus which, when visible, is typically located in this region of the sky.
No defined line divides the Earth's shadow and the Belt of Venus; one colored band blends into the other in the sky. The Belt of Venus is quite a different phenomenon from the afterglow, which appears in the geometrically opposite part of the sky.
Color When the Sun is near the horizon around sunset or sunrise, the sunlight appears reddish. This is because the light rays are penetrating an especially thick layer of the atmosphere, which works as a filter, scattering all but the longer (redder) wavelengths.
From the observer's perspective, the red sunlight directly illuminates small particles in the lower atmosphere in the sky opposite of the Sun. The red light is backscattered to the observer, which is the reason why the Belt of Venus appears pink.
Belt of Venus:
The lower the setting Sun descends, the less defined the boundary between Earth's shadow and the Belt of Venus appears. This is because the setting Sun now illuminates a thinner part of the upper atmosphere. There the red light is not scattered because fewer particles are present, and the eye only sees the "normal" (usual) blue sky, which is due to Rayleigh scattering from air molecules. Eventually, both Earth's shadow and the Belt of Venus dissolve into the darkness of the night sky.
Color of lunar eclipses:
Earth's shadow is as curved as the planet is, and its umbra extends 1,400,000 km (870,000 mi) into outer space. (The antumbra, however, extends indefinitely.) When the Sun, Earth, and the Moon are aligned perfectly (or nearly so), with Earth between the Sun and the Moon, Earth's shadow falls onto the lunar surface facing the night side of the planet, such that the shadow gradually darkens the full Moon, causing a lunar eclipse.
Color of lunar eclipses:
Even during a total lunar eclipse, a small amount of sunlight still reaches the Moon. This indirect sunlight has been refracted as it passed through Earth's atmosphere. The air molecules and particulates in Earth's atmosphere scatter the shorter wavelengths of this sunlight; thus, the longer wavelengths of reddish light reach the Moon, in the same way that light at sunset or sunrise appears reddish. This weak red illumination gives the eclipsed Moon a dim reddish or copper color.
**Maxillary artery**
Maxillary artery:
The maxillary artery supplies deep structures of the face. It branches from the external carotid artery just deep to the neck of the mandible.
Structure:
The maxillary artery, the larger of the two terminal branches of the external carotid artery, arises behind the neck of the mandible, and is at first imbedded in the substance of the parotid gland; it passes forward between the ramus of the mandible and the sphenomandibular ligament, and then runs, either superficial or deep to the lateral pterygoid muscle, to the pterygopalatine fossa.
Structure:
It supplies the deep structures of the face, and may be divided into mandibular, pterygoid, and pterygopalatine portions.
First portion The first or mandibular or bony portion passes horizontally forward, between the neck of the mandible and the sphenomandibular ligament, where it lies parallel to and a little below the auriculotemporal nerve; it crosses the inferior alveolar nerve, and runs along the lower border of the lateral pterygoid muscle.
Structure:
Branches include: the deep auricular artery, the anterior tympanic artery, the middle meningeal artery, the inferior alveolar artery (which gives off its mylohyoid branch just prior to entering the mandibular foramen), and the accessory meningeal artery. Second portion The second or pterygoid or muscular portion runs obliquely forward and upward under cover of the ramus of the mandible and insertion of the temporalis, on the superficial (very frequently on the deep) surface of the lateral pterygoid muscle; it then passes between the two heads of origin of this muscle and enters the fossa.
Structure:
Branches include: the masseteric artery, the pterygoid branches, the deep temporal arteries (anterior and posterior), and the buccal artery. Third portion The third or pterygopalatine or pterygomaxillary portion lies in the pterygopalatine fossa in relation with the pterygopalatine ganglion. This is considered the terminal branch of the maxillary artery.
Structure:
Branches include: the sphenopalatine artery (nasopalatine artery), which is the terminal branch of the maxillary artery; the descending palatine artery (greater palatine artery and lesser palatine artery); the infraorbital artery; the posterior superior alveolar artery; the artery of the pterygoid canal; the pharyngeal branch, directed to the palatovaginal canal; the middle superior alveolar artery (a branch of the infraorbital artery); and the anterior superior alveolar arteries (a branch of the infraorbital artery).
Nomenclature:
Formerly, the term "external maxillary artery" was used to describe what is now known as the facial artery (per Terminologia anatomica.) Currently, the term "external maxillary artery" is less commonly used, and the terms "internal maxillary artery" and "maxillary artery" are equivalent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Novokhopyorsky**
Novokhopyorsky:
Novokhopyorsky/Novokhopersky (masculine), Novokhopyorskaya/Novokhoperskaya (feminine), or Novokhopyorskoye/Novokhoperskoye (neuter) may refer to: Novokhopyorsky District, a district of Voronezh Oblast, Russia; or Novokhopyorsky (urban locality), an urban locality (a work settlement) in Voronezh Oblast, Russia.
**Common ISDN Application Programming Interface**
Common ISDN Application Programming Interface:
The Common ISDN Application Programming Interface (CAPI for short) is an ISDN-conformant standardized software interface. With the help of CAPI, computer software intended for use with ISDN can be provided without knowledge of the deployed, proprietary ISDN card.
CAPI was designed from 1989 onwards by German manufacturers (AVM, Systec, Stollmann). Since 1991, CAPI has been developed further by the CAPI Association e.V. Implementations exist for different operating systems, including Linux and Microsoft Windows.
Through the ETSI, CAPI 2.0 was introduced as standard ETS 300 324 (Profile B).
Common ISDN Application Programming Interface:
Primarily, CAPI was designed for data transfer over ISDN. The specification has been extended multiple times, and it has thereby become important for voice and fax communication as well. Because pure data transfer now runs predominantly over IP-based networks, CAPI is used primarily for voice applications (voice mail, IVR, call centers, voice conference systems, etc.), for fax servers, and for combined systems (UMS).
Common ISDN Application Programming Interface:
The CAPI interface in its current release (CAPI 2.0) supports a variety of signaling protocols (D channel protocols), e.g. DSS1 and FTZ 1 TR 6. The interface operates in the OSI model between layers 3 and 4, but only controls layers 1 to 3.
Besides popular signaling protocols for ISDN, implementations of CAPI for ATM, GSM and VoIP (H.323 and SIP) exist, thus CAPI applications can be used directly on communications infrastructure. Special extensions for protocol-specific features were defined several years ago for ATM.
**Pension spiking**
Pension spiking:
Pension spiking, sometimes referred to as "salary spiking", is the process whereby public sector employees are granted large raises, bonuses, or incentives, or otherwise artificially inflate their compensation, in the time immediately preceding retirement in order to receive larger pensions than they would otherwise be entitled to receive. This artificially inflates the pension payments due to the retirees.
Pension spiking:
Upon retirement any employee transitions from receiving a paycheck from the employer to a pension check drawn on the assets of the retirement fund; this amount is typically determined as a percentage of the employee's regular salary by state law or statute. When an employee due to retire receives a "spike", the amount of money the employee will receive does not reflect the percentage of salary the employee and employer have contributed for the majority of the employee's career, and places a burden on the economic viability of the pension fund. This practice is considered a significant contributor to the high cost of public sector pensions.
Pension spiking:
Several states including Illinois have passed laws making it more difficult for employees to spike their pensions. The California CalPERS system outlawed this practice in 1993, but as of 2012 it remained legal in the 20 counties which did not participate in this public employee retirement system. Pension spiking is often seen in public sector employers (which do not typically offer employees the golden parachutes that the private sector does) and is an example of the principal–agent problem. In the classic principal–agent problem, a principal hires an agent to work on their behalf. The agent then seeks to maximize their own well-being within the confines of the engagement laid out by the principal. The agent, or bureaucrat in this instance, has superior information and is able to maximize their benefit at the cost of the principal. In other words, there is asymmetric information.
Pension spiking:
In the case of pension spiking the general public (the principal) elects officials to hire the bureaucrat who then hires the public servants, who are the ultimate agents of the general public. Thus, the principal is three steps removed from the bureaucrat. In the case of pension spiking, some have written that the public has allowed a pension system to be created which is based on the compensation in the last year of service and delegated the setting of this cost to the bureaucrat. The bureaucrat, who will often themselves benefit from a spiked pension or the same laws permitting pension spiking, fails to stop the practice, a clear conflict of interest.
Pension spiking:
Given that many public pension funds have been in existence for decades, it appears that pension fund participants have found a way to manipulate an existing system to their benefit, rather than having constructed a unique system. Issues also exist when pension funds allow the inclusion of overtime when determining the retiree's final pensionable salary.
**Ream Al-Hasani**
Ream Al-Hasani:
Ream Al-Hasani is a British neuroscientist and pharmacologist as well as an assistant professor of anesthesiology at Washington University in St. Louis. Al-Hasani studies the endogenous opioid system to understand how to target it therapeutically to treat addiction, affective disorders, and chronic pain.
Early life and education:
Al-Hasani's family moved from Iraq to the UK before she was born, and Al-Hasani grew up in the United Kingdom. She was the only Muslim girl in her school, and though there were not many Middle Eastern role models in science, she became fascinated by the effects of drugs on the brain and decided to pursue a career in academia. Al-Hasani graduated with an undergraduate degree in Pharmacology from the University of Portsmouth in the UK. She then held an internship at GlaxoSmithKline where she studied neurodegenerative diseases and neuroinflammation.
Graduate work:
Funded by the Medical Research Council, Al-Hasani pursued graduate studies at the University of Surrey in England. She was interested in exploring the interactions between adenosine and dopamine receptors in morphine addiction for her PhD, and sought co-mentorship from Ian Kitchen, Professor of Neuropharmacology, and Susanna Hourani, Professor of Pharmacology.
Graduate work:
Endogenous opioids In 2010, Al-Hasani and her colleagues in the Kitchen lab published a paper in the European Journal of Neuroscience exploring the influence of genetic variability on heroin addiction. When they compared a strain of highly inbred mice to typical control mice, they found that highly inbred mice had much higher sensitivity to the rewarding properties of heroin, yet, unlike control mice, the inbred mice did not display increased locomotion upon administration of heroin. Heroin decreased mu opioid receptor (MOP-r) density in controls but not in the inbred mice, MOP-r stimulated binding was twofold higher in controls compared to inbreds, and heroin increased expression of dopamine transporters in inbred mice but not in controls.
Graduate work:
Adenosinergic and Dopaminergic Signalling In 2011, she published a paper in Neuroscience exploring the interaction between dopamine signalling and adenosine signalling in the ventral tegmental area (VTA). Since adenosine A(2A) receptors have been known to modulate neurotransmitter systems and modulate neural activity in the striatum, she sought to see if this was also true in the VTA. When Al-Hasani knocked out A(2A) receptors and looked at the dopamine receptor 2 (D2)-mediated inhibition in the VTA, she found that A(2A) knockouts had D2 receptor desensitization leading to reduced maximal inhibition. A follow-up was published in Neuropharmacology in 2013 looking at the ability of A(2A) receptors to modulate cholinergic signalling through interactions with nicotinic acetylcholine receptors.
Graduate work:
Postdoctoral work Al-Hasani moved to America for postdoctoral work, joining the lab of Michael Bruchas in the Department of Anesthesiology at Washington University School of Medicine in St. Louis. Under his mentorship, she explored the kappa opioid system and associated neural circuitry to understand its role in driving motivated behaviors. Al-Hasani wrote a review paper in 2011 discussing the opioid system in the brain, how opioid receptors mediate intracellular signal transduction pathways to modulate molecular and cellular responses, and their role in behaviors associated with analgesia, reward, depression, and anxiety. In 2013, she published a paper in Neuropsychopharmacology examining the interactions between noradrenergic (NA) and dynorphin/kappa opioid systems in the forebrain. She found that the kappa opioid receptor (KOR)-induced reinstatement of cocaine CPP was potentiated when beta-adrenergic signaling was blocked, and that the interactions between adrenergic signalling and KOR signaling occur external to the locus coeruleus. The interaction between the KORs and the NA system had not been previously known, and she established its role in the reinstatement of cocaine drug-seeking behavior. Also in 2013, she published a paper exploring the effects of stress on the kappa opioid system in the context of drug relapse. She found that various stressors cause dysregulation of kappa opioid circuitry, but that mild stressors cause adaptive changes in the kappa opioid circuitry that might be protective against drug relapse. In 2015, her group helped elucidate the mechanisms by which the locus coeruleus noradrenergic (LC-NE) system generates stress-induced anxiety in rodents. They found that activation of the LC-NE neurons increases stress-induced anxiety and aversion and that inhibition attenuates these behaviors. They also found that specifically the corticotropin-releasing hormone positive neurons in the LC that receive inputs from the central amygdala are the neural subpopulation within the LC responsible for mediating the anxiety-like behaviors. Later in 2015, she published a paper in Neuron describing distinct functions of two subregions of the nucleus accumbens (NAc) that are mediated by dynorphin-kappa opioid receptor (KOR) signalling. Specifically, she found that stimulating dynorphinergic cells in the ventral shell of the NAc elicits aversive responses via KOR activation while stimulating dynorphinergic cells in the dorsal shell of the NAc elicits appetitive behaviors mediated by KOR signalling. Her work in the Bruchas Lab led to her winning an NIH Pathway to Independence Award (K99/R00) which provided her the funding to start her own lab.
Graduate work:
Tool Development Al-Hasani helped create novel technologies in the Bruchas Lab with which to probe neural circuits. She optimized wireless optogenetic technologies that allow for neural circuit modulation without animal movement being constrained by tethering. She also merged wireless optogenetics with pharmacology such that various compounds can be administered to the brain while certain neural circuits are being activated or inhibited to explore the effects of these compounds on neural circuit function and behavioral output. In 2018, she published a method to detect endogenously released peptides from active neural circuits in vivo. She combined in vivo optogenetics with microdialysis to stimulate genetically identified neurons and also detect the peptides that are released that might be mediating neural circuit changes and behavioral outputs.
Career and research:
In 2017, Al-Hasani was recruited to stay at Washington University School of Medicine, along with her husband Jordan McCall, as an assistant professor in the Department of Pharmaceutical and Administrative Sciences at the St. Louis College of Pharmacy with an adjunct appointment. Al-Hasani and McCall were the first two researchers to be appointed positions at the new Center for Clinical Pharmacology, formed by a merging of the St. Louis College of Pharmacy and WUSM. Al-Hasani's lab focuses on understanding the neural circuitry involved in addiction, stress, and chronic pain by specifically focusing on the opioid system to elucidate targets for future therapies. By building and using innovative tools for in vivo neural circuit dissection, Al-Hasani and her team study the role of the kappa opioid system in the generation of negative affective states that might accompany chronic pain, withdrawal, and preclude nicotine cessation. In April 2020, Al-Hasani was awarded the Young Investigator Grant from the Brain and Behavior Research Foundation to support her research.
Career and research:
Role of Kappa Opioid System in Affective Component of Pain In 2019, Al-Hasani helped dissect physical pain from its emotional affective counterpart in a study published in Neuron. Al-Hasani and her colleagues targeted their investigation to the ventral shell of the nucleus accumbens (NAc), which Al-Hasani had previously shown to be involved in negative affective states. They found that pain recruits the dynorphin-kappa opioid system in the NAc, and that the dynorphinergic cells are more active during inflammatory pain due to a decrease in inhibitory inputs onto these cells. By blocking dynorphinergic kappa opioid receptor signalling in the shell of the NAc, they are able to alleviate the decrease in motivation that results from the experience of pain. These findings show that the kappa opioid system modulates the emotional aspects of the experience of pain and could serve as a less addictive target for pain treatment compared to opiates.
Career and research:
Outreach Al-Hasani has helped to mentor the younger generation of scientists and create a space for them to enter the pipeline from a young age. Al-Hasani and McCall recently created a program for undergraduates to explore research opportunities and advance current research at Washington University. They have both hosted undergraduates in their labs and sent them to conferences to present their findings to the broader scientific community.
Awards and honors:
2020: International Narcotics Research Conference Young Investigator Award; 2020: Young Investigator Grant from the Brain and Behavior Research Foundation; NIH "Pathway to Independence Award" (K99/R00) from NIDA
Selected works and publications:
Al-Hasani R, Bruchas MR. Molecular mechanisms of opioid receptor-dependent signaling and behavior. Anesthesiology: The Journal of the American Society of Anesthesiologists. 2011 Dec 1;115(6):1363-81. Cited 608 times according to Google Scholar.
McCall JG, Al-Hasani R, Siuda ER, Hong DY, Norris AJ, Ford CP, Bruchas MR. CRH engagement of the locus coeruleus noradrenergic system mediates stress-induced anxiety. Neuron. 2015 Aug 5;87(3):605-20. Cited 200 times according to Google Scholar.
Al-Hasani R, McCall JG, Shin G, Gomez AM, Schmitz GP, Bernardi JM, Pyo CO, Park SI, Marcinkiewcz CM, Crowley NA, Krashes MJ. Distinct subpopulations of nucleus accumbens dynorphin neurons drive aversion and reward. Neuron. 2015 Sep 2;87(5):1063-77. Cited 145 times according to Google Scholar.
Trang T, Al-Hasani R, Salvemini D, Salter MW, Gutstein H, Cahill CM. Pain and poppies: the good, the bad, and the ugly of opioid analgesics. Journal of Neuroscience. 2015 Oct 14;35(41):13879-88. Cited 122 times according to Google Scholar.
**Sermorelin**
Sermorelin:
Sermorelin acetate (INN; brand names Geref, Gerel), also known as GHRH (1-29), is a peptide analogue of growth hormone-releasing hormone (GHRH) which is used as a diagnostic agent to assess growth hormone (GH) secretion for the purpose of diagnosing growth hormone deficiency. It is a 29-amino acid polypeptide representing the 1–29 fragment from endogenous human GHRH, thought to be the shortest fully functional fragment of GHRH.
**Donsker's theorem**
Donsker's theorem:
In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem. Let X1, X2, X3, … be a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1. Let Sn := X1 + X2 + ⋯ + Xn. The stochastic process S := (Sn)n∈N is known as a random walk. Define the diffusively rescaled random walk (partial-sum process) by W(n)(t) := S⌊nt⌋/√n, t ∈ [0, 1].
Donsker's theorem:
The central limit theorem asserts that W(n)(1) converges in distribution to a standard Gaussian random variable W(1) as n→∞. Donsker's invariance principle extends this convergence to the whole function W(n) := (W(n)(t))t∈[0,1]. More precisely, in its modern form, Donsker's invariance principle states that: as random variables taking values in the Skorokhod space D[0,1], the random function W(n) converges in distribution to a standard Brownian motion W := (W(t))t∈[0,1] as n→∞.
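A short simulation illustrates the statement. The sketch below, an illustration of ours that uses ±1 coin-flip steps as the i.i.d. mean-0, variance-1 increments, builds the rescaled walk on a grid of [0, 1] and checks that its endpoint is approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_random_walk(n):
    """One path of the diffusively rescaled random walk W^(n) on the grid k/n of [0, 1]."""
    steps = rng.choice([-1.0, 1.0], size=n)            # i.i.d. increments, mean 0, variance 1
    partial_sums = np.concatenate([[0.0], np.cumsum(steps)])
    return partial_sums / np.sqrt(n)                   # W^(n)(k/n) = S_k / sqrt(n)

# As n grows, the endpoint W^(n)(1) is approximately N(0, 1), and whole paths
# look increasingly like Brownian motion.
endpoints = np.array([scaled_random_walk(10_000)[-1] for _ in range(2_000)])
print(endpoints.mean(), endpoints.var())               # close to 0 and 1
```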
Formal statement:
Let Fn be the empirical distribution function of the sequence of i.i.d. random variables X1, X2, X3, … with distribution function F. Define the centered and scaled version of Fn by Gn(x) = √n (Fn(x) − F(x)), indexed by x ∈ R. By the classical central limit theorem, for fixed x, the random variable Gn(x) converges in distribution to a Gaussian (normal) random variable G(x) with zero mean and variance F(x)(1 − F(x)) as the sample size n grows.
Formal statement:
Theorem (Donsker, Skorokhod, Kolmogorov) The sequence of Gn(x), as random elements of the Skorokhod space D(−∞,∞), converges in distribution to a Gaussian process G with zero mean and covariance given by cov[G(s), G(t)] = E[G(s)G(t)] = min{F(s), F(t)} − F(s)F(t).
The process G(x) can be written as B(F(x)) where B is a standard Brownian bridge on the unit interval.
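A Monte Carlo sketch (our illustration; the points s and t and the sample sizes are arbitrary) checks this covariance for Uniform[0, 1] samples, for which F(x) = x:

```python
import numpy as np

rng = np.random.default_rng(1)
s, t, n, reps = 0.3, 0.7, 1_000, 20_000

G_s, G_t = [], []
for _ in range(reps):
    x = rng.uniform(size=n)
    # centered and scaled empirical distribution function evaluated at s and t
    G_s.append(np.sqrt(n) * (np.mean(x <= s) - s))
    G_t.append(np.sqrt(n) * (np.mean(x <= t) - t))

print(float(np.mean(np.array(G_s) * np.array(G_t))))  # fluctuates around 0.09
print(min(s, t) - s * t)                               # 0.09 = min{F(s), F(t)} - F(s)F(t)
```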
History and related results:
Kolmogorov (1933) showed that when F is continuous, the supremum sup_t Gn(t) and the supremum of the absolute value, sup_t |Gn(t)|, converge in distribution to the laws of the same functionals of the Brownian bridge B(t); see the Kolmogorov–Smirnov test. In 1949 Doob asked whether the convergence in distribution held for more general functionals, thus formulating a problem of weak convergence of random functions in a suitable function space. In 1952 Donsker stated and proved (not quite correctly) a general extension for the Doob–Kolmogorov heuristic approach. In the original paper, Donsker proved that the convergence in law of Gn to the Brownian bridge holds for Uniform[0,1] distributions with respect to uniform convergence in t over the interval [0,1]. However, Donsker's formulation was not quite correct because of the problem of measurability of the functionals of discontinuous processes. In 1956 Skorokhod and Kolmogorov defined a separable metric d, called the Skorokhod metric, on the space of càdlàg functions on [0,1], such that convergence for d to a continuous function is equivalent to convergence for the sup norm, and showed that Gn converges in law in D[0,1] to the Brownian bridge.
History and related results:
Later Dudley reformulated Donsker's result to avoid the problem of measurability and the need of the Skorokhod metric. One can prove that there exist Xi, iid uniform in [0,1] and a sequence of sample-continuous Brownian bridges Bn, such that ‖Gn−Bn‖∞ is measurable and converges in probability to 0. An improved version of this result, providing more detail on the rate of convergence, is the Komlós–Major–Tusnády approximation.
**Ensemble coding**
Ensemble coding:
Ensemble coding, also known as ensemble perception or summary representation, is a theory in cognitive neuroscience about the internal representation of groups of objects in the human mind. Ensemble coding proposes that such information is recorded via summary statistics, particularly the average or variance. Experimental evidence tends to support the theory for low-level visual information, such as shapes and sizes, as well as some high-level features such as face gender. Nonetheless, it remains unclear the extent to which ensemble coding applies to high-level or non-visual stimuli, and the theory remains the subject of active research.
Theory:
Extensive amounts of information are available to the visual system. Ensemble coding is a theory that suggests that people process the general gist of their complex visual surroundings by grouping objects together based on shared properties. The world is filled with redundant information to which the human visual system has become particularly sensitive. The brain exploits this redundancy and condenses the information. For example, the leaves of a tree or blades of grass give rise to the percept of 'tree-ness' and 'lawn-ness'. It has been demonstrated that individuals have the ability to quickly and accurately encode ensembles of objects, like leaves on a tree, and gather summary statistical information (like the mean and variance) from groups of stimuli. Some research suggests that this process provides rough visual information from the entire visual field, giving way to a complete and accurate picture of the visual world. Although the individual details of this accurate picture might be inaccessible, the 'gist' of the scene remains accessible. Ensemble coding is an adaptive process that lightens the cognitive load in the processing and storing of visual representations through the use of heuristics.
Operational definition:
David Whitney and Allison Yamanashi Lieb have developed an operational and flexible definition stating that ensemble coding should cover the following five concepts: Ensemble perception is the ability to discriminate or reproduce a statistical moment.
Ensemble perception requires the integration of multiple items.
Ensemble information at each level of representation can be precise relative to the processing of single objects at that level.
Single-item recognition is not a prerequisite for ensemble coding.
Ensemble representations can be extracted with a temporal resolution at or beyond the temporal resolution of individual object recognition.
Opposing theories:
Some research has found countering evidence to the theory of ensemble coding.
Limited visual capacity Vision science has noted that although humans take in large amounts of visual information, adults are only able to process, attend to, and retain up to roughly four items from the visual environment. Furthermore, scientists have found that this visual upper limit capacity exists across various phenomena including change blindness, object tracking, and feature representation.
Low resolution representations and limited capacity Additional theories in vision science propose that stimuli are represented in the brain individually as small, low resolution, icons stored in templates with limited capacities and are organized through associative links.
History:
Throughout its history, ensemble coding has been known by many names. Interest in the theory began to emerge in the early 20th century. In its earliest years, ensemble coding was known as Gestalt grouping. In 1923, Max Wertheimer, a Gestalt psychology theorist, was addressing how humans perceive their visual world holistically rather than individually. Gestaltists argued that in object perception, the individual object features were either lost or difficult to perceive and therefore the grouped object was the favored percept. Although Gestaltists helped define some of the central principles of object perception, research into modern ensemble coding did not occur until many years later. In 1971, Norman Anderson was one of the earliest to conduct explicit ensemble coding research. Anderson's research into social ensemble coding showed that individuals described by two positive terms were rated more favorably than individuals described by two positive terms and two negative terms. This research on impression formation demonstrated that a weighted mean or average captures how information is integrated rather than the summation. Additional research during this time explored ensemble coding in group attractiveness, shopping preferences, and the perceived badness of criminals.
The current era:
Findings by Dan Ariely in 2001 were the first data to support the modern theories of ensemble coding. Ariely used novel experimental paradigms, which he labeled "mean discrimination" and "member identification", to examine how sets of objects are perceived. He conducted three studies involving shape ensembles that varied in size. Across all studies, participants were able to accurately encode the mean size of the ensemble of objects, but they were inaccurate when asked if a certain object was a part of the set. Ariely's findings were the first that found statistical summary information emerge in the visual perception of grouped objects. Consistent with Ariely's findings, follow-up research conducted by Sang Chul Chong and Anne Treisman in 2003 provided evidence that participants are engaging in summary statistical processes. Their research revealed that participants maintained high accuracy in encoding the mean size of the stimuli even with short stimuli presentations as low as 50 milliseconds, memory delays, and object distribution differences. Additional research has demonstrated that ensemble coding is not limited to the mean size of objects in the ensemble, but that additional content is extracted, such as average line orientation, average spatial location, and average number, and that statistical summaries such as the variance are also detected. Observers are also able to extract accurate perceptual summaries of high-level features such as the average direction of eye gaze of grouped faces and the average walking direction of a crowd.
Levels of ensemble coding:
People have the ability to encode ensembles of objects along various dimensions. These dimensions have been divided into levels that vary from low-level to high-level feature information.
Low-level feature information Low-level ensemble coding has been observed in various psychophysical areas of research. For example, people accurately perceive the average size of objects, motion direction of grouped dots, number, line orientation, and spatial location.
High-level feature information High-level ensemble coding extends to more complex, higher level objects including faces.
Independence of low- and high-level information Some findings suggest lower-level and higher-level information may be processed by independent cognitive mechanisms.
Social vision and ensemble coding:
Based on the early work of Anderson, it appears that humans integrate semantic as well as social information into memory using ensemble coding. These findings suggest that social processes may hinge on the same sort of underlying mechanisms that allow people to perceive average object orientation and average object direction of motion. In recent years, ensemble coding in the field of social vision has emerged. Social vision is a field of research that examines how people perceive one another. With the addition of ensemble coding, the field is able to explore people perception, or how people perceive groups of other people. This specific research area focuses on how observers accurately perceive and extract social information from groups and how that extracted information influences downstream judgments and behaviors. In 2018, seminal research introducing the use of ensemble coding in the field of social vision was conducted by Briana Goodale. Goodale's research found that humans can accurately extract sex ratio summaries from ensembles of faces and that this sex ratio provides an early visual cue signaling a sense of belonging and fit within a group. Specifically, this research found that participants felt a stronger sense of belonging to a given ensemble as members of their own sex increased in the perceived ensemble. Additional research has uncovered that in as little as 75 milliseconds, participants are able to derive the average sex ratio of an ensemble of faces. Furthermore, within that 75 milliseconds, participants were able to form impressions based on the perceived sex ratio and make inferences about the group's perceived threat. Specifically, this research found that groups were judged as more threatening as the ratio of men to women increased.
**Humor styles**
Humor styles:
Humor styles are a subject of research in the field of personality psychology that focuses on the ways in which individuals differ in their use of humor. People of all ages and cultures respond to humor, but their use of it can vary greatly. There are multiple factors, such as culture, age, and political orientation, that play a role in determining what people find humorous. Although humor styles can be somewhat variable depending on social context, they tend to be a relatively stable personality characteristic among individuals. Humor can play an instrumental role in the formation of social bonds, enabling people to relate to peers or to attract a mate, and can help to release tension during periods of stress. There is a lack of current, reliable research that explores the impact of humor usages on others because it is difficult to distinguish a healthy humor usage from one that is unhealthy. Justifications for harmful versus benign humor styles are subjective and lead to varying definitions of either usage.The Humor Styles Questionnaire (HSQ) has emerged as a different model for understanding the individual differences in humor styles. Humor can enhance individuals' self representation, and can also help to facilitate positive interactions with others. Humor can be both beneficial and detrimental to social relationships. The combination of these factors creates four distinct humor styles: self-enhancing, affiliative, aggressive, and self-defeating. Some styles of humor promote health and well-being, while other styles have the potential to negatively impact both mental and physical health. There are other humor scale surveys that are used to measure different aspects of humor, such as The Situational Humor Response Questionnaire, The Coping Humor Scale, The Sense of Humor Questionnaire, and The Multidimensional Sense of Humor Scale.
The Sense of Humor Questionnaire:
The Sense of Humor Questionnaire was proposed by Sven Svebak in 1974. The original Sense of Humor Questionnaire consisted of 22 items broken into three categories that could be answered on a scale of 1-4. The three categories are: M-items (reactive to humor and implicit messages), L-items (attitude towards humorous people and situations), and E-items (openness to expression of amusement). An example of each type of item is: when I go to the movies I prefer to know ahead what type of story it is (M-item), fun is aimed at hurting another (L-item), do you ever laugh so hard it hurts? (E-item). M-items and L-items use the same scale prompts, 1 = total agreement, 4 = total disagreement, whereas E-items use 1 = very seldom, 4 = very often. However, some of the items could overlap and fit into another group of items. Despite the dimensionality problem, the scores correlated moderately positively with each other (r = .29 to .38). The Sense of Humor Questionnaire was revised to include items on each sub-scale that evaluate each group in more depth. In the revised version of the Sense of Humor Questionnaire, the M- and L-items have strong internal consistency (.60s and .70s), but the E-items have poor internal consistency. Due to this poor internal consistency, E-items were not used in further studies, but M-items were used for the Situational Humor Response Questionnaire and L-items were used for the Coping Humor Scale.
The Coping Humor Scale:
The Coping Humor Scale was created by Rod A. Martin, Fazal Mittu and Herbert M. Lefcourt in 1983. The Coping Humor Scale is a survey of 7 items that assesses how much participants use humor to cope with stress. The responses on the survey are on a 1-4 scale, from strongly disagree (1) to strongly agree (4). The alphas range from .60 to .70, and the 12-week test-retest reliability is .80. While the Coping Humor Scale does not have as high an internal consistency as the Situational Humor Response Questionnaire, it is unique in its "self-observer agreement": the way participants rate themselves is strongly correlated with how their friends rate them on similar content.
The Situational Humor Response Questionnaire:
The Situational Humor Response Questionnaire was created by Martin and Lefcourt in 1984. It is based on Eysenck's definition of humor and is a survey composed of 18 different situations, ranging from everyday events to anxiety-inducing events, plus 3 non-situational items. The three non-situational items are: how desirable it is to the participant to have friends that are easily amused, how much a participant's humor changes depending on the situation, and a self-rating question about how likely the participant is to laugh in different situations. In the Situational Humor Response Questionnaire, humor is defined as how often an individual smiles, laughs, or shows amusement, ignoring the type of humor used. The responses to the survey are on a 1-5 scale, from I would not have been particularly amused (1) to I would have laughed heartily (5). The Situational Humor Response Questionnaire was tested on almost 500 participants in four groups and has alpha coefficients from .70 to .83. Of the participants, 33 were tested again a month later to examine the test-retest reliability, which has an alpha of 0.70. The Situational Humor Response Questionnaire was compared to the Crowne-Marlowe (1960) Social Desirability Scale but had only a .04 correlation, meaning the Situational Humor Response Questionnaire is largely free from the bias of social desirability.
The Multidimensional Sense of Humor Scale:
The Multidimensional Sense of Humor Scale was created by James A. Thorson and F. C. Powell in 1991 and combines elements from the Situational Humor Response Questionnaire, the Coping Humor Scale, and the Sense of Humor Questionnaire. It was created to assess the different elements of sense of humor, such as playfulness, humorous ability, recognition and appreciation of humor, and using humor to achieve social goals or as a coping mechanism. The Multidimensional Sense of Humor Scale is composed of 124 statements with responses on a scale of 1-5 (1 = strongly disagree, 5 = strongly agree). The 124 statements were reduced to 29 with an alpha reliability of .92. The remaining statements are broken into four factors. Factor 1 combines humor production and humor for social uses, Factor 2 combines coping humor and adaptive humor, Factor 3 evaluates humor appreciation, and Factor 4 evaluates the participant's attitude toward humor. Some examples of statements on the Multidimensional Sense of Humor Scale, respective to the factors, are: I use humor to entertain my friends; uses of humor help me master difficult situations; I like a good joke; and people who tell jokes are a pain in the neck.
The Humor Styles Questionnaire (HSQ):
The Humor Styles Questionnaire (HSQ) was developed by Rod Martin and Patricia Puhlik-Doris (2003) to measure individual differences in styles of humor. Humor has been shown to be a personality characteristic that remains relatively stable over time. Humor is sometimes viewed as a one-dimensional trait. However, individuals seem to differ in the ways in which they use humor in their everyday lives, and different styles of humor seem to have different outcomes. As a result, two variables are measured within the questionnaire to cover the multiple dimensions that humor contains. The Humor Styles Questionnaire was developed to identify the ways in which individuals differ in humor styles and how these differences influence health, well-being, relationships, and other outcomes. The Humor Styles Questionnaire is a 32-item self-report inventory used to identify how individuals use humor in their lives. Participants respond to the degree to which they agree with each statement (e.g., "I enjoy making people laugh") on a scale from 1 (totally disagree) to 7 (totally agree). The questionnaire measures two main factors in humor. The first factor measures whether humor is used to enhance the self or to enhance one's relationships with others. The second factor measures whether the humor is relatively benevolent or potentially detrimental and destructive. The combination of these factors creates four distinct humor styles: affiliative, self-enhancing, aggressive, and self-defeating. The reliability of the Humor Styles Questionnaire is questionable. The questionnaire was originally written in English, and because of inexact translations and cultural differences, translated versions frequently contain items that do not produce the anticipated results. When the HSQ is given in its original language, internal consistency is acceptable, with alphas over 0.77 for all scales. However, when translated, the internal consistency alpha varied from .55 (aggressive) to .89 (self-enhancing) in one study, Taher et al. (2008), and from .67 (self-defeating) to .78 (self-enhancing) in another study, Bilge and Saltuk (2007). While most of the styles tested reasonably well, the aggressive humor scale produced the lowest internal consistency values.
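To make the internal-consistency figures quoted above concrete, the sketch below (hypothetical responses, not data from any HSQ study; the item-to-subscale grouping is assumed for illustration) computes Cronbach's alpha for one humor-style subscale using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
# Minimal sketch: Cronbach's alpha for one subscale of a humor questionnaire.
# The response matrix below is invented for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings for one subscale."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical responses from 5 people to a 4-item subscale (1-7 scale).
responses = np.array([
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
    [7, 6, 7, 6],
    [5, 5, 4, 5],
])
print(round(cronbach_alpha(responses), 2))  # high alpha for these correlated items
```

Values near or above .7, as reported for most HSQ scales in the original language, indicate that the items within a subscale behave consistently.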
The Humor Styles Questionnaire (HSQ):
Affiliative humor Affiliative humor is defined as the style of humor used to enhance one's relationships with others in a benevolent, positive manner. This style of humor is typically used in a benevolent, self-accepting way. Individuals high in this dimension often use humor as a way to charm and amuse others, ease tension among others, and improve relationships. They are often spontaneous in their joke telling, frequently participate in witty banter, and enjoy laughing with others. Affiliative humor is similar to self-defeating humor because both styles of humor enhance the relationships with others. However, unlike self-defeating humor, affiliative humor is not used at one's own expense.A number of outcomes are associated with the use of affiliative humor. Individuals who report high levels of affiliative humor are more likely to initiate friendships and less likely to become victims of bullying. In an organizational setting, affiliative humor has been shown to increase group cohesiveness and promote creativity in the workplace. Affiliative humor is also associated with increased levels of (explicit) self-esteem, psychological well-being, emotional stability, and social intimacy. They are also more likely to exhibit higher levels of implicit self-esteem (independently of their level of explicit self-esteem).This style of humor is associated with decreased levels of depressive symptoms and anxiety. Individuals who use affiliative humor tend to have higher levels of extraversion and openness to experience as personality characteristics.Examples of items targeting affiliative humor on the HSQ include: I don't often joke around with my friends. (reversed) I rarely make other people laugh by telling funny stories about myself. (reversed) Self-enhancing humor Self-enhancing humor is a style of humor related to having a good-natured attitude toward life, having the ability to laugh at yourself, your circumstances and the idiosyncrasies of life in constructive, non-detrimental manner. It is used by individuals to enhance the self in a benevolent, positive manner. This type of humor is best understood as a type of coping or emotion-regulating humor in which individuals use humor to look on the bright side of a bad situation, find the silver lining or maintain a positive attitude even in trying times.Self-enhancing humor is associated with a number of personality variables as well as psychological, physical and health-related outcomes. Individuals who engage more in the self-enhancing humor style are less likely to exhibit depressive symptoms. In an organizational setting, self-enhancing humor has been shown to promote creativity and reduce stress in the workplace. The self-enhancing style of humor has also been shown to be related to increased levels of self-esteem, optimism, and psychological well-being, as well as decreased levels of depression and anxiety. Individuals who use the self-enhancing humor style are more likely to exhibit extraversion and openness to experience as personality characteristics and less likely to exhibit neuroticism.Examples of self-enhancing humor on the HSQ include: If I am feeling upset or unhappy I usually try to think of something funny about the situation to make myself feel better.
The Humor Styles Questionnaire (HSQ):
Even when I’m by myself, I’m often amused by the absurdities of life.
The Humor Styles Questionnaire (HSQ):
Aggressive humor Aggressive humor is a style of humor that is potentially detrimental towards others. This type of humor is characterized by the use of sarcasm, put-downs, teasing, criticism, ridicule, and other types of humor used at the expense of others. Aggressive humor often disregards the impact it might have on others. Humor that relies on prejudices such as racism and sexism is considered part of the aggressive style. This type of humor may at times seem like playful fun, but sometimes the underlying intent is to harm or belittle others. Aggressive humor is related to higher levels of neuroticism and lower levels of agreeableness and conscientiousness. Individuals who exhibit higher levels of aggressive humor tend to score higher on measures of hostility and general aggression. Men tend to use aggressive humor more often than women. Examples of aggressive humor on the HSQ might include: When telling jokes or saying funny things, I am usually not very concerned about how other people are taking it.
The Humor Styles Questionnaire (HSQ):
People are never offended or hurt by my sense of humor. (reversed) If you think people are laughing at you, they probably are.
The Humor Styles Questionnaire (HSQ):
Self-defeating humor Self-defeating humor is the style of humor characterized by the use of potentially detrimental humor towards the self in order to gain approval from others. Individuals high in this dimension engage in self-disparaging humor in which laughter is often at their own expense. Self-defeating humor often comes in the form of pleasing others by being the "butt" of the joke. This style of humor is sometimes seen as a form of denial in which humor is used as a defense mechanism for hiding negative feelings about the self.A variety of variables are associated with self-defeating humor. Individuals who more frequently use self-defeating humor show increased depressive symptoms. Individuals who use this style of humor tend to have higher levels of neuroticism and lower levels of agreeableness and conscientiousness. Self-defeating humor is associated with higher levels of depression, anxiety and psychiatric symptoms. It is also associated with lower levels of self-esteem, psychological well-being and intimacy and higher levels of bullying victimization.Examples of self-defeating items on the Humor Styles Questionnaire might include: I often try to make people like or accept me more by saying something funny about my own weaknesses, blunders, or faults.
The Humor Styles Questionnaire (HSQ):
If I am having problems or feeling unhappy, I often cover it up by joking around, so that even my closest friends don’t know how I really feel.
**Offset Press Inc.**
Offset Press Inc.:
Offset Press Inc., also known as OPI, is an Iranian corporation that develops, manufactures, and distributes analogue and digital products and systems for the making, processing, and reproduction of images. While the corporation's focus has changed with time, OPI initially started out developing textbooks. The company is publicly traded in Iran, having been listed on the Tehran Stock Exchange since 1990.
History:
Offset Press was established in 1957 with funding from governmental corporations. In 1962, one of the shareholders sold 600 shares of the company to the Department of Social Services, which remains Offset's principal shareholder. However, with changes to business laws in Iran, OPI became a private corporation in the early 1970s, then reverted to being a public corporation.
Company Structure:
Offset Press currently operates 4 branches throughout Iran. At present, the corporation prints 75 million copies of school books with an average of 160 pages each, 165 million copies of 16-page full-colour newspapers, 35 million copies of magazines with an average of 36 pages each, and 2 million copies of other print media.
**Contact binary (small Solar System body)**
Contact binary (small Solar System body):
A contact binary is a small Solar System body such as a minor planet or a comet that is composed of two bodies that have gravitated toward each other until they touch, resulting in a bilobated, peanut-like overall shape. Contact binaries are often rubble piles but distinct from real binary systems such as binary asteroids. The term is also used for stellar contact binaries.
Contact binary (small Solar System body):
An example of what is thought to be a contact binary is the Kuiper belt object 486958 Arrokoth, which was imaged by the New Horizons spacecraft during its flyby in January 2019.
Description:
Comet Churyumov–Gerasimenko and Comet Tuttle are most likely contact binaries, while asteroids suspected of being contact binaries include the unusually elongated 624 Hektor and the bilobated 216 Kleopatra and 4769 Castalia. 25143 Itokawa, which was photographed by the Hayabusa probe, also appears to be a contact binary which has resulted in an elongated, bent body. Asteroid 4179 Toutatis with its elongated shape, as photographed by Chang'e-2, is a contact binary candidate as well. Among the distant minor planets, the icy Kuiper belt object Arrokoth was confirmed to be a contact binary when the New Horizons spacecraft flew past in 2019.
Candidates:
The table contains objects observed by radar, considered to be contact binaries (candidate objects with a darker background). LCDB = Lightcurve Database.
All of them are near-Earth objects except for Arrokoth.
**224 (number)**
224 (number):
224 (two hundred [and] twenty-four) is the natural number following 223 and preceding 225.
In mathematics:
224 is a practical number, and a sum of two positive cubes 2³ + 6³. It is also 2³ + 3³ + 4³ + 5³, making it one of the smallest numbers to be the sum of distinct positive cubes in more than one way. 224 is the smallest k with λ(k) = 24, where λ(k) is the Carmichael function. The mathematician and philosopher Alex Bellos suggested in 2014 that a candidate for the lowest uninteresting number would be 224 because it was, at the time, "the lowest number not to have its own page on [the English-language version of] Wikipedia".
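These properties are easy to check numerically. The following sketch (illustrative only; the brute-force Carmichael function below is written for clarity, not speed) verifies the two sums of cubes and confirms that 224 is the smallest k with λ(k) = 24.

```python
# Quick numerical check of the properties of 224 listed above.
from math import gcd

def carmichael(n: int) -> int:
    """lambda(n): lcm of the multiplicative orders of all a coprime to n."""
    m = 1
    for a in range(2, n):
        if gcd(a, n) == 1:
            order, x = 1, a % n
            while x != 1:          # multiplicative order of a modulo n
                x = (x * a) % n
                order += 1
            m = m * order // gcd(m, order)
    return m

assert 2**3 + 6**3 == 224
assert 2**3 + 3**3 + 4**3 + 5**3 == 224
assert carmichael(224) == 24
assert all(carmichael(k) != 24 for k in range(1, 224))
print("224 is a sum of cubes in two ways and the smallest k with lambda(k) = 24")
```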
In other areas:
In the SHA-2 family of six cryptographic hash functions, the weakest is SHA-224, named because it produces 224-bit hash values. It was defined in this way so that the number of bits of security it provides (half of its output length, 112 bits) would match the key length of two-key Triple DES. The ancient Phoenician shekel was a standardized measure of silver, equal to 224 grains, although other forms of the shekel employed in other ancient cultures (including the Babylonians and Hebrews) had different measures. Likely not coincidentally, as far away as ancient Burma and Thailand, silver was measured in a unit called a tikal, equal to 224 grains.
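The 224-bit output mentioned above can be seen directly with the Python standard library (the input string here is arbitrary):

```python
# SHA-224 produces a 28-byte (224-bit) digest, i.e. 56 hex characters;
# half of 224 gives the 112 bits of security matching two-key Triple DES.
import hashlib

digest = hashlib.sha224(b"224 (number)").digest()
print(len(digest) * 8)                               # 224
print(hashlib.sha224(b"224 (number)").hexdigest())   # 56 hex characters
```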
**MainActor**
MainActor:
MainActor was video editing software from MainConcept for Windows and, since July 15, 2004, also for Linux. The last version was 5.5, available for the SuSE 10.1 and Ubuntu 6 distributions. The software was originally written on the Amiga.
The software cost €167.23 (US$199.00). A free demo version, which showed a watermark, was available.
MainActor was created and developed by Markus Moenig, the CEO of MainConcept at that time.
In May 2007, MainConcept decided to discontinue the supply and development of MainActor. In November 2007, MainConcept was taken over by DivX, Inc. The software is no longer officially available or supported.
**Qing Lan**
Qing Lan:
Qing Lan is a Chinese physician-scientist and molecular epidemiologist who researches indoor air pollution, lung cancer, and occupational exposures. She is a senior investigator in the occupational and environmental epidemiology branch at the National Cancer Institute.
Life:
Lan earned an M.D. from Weifang Medical University in 1985. In 2001, she completed a Ph.D. in molecular epidemiology at the Chinese Academy of Preventive Medicine, as part of a joint training program with the U.S. Environmental Protection Agency and the University of North Carolina at Chapel Hill. Lan earned an M.P.H. at the Johns Hopkins Bloomberg School of Public Health. Lan was awarded National Institutes of Health (NIH) scientific tenure in 2008. She uses classic epidemiologic methods, exposure assessment approaches, and biomarker platforms to evaluate relationships between exposures and cancer, and to obtain mechanistic insight. Lan is a senior investigator in the occupational and environmental epidemiology branch at the National Cancer Institute (NCI). Her research focuses on the molecular epidemiology of indoor air pollution and lung cancer and occupational exposures to known or suspected carcinogens, as well as the etiology of hematopoietic malignancies. She has conducted molecular epidemiologic studies of populations exposed to well-defined classes of chemical compounds that are known or suspected carcinogens, including polycyclic aromatic hydrocarbons (PAHs), benzene, formaldehyde, trichloroethylene, diesel, carbon black, nanoparticles, and others. Lan and her colleagues apply "omic" technologies in their studies, including metabolomics, genomics, epigenetics, transcriptomics, proteomics, and whole genome sequencing, as well as conduct genome-wide association studies (GWAS).
**Ion trap**
Ion trap:
An ion trap is a combination of electric and/or magnetic fields used to capture charged particles — known as ions — often in a system isolated from an external environment. Atomic and molecular ion traps have a number of applications in physics and chemistry such as precision mass spectrometry, improved atomic frequency standards, and quantum computing. In comparison to neutral atom traps, ion traps have deeper trapping potentials (up to several electronvolts) that do not depend on the internal electronic structure of a trapped ion. This makes ion traps more suitable for the study of light interactions with single atomic systems. The two most popular types of ion traps are the Penning trap, which forms a potential via a combination of static electric and magnetic fields, and the Paul trap which forms a potential via a combination of static and oscillating electric fields.Penning traps can be used for precise magnetic measurements in spectroscopy. Studies of quantum state manipulation most often use the Paul trap. This may lead to a trapped ion quantum computer and has already been used to create the world's most accurate atomic clocks. Electron guns (a device emitting high-speed electrons, used in CRTs) can use an ion trap to prevent degradation of the cathode by positive ions.
History:
The physical principles of ion traps were first explored by F. M. Penning (1894–1953), who observed that electrons released by the cathode of an ionization vacuum gauge follow a long cycloidal path to the anode in the presence of a sufficiently strong magnetic field. A scheme for confining charged particles in three dimensions without the use of magnetic fields was developed by W. Paul based on his work with quadrupole mass spectrometers.
Theory:
A charged particle, such as an ion, feels a force from an electric field. As a consequence of Earnshaw's theorem, it is not possible to confine an ion in an electrostatic field. However, physicists have various ways of working around this theorem by using combinations of static magnetic and electric fields (as in a Penning trap) or by oscillating electric fields (Paul trap). In the case of the latter, a common analysis begins by observing how an ion of charge e and mass M behaves in an a.c. electric field E(t) = E0 cos(Ωt). The force on the ion is given by F = eE, so by Newton's second law we have M d²x/dt² = eE0 cos(Ωt). Assuming that the ion has zero initial velocity, two successive integrations give the velocity and displacement as dx/dt = (eE0/MΩ) sin(Ωt) and x(t) = −(eE0/MΩ²) cos(Ωt) + r0, where r0 is a constant of integration. Thus, the ion oscillates with angular frequency Ω and amplitude proportional to the electric field strength. A trapping potential can be realized by spatially varying the strength of the a.c. electric field.
Theory:
Linear Paul Trap The linear Paul trap uses an oscillating quadrupole field to trap ions radially and a static potential to confine ions axially. The quadrupole field is realized by four parallel electrodes lying along the z-axis, positioned at the corners of a square in the xy-plane. Electrodes diagonally opposite each other are connected and an a.c. voltage V0 cos(Ωt) is applied. An analysis of the symmetry about the z-axis yields a potential φ = α + β(x² − y²). The constants α and β are determined by boundary conditions on the electrodes, and φ satisfies Laplace's equation ∇²φ = 0. Assuming the length of the electrodes is much greater than their separation r0, it can be shown that φ ≈ (V0/2r0²) cos(Ωt)(x² − y²). Since the electric field is given by the gradient of the potential, E = −∇φ = −(V0/r0²) cos(Ωt)(x êx − y êy). Defining τ = Ωt/2, the equations of motion in the xy-plane are a simplified form of the Mathieu equation, d²xi/dτ² = ∓(4eV0/MΩ²r0²) cos(2τ) xi, with the upper sign for the x coordinate and the lower sign for y. Penning Trap A standard configuration for a Penning trap consists of a ring electrode and two end caps. A static voltage differential between the ring and end caps confines ions along the axial direction (between end caps). However, as expected from Earnshaw's theorem, the static electric potential is not sufficient to trap an ion in all three dimensions. To provide the radial confinement, a strong axial magnetic field is applied.
Theory:
For a uniform electric field E = E êx, the force F = eE accelerates a positively charged ion along the x-axis. For a uniform magnetic field B = B êz, the Lorentz force causes the ion to move in circular motion with cyclotron frequency ωc = eB/M. Assuming an ion with zero initial velocity placed in a region with E = E êx and B = B êz, the equations of motion give x(t) = (E/ωcB)(1 − cos(ωct)), y(t) = (E/ωcB)(sin(ωct) − ωct), z = 0. The resulting motion is a combination of oscillatory motion around the z-axis with frequency ωc and a drift velocity along the y-direction. The drift velocity is perpendicular to the direction of the electric field.
Theory:
For the radial electric field produced by the electrodes in a Penning trap, the drift velocity will precess around the axial direction with some frequency ωm, called the magnetron frequency. An ion will also have a third characteristic frequency ωz for its oscillation between the two end cap electrodes. The frequencies usually have widely different values, with ωm ≪ ωz ≪ ωc.
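The motion described above can be checked numerically. The sketch below (illustrative, not from the article; the ion species, field strength and the dimensionless drive parameter q are assumptions) integrates the x-equation of the linear Paul trap written as d²x/dτ² = −2q cos(2τ) x with q = 2eV0/(MΩ²r0²), showing a bounded trajectory for a modest q, and evaluates the Penning-trap cyclotron frequency ωc = eB/M for a singly charged calcium-40 ion in a 1 T field.

```python
# Minimal numerical sketch of the two trap types discussed above.
import numpy as np

def paul_trap_x(q=0.2, x0=1.0, n_periods=50, steps_per_period=2000):
    """Integrate d2x/dtau2 = -2*q*cos(2*tau)*x; one RF period is tau = pi."""
    dtau = np.pi / steps_per_period
    x, v = x0, 0.0
    xs = np.empty(n_periods * steps_per_period)
    for i in range(xs.size):
        tau = i * dtau
        v += -2.0 * q * np.cos(2.0 * tau) * x * dtau   # semi-implicit Euler: velocity
        x += v * dtau                                   # then position
        xs[i] = x
    return xs

xs = paul_trap_x()
# For q = 0.2 (well inside the stability region) the ion stays confined:
# a slow secular oscillation with fast micromotion superimposed.
print("max |x| over 50 RF periods:", round(float(np.max(np.abs(xs))), 2))

# Penning trap: cyclotron frequency of a singly charged calcium-40 ion in 1 T.
e = 1.602176634e-19                      # elementary charge, C
M = 39.962591 * 1.66053906660e-27        # approximate mass of 40Ca+, kg
B = 1.0                                  # axial magnetic field, T
f_c = e * B / M / (2 * np.pi)
print("cyclotron frequency of 40Ca+ in 1 T: %.0f kHz" % (f_c / 1e3))  # ~384 kHz
```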
Ion trap mass spectrometers:
An ion trap mass spectrometer may incorporate a Penning trap (Fourier-transform ion cyclotron resonance), Paul trap or the Kingdon trap. The Orbitrap, introduced in 2005, is based on the Kingdon trap. Other types of mass spectrometers may also use a linear quadrupole ion trap as a selective mass filter.
Ion trap mass spectrometers:
Penning ion trap A Penning trap stores charged particles using a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. Penning traps are well suited for measurements of the properties of ions and stable charged subatomic particles. Precision studies of the electron magnetic moment by Dehmelt and others are an important topic in modern physics.
Ion trap mass spectrometers:
Penning traps can be used in quantum computation and quantum information processing and are used at CERN to store antimatter. Penning traps form the basis of Fourier-transform ion cyclotron resonance mass spectrometry for determining the mass-to-charge ratio of ions. The Penning trap was named after Frans Michel Penning by Hans Georg Dehmelt, who built the first trap in the 1950s.
Ion trap mass spectrometers:
Paul ion trap A Paul trap is a type of quadrupole ion trap that uses static direct current (DC) and radio frequency (RF) oscillating electric fields to trap ions. Paul traps are commonly used as components of a mass spectrometer. The invention of the 3D quadrupole ion trap itself is attributed to Wolfgang Paul who shared the Nobel Prize in Physics in 1989 for this work. The trap consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. Ions are trapped in the space between these three electrodes by the oscillating and static electric fields.
Ion trap mass spectrometers:
Kingdon trap and orbitrap A Kingdon trap consists of a thin central wire, an outer cylindrical electrode and isolated end cap electrodes at both ends. A static applied voltage results in a radial logarithmic potential between the electrodes. In a Kingdon trap there is no potential minimum to store the ions; however, they are stored with a finite angular momentum about the central wire and the applied electric field in the device allows for the stability of the ion trajectories. In 1981, Knight introduced a modified outer electrode that included an axial quadrupole term that confines the ions on the trap axis. The dynamic Kingdon trap has an additional AC voltage that uses strong defocusing to permanently store charged particles. The dynamic Kingdon trap does not require the trapped ions to have angular momentum with respect to the filament. An Orbitrap is a modified Kingdon trap that is used for mass spectrometry. Though the idea has been suggested and computer simulations performed neither the Kingdon nor the Knight configurations were reported to produce mass spectra, as the simulations indicated mass resolving power would be problematic.
Trapped ion quantum computer:
Some experimental work towards developing quantum computers uses trapped ions. Units of quantum information called qubits are stored in stable electronic states of each ion, and quantum information can be processed and transferred through the collective quantized motion of the ions, interacting by the Coulomb force. Lasers are applied to induce coupling between the qubit states (for single qubit operations) or between the internal qubit states and external motional states (for entanglement between qubits).
Cathode ray tubes:
Ion traps were used in television receivers prior to the introduction of aluminized CRT faces around 1958, to protect the phosphor screen from ions. The ion trap must be delicately adjusted for maximum brightness.
**Polymer degradation**
Polymer degradation:
Polymer degradation is the reduction in the physical properties of a polymer, such as strength, caused by changes in its chemical composition. Polymers and particularly plastics are subject to degradation at all stages of their product life cycle, including during their initial processing, use, disposal into the environment and recycling. The rate of this degradation varies significantly; biodegradation can take decades, whereas some industrial processes can completely decompose a polymer in hours.
Polymer degradation:
Technologies have been developed to both inhibit or promote degradation. For instance, polymer stabilizers ensure plastic items are produced with the desired properties, extend their useful lifespans, and facilitate their recycling. Conversely, biodegradable additives accelerate the degradation of plastic waste by improving its biodegradability. Some forms of plastic recycling can involve the complete degradation of a polymer back into monomers or other chemicals.
Polymer degradation:
In general, the effects of heat, light, air and water are the most significant factors in the degradation of plastic polymers. The major chemical changes are oxidation and chain scission, leading to a reduction in the molecular weight and degree of polymerization of the polymer. These changes affect physical properties like strength, malleability, melt flow index, appearance and colour. The changes in properties are often termed "aging".
Susceptibility:
Plastics exist in huge variety, however several types of commodity polymer dominate global production: polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), polyethylene terephthalate (PET, PETE), polystyrene (PS), polycarbonate (PC), and poly(methyl methacrylate) (PMMA). The degradation of these materials is of primary importance as they account for most plastic waste.
These plastics are all thermoplastics and are more susceptible to degradation than equivalent thermosets, as those are more thoroughly cross-linked. The majority (PP, PE, PVC, PS and PMMA) are addition polymers with all-carbon backbones that are more resistant to most types of degradation. PET and PC are condensation polymers which contain carbonyl groups more susceptible to hydrolysis and UV-attack.
Degradation during processing:
Thermoplastic polymers (be they virgin or recycled) must be heated until molten to be formed into their final shapes, with processing temperatures anywhere between 150-320 °C (300–600 °F) depending on the polymer. Polymers will oxidise under these conditions, but even in the absence of air, these temperatures are sufficient to cause thermal degradation in some materials. The molten polymer also experiences significant shear stress during extrusion and moulding, which is sufficient to snap polymer chains. Unlike many other forms of degradation, melt-processing degrades the entire bulk of the polymer, rather than just the surface layers. This degradation introduces chemical weak points into the polymer, particularly in the form of hydroperoxides, which become initiation sites for further degradation during the object's lifetime.
Degradation during processing:
Polymers are often subject to more than one round of melt-processing, which can cumulatively advance degradation. Virgin plastic typically undergoes compounding to introduce additives such as dyes, pigments and stabilisers. Pelletised material prepared in this way may also be pre-dried in an oven to remove trace moisture prior to its final melting and moulding into plastic items. Plastic which is recycled by simple re‑melting (mechanical recycling) will usually display more degradation than fresh material and may have poorer properties as a result.
Degradation during processing:
Thermal oxidation Although oxygen levels inside processing equipment are usually low, oxygen cannot be fully excluded, and thermal oxidation will usually take place more readily than degradation that is exclusively thermal (i.e. without air). Reactions follow the general autoxidation mechanism, leading to the formation of organic peroxides and carbonyls. The addition of antioxidants may inhibit such processes.
Degradation during processing:
Thermal degradation Heating polymers to a sufficiently high temperature can cause damaging chemical changes, even in the absence of oxygen. This usually starts with chain scission, generating free radicals, which primarily engage in disproportionation and crosslinking. PVC is the most thermally sensitive common polymer, with major degradation occurring from ~250 °C (480 °F) onwards; other polymers degrade at higher temperatures.
Degradation during processing:
Thermo-mechanical degradation Molten polymers are non-Newtonian fluids with high viscosities, and the interaction between their thermal and mechanical degradation can be complex. At low temperatures, the polymer melt is more viscous and more prone to mechanical degradation via shear stress. At higher temperatures, the viscosity is reduced, but thermal degradation is increased. Friction at points of high shear can also cause localised heating, leading to additional thermal degradation.
Degradation during processing:
Mechanical degradation can be reduced by the addition of lubricants, also referred to as processing aids or flow aids. These can reduce friction against the processing machinery but also between polymer chains, resulting in a decrease in melt-viscosity. Common agents are high-molecular-weight waxes (paraffin wax, wax esters, etc.) or metal stearates (e.g. zinc stearate).
In-service degradation:
Most plastic items, like packaging materials, are used briefly and only once. These rarely experience polymer degradation during their service-lives. Other items experience only gradual degradation from the natural environment. Some plastic items, however, can experience long service-lives in aggressive environments, particularly those where they are subject to prolonged heat or chemical attack. Polymer degradation can be significant in these cases and, in practice, is often only held back by the use of advanced polymer stabilizers. Degradation arising from the effects of heat, light, air and water is the most common, but other means of degradation exist.
In-service degradation:
The in-service degradation of mechanical properties is an important aspect which limits the applications of these materials. Polymer degradation during service can cause life-threatening accidents. In 1996, a baby being fed via a Hickman line suffered an infection after a hospital began using new connectors; the cause was cracking and erosion of the tubing from the inner side due to contact with the liquid media.
In-service degradation:
Chlorine-induced cracking Drinking water which has been chlorinated to kill microbes may contain trace levels of chlorine. The World Health Organization recommends an upper limit of 5 ppm.
Although low, 5 ppm is enough to slowly attack certain types of plastic, particularly when the water is heated, as it is for washing.
Polyethylene, polybutylene and acetal resin (polyoxymethylene) pipework and fittings are all susceptible. Attack leads to hardening of pipework, which can leave it brittle and more susceptible to mechanical failure.
In-service degradation:
Electronics Plastics are used extensively in the manufacture of electrical items, such as circuit boards and electrical cables. These applications can be harsh, exposing the plastic to a mixture of thermal, chemical and electrochemical attack. Many electric items like transformers, microprocessors or high-voltage cables operate at elevated temperatures for years, or even decades, resulting in low-level but continuous thermal oxidation. This can be exacerbated by direct contact with metals, which can promote the formation of free-radicals, for instance, by the action of Fenton reactions on hydroperoxides. High voltage loads can also damage insulating materials such as dielectrics, which degrade via electrical treeing caused by prolonged electrical field stress.
In-service degradation:
Galvanic action Polymer degradation by galvanic action was first described in the technical literature in 1990 by Michael C. Faudree, an employee at General Dynamics, Fort Worth Division. The phenomenon has been referred to as the "Faudree Effect", and has had implications for preventing corrosion on the YF-22 (F-22 prototype) aircraft for safety such as changes in design. When carbon-fiber-reinforced polymer is attached to a metal surface, the carbon fiber can act as a cathode if exposed to water or sufficient humidity, resulting in galvanic corrosion. This has been seen in engineering when carbon-fiber polymers have been used to reinforce weakened steel structures. Reactions have also been seen in aluminium and magnesium alloys, polymers affected include bismaleimides (BMI), and polyimides. The mechanism of degradation is believed to involve the electrochemical generation of hydroxide ions, which then cleave the amide bonds.
Degradation in the environment:
Most plastics do not biodegrade readily, however, they do still degrade in the environment because of the effects of UV-light, oxygen, water and pollutants. This combination is often generalised as polymer weathering. Chain breaking by weathering causes increasing embrittlement of plastic items, which eventually causes them to break apart. Fragmentation then continues until eventually microplastics are formed. As the particle sizes get smaller, so their combined surface area increases. This facilitates the leaching of additives out of plastic and into the environment. Many controversies associated with plastics actually relate to these additives.
Degradation in the environment:
Photo-oxidation Photo-oxidation is the combined action of UV-light and oxygen and is the most significant factor in the weathering of plastics. Although many polymers do not absorb UV-light, they often contain impurities like hydroperoxide and carbonyl groups introduced during thermal processing, which do. These act as photoinitiators to give complex free radical chain reactions where the mechanisms of autoxidation and photodegradation combine. Photo-oxidation can be held back by light stabilizers such as hindered amine light stabilizers (HALS).
Degradation in the environment:
Hydrolysis Polymers with an all-carbon backbone, such as polyolefins, are usually resistant to hydrolysis. Condensation polymers like polyesters, polyamides, polyurethanes and polycarbonates can be degraded by hydrolysis of their carbonyl groups, to give lower molecular weight molecules. Such reactions are exceedingly slow at ambient temperatures, however, they remain a significant source of degradation for these materials, particularly in the marine environment. Swelling caused by the absorption of minute amounts of water can also cause environmental stress cracking, which accelerates degradation.
Degradation in the environment:
Ozonolysis of rubbers Polymers, which are not fully saturated, are vulnerable to attack by ozone. This gas exists naturally in the atmosphere but is also formed by nitrogen oxides released in vehicle exhaust pollution. Many common elastomers (rubbers) are affected, with natural rubber, polybutadiene, styrene-butadiene rubber and NBR being most sensitive to degradation. The ozonolysis reaction results in immediate chain scission. Ozone cracks in products under tension are always oriented at right angles to the strain axis, so will form around the circumference in a rubber tube bent over. Such cracks are dangerous when they occur in fuel pipes because the cracks will grow from the outside exposed surfaces into the bore of the pipe, and fuel leakage and fire may follow. The problem of ozone cracking can be prevented by adding antiozonants.
Degradation in the environment:
Biological degradation The major appeal of biodegradation is that, in theory, the polymer will be completely consumed in the environment without needing complex waste management and that the products of this will be non-toxic. Most common plastics biodegrade very slowly, sometimes to the extent that they are considered non-biodegradable. As polymers are ordinarily too large to be absorbed by microbes, biodegradation initially relies on secreted extracellular enzymes to reduce the polymers to manageable chain-lengths. This requires that the polymers bear functional groups the enzymes can 'recognise', such as ester or amide groups. Long-chain polymers with all-carbon backbones like polyolefins, polystyrene and PVC will not degrade by biological action alone and must first be oxidised to create chemical groups which the enzymes can attack. Oxidation can be caused by melt-processing or weathering in the environment. Oxidation may be intentionally accelerated by the addition of biodegradable additives. These are added to the polymer during compounding to improve the biodegradation of otherwise very resistant plastics. Similarly, biodegradable plastics have been designed which are intrinsically biodegradable, provided they are treated like compost and not just left in a landfill site where degradation is very difficult because of the lack of oxygen and moisture.
Degradation during recycling:
The act of recycling plastic degrades its polymer chains, usually as a result of thermal damage similar to that seen during initial processing. In some cases, this is turned into an advantage by intentionally and completely depolymerising the plastic back into its starting monomers, which can then be used to generate fresh, un-degraded plastic. In theory, this chemical (or feedstock) recycling offers infinite recyclability, but it is also more expensive and can have a higher carbon footprint because of its energy costs. Mechanical recycling, where the plastic is simply remelted and reformed, is more common, although this usually results in a lower-quality product. Alternatively, plastic may simply be burnt as a fuel in a waste-to-energy process.
Degradation during recycling:
Remelting Thermoplastic polymers like polyolefins can be remelted and reformed into new items. This approach is referred to as mechanical recycling and is usually the simplest and most economical form of recovery. Post-consumer plastic will usually already bear a degree of degradation. Another round of melt-processing will exacerbate this, with the result being that mechanically recycled plastic will usually have poorer mechanical properties than virgin plastic. Degradation can be enhanced by high concentrations of hydroperoxides, cross-contamination between different types of plastic and by additives present within the plastic. Technologies developed to enhance the biodegradation of plastic can also conflict with its recycling, with oxo-biodegradable additives, consisting of metallic salts of iron, magnesium, nickel, and cobalt, increasing the rate of thermal degradation. Depending on the polymer in question, an amount of virgin material may be added to maintain the quality of the product.
Degradation during recycling:
Thermal depolymerisation & pyrolysis As polymers approach their ceiling temperature, thermal degradation gives way to complete decomposition. Certain polymers like PTFE, polystyrene and PMMA undergo depolymerization to give their starting monomers, whereas others like polyethylene undergo pyrolysis, with random chain scission giving a mixture of volatile products. Where monomers are obtained, they can be converted back into new plastic (chemical or feedstock recycling), whereas pyrolysis products are used as a type of synthetic fuel (energy recycling). In practice, even very efficient depolymerisation to monomers tends to see some competitive pyrolysis. Thermoset polymers may also be converted in this way, for instance, in tyre recycling.
Degradation during recycling:
Chemical depolymerisation Condensation polymers bearing cleavable groups such as esters and amides can also be completely depolymerised by hydrolysis or solvolysis. This can be a purely chemical process but may also be promoted by enzymes. Such technologies are less well developed than those of thermal depolymerisation, but have the potential for lower energy costs. Thus far, polyethylene terephthalate has been the most heavily studied polymer. Alternatively, waste plastic may be converted into other valuable chemicals (not necessarily monomers) by microbial action.
Stabilisers:
Hindered amine light stabilizers (HALS) stabilise against weathering by scavenging free radicals that are produced by photo-oxidation of the polymer matrix. UV-absorbers stabilise against weathering by absorbing ultraviolet light and converting it into heat. Antioxidants stabilise the polymer by terminating the free-radical chain reactions initiated by photo-oxidation; left unchecked, these reactions lead to chain scission and crosslinking of the polymer and to degradation of its properties. Antioxidants are also used to protect against thermal degradation.
Detection:
Degradation can be detected before serious cracks are seen in a product using infrared spectroscopy. In particular, peroxy-species and carbonyl groups formed by photo-oxidation have distinct absorption bands.
**Cognitive computing**
Cognitive computing:
Cognitive computing refers to technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.
Definition:
At present, there is no widely agreed upon definition for cognitive computing in either academia or industry.In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain (2004) and helps to improve human decision-making. In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. Cognitive computing applications link data analysis and adaptive page displays (AUI) to adjust content for a particular type of audience. As such, cognitive computing hardware and applications strive to be more affective and more influential by design.
Definition:
The term "cognitive system" also applies to any artificial construct able to perform a cognitive process where a cognitive process is the transformation of data, information, knowledge, or wisdom to a new level in the DIKW Pyramid. While many cognitive systems employ techniques having their origination in artificial intelligence research, cognitive systems, themselves, may not be artificially intelligent. For example, a neural network trained to recognize cancer on an MRI scan may achieve a higher success rate than a human doctor. This system is certainly a cognitive system but is not artificially intelligent.
Definition:
Cognitive systems may be engineered to feed on dynamic data in real-time, or near real-time, and may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
Cognitive analytics:
Cognitive computing-branded technology platforms typically specialize in the processing and analysis of large, unstructured datasets.
Applications:
Education Even if cognitive computing cannot take the place of teachers, it can still be a heavy driving force in the education of students. In the classroom, cognitive computing is applied by essentially giving each individual student a personalized assistant. This cognitive assistant can relieve some of the stress that teachers face while teaching, while also enhancing the student's overall learning experience. Teachers may not be able to pay each and every student individual attention, and this is where cognitive computers can fill the gap. Some students may need a little more help with a particular subject. For many students, human interaction between student and teacher can cause anxiety and can be uncomfortable. With the help of cognitive computer tutors, students do not have to face that uneasiness and can gain the confidence to learn and do well in the classroom. While a student is in class with their personalized assistant, the assistant can develop various techniques, like creating lesson plans, to tailor its support to the student and their needs.
Applications:
Healthcare Numerous tech companies are in the process of developing cognitive computing technology that can be used in the medical field. The ability to classify and identify is one of the main goals of these cognitive devices. This trait can be very helpful in the study of identifying carcinogens. A cognitive system capable of such detection would be able to assist the examiner in interpreting countless documents in less time than would be possible without it. This technology can also evaluate information about the patient, looking through every medical record in depth and searching for indications that could be the source of the patient's problems.
Applications:
Commerce Together with artificial intelligence, cognitive computing has been used in warehouse management systems to collect, store, organize and analyze all related supplier data. All of this aims at improving efficiency, enabling faster decision-making, monitoring inventory and detecting fraud. Human Cognitive Augmentation In situations where humans are using or working collaboratively with cognitive systems, called a human/cog ensemble, results achieved by the ensemble are superior to results obtainable by the human working alone. Therefore, the human is cognitively augmented. In cases where the human/cog ensemble achieves results at, or superior to, the level of a human expert, the ensemble has achieved synthetic expertise. In a human/cog ensemble, the "cog" is a cognitive system employing virtually any kind of cognitive computing technology. Other use cases include speech recognition, sentiment analysis, face detection, risk assessment, fraud detection, and behavioral recommendations.
Industry work:
Cognitive computing, in conjunction with big data and algorithms that comprehend customer needs, can be a major advantage in economic decision-making.
Industry work:
The powers of cognitive computing and artificial intelligence hold the potential to affect almost every task that humans are capable of performing. This can negatively affect employment for humans, as there would be no such need for human labor anymore. It would also increase the inequality of wealth; the people at the head of the cognitive computing industry would grow significantly richer, while workers without ongoing, reliable employment would become less well off. The more industries start to utilize cognitive computing, the more difficult it will be for humans to compete. Increased use of the technology will also increase the amount of work that AI-driven robots and machines can perform. Only extraordinarily talented, capable and motivated humans would be able to keep up with the machines. The influence of competitive individuals in conjunction with artificial intelligence/cognitive computing has the potential to change the course of humankind.
**Power Management Bus**
Power Management Bus:
The Power Management Bus (PMBus) is a variant of the System Management Bus (SMBus) which is targeted at digital management of power supplies. Like SMBus, it is a relatively slow-speed two-wire communications protocol based on I²C. Unlike either of those standards, it defines a substantial number of domain-specific commands rather than merely specifying how to communicate using commands defined elsewhere.
Overview:
The PMBus specification is divided into two parts: the first part gives an overview with particular reference to SMBus, while the second part goes into detail about all the commands defined for PMBus devices. There are both standardized commands and manufacturer-specific commands. Conformance requirements for PMBus are minimal, and are described in Part I of the specification. See the PMBus 1.1 specification for full details.
Comparison to SMBus At the lowest level, PMBus follows SMBus 1.1 with a few differences. This information is presented in more detail in Part I of the PMBus specification: 400 kHz bus speeds are allowed (vs. the 100 kHz limit of SMBus).
In PMBus, blocks may include up to 255 bytes (vs. the 32-byte limit of SMBus).
As in SMBus 2.0, only seven bit addressing is used.
Some commands use the SMBus 2.0 block process calls.
Either the SMBALERT# mechanism or the SMBus 2.0 host notify protocol may be used to notify the host about faults.
PMBus devices are required to support a Group Protocol, where devices defer acting on commands until they receive a terminating STOP. Since commands can be issued to many different devices before that STOP, this lets the PMBus master synchronize their actions.
An "extended command" protocol is defined, using a second command byte to add 256 more codes each for both standard and manufacturer-specific commands.
Overview:
PMBus commands The PMBus command space can be seen as exposing a variety of readable, and often writable, device attributes such as measured voltage and current levels, temperatures, fan speeds, and more. Different devices will expose different attributes. Some devices may expose such attributes in multiple "pages", as for example one page managing each power supply rail (maybe 3.3V, 5V, 12V, −12V, and a programmable supply supporting 1.0-1.8V). The device may set warning and fault limits, where crossing a limit will alert the host and possibly trigger fault recovery. Different devices will offer different capabilities.
Overview:
The ability to query a PMBus 1.1 device about its capabilities may be particularly useful when building tools, especially in conjunction with the ability to store user data in the devices (e.g. in EEPROM). Without such a query capability, only error-prone external configuration data is available.
Part II of the PMBus specification covers every standard PMBus command. It also describes the models for managing output power and current, managing faults, converting values to and from the formats understood by a given device, and accessing manufacturer-provided information such as inventory data (model and serial number, etc.) and device ratings.
Linear11 Floating Point Format PMBus defines its own 16-bit floating point format, termed "Linear11".
Overview:
N = signed exponent (the upper 5 bits), Y = signed mantissa (the lower 11 bits); value represented = Y × 2^N. Unlike the half-precision floating-point format and other typical float formats, a signed 11-bit mantissa is used rather than an unsigned fraction with a separate sign bit. Similarly, the exponent is stored as a signed 5-bit number rather than a more typical biased unsigned number. This has the following implications: The sign of the resulting number depends uniquely on bit 2 of the high byte, rather than the most significant bit of the high byte.
Overview:
Because both values are stored as signed numbers, it is necessary to explicitly sign-extend both values when decoding the number. However, this makes the encoding process simpler.
There is no representation for negative zero.
Inverting the sign of the resulting number must take into account some special edge cases: The sign of the result can be inverted with an 11-bit Two's complement operation, if and only if Y ≠ -1024.
When Y = -1024, the sign inversion process must instead produce Y = 512, N = N + 1, which is only possible if the incremented exponent is still representable (N + 1 ≤ 15).
The most negative representable number has Y = -1024 and N = 15. There is no positive representation for this number.
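As an illustration of the decoding and sign rules above, here is a minimal Python sketch. It is not taken from the PMBus specification itself; it assumes the common layout with the signed 5-bit exponent in bits 15-11 and the signed 11-bit mantissa in bits 10-0, and the function names are ours:

```python
# Minimal sketch of Linear11 decoding/encoding. Not copied from the PMBus spec;
# it assumes the common layout: signed 5-bit exponent N in bits 15..11 and
# signed 11-bit mantissa Y in bits 10..0, both two's complement. Function
# names are illustrative only.

def sign_extend(value: int, bits: int) -> int:
    """Interpret the low `bits` bits of `value` as a two's-complement integer."""
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def linear11_decode(word: int) -> float:
    """Convert a 16-bit Linear11 word to a float: value = Y * 2**N."""
    n = sign_extend(word >> 11, 5)     # exponent from bits 15..11
    y = sign_extend(word & 0x7FF, 11)  # mantissa from bits 10..0
    return y * 2.0 ** n

def linear11_encode(value: float, n: int) -> int:
    """Encode `value` using a caller-chosen exponent n (-16..15)."""
    y = round(value / 2.0 ** n)
    if not -1024 <= y <= 1023:
        raise ValueError("mantissa out of range for the chosen exponent")
    return ((n & 0x1F) << 11) | (y & 0x7FF)

# Example: 0xD2F0 -> N = -6, Y = 752 -> 752 * 2**-6 = 11.75
assert abs(linear11_decode(0xD2F0) - 11.75) < 1e-12
assert linear11_encode(11.75, -6) == 0xD2F0
```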
Patenting issues:
In January 2008, Power-One won a patent infringement suit against Artesyn Technologies over the latter's PMBus-enabled converters. Power-One claims that PMBus applications need a license from them. Potential PMBus users should investigate the issue for themselves. See external links.
**SMURF1**
SMURF1:
E3 ubiquitin-protein ligase SMURF1 is an enzyme that in humans is encoded by the SMURF1 gene. The SMURF1 gene encodes a protein of 757 amino acids with a molecular mass of 86,114 Da.
Function:
Smad ubiquitination regulatory factor 1 (Smurf1) is a ubiquitin ligase that is specific for receptor-regulated SMAD proteins in the bone morphogenetic protein (BMP) pathway.
A similar protein in Xenopus is involved in embryonic pattern formation. Alternative splicing results in multiple transcript variants encoding different isoforms. An additional transcript variant has been identified, but its full length sequence has not been determined.
HIV:
The inhibition of HIV-1 replication in HeLa P4/R5 cells can be achieved by siRNA-mediated knockdown of SMURF1.
Cancer:
Breast SMURF1 and SMURF2 have been shown to exhibit E3 ligase-dependent and E3 ligase-independent activities in a multitude of different cell types, whereby Smurfs can act as tumor promoters or tumor suppressors by regulating biological tumorigenesis-related processes. Recent research in breast cancer describes a relationship between SMURF1 and ER alpha (estrogen receptor alpha) during breast cancer growth. Since ER alpha is expressed in most breast cancers and contributes to the progression of estrogen-dependent cancer, reduction of SMURF1 has been shown to decrease the proliferation of ER alpha-positive cells in vitro and in vivo. Thus, targeting SMURF1 may become a potential therapy for ER alpha-positive breast cancer.
Cancer:
Gastrointestinal Smurf1 may have the potential to act as an oncogenic factor in other essential organs of the body. For instance, high levels of SMURF1 are linked to low survival rates in patients diagnosed with gastric cancer (GC) and clear cell renal cell carcinoma (ccRCC). Similarly to the proposed suppression of SMURF1 in breast cancer, inhibition of Smurf1 can decrease tumorigenesis in various digestive cancer cell models, such as pancreatic and gastric cancers.
Neurodegenerative Disorders:
Continued research shows that SMURF1 can also be linked to various diseases. Downregulation of SMURF1 expression has been observed in neurodegenerative disorders such as Alzheimer's disease and Parkinson's disease. Research suggests that SMURF1 plays a role in neuronal necroptosis: up-regulation of Smurf1 was observed in the brain cortex of adult rats that experienced neuroinflammation, and Smurf1 knockdown with siRNA inhibited neuronal necroptosis. This suggests that Smurf1 may promote neuronal necroptosis in neuroinflammatory conditions.
Neurodegenerative Disorders:
SMURF1 expression was found to be increased in brain tissue samples from Parkinson's disease patients compared to controls, and this increase was positively correlated with the accumulation of α-synuclein aggregates. Furthermore, overexpression of SMURF1 in cultured cells led to increased levels of α-synuclein aggregates, while knockdown of SMURF1 reduced α-synuclein aggregation. In the context of neurodegeneration, SMURF1 has been implicated in the regulation of protein quality control mechanisms such as autophagy and the ubiquitin-proteasome system, which are critical for the clearance of misfolded or aggregated proteins that can contribute to disease pathogenesis.
Neurodegenerative Disorders:
While the exact mechanisms by which SMURF1 contributes to neurodegenerative disorders are still not fully understood, growing evidence suggests that SMURF1 may be a potential target for therapeutic intervention aimed at reducing protein aggregation and improving cellular proteostasis in neurodegenerative diseases.
Post Translational Modifications:
Under the influence of NDFIP1, SMURF1 undergoes auto-ubiquitination. The SMURF1 protein is also modified by the SCF(FBXL15) complex at two lysine residues, Lys-381 and Lys-383, which leads to its degradation by the proteasome; Lys-383 is the primary site of ubiquitination.
Interactions:
Smurfs are composed of several distinct domains that include an N-terminal C2 domain, two to three WW domains containing tryptophan residues, and an HECT domain. The C2 domain plays a crucial role in mediating the interaction of Smurfs with intracellular membranes. On the other hand, the WW domains of Smurfs are typically involved in protein-protein interactions, allowing them to interact with various target proteins. SMURF1 has been shown to interact with: ARHGEF9, PLEKHO1, SMURF2, TRAF4, FBXL15 (via its HECT domain), SMAD7 and TGFBR1.
**Ternary complex**
Ternary complex:
A ternary complex is a protein complex containing three different molecules that are bound together. In structural biology, ternary complex can also be used to describe a crystal containing a protein with two small molecules bound, for example cofactor and substrate, or a complex formed between two proteins and a single substrate. In immunology, ternary complex can refer to the MHC–peptide–T-cell-receptor complex formed when T cells recognize epitopes of an antigen.
Ternary complex:
Another example occurs in eukaryotic translation initiation, where a ternary complex composed of eIF2, GTP and the initiator tRNA (Met-tRNAi) is loaded, together with eIF3, onto the 40S ribosomal subunit.
Ternary complex:
A ternary complex can be a complex formed between two substrate molecules and an enzyme. This is seen in multi-substrate enzyme-catalyzed reactions where two substrates and two products can be formed. The ternary complex is an intermediate between the product formation in this type of enzyme-catalyzed reactions. An example for a ternary complex is seen in random-order mechanism or a compulsory-order mechanism of enzyme catalysis for multi substrates. The term ternary complex can also refer to a polymer formed by electrostatic interactions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Propylphenidate**
Propylphenidate:
Propylphenidate (also known as PPH) is a piperidine based stimulant drug, closely related to methylphenidate, but with the methyl ester replaced by a propyl ester. It was banned in the UK as a Temporary Class Drug from April 2015 following its unapproved sale as a designer drug.
Legal status:
Propylphenidate is illegal in Sweden as of 26 January 2016, and in Finland since 2017.
**Conductance quantum**
Conductance quantum:
The conductance quantum, denoted by the symbol G0, is the quantized unit of electrical conductance. It is defined by the elementary charge e and Planck constant h as: G0 = 2e²/h ≈ 7.748091729...×10⁻⁵ S. It appears when measuring the conductance of a quantum point contact, and, more generally, is a key component of the Landauer formula, which relates the electrical conductance of a quantum conductor to its quantum properties. It is twice the reciprocal of the von Klitzing constant (2/RK).
Conductance quantum:
Note that the conductance quantum does not mean that the conductance of any system must be an integer multiple of G0. Instead, it describes the conductance of two quantum channels (one channel for spin up and one channel for spin down) if the probability for transmitting an electron that enters the channel is unity, i.e. if transport through the channel is ballistic. If the transmission probability is less than unity, then the conductance of the channel is less than G0. The total conductance of a system is equal to the sum of the conductances of all the parallel quantum channels that make up the system.
Derivation:
In a 1D wire connecting two reservoirs of chemical potential μ1 and μ2 adiabatically: The density of states is dn/dE = 2/(hv), where the factor 2 comes from electron spin degeneracy, h is the Planck constant, and v is the electron velocity.
The voltage is V = (μ1 − μ2)/e, where e is the electron charge.
The 1D current carried by the states between the two chemical potentials is I = e·v·(dn/dE)·(μ1 − μ2) = (2e/h)(μ1 − μ2) = (2e²/h)V. This results in a quantized conductance: G0 = I/V = 2e²/h.
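As a quick numerical illustration of the result above and of the Landauer-style sum over parallel channels, the following Python sketch uses the exact SI values of e and h; the transmission probabilities listed are made-up example values, not data from the text:

```python
# Numerical check of G0 = 2e^2/h and a Landauer-style sum over parallel channels.
# e and h are the exact SI values; the transmission list is a made-up example.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

G0 = 2 * e**2 / h
print(f"G0 = {G0:.9e} S")            # ~7.748091729e-05 S
print(f"1/G0 = {1 / G0:.1f} ohm")    # ~12906.4 ohm

# Landauer formula: G = G0 * sum of the channel transmission probabilities T_n
transmissions = [1.0, 1.0, 0.3]      # hypothetical example values
print(f"G = {G0 * sum(transmissions):.3e} S")
```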
Occurrence:
Quantized conductance occurs in wires that are ballistic conductors, when the elastic mean free path is much larger than the length of the wire: l_el ≫ L. B. J. van Wees et al. first observed the effect in a point contact in 1988. Carbon nanotubes have quantized conductance independent of diameter. The quantum Hall effect can be used to precisely measure the conductance quantum value. Quantized conductance also occurs in electrochemical reactions and, in association with the quantum capacitance, defines the rate at which electrons are transferred between quantum chemical states, as described by quantum rate theory.
**Split-step method**
Split-step method:
In numerical analysis, the split-step (Fourier) method is a pseudo-spectral numerical method used to solve nonlinear partial differential equations like the nonlinear Schrödinger equation. The name arises for two reasons. First, the method relies on computing the solution in small steps, and treating the linear and the nonlinear steps separately (see below). Second, it is necessary to Fourier transform back and forth because the linear step is made in the frequency domain while the nonlinear step is made in the time domain.
Split-step method:
An example of usage of this method is in the field of light pulse propagation in optical fibers, where the interaction of linear and nonlinear mechanisms makes it difficult to find general analytical solutions. However, the split-step method provides a numerical solution to the problem. Another application of the split-step method that has been gaining a lot of traction since the 2010s is the simulation of Kerr frequency comb dynamics in optical microresonators. The relative ease of implementation of the Lugiato–Lefever equation with reasonable numerical cost, along with its success in reproducing experimental spectra as well as predicting soliton behavior in these microresonators has made the method very popular.
Description of the method:
Consider, for example, the nonlinear Schrödinger equation ∂A/∂z = −i(β₂/2) ∂²A/∂t² + iγ|A|²A = [D̂ + N̂]A, where A(t,z) describes the pulse envelope in time t at the spatial position z. The equation can be split into a linear part, ∂A_D/∂z = −i(β₂/2) ∂²A_D/∂t² = D̂A_D, and a nonlinear part, ∂A_N/∂z = iγ|A_N|²A_N = N̂A_N.
Both the linear and the nonlinear parts have analytical solutions, but the nonlinear Schrödinger equation containing both parts does not have a general analytical solution.
However, if only a 'small' step h is taken along z, then the two parts can be treated separately with only a 'small' numerical error. One can therefore first take a small nonlinear step, A_N(t, z+h) = exp[iγ|A(t,z)|²h] · A(t,z), using the analytical solution. Note that this ansatz imposes |A(z)|² = const.
and consequently γ ∈ ℝ. The dispersion step has an analytical solution in the frequency domain, so it is first necessary to Fourier transform A_N using Ã_N(ω, z) = ∫ A_N(t, z) exp[i(ω−ω₀)t] dt, where ω₀ is the center frequency of the pulse.
It can be shown that, using the above definition of the Fourier transform, the analytical solution to the linear step, applied in the frequency domain to the solution of the nonlinear step, is Ã(ω, z+h) = exp[i(β₂/2)(ω−ω₀)²h] · Ã_N(ω, z).
Description of the method:
By taking the inverse Fourier transform of Ã(ω, z+h) one obtains A(t, z+h); the pulse has thus been propagated a small step h. By repeating the above N times, the pulse can be propagated over a length of Nh. The above shows how to use the method to propagate a solution forward in space; however, many physics applications, such as studying the evolution of a wave packet describing a particle, require one to propagate the solution forward in time rather than in space. The nonlinear Schrödinger equation, when used to govern the time evolution of a wave function, takes the form iℏ ∂ψ/∂t = −(ℏ²/2m) ∂²ψ/∂x² + γ|ψ|²ψ = [D̂ + N̂]ψ, where ψ(x,t) describes the wave function at position x and time t, D̂ = −(ℏ²/2m) ∂²/∂x² and N̂ = γ|ψ|², m is the mass of the particle and ℏ is the Planck constant divided by 2π. The formal solution to this equation is a complex exponential, so we have ψ(x,t) = e^(−it(D̂+N̂)/ℏ) ψ(x,0). Since D̂ and N̂ are operators, they do not in general commute. However, the Baker–Hausdorff formula can be applied to show that the error from treating them as if they do will be of order dt² if we take a small but finite time step dt. We can therefore write ψ(x, t+dt) ≈ e^(−i dt D̂/ℏ) e^(−i dt N̂/ℏ) ψ(x,t). The part of this equation involving N̂ can be computed directly using the wave function at time t, but to compute the exponential involving D̂ we use the fact that in frequency space the partial derivative operator can be converted into a number by substituting ik for ∂/∂x, where k is the frequency (or more properly, the wave number, as we are dealing with a spatial variable and thus transforming to a space of spatial frequencies, i.e. wave numbers) associated with the Fourier transform of whatever is being operated on. Thus, we take the Fourier transform of e^(−i dt N̂/ℏ) ψ(x,t), recover the associated wave number, compute the quantity e^(−i dt ℏk²/2m), and use it to find the product of the complex exponentials involving N̂ and D̂ in frequency space as below: e^(−i dt ℏk²/2m) F[e^(−i dt N̂/ℏ) ψ(x,t)], where F denotes a Fourier transform. We then inverse Fourier transform this expression to find the final result in physical space, yielding the final expression ψ(x, t+dt) = F⁻¹[ e^(−i dt ℏk²/2m) F[ e^(−i dt N̂/ℏ) ψ(x,t) ] ]. A variation on this method is the symmetrized split-step Fourier method, which takes half a time step using one operator, then takes a full time step with only the other, and then takes a second half time step again with only the first. This method is an improvement upon the generic split-step Fourier method because its error is of order dt³ for a time step dt. The Fourier transforms of this algorithm can be computed relatively quickly using the fast Fourier transform (FFT). The split-step Fourier method can therefore be much faster than typical finite-difference methods.
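The following Python sketch illustrates the symmetrized variant described above for the time-dependent equation, using NumPy's FFT. The units (ℏ = m = 1), grid, step sizes, value of γ and the initial Gaussian are arbitrary choices for demonstration, not taken from the text:

```python
# Sketch of the symmetrized split-step Fourier method for the time-dependent
# nonlinear Schroedinger equation discussed above, in units with hbar = m = 1:
#   i dpsi/dt = -(1/2) d^2 psi/dx^2 + gamma |psi|^2 psi.
# Grid, step sizes, gamma and the initial Gaussian are arbitrary demo choices.
import numpy as np

nx, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)     # angular wave numbers of the grid

gamma, dt, nsteps = -1.0, 1e-3, 2000             # assumed parameters
psi = np.exp(-x**2).astype(complex)              # arbitrary initial wave packet

# Linear half-step in frequency space: exp(-i (dt/2) * k^2 / 2)
half_kinetic = np.exp(-0.25j * dt * k**2)

for _ in range(nsteps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))   # half linear step
    psi *= np.exp(-1j * dt * gamma * np.abs(psi)**2)    # full nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))   # second half linear step

norm = np.sum(np.abs(psi)**2) * (L / nx)
print("norm after propagation:", norm)                  # should stay ~constant
```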
External references:
Thomas E. Murphy, Software, http://www.photonics.umd.edu/software/ssprop/ Andrés A. Rieznik, Software, http://www.freeopticsproject.org Prof. G. Agrawal, Software, http://www.optics.rochester.edu/workgroups/agrawal/grouphomepage.php?pageid=software Thomas Schreiber, Software, http://www.fiberdesk.com Edward J. Grace, Software, http://www.mathworks.com/matlabcentral/fileexchange/24016 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Steep turn (aviation)**
Steep turn (aviation):
A steep turn in aviation, performed by an aircraft (usually fixed wing), is a turn that involves a bank of more than 30 degrees. This means the angle created by the axis running along both wings and the horizon is more than 30 degrees. Generally, for training purposes, steep turns are demonstrated and practiced at 45 degrees, sometimes more. The purpose of learning and practicing a steep turn is to train a pilot to maintain control of an aircraft in cases of emergency such as structural damage, loss of power in one engine etc.
Steep turn (aviation):
Entry procedure for a steep turn involves rolling the aircraft into a bank (left or right) while simultaneously increasing thrust enough to maintain altitude, and pulling back on the flight stick or yoke, which tightens and speeds up the turn. For jet training, an increase of about 7-8% N1 typically suffices. While doing this the pilot has to ensure there is no loss or gain of altitude. The pilot is expected to look outside the aircraft continuously while keeping a close check on the attitude indicator for the angle of bank. When the aircraft is in a 45 degree bank, it is common for a certain amount of opposite aileron control to be required to prevent the aircraft from rolling into a steeper bank.
Tolerances and technicalities:
For purposes of testing, a steep turn is a 360 degree turn in either direction with a 45 degree bank angle while maintaining altitude, speed and bank within certain set tolerances. Furthermore, the roll out heading must be within 10 degrees of the entry heading for the manoeuvre to be deemed successful by most flight training standards and check rides.
Tolerances and technicalities:
A steep turn increases the load factor of an aircraft. Simply put, the aircraft feels heavier due to the effect of centrifugal force. At a 45 degree bank angle the load factor is approximately 1.41, i.e. the aircraft effectively becomes roughly 40% heavier. This requires the pilot to exert backward pressure on the flight stick or column to raise the nose, thereby creating more lift to maintain altitude. If backward pressure is not applied, the aircraft will tend to lose altitude. The increase in required lift also generates additional lift-induced drag, which, without increased power, means the aircraft will lose speed.
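The figures quoted above follow from the standard relation between bank angle and load factor in a level, coordinated turn; a short worked form (general aerodynamics, not specific to any aircraft type) is:

```latex
% Load factor n in a constant-altitude, coordinated turn at bank angle \varphi
% (the vertical component of lift L must balance the weight W):
\[
  n \;=\; \frac{L}{W} \;=\; \frac{1}{\cos\varphi},
  \qquad
  n_{45^\circ} \;=\; \frac{1}{\cos 45^\circ} \;=\; \sqrt{2} \;\approx\; 1.41 .
\]
```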
Parallax error based on pilot seat position:
This applies to cockpits with two seats (usually pilot and co-pilot) arranged side by side. Assuming the pilot performing the manoeuvre is in the left (command) seat, when a steep turn to the right is performed the nose will appear to fall; conversely, a steep turn to the left will make it seem as if the nose is rising against the horizon. This is a parallax error based purely on the pilot's vantage point, and it instinctively causes a pull back or push down reaction on the control stick or column, which is an incorrect response. A good way of eliminating the effect of this error is to keep an eye on the horizon and maintain the aircraft's position relative to the horizon line, which allows the pilot to approximate the 45 degree angle that the top of the instrument panel creates against the horizon.
**Triaugmented hexagonal prism**
Triaugmented hexagonal prism:
In geometry, the triaugmented hexagonal prism is one of the Johnson solids (J57). As the name suggests, it can be constructed by triply augmenting a hexagonal prism by attaching square pyramids (J1) to three of its nonadjacent equatorial faces.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
**Solar eclipse of June 13, 2132**
Solar eclipse of June 13, 2132:
This is a list of solar eclipses that will occur in the 22nd century. During the period 2101 to 2200 there will be 235 solar eclipses of which 79 will be partial, 87 will be annular (five non-central), 65 will be total, and 4 will be hybrids. The greatest number of eclipses in one year will be four, in 11 different years: 2112, 2134, 2141, 2152, 2159, 2170, 2177, 2181, 2188, 2195, and 2199. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Platinum fulminate**
Platinum fulminate:
Platinum fulminate is a primary explosive which is a fulminate salt of platinum discovered by Edmund Davy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Beurre manié**
Beurre manié:
Beurre manié (French "kneaded butter") is a paste, consisting of equal parts by volume of soft butter and flour, used to thicken soups and sauces. By kneading the flour and butter together, the flour particles are coated in butter. When the beurre manié is whisked into a hot or warm liquid, the butter melts, releasing the flour particles without creating lumps. Beurre manié is similar to, but should not be confused with, a roux, which is also a thickener made of equal parts flour and butter (sometimes clarified) or another fat, but which is cooked before use. Beurre manié is also used as a finishing step for sauces, imparting a smooth, shiny texture prior to service.
**Discovery and development of direct Xa inhibitors**
Discovery and development of direct Xa inhibitors:
Four drugs from the class of direct Xa inhibitors are marketed worldwide. Rivaroxaban (Xarelto) was the first approved FXa inhibitor to become commercially available in Europe and Canada in 2008. The second one was apixaban (Eliquis), approved in Europe in 2011 and in the United States in 2012. The third one edoxaban (Lixiana, Savaysa) was approved in Japan in 2011 and in Europe and the US in 2015. Betrixaban (Bevyxxa) was approved in the US in 2017.
History:
Heparin Heparin was discovered by Jay McLean and William Henry Howell in 1916. It was first isolated from canine liver, from which it takes its name ("hepar" is Greek for liver). Heparin targets multiple factors in the blood coagulation cascade, one of them being FXa. At first it had many side effects, but over the following twenty years investigators worked on heparin to make it better and safer. It entered clinical trials in 1935 and the first drug was launched in 1936. Chains of natural heparin can vary from 5,000 to 40,000 Daltons. In the 1980s low molecular weight heparins (LMWH) were developed; they only contain chains with an average molecular weight of less than 8,000 Da.
History:
Warfarin In the 1920s there was an outbreak of a mysterious haemorrhagic cattle disease in Canada and the northern United States. The disease was named sweet clover disease because the cattle had grazed on sweet clover hay. It wasn't until ten years after the outbreak that a local investigator, Karl P. Link, and his student Wilhelm Schoeffel started an intense investigation to find the substance causing the internal bleeding. It took them six years to discover dicoumarol, the causative agent. They patented the substance, and in 1945 Link started selling a coumarin derivative as a rodenticide. He and his colleagues worked on several variations and ended up with a substance they named warfarin in 1948. It wasn't until 1954 that it was approved for medicinal use in humans, making warfarin the first oral anticoagulant drug.
History:
Need for newer and better oral drugs Warfarin treatment requires regular blood monitoring and dose adjustments due to its narrow therapeutic window. If supervision is not adequate, warfarin poses a threat of causing all too frequent haemorrhagic events, and it has multiple interactions with food and other drugs. The main drawback of low molecular weight heparin (LMWH) is the administration route, as it has to be given subcutaneously. Because of these disadvantages there has been an urgent need for better anticoagulant drugs. For a modern society, convenient and fast drug administration is key to good drug compliance. In 2008 the first direct Xa inhibitor was approved for clinical use. Direct Xa inhibitors are just as efficacious as LMWH and warfarin, but they are given orally and do not need such strict monitoring. Other advantages of Xa inhibitors are rapid onset/offset, few drug interactions and predictable pharmacokinetics. The rapid onset/offset effect greatly reduces the need for "bridging" with parenteral anticoagulants after surgeries. Today there are four factor Xa inhibitors marketed: rivaroxaban, apixaban, edoxaban and betrixaban.
History:
Antistasin and tick anticoagulant peptide (TAP) Factor Xa was identified as a promising target for the development of new anticoagulants in the early 1980s. In 1987 the first factor Xa inhibitor, the naturally occurring compound antistasin, was isolated from the salivary glands of the Mexican leech Haementeria officinalis. Antistasin is a polypeptide and a potent Xa inhibitor. In 1990 another naturally occurring Xa inhibitor was isolated, tick anticoagulant peptide (TAP) from extracts of the tick Ornithodoros moubata. TAP and antistasin were used to estimate factor Xa as a drug target.
Mechanism of action:
Blood coagulation is a complex process by which the blood forms clots. It is an essential part of hemostasis and works by stopping blood loss from damaged blood vessels. At the site of injury, where there is an exposure of blood under the endothelium, the platelets gather and immediately form a plug. That process is called primary hemostasis. Simultaneously, a secondary hemostasis occurs. It is defined as the formation of insoluble fibrin by activated coagulation factors, specifically thrombin. These factors activate each other in a blood coagulation cascade that occurs through two separate pathways that interact, the intrinsic and extrinsic pathway. After activating various proenzymes, thrombin is formed in the last steps of the cascade, it then converts fibrinogen to fibrin which leads to clot formation. Factor Xa is an activated serine protease that occupies a key role in the blood coagulation pathway by converting prothrombin to thrombin. Inhibition of factor Xa leads to antithrombotic effects by decreasing the amount of thrombin. Directly targeting factor Xa is suggested to be an effective approach to anticoagulation.
Development:
In 1987 antistasin was tested as the first direct Xa inhibitor. Antistasin is a protein made up of 119 amino acid residues, of which 20 are cysteines involved in 10 disulfide bonds. It acts as a slow, tight-binding inhibitor of factor Xa with a Ki value of 0.3–0.6 nM but it also inhibits trypsin. Recombinant Antistasin can be produced by genetically modified yeast, saccharomyces cerevisiae. Another natural occurring direct Xa-inhibitor, the tick anticoagulant peptide (TAP), was discovered in 1990. It is a single-chain, 60 amino acid peptide and like antistasin it is a slow, tight-binding inhibitor with a similar Ki value (~0.6 nM).
Development:
These two proteins were mostly used to validate factor Xa as a drug target. Animal studies suggested direct Xa inhibition to be a more efficient approach to anticoagulation than direct thrombin inhibition, especially offering a wider therapeutic window and reducing the risk of rebound thrombosis (an increase in thromboembolic events occurring shortly after the withdrawal of an antithrombotic medication) compared to direct and indirect thrombin inhibitors. During the 1990s several low-molecular-weight substances were developed, such as DX-9065a and YM-60828.
Development:
DX-9065a was the first synthetic compound that inhibited FXa without inhibiting thrombin. That was attained by inserting a carboxyl group, which seemed to be the most important moiety for selective binding to FXa. These early small molecules still had amidine groups or even more basic functional groups, which were thought to be necessary as mimics for an arginine residue in prothrombin, the natural substrate of factor Xa. Nevertheless, these basic functions are also related to very poor oral bioavailability (e.g. 2-3% for DX-9065a).
Development:
In 1998 Bayer Healthcare, a pharmaceutical company, started searching for low-molecular-weight direct factor Xa inhibitors with higher oral bioavailability. High-throughput screening and further optimisation at first led to several substances from the class of isoindolinones, demonstrating that much less basic substances can also act as potent Xa inhibitors, with IC50 values down to 2 nM. Although the isoindolinones had better oral bioavailability than the original compounds, it was still not sufficient. However, the project later led to the class of N-aryloxazolidinones, which provides substances with both high potency in inhibiting factor Xa and high bioavailability. One compound of this class, rivaroxaban (IC50 = 0.7 nM, bioavailability 60%), was granted marketing authorization for the prevention of venous thromboembolism in Europe and Canada in September 2008.
Chemistry:
Factor Xa: Structure and binding sites Factors IIa, Xa, VIIa, IXa and XIa are all proteolytic enzymes that have a specific role in the coagulation cascade. Factor Xa (FXa) is the most promising target due to its position at the intersection of the intrinsic and extrinsic pathways, and because each Xa molecule generates around 1000 thrombin molecules, so that its inhibition results in a potent anticoagulant effect. FXa is generated from FX by cleavage of a 52 amino acid activation peptide (the "a" in factor Xa means activated). FXa consists of a 254 amino acid catalytic domain linked to a 142 amino acid light chain. The chain contains both a GLA domain and two epidermal growth factor (EGF)-like domains. The active site of FXa is structured to catalyze the cleavage of physiological substrates; it cleaves PhePheAsnProArg-ThrPhe and TyrIleAspGlyArg-IleVal in prothrombin. FXa has four so-called pockets which are the targets for substrates and inhibitors binding to factor Xa. These pockets are lined by different amino acids, and Xa inhibitors target them when binding to factor Xa. The two most relevant pockets regarding affinity and selectivity for the Xa inhibitors are S1 and S4.
S1: The S1 pocket is a hydrophobic pocket and contains an aspartic acid residue (Asp-189) which can serve as a recognition site for a basic group. FXa has a residual space in the S1 pocket, which is lined by residues Tyr-228, Asp-189 and Ser-195.
S2: The S2 pocket is small and shallow. It merges with the S4 pocket and has room for small amino acids. Tyr-99 seems to block access to this pocket, so it is not as important as S1 and S4.
S3: The S3 pocket is located on the rim of the S1 pocket and is flat and exposed to the solvent. This pocket is not as important as S1 and S4.
Chemistry:
S4: The S4 pocket is hydrophobic in nature and the floor of the pocket is formed by Trp-215 residue. The residues Phe-174 and Tyr-99 of FXa join Trp-215 to form an aromatic box that is able to bind aliphatic, aromatic and positively charged fragments. Because of the binding to positively charged entities, it can be described as a cation hole.
Chemistry:
Chemical structure and properties of direct Xa inhibitors Binding of Xa inhibitors to factor Xa The Xa inhibitors all bind in a so-called L-shaped fashion within the active site of factor Xa. The key constituents of factor Xa are the S1 and S4 binding sites. It was first noted that the natural compounds antistasin and TAP, which possess highly polar and therefore charged components, bind to the target with some specificity. That is why newer drugs were designed with positively charged groups, but those resulted in poor bioavailability. The Xa inhibitors marketed nowadays therefore contain an aromatic ring with various moieties attached for different interactions with the S1 and S4 binding sites. This also ensures good bioavailability while maintaining firm binding strength. The Xa inhibitors currently on the market therefore rely on hydrophobic and hydrogen bonding instead of highly polar interactions.
Chemistry:
Antistasin binding to factor Xa Antistasin contains an N- and a C-terminal domain which are similar in their amino acid sequences with ~40% identity and ~56% homology. Each of them contains a short β-sheet structure and 5 disulfide bonds. Only the N-terminal domain is necessary to inhibit Xa while the C-terminal domain does not contribute to the inhibitory properties due to differences in the 3 dimensional structure, even though the C-terminal domain has a strongly analogue pattern to the actual active site.The interaction of antistasin with FXa involves both the active site and the inactive surface of FXa. The reactive site of antistasin formed by Arg-34 and Val-35 in the N-terminal domain suits the binding site of FXa, most likely the S1 pocket. At the same time, Glu-15 located outside the reactive site of antistasin fits to positively charged residues on the surface of FXa. The multiple binding is thermodynamically advantageous and leads to sub-nanomolar inhibition (Ki = 0.3–0.6 nM).
Chemistry:
DX-9065a binding to factor Xa DX-9065a, the first small molecule direct Xa-inhibitor, is an amidinoaryl derivate with a molecular weight of 571.07g/mol. Its positively charged amidinonaphtalene group forms a salt bridge to the Asp-189 residue in the S1 pocket of FXa. The pyrrolidine ring fits between Tyr-99, Phe-174 and Trp-215 in the S4 pocket of FXa.
Chemistry:
Unlike older drugs, e.g. heparin, DX-9065a is selective for FXa over thrombin, even though FXa and thrombin are similar in structure. This is caused by a difference in the amino acid residue at the homologous position 192. While FXa has a glutamine residue in that position, thrombin has a glutamic acid that causes electrostatic repulsion with the carboxyl group of DX-9065a. In addition, a salt bridge between Glu-97 of thrombin and the amidine group fixed in the pyrrolidine ring of DX-9065a reduces the flexibility of the DX-9065a molecule, which then cannot rotate enough to avoid the electrostatic clash. That is why the IC50 value for thrombin is >1000 µM while the IC50 value for FXa is 0.16 µM.
Chemistry:
Rivaroxaban binding to factor Xa Rivaroxaban binding to FXa is mediated through two hydrogen bonds to the amino acid Gly-219. These two hydrogen bonds serve an important role in directing the drug into the S1 and S4 subsites of FXa. The first hydrogen bond is a strong interaction from the carbonyl oxygen of the oxazolidinone core of rivaroxaban. The second hydrogen bond is a weaker interaction from the amino group of the chlorothiophene carboxamide moiety.
Chemistry:
These two hydrogen bonds result in the drug forming an L-shape and fits in the S1 and S4 pockets. The amino acids residues Phe-174, Tyr-99, and Trp-215 form a narrow hydrophobic channel that is the S4 binding pocket. The morpholinone part of rivaroxaban is “sandwiched” between amino acids Tyr-99 and Phe-174 and the aryl ring of rivaroxaban is oriented perpendicularly across Trp-215. The morpholinone carbonyl group does not have a direct interaction to the FXa backbone, instead, it contributes to a planarization of the morpholinone ring and therefore supports rivaroxaban to be sandwiched between the two amino acids.
Chemistry:
The interaction between the chlorine substituent of the thiophene moiety and the aromatic ring of Tyr-228, which is located at the bottom of the S1 pocket, is very important because it obviates the need for strongly basic groups to achieve high affinity for FXa. This enables rivaroxaban, which is non-basic, to achieve good oral bioavailability as well as potency.
Chemistry:
Apixaban binding to factor Xa Apixaban shows a similar binding mode as rivaroxaban and forms a tight inhibitor-enzyme complex when connected to FXa. The p-methoxy group of apixaban connects to S1 pocket of FXa but does not appear to have any interaction with any residues in this region of FXa. The pyrazole N-2 nitrogen atom of apixaban interacts with Gln-192 and the carbonyl oxygen interacts with Gly-216. The phenyl lactam group of apixaban is positioned between Tyr-99 and Phe-174 and due to its orientation, it is able to interact with Trp-215 of the S4 pocket. The carbonyl oxygen group of the lactam moiety interacts with a water molecule and does not seem to interact with any residues in the S4 pocket.
Chemistry:
Structure-activity relationship (SAR) An important part of designing a compound that is an ideal inhibitor of a given target is understanding the amino acid sequence of the target's binding site. Modelling both prothrombin and FXa makes it possible to identify the differences and the amino acids at each binding site. At the bottom of the S1 pocket of FXa the binding amino acid is Asp-189, to which amidine moieties can bind. X-ray studies of the FXa binding site revealed that the S1 pocket has a planar shape, meaning that a flat amidinoaryl group should bind to it without steric hindrance. Modern direct Xa inhibitors are L-shaped molecules whose ends fit closely into the S1 and S4 pockets. The long side of the L-shape has to conform to a highly specific tunnel within the target's active site. To accomplish that, this part of the molecules is designed to have few formal interactions with FXa in that region. As there is no specific bonding, the fit of these agents between the pockets of FXa increases the overall specificity of the drugs for the FXa molecule. The interaction between the S1 pocket of FXa and the inhibitor can be either ionic or non-ionic, which is important because it allows the design of the moiety to be adjusted to increase oral bioavailability. Previously designed compounds were charged molecules that are not absorbed well in the gastrointestinal tract and therefore did not reach high serum concentrations. The newer drugs have better bioavailability as they are not charged and have a non-ionic interaction with the S1 pocket.
Rivaroxaban During the SAR development of rivaroxaban, researchers realized that adding a 5-chlorothiophene-2-carboxamide group to the oxazolidinone core could increase the potency 200-fold; the potency had previously been too weak for medical use. In addition to this discovery, a clear preference for the (S)-configuration was confirmed. This compound had a promising pharmacokinetic profile and did not contain a highly basic amidine group, which had previously been considered important for the interaction with the S1 pocket. These findings led to extensive structure-activity relationship (SAR) studies. During the SAR testing, R1 was identified as the most important group for potency. Pyrrolidinone was the first R1 functional group to significantly increase the potency, but further research revealed even higher potency with a morpholinone group instead. Groups R2 and R3 had hydrogen or fluorine attached, and it was quickly established that hydrogen resulted in the highest potency. Groups R2 and R3 were then substituted with various other groups, which were all less potent than hydrogen, so hydrogen was retained. As the chlorothiophene moiety had inadequate water solubility, substituting it with another group was attempted but was unsuccessful. The chlorothiophene moiety binds to Tyr-228 at the bottom of the S1 pocket, making it a key factor in binding to FXa. Rivaroxaban has both high affinity and good bioavailability.
Chemistry:
Apixaban During the SAR development of apixaban there were three groups that needed to be tested to attain maximum potency and bioavailability. The first group to be tested was the non-active-site group, as it needed to be stabilized before SAR testing on the p-methoxyphenyl group (the S1 binding moiety). Several groups increase the potency of the compound, mostly amides, amines and tetrazoles but also methylsulfonyl and trifluoromethyl groups. Of these groups, the carboxamide had the greatest binding and had clotting activity similar to that of the other compounds. In dog testing, this compound with a carboxamide group, called 13F, showed a good pharmacokinetic profile, a low clearance and an adequate half-life and volume of distribution. Due to the success of finding a stabilizing group, SAR research for the S1 binding moiety (p-methoxyphenyl) was discontinued. In the S4 binding group, N-methylacetyl and lactam analogues proved to have a very high binding affinity for FXa and showed good clotting activity and selectivity versus other proteases. Orientation turned out to be important, as N-methylacetyl, compared to acetamide, had a 300-fold lower binding affinity for FXa due to unfavorable planarity close to the S4 region binding site.
Chemistry:
Synthesis Rivaroxaban Rivaroxaban chemically belongs to the group of N-aryloxazolidinones. Other drugs of that group are linezolid and tedizolid, both of which are antibiotics. A synthesis of N-aryloxazolidinones starting from an O-silyl-protected ethyl (2,3-dihydroxypropyl)carbamate was published in 2016. In a one-pot reaction the carbamate cyclizes to a 2-oxazolidone ring under slightly basic conditions while, simultaneously, the oxazolidone nitrogen is arylated under copper catalysis. For rivaroxaban in particular, 3-morpholinone substitutes the iodine in the para position of the benzene ring under copper catalysis. Afterwards, the silyl protecting group is removed and the resulting alcohol is replaced by an amino group, which is then acylated in the last step. An industrial preparation of rivaroxaban was registered as a patent by Bayer Healthcare in 2005. It starts from N-(4-aminophenol)-morpholinone, which is alkylated by a propylene oxide derivative that also contains a primary amine protected as a phthalimide. Next, a phosgene equivalent is added to form the 2-oxazolidone ring and the phthalimide is removed. The free amine can now be acylated, which leads to rivaroxaban.
Chemistry:
However, according to the patent the synthesis has “various disadvantages in the reaction management which has particularly unfavourable effects for preparation“. The patent also explains another synthesis starting from a chlorothiophene derivate that would be more suitable for the industrial process but points out that toxic solvents or reagents have to be removed from the final product. Therefore, this way is not an alternative.
Chemistry:
Various other synthesis pathways of rivaroxaban have been described.
Chemistry:
Apixaban The first full synthesis of apixaban was published in 2007. The key step of this reaction is a [3+2] cycloaddition of a p-methoxyphenyl chlorohydrazone derivative and a p-iodophenyl-morpholino-dihydropyridine derivative. After the subsequent elimination of HCl and morpholine, the iodine is substituted by 2-piperidinone under copper catalysis and the ethyl ester is converted to an amide (aminolysis). This reaction was registered as a patent in 2009.
Clinical use:
Direct factor Xa inhibitors are used clinically and their usage is constantly increasing. They are gradually taking over from warfarin and low molecular weight heparins (LMWH). An indication for Xa inhibitors is prevention of deep vein thrombosis (DVT), which can lead to pulmonary embolism. They are also used in atrial fibrillation to lower the risk of stroke caused by a blood clot. Another indication is prophylactic treatment of blood clotting (thrombosis) due to atherosclerosis. Rivaroxaban was the first FXa inhibitor on the market, followed by apixaban, edoxaban and betrixaban.
Future perspectives:
Direct Xa inhibitors in clinical trials Rivaroxaban, apixaban, edoxaban and betrixaban are already on the market. As of October 2016, several new direct Xa inhibitors have entered clinical trials. These are letaxaban from Takeda and eribaxaban from Pfizer.
Antidotes Andexxa (Andexanet alfa) from Portola Pharmaceuticals is a recombinant protein that is given intravenously. It works as an antidote to all direct and indirect FXa inhibitors. Andexxa acts as a decoy receptor for Xa inhibitors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transfer RNA**
Transfer RNA:
Transfer RNA (abbreviated tRNA and formerly referred to as sRNA, for soluble RNA) is an adaptor molecule composed of RNA, typically 76 to 90 nucleotides in length (in eukaryotes), that serves as the physical link between the mRNA and the amino acid sequence of proteins. Transfer RNA (tRNA) does this by carrying an amino acid to the protein synthesizing machinery of a cell called the ribosome. Complementation of a 3-nucleotide codon in a messenger RNA (mRNA) by a 3-nucleotide anticodon of the tRNA results in protein synthesis based on the mRNA code. As such, tRNAs are a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code.
Transfer RNA:
Typically, tRNA genes from Bacteria are shorter (mean = 77.6 bp) than tRNA genes from Archaea (mean = 83.1 bp) and eukaryotes (mean = 84.7 bp). The mature tRNAs follow an opposite pattern, with tRNAs from Bacteria usually being longer (median = 77.6 nt) than tRNAs from Archaea (median = 76.8 nt), and eukaryotes exhibiting the shortest mature tRNAs (median = 74.5 nt).
Overview:
While the specific nucleotide sequence of an mRNA specifies which amino acids are incorporated into the protein product of the gene from which the mRNA is transcribed, the role of tRNA is to specify which sequence from the genetic code corresponds to which amino acid. The mRNA encodes a protein as a series of contiguous codons, each of which is recognized by a particular tRNA. One end of the tRNA matches the genetic code in a three-nucleotide sequence called the anticodon. The anticodon forms three complementary base pairs with a codon in mRNA during protein biosynthesis.
Overview:
On the other end of the tRNA is a covalent attachment to the amino acid that corresponds to the anticodon sequence. Each type of tRNA molecule can be attached to only one type of amino acid, so each organism has many types of tRNA. Because the genetic code contains multiple codons that specify the same amino acid, there are several tRNA molecules bearing different anticodons which carry the same amino acid.
Overview:
The covalent attachment to the tRNA 3’ end is catalysed by enzymes called aminoacyl tRNA synthetases. During protein synthesis, tRNAs with attached amino acids are delivered to the ribosome by proteins called elongation factors, which aid in association of the tRNA with the ribosome, synthesis of the new polypeptide, and translocation (movement) of the ribosome along the mRNA. If the tRNA's anticodon matches the mRNA, another tRNA already bound to the ribosome transfers the growing polypeptide chain from its 3’ end to the amino acid attached to the 3’ end of the newly delivered tRNA, a reaction catalysed by the ribosome. A large number of the individual nucleotides in a tRNA molecule may be chemically modified, often by methylation or deamidation. These unusual bases sometimes affect the tRNA's interaction with ribosomes and sometimes occur in the anticodon to alter base-pairing properties.
Structure:
The structure of tRNA can be decomposed into its primary structure, its secondary structure (usually visualized as the cloverleaf structure), and its tertiary structure (all tRNAs have a similar L-shaped 3D structure that allows them to fit into the P and A sites of the ribosome). The cloverleaf structure becomes the 3D L-shaped structure through coaxial stacking of the helices, which is a common RNA tertiary structure motif. The lengths of each arm, as well as the loop 'diameter', in a tRNA molecule vary from species to species.
Structure:
The tRNA structure consists of the following: The acceptor stem is a 7- to 9-base pair (bp) stem made by the base pairing of the 5′-terminal nucleotide with the 3′-terminal nucleotide (which contains the CCA 3′-terminal group used to attach the amino acid). In general, such 3′-terminal tRNA-like structures are referred to as 'genomic tags'. The acceptor stem may contain non-Watson-Crick base pairs.
Structure:
The CCA tail is a cytosine-cytosine-adenine sequence at the 3′ end of the tRNA molecule. The amino acid loaded onto the tRNA by aminoacyl tRNA synthetases, to form aminoacyl-tRNA, is covalently bonded to the 3′-hydroxyl group on the CCA tail. This sequence is important for the recognition of tRNA by enzymes and critical in translation. In prokaryotes, the CCA sequence is transcribed in some tRNA sequences. In most prokaryotic tRNAs and eukaryotic tRNAs, the CCA sequence is added during processing and therefore does not appear in the tRNA gene.
Structure:
The D loop is a 4- to 6-bp stem ending in a loop that often contains dihydrouridine.
The anticodon loop is a 5-bp stem whose loop contains the anticodon. The tRNA 5′-to-3′ primary structure contains the anticodon but in reverse order, since 3′-to-5′ directionality is required to read the mRNA from 5′-to-3′.
The ΨU loop is named so because of the characteristic presence of the unusual base ΨU in the loop, where Ψ is pseudouridine, a modified uridine. The modified base is often found within the sequence 5' -TΨUCG-3'.
The variable loop sits between the anticodon loop and the ΨU loop and, as its name implies, varies in size from 3 to 21 bases.
Anticodon:
An anticodon is a unit of three nucleotides corresponding to the three bases of an mRNA codon. Each tRNA has a distinct anticodon triplet sequence that can form 3 complementary base pairs with one or more codons for an amino acid. Some anticodons pair with more than one codon due to wobble base pairing. Frequently, the first nucleotide of the anticodon is one not found on mRNA: inosine, which can hydrogen bond to more than one base in the corresponding codon position.: 29.3.9 In the genetic code, it is common for a single amino acid to be specified by all four third-position possibilities, or at least by both pyrimidines and purines; for example, the amino acid glycine is coded for by the codon sequences GGU, GGC, GGA, and GGG. Other modified nucleotides may also appear at the first anticodon position—sometimes known as the "wobble position"—resulting in subtle changes to the genetic code, as for example in mitochondria.
Anticodon:
Per cell, 61 tRNA types are required to provide one-to-one correspondence between tRNA molecules and codons that specify amino acids, as there are 61 sense codons of the standard genetic code. However, many cells have under 61 types of tRNAs because the wobble base is capable of binding to several, though not necessarily all, of the codons that specify a particular amino acid. At least 31 tRNAs are required to translate, unambiguously, all 61 sense codons.
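As a simple illustration of the codon-anticodon relationship described above (complementary bases read in the opposite direction), the following Python sketch derives anticodons for the glycine codons mentioned earlier; it ignores wobble pairing and modified bases such as inosine, and the helper name is ours:

```python
# Small sketch of the codon-anticodon relationship: the anticodon is the
# reverse complement of the mRNA codon (RNA bases). Wobble pairing and
# modified bases such as inosine are ignored; the helper name is illustrative.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon: str) -> str:
    """Return the anticodon (read 5'->3') for an mRNA codon (given 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(codon.upper()))

# The four glycine codons mentioned above:
for c in ("GGU", "GGC", "GGA", "GGG"):
    print(c, "->", anticodon(c))   # ACC, GCC, UCC, CCC
```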
Aminoacylation:
Aminoacylation is the process of adding an aminoacyl group to a compound. It covalently links an amino acid to the CCA 3′ end of a tRNA molecule.
Aminoacylation:
Each tRNA is aminoacylated (or charged) with a specific amino acid by an aminoacyl tRNA synthetase. There is normally a single aminoacyl tRNA synthetase for each amino acid, despite the fact that there can be more than one tRNA, and more than one anticodon for an amino acid. Recognition of the appropriate tRNA by the synthetases is not mediated solely by the anticodon, and the acceptor stem often plays a prominent role.
Aminoacylation:
Reaction: amino acid + ATP → aminoacyl-AMP + PPi
aminoacyl-AMP + tRNA → aminoacyl-tRNA + AMP
Certain organisms can have one or more aminoacyl-tRNA synthetases missing. This leads to charging of the tRNA by a chemically related amino acid; by use of an enzyme or enzymes, the attached amino acid is then modified so that the tRNA ends up correctly charged. For example, Helicobacter pylori has glutaminyl-tRNA synthetase missing. Thus, glutamyl-tRNA synthetase charges tRNA-glutamine (tRNA-Gln) with glutamate. An amidotransferase then converts the acid side chain of the glutamate to the amide, forming the correctly charged Gln-tRNA-Gln.
Aminoacylation:
Interference with aminoacylation may be useful as an approach to treating some diseases: cancerous cells may be relatively vulnerable to disturbed aminoacylation compared to healthy cells. The protein synthesis associated with cancer and viral biology is often very dependent on specific tRNA molecules. For instance, for liver cancer charging tRNA-Lys-CUU with lysine sustains liver cancer cell growth and metastasis, whereas healthy cells have a much lower dependence on this tRNA to support cellular physiology. Similarly, hepatitis E virus requires a tRNA landscape that substantially differs from that associated with uninfected cells. Hence, inhibition of aminoacylation of specific tRNA species is considered a promising novel avenue for the rational treatment of a plethora of diseases.
Binding to ribosome:
The ribosome has three binding sites for tRNA molecules that span the space between the two ribosomal subunits: the A (aminoacyl), P (peptidyl), and E (exit) sites. In addition, the ribosome has two other sites for tRNA binding that are used during mRNA decoding or during the initiation of protein synthesis. These are the T site (named elongation factor Tu) and I site (initiation). By convention, the tRNA binding sites are denoted with the site on the small ribosomal subunit listed first and the site on the large ribosomal subunit listed second. For example, the A site is often written A/A, the P site, P/P, and the E site, E/E. The binding proteins like L27, L2, L14, L15, L16 at the A- and P- sites have been determined by affinity labeling by A. P. Czernilofsky et al. (Proc. Natl. Acad. Sci, USA, pp. 230–234, 1974).
Binding to ribosome:
Once translation initiation is complete, the first aminoacyl tRNA is located in the P/P site, ready for the elongation cycle described below. During translation elongation, tRNA first binds to the ribosome as part of a complex with elongation factor Tu (EF-Tu) or its eukaryotic (eEF-1) or archaeal counterpart. This initial tRNA binding site is called the A/T site. In the A/T site, the A-site half resides in the small ribosomal subunit, where the mRNA decoding site is located. The mRNA decoding site is where the mRNA codon is read out during translation. The T-site half resides mainly on the large ribosomal subunit, where EF-Tu or eEF-1 interacts with the ribosome. Once mRNA decoding is complete, the aminoacyl-tRNA is bound in the A/A site and is ready for the next peptide bond to be formed to its attached amino acid. The peptidyl-tRNA, which transfers the growing polypeptide to the aminoacyl-tRNA bound in the A/A site, is bound in the P/P site. Once the peptide bond is formed, the tRNA in the P/P site is deacylated, i.e. has a free 3’ end, and the tRNA in the A/A site carries the growing polypeptide chain. To allow for the next elongation cycle, the tRNAs then move through the hybrid A/P and P/E binding sites, before completing the cycle and residing in the P/P and E/E sites. Once the A/A and P/P tRNAs have moved to the P/P and E/E sites, the mRNA has also moved over by one codon and the A/T site is vacant, ready for the next round of mRNA decoding. The tRNA bound in the E/E site then leaves the ribosome.
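The succession of binding sites described above can be summarized as a simple state sequence for a single tRNA across consecutive elongation cycles. The sketch below is only an illustrative model of that sequence; the site names follow the small-subunit/large-subunit convention from the text, while the function and variable names are hypothetical.

```python
# Illustrative model of the binding sites a single tRNA occupies, in order,
# from delivery by EF-Tu to its departure from the ribosome.  The hybrid
# A/P and P/E states occur transiently during translocation.
ELONGATION_PATH = [
    "A/T",  # delivered by EF-Tu (or eEF-1/archaeal counterpart); decoding occurs
    "A/A",  # decoding complete; peptide bond forms to the attached amino acid
    "A/P",  # hybrid state: now carries the growing chain as translocation begins
    "P/P",  # acts as the peptidyl-tRNA for the next round of elongation
    "P/E",  # hybrid state during the following translocation
    "E/E",  # exit site; the deacylated tRNA then leaves the ribosome
]

def trace_single_trna(path=ELONGATION_PATH):
    """Print the ordered succession of ribosomal sites for one tRNA."""
    print(" -> ".join(path))

trace_single_trna()   # A/T -> A/A -> A/P -> P/P -> P/E -> E/E
```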
Binding to ribosome:
The P/I site is the first to bind aminoacyl-tRNA, which is delivered by an initiation factor called IF2 in bacteria. However, the existence of the P/I site in eukaryotic or archaeal ribosomes has not yet been confirmed. The P-site protein L27 was identified by affinity labeling by E. Collatz and A. P. Czernilofsky (FEBS Lett., Vol. 63, pp. 283–286, 1976).
tRNA genes:
Organisms vary in the number of tRNA genes in their genome. For example, the nematode worm C. elegans, a commonly used model organism in genetics studies, has 29,647 genes in its nuclear genome, of which 620 code for tRNA. The budding yeast Saccharomyces cerevisiae has 275 tRNA genes in its genome. The number of tRNA genes per genome can vary widely, with bacterial species from groups such as Fusobacteria and Tenericutes having around 30 genes per genome, while complex eukaryotic genomes, such as that of the zebrafish (Danio rerio), can bear more than 10,000 tRNA genes. In the human genome, which according to January 2013 estimates has about 20,848 protein-coding genes in total, there are 497 nuclear genes encoding cytoplasmic tRNA molecules, and 324 tRNA-derived pseudogenes, that is, tRNA genes thought to be no longer functional (although pseudo-tRNAs have been shown to be involved in antibiotic resistance in bacteria). As in most other animals, there are 22 mitochondrial tRNA genes in humans. Mutations in some of these genes have been associated with severe diseases like the MELAS syndrome. Regions in nuclear chromosomes that are very similar in sequence to mitochondrial tRNA genes have also been identified (tRNA-lookalikes). These tRNA-lookalikes are also considered part of the nuclear mitochondrial DNA (genes transferred from the mitochondria to the nucleus). The phenomenon of multiple nuclear copies of mitochondrial tRNA (tRNA-lookalikes) has been observed in many higher organisms, from humans to the opossum, suggesting the possibility that the lookalikes are functional.
tRNA genes:
Cytoplasmic tRNA genes can be grouped into 49 families according to their anticodon features. These genes are found on all chromosomes except chromosome 22 and the Y chromosome. High clustering is observed on chromosome 6p (140 tRNA genes), as well as on chromosome 1. The HGNC, in collaboration with the Genomic tRNA Database (GtRNAdb) and experts in the field, has approved unique names for human genes that encode tRNAs.
tRNA genes:
Evolution The top half of tRNA (consisting of the T arm and the acceptor stem with the 5′-terminal phosphate group and 3′-terminal CCA group) and the bottom half (consisting of the D arm and the anticodon arm) are independent units in structure as well as in function. The top half may have evolved first, including the 3′-terminal genomic tag which originally may have marked tRNA-like molecules for replication in the early RNA world. The bottom half may have evolved later as an expansion, e.g. as protein synthesis started in the RNA world and turned it into a ribonucleoprotein world (RNP world). This proposed scenario is called the genomic tag hypothesis. In fact, tRNA and tRNA-like aggregates still have an important catalytic influence (i.e., as ribozymes) on replication today. These roles may be regarded as 'molecular (or chemical) fossils' of the RNA world. Genomic tRNA content is a differentiating feature of genomes among the biological domains of life: Archaea present the simplest situation in terms of genomic tRNA content, with a uniform number of gene copies, Bacteria have an intermediate situation, and Eukarya present the most complex situation. Eukarya present not only more tRNA gene content than the other two domains but also a high variation in gene copy number among different isoacceptors, and this complexity seems to be due to duplications of tRNA genes and changes in anticodon specificity.
tRNA genes:
Evolution of the tRNA gene copy number across different species has been linked to the appearance of specific tRNA modification enzymes (uridine methyltransferases in Bacteria and adenosine deaminases in Eukarya), which increase the decoding capacity of a given tRNA. As an example, tRNA-Ala is encoded by four different isoacceptors (anticodons AGC, UGC, GGC and CGC). In Eukarya, the AGC isoacceptor is extremely enriched in gene copy number in comparison to the rest of the isoacceptors, and this has been correlated with the A-to-I modification of its wobble base. The same trend has been shown for most amino acids of eukaryal species. Indeed, the effect of these two tRNA modifications is also seen in codon usage bias: highly expressed genes seem to be enriched in codons that are decoded by these modified tRNAs, which suggests a possible role for these codons (and consequently for these tRNA modifications) in translation efficiency. It is important to note that many species have lost specific tRNAs during evolution. For instance, both mammals and birds lack the same 14 out of the possible 64 tRNA genes, but other life forms contain these tRNAs. For translating codons for which an exactly pairing tRNA is missing, organisms resort to a strategy called wobbling, in which imperfectly matched tRNA/mRNA pairs still give rise to translation, although this strategy also increases the propensity for translation errors. The reasons why tRNA genes have been lost during evolution remain under debate, but may relate to improved resistance to viral infection. Because nucleotide triplets can present more combinations than there are amino acids and associated tRNAs, there is redundancy in the genetic code, and several different 3-nucleotide codons can express the same amino acid. This redundancy, together with species-specific codon usage bias, is what motivates codon optimization.
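The effect of A-to-I editing at the wobble position can be made concrete with a short sketch. Under one common formulation of the wobble rules (inosine pairing with U, C, or A; anticodon G with U or C; and so on), an unmodified tRNA-Ala(AGC) reads only GCU, while the edited anticodon IGC reads GCU, GCC, and GCA. This is an added illustration with hypothetical names, not text from the source, and real decoding also depends on other modifications.

```python
# Sketch: codons readable by a tRNA anticodon under simplified wobble rules.
# The pairing table is one common textbook formulation of Crick's wobble rules.
WOBBLE_PARTNERS = {"I": "UCA", "G": "UC", "U": "AG", "C": "G", "A": "U"}
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def decoded_codons(anticodon):
    """Codons (5'->3') readable by a tRNA with this anticodon (5'->3'),
    allowing non-Watson-Crick pairing only at the wobble position."""
    first_two = "".join(COMPLEMENT[b] for b in reversed(anticodon[1:]))
    return [first_two + third for third in WOBBLE_PARTNERS[anticodon[0]]]

print(decoded_codons("AGC"))   # ['GCU']               unmodified alanine tRNA
print(decoded_codons("IGC"))   # ['GCU', 'GCC', 'GCA'] after A-to-I editing
```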
tRNA genes:
tRNA-derived fragments tRNA-derived fragments (or tRFs) are short molecules that emerge after cleavage of mature tRNAs or of the precursor transcript. Both cytoplasmic and mitochondrial tRNAs can produce fragments. There are at least four structural types of tRFs believed to originate from mature tRNAs, including the relatively long tRNA halves and the short 5’-tRFs, 3’-tRFs and i-tRFs. The precursor tRNA can be cleaved to produce molecules from the 5’ leader or 3’ trailer sequences. Cleavage enzymes include Angiogenin, Dicer, RNase Z and RNase P. Especially in the case of Angiogenin, the tRFs have a characteristically unusual cyclic phosphate at their 3’ end and a hydroxyl group at the 5’ end. tRFs appear to play a role in RNA interference, specifically in the suppression of retroviruses and retrotransposons that use tRNA as a primer for replication. Half-tRNAs cleaved by angiogenin are also known as tiRNAs. The biogenesis of smaller fragments, including those that function as piRNAs, is less well understood. tRFs have multiple dependencies and roles; for example, their levels differ significantly between sexes, among races, and with disease status. Functionally, they can be loaded onto Ago proteins and act through RNAi pathways, participate in the formation of stress granules, displace mRNAs from RNA-binding proteins, or inhibit translation. At the system or organismal level, the four types of tRFs have a diverse spectrum of activities. Functionally, tRFs are associated with viral infection, cancer, cell proliferation and also with epigenetic transgenerational regulation of metabolism. tRFs are not restricted to humans and have been shown to exist in multiple organisms. Two online tools are available for those wishing to learn more about tRFs: the framework for the interactive exploration of mitochondrial and nuclear tRNA fragments (MINTbase) and the relational database of transfer RNA related fragments (tRFdb). MINTbase also provides a genome-independent naming scheme for tRFs, called tRF license plates (or MINTcodes); the scheme compresses an RNA sequence into a shorter string.
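As a rough illustration of the structural categories named above, the sketch below classifies a fragment by where it maps on a mature tRNA sequence. It is a simplification with an invented length threshold, not the formal definitions used by MINTbase or tRFdb, and the sequences are made up.

```python
# Simplified tRF typing by alignment position on the mature tRNA.
# The "half" length tolerance is arbitrary; real classification schemes
# use more detailed, sequence-anchored definitions.
def classify_trf(mature_trna, fragment, half_tolerance=3):
    start = mature_trna.find(fragment)
    if start == -1:
        return "not a fragment of this tRNA"
    end = start + len(fragment)
    if abs(len(fragment) - len(mature_trna) // 2) <= half_tolerance:
        return "tRNA half"
    if start == 0:
        return "5'-tRF"
    if end == len(mature_trna):
        return "3'-tRF (includes the CCA end)"
    return "i-tRF (wholly internal)"

# Toy example with an invented 20-nt "tRNA", just to show the call pattern:
toy_trna = "GGGCUAUAGCUCAGCUGCCA"
print(classify_trf(toy_trna, "GGGCU"))   # 5'-tRF
print(classify_trf(toy_trna, "UGCCA"))   # 3'-tRF (includes the CCA end)
```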
tRNA genes:
Engineered tRNAs Artificial suppressor elongator tRNAs are used to incorporate unnatural amino acids at nonsense codons placed in the coding sequence of a gene. Engineered initiator tRNAs (tRNAfMet2 with a CUA anticodon, encoded by the metY gene) have been used to initiate translation at the amber stop codon UAG. This type of engineered tRNA is called a nonsense suppressor tRNA because it suppresses the translation stop signal that normally occurs at UAG codons. The amber initiator tRNA inserts methionine and glutamine at UAG codons preceded by a strong Shine-Dalgarno sequence. An investigation of the amber initiator tRNA showed that it was orthogonal to the regular AUG start codon, showing no detectable off-target translation initiation events in a genomically recoded E. coli strain.
tRNA biogenesis:
In eukaryotic cells, tRNAs are transcribed by RNA polymerase III as pre-tRNAs in the nucleus.
RNA polymerase III recognizes two highly conserved downstream promoter sequences: the 5′ intragenic control region (5′-ICR, D-control region, or A box), and the 3′-ICR (T-control region or B box) inside tRNA genes.
The first promoter begins at position +8 of the mature tRNA, and the second promoter is located 30–60 nucleotides downstream of the first. Transcription terminates after a stretch of four or more thymidines.
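The terminator rule lends itself to a one-line search. The sketch below (a hypothetical helper, not from the source) scans a coding-strand DNA sequence downstream of a tRNA gene for the first run of four or more thymidines, the signal after which Pol III transcription terminates.

```python
# Find the first run of >= 4 consecutive T's (Pol III-style terminator)
# in a DNA sequence given as the coding (non-template) strand.
import re

def find_terminator(downstream_dna, min_t=4):
    """Return (position, run) of the first T-run of length >= min_t, or None."""
    match = re.search("T{%d,}" % min_t, downstream_dna.upper())
    return (match.start(), match.group()) if match else None

print(find_terminator("ACGGATTTTTCGA"))   # (5, 'TTTTT')
```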
tRNA biogenesis:
Pre-tRNAs undergo extensive modifications inside the nucleus. Some pre-tRNAs contain introns that are spliced, or cut, to form the functional tRNA molecule; in bacteria these self-splice, whereas in eukaryotes and archaea they are removed by tRNA-splicing endonucleases. Eukaryotic pre-tRNA contains a bulge-helix-bulge (BHB) structural motif that is important for recognition and precise splicing of the tRNA intron by endonucleases. The position and structure of this motif are evolutionarily conserved. However, some organisms, such as unicellular algae, have a non-canonical position of the BHB motif, as well as of the 5′ and 3′ ends of the spliced intron sequence.
tRNA biogenesis:
The 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme.
A notable exception is in the archaeon Nanoarchaeum equitans, which does not possess an RNase P enzyme and has a promoter placed such that transcription starts at the 5′ end of the mature tRNA.
The non-templated 3′ CCA tail is added by a nucleotidyl transferase.
Before tRNAs are exported into the cytoplasm by Los1/Xpo-t, they are aminoacylated.
The order of the processing events is not conserved.
For example, in yeast, the splicing is not carried out in the nucleus but at the cytoplasmic side of mitochondrial membranes. Separately, in March 2021, researchers reported evidence suggesting that a preliminary form of transfer RNA could have been a replicator molecule in the very early development of life, or abiogenesis.
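The nuclear processing steps listed above can be pictured as a small pipeline acting on a toy sequence. In the sketch below the sequence, leader/trailer lengths and the order of steps are invented for illustration; as just noted, the real order of events is not conserved, and intron splicing (when an intron is present) is omitted.

```python
# Toy pre-tRNA processing pipeline: trim the 5' leader (RNase P), trim the
# 3' trailer (tRNase Z), then add the non-templated CCA (nucleotidyl transferase).
def process_pre_trna(pre, leader_len, trailer_len):
    body = pre[leader_len:]                               # RNase P cleavage
    body = body[:-trailer_len] if trailer_len else body   # tRNase Z cleavage
    return body + "CCA"                                   # CCA addition

toy_pre = "gggAUCGGCUAUAGCUCAGuuu"   # lowercase = invented leader/trailer
print(process_pre_trna(toy_pre, leader_len=3, trailer_len=3))
# -> AUCGGCUAUAGCUCAGCCA
```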
History:
The existence of tRNA was first hypothesized by Francis Crick as the "adaptor hypothesis", based on the assumption that there must exist an adapter molecule capable of mediating the translation of the RNA alphabet into the protein alphabet. Paul C. Zamecnik and Mahlon Hoagland discovered tRNA. Significant research on its structure was conducted in the early 1960s by Alex Rich and Donald Caspar, two researchers in Boston, the Jacques Fresco group at Princeton University, and a United Kingdom group at King's College London. In 1965, Robert W. Holley of Cornell University reported the primary structure and suggested three secondary structures. tRNA was first crystallized in Madison, Wisconsin, by Robert M. Bock. The cloverleaf structure was ascertained by several other studies in the following years and was finally confirmed using X-ray crystallography studies in 1974. Two independent groups, Kim Sung-Hou working under Alexander Rich and a British group headed by Aaron Klug, published the same crystallography findings within a year.