Columns: text (string, lengths 60 to 353k); source (string, 2 distinct values)
**Earthquake rotational loading** Earthquake rotational loading: Earthquake rotational loading refers to the excitation of structures by the torsional and rocking components of seismic actions. Nathan M. Newmark was the first researcher to show that this type of loading may result in unexpected failure of structures and that its influence should be considered in design codes. Various phenomena can lead to the earthquake rotational loading of structures, such as the propagation of body waves, surface waves, and special rotational waves, block rotation, topographic effects, and soil-structure interaction. One of the challenges in structural engineering is defining reliable and accurate loading patterns for the design of earthquake-resistant structures based on all components of seismic motion, i.e., three translational and three rotational components. From an earthquake engineering perspective, it is usually assumed that the rotational components of strong ground motion are induced by the spatial variation of the seismic waves; consequently, these components are estimated in terms of the corresponding translational components. When the earthquake shaking can be specified at a single point, the rotational loading of structures can be represented by point rotation, which corresponds to the gradient of the ground motion at a point on the ground surface. Most investigations of earthquake rotational loading that consider the effect of point rotation on structural behavior have shown that the rotational components, depending on their frequency content, can severely change the dynamic behavior of structures that are sensitive to high-frequency motions, such as secondary systems, historical monuments, nuclear reactors, tall asymmetric buildings or irregular frames, slender tower-shaped structures, bridges, vertically irregular structures, and even ordinary multi-story buildings. The contribution of the rotational components to the seismic response of structures supported on a rigid mat foundation can be further amplified if the effects of kinematic and dynamic soil-structure interaction are considered in structural loading and modeling. In a recent study, the combined action of rotational loading and multi-support excitation on the seismic behavior of short-span bridges was investigated. The numerical results suggested that, depending on structural properties and excitation characteristics, the rotational components reduce the beneficial effects of multi-support excitation on the structural response. Although the rotational components may significantly affect the seismic behavior of structures, their influence is not currently considered in most modern design codes. The main reasons for this omission may be attributed to: (1) a lack of sufficient recorded data on rotational accelerations; (2) difficulty in presenting a quantitative assessment of the rotational acceleration components for given translational components; (3) complexity in deriving simplified seismic loading patterns for structures subjected to rotational excitations; and (4) a lack of commercial computing programs for structural analysis. To better understand the effects of the rotational components on the seismic behavior of structures, new seismic intensity parameters have recently been proposed to evaluate the contribution of the rotational components to the structural response.
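Since the passage above notes that rotational components are usually estimated from the corresponding translational components, the following is a minimal illustrative sketch of that idea under the classical plane-wave (point-rotation) assumption, in which the rotation rate equals the translational motion gradient divided by an apparent wave velocity. The record, sampling step, and apparent velocity used here are placeholder assumptions, not values from the article.

```python
# Illustrative sketch (not a design procedure): estimating point-rotation
# components from translational accelerograms under a plane-wave assumption.
# All numerical values below are placeholders.
import numpy as np

def rocking_acceleration(acc_z, dt, apparent_velocity):
    """Rocking acceleration about a horizontal axis.

    For a plane wave travelling along x with apparent velocity c, the rocking
    angle is d(u_z)/dx = -(1/c) d(u_z)/dt, so the rocking acceleration is
    -(1/c) times the time derivative of the vertical acceleration record.
    """
    return -np.gradient(acc_z, dt) / apparent_velocity   # rad/s^2

def torsional_acceleration(acc_y, dt, apparent_velocity):
    """Torsion about the vertical axis: half the transverse gradient of the
    horizontal motion under the same plane-wave assumption."""
    return -0.5 * np.gradient(acc_y, dt) / apparent_velocity

# Synthetic 5 Hz vertical record sampled at 100 Hz (placeholder data)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
acc_z = 0.3 * 9.81 * np.sin(2 * np.pi * 5 * t) * np.exp(-0.5 * t)   # m/s^2
rocking = rocking_acceleration(acc_z, dt, apparent_velocity=2000.0)  # c in m/s
print(f"peak rocking acceleration: {np.abs(rocking).max():.4e} rad/s^2")
```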
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rainbow Raider** Rainbow Raider: Rainbow Raider (Roy G. Bivolo) is a fictional supervillain appearing in comic books by DC Comics. His real name is a pun based on the acronym "ROYGBIV", a mnemonic for the colors of a rainbow. He is a minor, though recurring, enemy of the Flash and other heroes. Two incarnations of the Rainbow Raider appear in The Flash, with Roy G. Bivolo appearing in the first and ninth seasons, portrayed by Paul Anthony, and a female incarnation named Carrie Bates appearing in the seventh season, portrayed by Jona Xiao. Publication history: Rainbow Raider first appeared in The Flash #286 (June 1980), and was created by Cary Bates and Don Heck. Bates said in a 2008 interview that "Rainbow Raider's color-blindness (as well as the color-emotion powers and origin) was an attempt on his part to emulate those classic Rogues' Gallery villain origins Bates enjoyed so much from the sixties". Bates elaborated on the character's creation, stating, "Having grown up on a Flash Rogue’s gallery full of villains who were adept at weaponizing things like mirrors, cold, heat, magic, boomerangs, etc., Julie and I thought the color spectrum gimmick had the potential to be a worthwhile addition." Fictional character biography: As a child, Roy G. Bivolo always dreamed of a career as an artist, a lofty goal considering he was completely colorblind. He would often paint what he thought were beautiful pieces of art, and indeed showed great technical skill, only to be told that his work was made up of clashing colors. His father, an optometrist and genius in optical technology, swore he would find a cure for his son's disorder. Due to failing health, he was unable to complete the cure, but instead created a sophisticated pair of goggles that would allow Roy to create beams of solid rainbow-colored light. On his deathbed, his father presented him with this gift, and it was not long before Roy found a sinister use for it. Fictional character biography: Turning to crime because the world did not appreciate his art, Roy, now the Rainbow Raider, went on a crime spree focused mostly on art galleries, saying that if he could not appreciate the great works of art in them (due to his disability), then no one else would. During this time he often clashed with the Flash, sparking a rivalry that would last several years. Some years later he would fight Booster Gold as well. Rainbow Raider becomes the mind-addled slave of a crime lord in one of many alternate futures within the Armageddon 2001 storyline. He is a central plot point in the first issue of the Underworld Unleashed storyline because even Neron, the demonic antagonist, considered him pathetic, even calling him a "paramecium". Rainbow Raider once traded opponents with Batman villain Doctor Double X after meeting a motivational therapist named Professor Andrea Wye. Both of them are defeated by Batman and Flash. He later becomes a minor enemy of the Justice League, appearing briefly at a villains' gathering. Rainbow Raider later takes part in the riot in the super-hero prison of Belle Reve Penitentiary (he is quickly defeated by a single punch from Zauriel).
During his time at Belle Reve, he was part of the Color Queens prison gang alongside Crazy Quilt, Doctor Light, Doctor Spectro, and Multi-Man. Roy is slain by the villainess Blacksmith, who impales him with his latest work of art. During the Blackest Night storyline, Rainbow Raider is one of the many deceased characters temporarily reanimated as a zombie within the Black Lantern Corps. In 2011, "The New 52" rebooted the DC universe. Roy uses the alias of Chroma, rather than Rainbow Raider. During the Forever Evil storyline, Chroma was present in Central City when Gorilla Grodd invaded the city with his army of gorillas. He, Girder, and Tar Pit see Pied Piper defeated by Gorilla Grodd. After Gorilla Grodd punches Girder until he crumbles, Chroma runs away with Tar Pit. Gorilla Grodd later kills Chroma to serve as a warning to the other villains that the Gem Cities are his. When Solovar is chained up, the heads of Chroma and the Mayor of Central City are placed around him. Chroma later appears somehow alive and intact. He and Tar Pit rob jewelry stores until they are stopped by Flash. Fictional character biography: Rainbow Raiders Since Rainbow Raider's death, a team of color-themed supervillains has dubbed itself the Rainbow Raiders in his honor. Powers and abilities: Rainbow Raider's powers are derived entirely from the special goggles he wears, which allow him to project solid beams of rainbow-colored light that he can use offensively or as a slide for travel. In addition, he can coat people in certain colors of light to induce emotions (coating someone in blue light, for instance, would make them sad). Reception: Heavy.com lists Rainbow Raider as one of the worst supervillains of all time. Francesco Marciuliano of Smosh.com ranked Rainbow Raider as having one of the worst supervillain gadgets of all time. Other characters named Rainbow Raider: Jonathan Kent posed as a supervillain called Rainbow Raider as part of a plot to get Superboy to capture gangster Vic Munster and his gang by using a hypnotic device on his helmet. Vic Munster later used the Rainbow Raider identity himself but was defeated by Superboy. Other characters named Rainbow Raider: Dr. Quin (a villain from the first Dial H for Hero series) appears in House of Mystery #167 (June 1967) as a different Rainbow Raider. This version temporarily gave himself powers using a rare crystal that changed his body into different colors (slowly following the sequence of the rainbow). Depending on which color he was at the time, he would gain a different superpower: Red gave him a super-hot beam, Orange gave him an obscuring cloud, Yellow gave him the ability to drain energy and superpowers, Green enabled him to slow the bodies of others to the point of paralysis for an hour, and Violet shrank people and objects for an hour. His Blue and Indigo powers were never shown. He also had a secret final color power called Ultra-Violet, which made him invisible. In other media: Television Two incarnations of Rainbow Raider appear in The Flash: Roy G. Bivolo appears in the first and ninth seasons, portrayed by Paul Anthony. This version is a metahuman capable of inciting rage in people via eye contact. Additionally, he is originally nicknamed "Prism" by Cisco Ramon, but Caitlin Snow suggests "Rainbow Raider", which Bivolo is referred to as from then on, despite Ramon calling the nickname lame. As of the ninth season, Bivolo was recruited into the Red Death's Rogues.
In other media: A female incarnation named Carrie Bates / Rainbow Raider 2.0 appears in the seventh season episode "Good-Bye Vibrations", portrayed by Jona Xiao. She is a former collections officer who was previously fired from three collection agencies for cancelling debts instead of collecting them and became a metahuman capable of inducing euphoria. Roy G. Bivolo appears in the Teen Titans Go! episode "Real Art", voiced by Scott O'Brien. Rainbow Raider makes a non-speaking cameo appearance in the Harley Quinn episode "B.I.T.C.H." Film Rainbow Raider appears in Teen Titans Go! To the Movies. Video games Rainbow Raider appears as a downloadable playable character in Lego Batman 3: Beyond Gotham as part of the "Rainbow" DLC pack. Miscellaneous Rainbow Raider appears in Batman: The Brave and the Bold #14. Rainbow Raider appears in The Flash tie-in novel The Haunting of Barry Allen.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lomax distribution** Lomax distribution: The Lomax distribution, conditionally also called the Pareto Type II distribution, is a heavy-tail probability distribution used in business, economics, actuarial science, queueing theory and Internet traffic modeling. It is named after K. S. Lomax. It is essentially a Pareto distribution that has been shifted so that its support begins at zero. Characterization: Probability density function The probability density function (pdf) for the Lomax distribution is given by $p(x) = \frac{\alpha}{\lambda}\left[1 + \frac{x}{\lambda}\right]^{-(\alpha+1)}, \quad x \ge 0,$ with shape parameter $\alpha > 0$ and scale parameter $\lambda > 0$. The density can be rewritten in a way that shows the relation to the Pareto Type I distribution more clearly: $p(x) = \frac{\alpha \lambda^{\alpha}}{(x+\lambda)^{\alpha+1}}.$ Non-central moments The $\nu$-th non-central moment $E[X^{\nu}]$ exists only if the shape parameter $\alpha$ strictly exceeds $\nu$, in which case the moment has the value $E(X^{\nu}) = \frac{\lambda^{\nu}\,\Gamma(\alpha-\nu)\,\Gamma(1+\nu)}{\Gamma(\alpha)}.$ Related distributions: Relation to the Pareto distribution The Lomax distribution is a Pareto Type I distribution shifted so that its support begins at zero. Specifically: if $X \sim \mathrm{Pareto}(x_m = \lambda, \alpha)$, then $X - x_m \sim \mathrm{Lomax}(\alpha, \lambda)$. The Lomax distribution is a Pareto Type II distribution with $x_m = \lambda$ and $\mu = 0$: if $X \sim \mathrm{Lomax}(\alpha, \lambda)$, then $X \sim \mathrm{P(II)}(x_m = \lambda, \alpha, \mu = 0)$. Relation to the generalized Pareto distribution The Lomax distribution is a special case of the generalized Pareto distribution, with $\mu = 0$, $\xi = \frac{1}{\alpha}$, $\sigma = \frac{\lambda}{\alpha}$. Related distributions: Relation to the beta prime distribution The Lomax distribution with scale parameter λ = 1 is a special case of the beta prime distribution. If X has a Lomax distribution, then $\frac{X}{\lambda} \sim \beta'(1, \alpha)$. Relation to the F distribution The Lomax distribution with shape parameter α = 1 and scale parameter λ = 1 has density $f(x) = \frac{1}{(1+x)^{2}}$, the same distribution as an F(2,2) distribution. This is the distribution of the ratio of two independent and identically distributed random variables with exponential distributions. Related distributions: Relation to the q-exponential distribution The Lomax distribution is a special case of the q-exponential distribution. The q-exponential extends this distribution to support on a bounded interval. The Lomax parameters are given by $\alpha = \frac{2-q}{q-1}$ and $\lambda = \frac{1}{\lambda_q (q-1)}$. Relation to the (log-) logistic distribution The logarithm of a Lomax(shape = 1.0, scale = λ)-distributed variable follows a logistic distribution with location log(λ) and scale 1.0. This implies that a Lomax(shape = 1.0, scale = λ) distribution equals a log-logistic distribution with shape β = 1.0 and scale α = λ. Gamma-exponential (scale-) mixture connection The Lomax distribution arises as a mixture of exponential distributions in which the mixing distribution of the rate is a gamma distribution. If λ|k,θ ~ Gamma(shape = k, scale = θ) and X|λ ~ Exponential(rate = λ), then the marginal distribution of X|k,θ is Lomax(shape = k, scale = 1/θ). Since the rate parameter may equivalently be reparameterized to a scale parameter, the Lomax distribution constitutes a scale mixture of exponentials (with the exponential scale parameter following an inverse-gamma distribution).
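As a quick, hedged illustration of the gamma-exponential mixture characterization described above, the following sketch samples from the mixture and checks the result against SciPy's Lomax implementation (scipy.stats.lomax, shape c = α, scale = λ). The parameter values are arbitrary illustrations.

```python
# Illustrative check (not from the article): the gamma-exponential rate
# mixture should reproduce a Lomax(shape=alpha, scale=lam) law.
import numpy as np
from scipy import stats

alpha, lam = 2.5, 3.0                    # placeholder shape and scale
rng = np.random.default_rng(0)

# rate ~ Gamma(shape=k, scale=theta), X | rate ~ Exponential(rate)
# gives X ~ Lomax(shape=k, scale=1/theta); here k = alpha, theta = 1/lam.
rates = rng.gamma(shape=alpha, scale=1.0 / lam, size=100_000)
samples = rng.exponential(scale=1.0 / rates)

# Kolmogorov-Smirnov distance between the mixture samples and the Lomax cdf
ks = stats.kstest(samples, stats.lomax(c=alpha, scale=lam).cdf)
print(f"KS statistic: {ks.statistic:.4f}")   # small value -> good agreement

# Direct pdf check: p(x) = (alpha/lam) * (1 + x/lam)**(-(alpha + 1))
x = 1.7
print(stats.lomax.pdf(x, c=alpha, scale=lam),
      (alpha / lam) * (1.0 + x / lam) ** (-(alpha + 1.0)))
```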
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ethacridine lactate** Ethacridine lactate: Ethacridine lactate (ethacridine monolactate monohydrate, acrinol, trade name Rivanol) is an aromatic organic compound based on acridine. Its formal name is 2-ethoxy-6,9-diaminoacridine monolactate monohydrate. It forms orange-yellow crystals with a melting point of 226 °C and it has a stinging smell. Ethacridine lactate: Its primary use is as an antiseptic in solutions of 0.1%. It is effective mostly against Gram-positive bacteria, such as Streptococci and Staphylococci, but ineffective against Gram-negative bacteria such as Pseudomonas aeruginosa. Ethacridine is also used as an agent for second-trimester abortion. Up to 150 ml of a 0.1% solution is instilled extra-amniotically using a Foley catheter. After 20 to 40 hours, 'mini labor' ensues. In China, an intra-amniotic method has also been used. Ethacridine as an abortifacient is found to be safer and better tolerated than 20% hypertonic saline.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jewel Pod** Jewel Pod: The Jewel Pod (ジュエルポッド, Jueru Poddo) is a touch-screen-like device used by the Jewelpets, first appearing in the Jewelpet Twinkle anime series. As it became a main staple of the franchise, the Jewel Pod was made into an interactive toy by Sega Toys in 2010. Description: The Jewel Pod is loosely inspired by Apple's iPhone, which uses a touch-screen concept. The first version of the Jewel Pod was released by both Sanrio and Sega Toys in 2010 and is only used for communication purposes, by touching the screen in a certain movement to type out the letters. The first version comes in only two colors: a red one that uses Ruby's voice, and a white one that came out in 2011 and uses Labra's voice. A second version, called the Jewel Pod Crystal, was also released in 2011 to coincide with the release of Jewelpet Sunshine. The Crystal variant offers more playability than the first version, such as minigames, fortune telling and improved messaging capabilities. An improved version, called the Jewel Pod Crystal Plus, was released in 2012. Description: The third Jewel Pod, called the Jewel Pod Diamond, was also released in 2012. The Diamond retains the Crystal Plus's features and adds newer capabilities such as a camera and an SD card slot. The fourth incarnations are the Jewel Music Pod, released in April 2013, and the Jewel Pod Diamond Premium, released in July 2013. Versions: Jewel Pod The first generation of the Jewel Pod, released in 2010 to coincide with the release of Jewelpet Twinkle. The main function of the first Jewel Pod is messaging, and it uses only a touch panel instead of a liquid crystal display. To write messages, the user must touch the screen in a certain movement or magical order to type out the letters. An infrared sensor is also equipped on the toy, allowing it to send and receive voice messages. The first version comes in only two colors: a red one that uses Ruby's voice, and a white one that came out in 2011 and uses Labra's voice. Versions: Jewel Pod Crystal/Crystal Plus The second generation of the Jewel Pod, released in 2011 to coincide with the release of Jewelpet Sunshine. The function of the Crystal is the same as the first version, but the touch panel is replaced with a monochrome LCD display. Newer features present in the Crystal include fortune telling and minigames. The second version came in both red and white when it first came out. An enhanced version, called the Jewel Pod Crystal Plus, was released in pink and purple. Versions: Jewel Pod Diamond The third generation of the Jewel Pod, released in 2012 to coincide with the release of Jewelpet Kira Deco. A color LCD display is implemented in the design, as well as a camera and an SD card slot. The Diamond has more enhanced messaging capabilities than the previous two incarnations, now implementing both the English alphabet and symbols. The Diamond also has built-in memory, capable of storing received messages as well as pictures captured using the Jewel Pod's camera. The applications in the Diamond are much more customizable, as custom apps can be run from an SD card. The Diamond comes in only three colors: pink, purple and blue. Versions: Jewel Music Pod A music device released in April 2013, coinciding with the release of Jewelpet Happiness. Unlike its previous counterparts, the Music Pod is small in size, shaped like a crystal heart, and has a small color LCD screen with three buttons for operation.
The toy operates as a portable media player, allowing it to play MP3 files from an SD card, and has 8 official features, such as minigames and fortune telling. It also has connectivity with the Jewel Pod Diamond to view pictures through the photo viewer. Versions: Jewel Pod Diamond Premium The fourth generation of the Jewel Pod, revealed at the 2013 Tokyo Toy Show. The new incarnation includes a new design with a newly built operating system and a touch screen that can be used by hand or with an included stylus. The new version is also compatible with the upcoming JSPod. Jewel Pod Premium Heart The fifth generation of the Jewel Pod, released in May 2014 to coincide with the release of Lady Jewelpet. Aside from the updated aesthetics and design, the new version features a much faster core operating system and the implementation of a motion detection sensor for minigames. Versions: Jewel Pad The Jewel Pad is a tablet-like device to be released officially on August 7, 2014. Basically a larger version of the Jewel Pod, the toy sports a 7-inch LCD screen with an included stylus and connectivity with the Jewel Pod Premium Heart. It also has a rechargeable battery pack, which can be used for 5 consecutive hours on a full charge. Aside from the basic app features, more applications can be installed through a QR code and connectivity to a USB port with an included USB connector. In the Anime: Jewel Pods have become a staple communication and magical device used by the Jewelpets in the anime series. A Jewel Pod transports a Jewelpet into the Human World through a magical portal when it is used. It can even send a human and the Jewelpet into Jewel Land using the same transportation, but only through a computer. It can also freeze time, letting the Jewelpet and its human partner have a good time in Jewel Land until they return to Earth. Several features of the Jewel Pod can be activated by touching it like a touch-screen phone, which comes in handy for browsing spells, using a radar to find its human partner or another Jewelpet, and storing magical items like the Rare Rare Drops and the Ble Ble Drops. In the Anime: In Jewelpet Sunshine, the Jewel Pod and Jewel Pod Crystal retain their usefulness as magical items for the students of Sunshine Academy, even for Jewelpets. The Jewel Pod is used as a normal smartphone as well as for casting magic. In Kira Deco, the Jewel Pod Diamond is used by all Jewelpets in Jewel Land, and each of them is decorated with jewels and other decorations depending on the user. It also allows the user to use magic as well. The Kira Deco 5 also obtained their own version of the Jewel Pod Diamond, which allows them to contact the Jewelpets and identify the Deco Stones. In Episode 27, Ruby obtains Jewelina's Jewel Pod Diamond and uses it to store the Deco Stones. Reception: The Jewel Pod and Jewel Pod Crystal sold a combined total of 300,000 units in Japan alone. In a press release by Sega Toys on October 1, 2012, the Jewel Pod Diamond was reported to have sold about 160,000 units in less than two months since its release and to have topped the charts as the most-sold girls' toy in Japan. The company set a sales target of 400,000 units by the end of the year, for a combined total of 700,000 units sold across all three Jewel Pods.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Witch and the Hundred Knight 2** The Witch and the Hundred Knight 2: The Witch and the Hundred Knight 2 (魔女と百騎兵2, Majo to Hyakkihei 2) is an action role-playing video game developed and published by Nippon Ichi Software for the PlayStation 4. A sequel to The Witch and the Hundred Knight, the game was released in February 2017 in Japan and in March 2018 in North America and Europe. It takes place in a different universe from the original title, and centers around Amalie, a member of an anti-witch organization, who secretly enlists the help of Chelka - a witch inhabiting the body of her younger sister Milm - to uncover a conspiracy within the organization and save the world of Kevala from disaster, with the assistance of the mysterious Hundred Knight. The game received mixed reviews from critics, who praised the art and music, but criticized the game's story and characters, repetitive combat, and lack of innovation compared to its predecessor. Gameplay: The Witch and the Hundred Knight 2 is an action role-playing game with a top-down isometric view. Players fight their way through a variety of levels as Hundred Knight, a magical creature that can be equipped with five types of weapons: swords, hammers, staffs, lances, and spears. Each weapon type differs in attack range, speed, and motion. By changing the order of use, players can create many different combos. Plot: The world of Kevala is corrupted by the Witch Disease, an illness that develops in children under age 10. Its cause is unknown. A third eye appears on the forehead of those infected with the Witch Disease, and when the eye opens they awaken as a witch. A girl named Amalie lives in a remote village after she lost her parents to a witch. Her younger sister Milm is the only family she has left. One day, Milm suddenly disappears and Amalie eagerly searches for her. When she is about to give up, Milm shows up again, covered in mud and with the witch's eye on her forehead. At the beginning of the game's events, Milm is seen being operated on by the Weiss Ritter, an anti-witch organization, to cure the Witch Disease. The operation seems to fail and result in her death, but then a witch named Chelka awakens in Milm's body and destroys the entire building. This also causes Milm's Hundred Knight doll to come alive and begin fighting for Chelka. When Amalie discovers that Milm is alive and Chelka is in her body, the two of them move into the abandoned Durga Castle, and Hundred Knight starts obeying Amalie's orders as well. They forge an uneasy relationship, with Amalie unwilling to hurt Chelka, and Chelka unable to harm Amalie lest Milm come to the surface. Plot: The Hundred Knight's exploits in defeating the witch Isabel are credited to Amalie, causing her to become a Holy Valkyrie. However, when Amalie realizes that the Valkyries are murdering the children with the Witch Disease whom the WR cannot operate on, she starts to doubt her mission. She is sent to defeat Prim, the world's strongest witch, which the Hundred Knight does successfully, but they find records of misdeeds by the WR in Prim's castle. This leads Amalie to infiltrate the WR and discover that they have been covering up the fact that there is no cure for the Witch Disease, and that she is actually an artificial witch who was never "cured". Plot: This causes her to be forced to fight one of the Valkyries, defeating her. For this, she is branded a traitor and sentenced to death, though she surrenders willingly, losing hope that Milm will ever return to normal. However, Chelka rescues her.
They defeat another of the Valkyries and, later, go after Theodore, the leader of the WR, though they are stopped by the final Valkyrie, Gabrielle, who reveals herself to be Francesca, the first witch, and actually a "Holy Maiden". They attempt to kidnap Milm, whose third eye is actually one of the three eyes of the all-powerful witch Rangda, and use the eyes to rebirth the world into a twisted utopia, but they are stopped with the help of Prim. However, the end of the world continues regardless. Plot: Chelka and the others realize that the world has been going through an endless cycle for thousands of years, and discover a supply of mana that was removed from the cycle. They decide to absorb this "Manathree" and fight Rangda directly to break the cycle and prevent the world's destruction. The Hundred Knight succeeds in defeating Rangda's illusions and Chelka destroys Rangda herself. They realize that Rangda created the cycle and isolated the world from the multiverse to prevent the godlike interdimensional being, Niike, from destroying it as he did once before. Chelka decides to return the world to the multiverse, and all the other characters, living and dead, are reborn in a new world as non-witches. Development: The Witch and the Hundred Knight 2 was first revealed in May 2015 through a short video in which Nippon Ichi Software confirmed that the game was in development. In October 2016, director Kenta Asano told Dengeki PlayStation that the PlayStation 4 was chosen as the platform for the game because The Witch and the Hundred Knight Revival, an enhanced port of the original game for the system, was well received. A Winter 2017 release was announced at the same time. Later that month, the February 23 release date was revealed. The game was released in North America and Europe in March 2018. Reception: The Witch and the Hundred Knight 2 received a 33/40 score in issue 1472 of Famitsu upon its Japanese release. Commercially, the game was not as successful in Japan as its predecessor. According to Media Create, only 13,421 physical copies of the game were sold during the week of release, compared to 49,209 copies in the week of launch of the first game for PlayStation 3 in 2013. The game saw similarly mixed reviews to its predecessor upon its Western release, with an aggregate score of 61 out of 100 on Metacritic, based on 24 reviews. Antonio Savino of Eurogamer Italia rated the game 6/10, saying that while the gameplay and customization were "engaging", the narration style combined with the game's repetitive levels makes the game "too boring" and "not very satisfying". Alana Hagues of RPGFan rated the game 45/100, saying that while she never played the original, the sequel was "one of the most mind-numbing experiences I’ve had to wade through in recent years". While praising the main character's ability to switch between facets, and the game's combat, she criticized the story, saying it "goes hardly anywhere", and the characters, saying "I hated nearly everyone". Stating that Chelka was "the most irritating and bratty interpretation of a witch I’ve ever encountered", she said that while Amalie was bearable, she nevertheless "never gets a chance to prove herself". She also criticized the fact that "the first game’s unique environments have been scrapped in favour of procedurally generated dungeons," stating that "the lack of variety wore very thin very quickly".
In conclusion, she states that "I’d struggle to recommend the game even to fans of the original". Joshua Carpenter of RPGamer rated the game an even lower 1.5/5, calling it "the worst gaming experience I've had in recent memory". With regard to the story, he called Amalie "a sympathetic, likable character", but criticized her lack of prominence in the gameplay, saying that "every time a crisis happens, she is shunted aside, and Hundred Knight comes to save the day". Saying that the combat "ultimately dooms" the game due to how the weapons handle, he also stated that with the upgrade system, "the execution is lacking". Declaring that the game had "repetitive dungeons, bad combat, and poorly-designed boss encounters", he concluded that the game "doesn’t have enough good ideas to be worth saving".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Permethrin** Permethrin: Permethrin is a medication and an insecticide. As a medication, it is used to treat scabies and lice. It is applied to the skin as a cream or lotion. As an insecticide, it can be sprayed onto outer clothing or mosquito nets to kill the insects that touch them. Side effects include rash and irritation at the area of use. Use during pregnancy appears to be safe. It is approved for use on and around people over the age of two months. Permethrin is in the pyrethroid family of medications. It works by disrupting the function of the neurons of lice and scabies mites. Permethrin was discovered in 1973. It is on the World Health Organization's List of Essential Medicines. In 2020, it was the 427th most commonly prescribed medication in the United States, with more than 100,000 prescriptions. Uses: Insecticide In agriculture, to protect crops (a drawback is that it is lethal to bees) In agriculture, to kill livestock parasites For industrial and domestic insect control In the textile industry, to prevent insect attack of woollen products In aviation, the WHO, IHR and ICAO require arriving aircraft be disinsected prior to embarkation, departure, descent, or deplaning in certain countries. Aircraft disinsection with permethrin-based products is recommended only prior to embarkation. Prior to departure (after boarding), at the top of descent or on arrival, d-phenothrin-based (1R-trans phenothrin) aircraft insecticides are recommended. Uses: Insect incapacitation As a personal protective measure, permethrin is applied to outer clothing. It is a cloth impregnant, notably in mosquito nets and field wear. While permethrin may be marketed as an insect repellent, it does not prevent insects from landing. Instead it works by incapacitating or killing insects on contact before they can bite. In 2016, Consumer Reports found that, as consecutive washes reduce permethrin concentration, incapacitation becomes too slow to prevent bites. In these cases, other common topical repellents such as icaridin may be applied to the clothing, though some, such as DEET and IR3535, can damage certain synthetic fabrics. Uses: In pet flea preventive collars or treatment (safe for use on dogs but not cats) In timber treatment Medical use Permethrin is available for topical use as a cream or lotion. It is indicated for the treatment of head lice (and its prevention in exposed individuals) and the treatment of scabies. It has an excellent safety profile; its main drawback is its cost. For treatment of scabies: Adults and children older than 2 months are instructed to apply the cream to the entire body from head to the soles of the feet. Wash off the cream after 8–14 hours. In general, one treatment is curative. A single application of permethrin is more effective than a single oral dose of ivermectin for scabies. In addition, permethrin provides more rapid symptomatic relief than ivermectin. When a second dose of ivermectin is given days later, the efficacy of permethrin and ivermectin approaches parity. For treatment of head lice: Apply to hair, scalp, and neck after shampooing. Leave in for 10 minutes and rinse. Avoid contact with eyes. Uses: Pest control / effectiveness and persistence In agriculture, permethrin is mainly used on cotton, wheat, maize, and alfalfa crops.
Its use is controversial because, as a broad-spectrum chemical, it kills indiscriminately; as well as the intended pests, it can harm beneficial insects, including honey bees, as well as cats and aquatic life. Permethrin kills ticks and mosquitoes on contact with treated clothing. A method of reducing deer tick populations by treating rodent vectors involves stuffing biodegradable cardboard tubes with permethrin-treated cotton. Mice collect the cotton for lining their nests. Permethrin on the cotton kills any immature ticks feeding on the mice. Permethrin is used in tropical areas to prevent mosquito-borne diseases such as dengue fever and malaria. Mosquito nets used to cover beds may be treated with a solution of permethrin. This increases the effectiveness of the bed net by killing parasitic insects before they are able to find gaps or holes in the net. Personnel working in malaria-endemic areas may be instructed to treat their clothing with permethrin as well. Permethrin is the most commonly used insecticide worldwide for the protection of wool from keratinophagous insects such as Tineola bisselliella. To better protect soldiers from the risk and annoyance of biting insects, the British and US armies are treating all new uniforms with permethrin. Permethrin (as well as other long-term pyrethroids) is effective over several months, in particular when used indoors. International studies report that permethrin can be detected in house dust, in fine dust, and on indoor surfaces even years after application. Its degradation rate under indoor conditions is approximately 10% after 3 months. Uses: Resistance Contrary to the most common mechanism of insecticide resistance evolution – selection for preexisting, low-frequency alleles – in Aedes aegypti permethrin resistance has arisen through the mechanism common to pyrethroids and DDT known as "knockdown resistance" (kdr) mutations. García et al. (2009) found that a kdr allele has rapidly spread throughout Mexico and recently become dominant there. Adverse effects: Permethrin disrupts the endocrine system and should not be inhaled. Applying it to the skin is safe when treating temporary conditions such as scabies, but it should not be applied repeatedly, such as to prevent mosquito or tick bites. Permethrin application can cause mild skin irritation and burning. Permethrin has little systemic absorption, and is considered safe for topical use in adults and children over the age of two months. The FDA has assigned it as pregnancy category B. Animal studies have shown no effects on fertility or teratogenicity, but studies in humans have not been performed. The excretion of permethrin in breastmilk is unknown, and it is recommended that breastfeeding be temporarily discontinued during treatment. Skin reactions are uncommon. Adverse effects: Excessive exposure to permethrin can cause nausea, headache, muscle weakness, excessive salivation, shortness of breath, and seizures. Worker exposure to the chemical can be monitored by measurement of the urinary metabolites, while severe overdose may be confirmed by measurement of permethrin in serum or blood plasma. Permethrin does not present any notable genotoxicity or immunotoxicity in humans and farm animals, but is classified by the EPA as a likely human carcinogen when ingested, based on reproducible studies in which mice fed permethrin developed liver and lung tumors. A 2018 review failed to link permethrin exposure in humans to cancer.
Pharmacokinetics: Permethrin is a chemical categorized in the pyrethroid insecticide group. The chemicals in the pyrethroid family are created to emulate the chemicals found in the chrysanthemum flower. Absorption Absorption of topical permethrin is minimal. One in vivo study demonstrated 0.5% absorption in the first 48 hours based upon excretion of urinary metabolites. Distribution Distribution of permethrin has been studied in rat models, with the highest amounts accumulating in fat and the brain. This can be explained by the lipophilic nature of the permethrin molecule. Metabolism Metabolism of permethrin occurs mainly in the liver, where the molecule undergoes oxidation by the cytochrome P450 system, as well as hydrolysis, into metabolites. Elimination of these metabolites occurs via urinary excretion. Stereochemistry: Permethrin has four stereoisomers (two enantiomeric pairs), arising from the two stereocenters in the cyclopropane ring. The trans enantiomeric pair is known as transpermethrin. The (1R,3S)-trans and (1R,3R)-cis enantiomers are responsible for the insecticidal properties of permethrin. History: Permethrin was first made in 1973. Numerous synthetic routes exist for the production of the DV-acid ester precursor. The pathway known as the Kuraray Process uses four steps. In general, the final step in the total synthesis of any of the synthetic pyrethroids is a coupling of a DV-acid ester and an alcohol. In the case of permethrin synthesis, the DV-acid cyclopropanecarboxylic acid, 3-(2,2-dichloroethenyl)-2,2-dimethyl-, ethyl ester, is coupled with the alcohol, m-phenoxybenzyl alcohol, through a transesterification reaction with base. Tetraisopropyl titanate or sodium ethylate may be used as the base. The alcohol precursor may be prepared in three steps. First, m-cresol, chlorobenzene, sodium hydroxide, potassium hydroxide, and cuprous chloride react to yield m-phenoxytoluene. Second, oxidation of m-phenoxytoluene over selenium dioxide provides m-phenoxybenzaldehyde. Third, a Cannizzaro reaction of the benzaldehyde in formaldehyde and potassium hydroxide affords the m-phenoxybenzyl alcohol. Brand names: In Nordic countries and North America, a permethrin formulation for lice treatment is marketed under the trade name Nix, available over the counter. Johnson & Johnson's UK brand Lyclear covers an assortment of different products, mostly non-insecticidal, but a few of which are based on permethrin. Stronger concentrations of permethrin are used to treat scabies (mites which embed inside the skin) than lice (which remain outside the skin). In the U.S. the more concentrated products such as Elimite are available by prescription only. Other animals: It is known to be highly toxic to cats, fish and aquatic species, with long-lasting effects. Other animals: Cats Permethrin is toxic to cats; however, it has little effect on dogs. Pesticide-grade permethrin is toxic to cats. Many cats die after being given flea treatments intended for dogs, or by contact with dogs having recently been treated with permethrin. In cats it may induce hyperexcitability, tremors, seizures, and death. Toxic exposure to permethrin can cause several symptoms, including convulsions, hyperaesthesia, hyperthermia, hypersalivation, and loss of balance and coordination. Exposure to pyrethroid-derived drugs such as permethrin requires treatment by a veterinarian, otherwise the poisoning is often fatal.
This intolerance is due to a defect in glucuronosyltransferase, a common detoxification enzyme in other mammals, that also makes the cat intolerant to paracetamol (acetaminophen). Based on those observations, the use of any external parasiticides based on permethrin is contraindicated for cats. Other animals: Aquatic organisms Permethrin is listed as a "restricted use" substance by the US Environmental Protection Agency (EPA) due to its high toxicity to aquatic organisms, so permethrin and permethrin-contaminated water should be properly disposed of. Permethrin is quite stable, having a half life of 51–71 days in an aqueous environment exposed to light. It is also highly persistent in soil.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pharyngeal veins** Pharyngeal veins: The pharyngeal veins commence in the pharyngeal plexus superficial to the pharynx. The pharyngeal veins receive as tributaries the meningeal veins and the vein of the pterygoid canal. The pharyngeal veins typically empty into the internal jugular vein (but may occasionally instead empty into the facial vein, lingual vein, or superior thyroid vein).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Organic synthesis** Organic synthesis: Organic synthesis is a special branch of chemical synthesis and is concerned with the intentional construction of organic compounds. Organic molecules are often more complex than inorganic compounds, and their synthesis has developed into one of the most important branches of organic chemistry. There are several main areas of research within the general area of organic synthesis: total synthesis, semisynthesis, and methodology. Total synthesis: A total synthesis is the complete chemical synthesis of complex organic molecules from simple, commercially available petrochemical or natural precursors. Total synthesis may be accomplished either via a linear or convergent approach. In a linear synthesis—often adequate for simple structures—several steps are performed one after another until the molecule is complete; the chemical compounds made in each step are called synthetic intermediates. Most often, each step in a synthesis refers to a separate reaction taking place to modify the starting compound. For more complex molecules, a convergent synthetic approach may be preferable, one that involves individual preparation of several "pieces" (key intermediates), which are then combined to form the desired product. Convergent synthesis has the advantage of generating higher yield, compared to linear synthesis. Total synthesis: Robert Burns Woodward, who received the 1965 Nobel Prize for Chemistry for several total syntheses (e.g., his 1954 synthesis of strychnine), is regarded as the father of modern organic synthesis. Some latter-day examples include Wender's, Holton's, Nicolaou's, and Danishefsky's total syntheses of the anti-cancer therapeutic, paclitaxel (trade name Taxol). Methodology and applications: Each step of a synthesis involves a chemical reaction, and reagents and conditions for each of these reactions must be designed to give an adequate yield of pure product, with as few steps as possible. A method may already exist in the literature for making one of the early synthetic intermediates, and this method will usually be used rather than an effort to "reinvent the wheel". However, most intermediates are compounds that have never been made before, and these will normally be made using general methods developed by methodology researchers. To be useful, these methods need to give high yields, and to be reliable for a broad range of substrates. For practical applications, additional hurdles include industrial standards of safety and purity. Methodology research usually involves three main stages: discovery, optimisation, and studies of scope and limitations. The discovery requires extensive knowledge of and experience with chemical reactivities of appropriate reagents. Optimisation is a process in which one or two starting compounds are tested in the reaction under a wide variety of conditions of temperature, solvent, reaction time, etc., until the optimal conditions for product yield and purity are found. Finally, the researcher tries to extend the method to a broad range of different starting materials, to find the scope and limitations. Total syntheses (see above) are sometimes used to showcase the new methodology and demonstrate its value in a real-world application. Such applications involve major industries focused especially on polymers (and plastics) and pharmaceuticals. Some syntheses are feasible on a research or academic level, but not for industry level production. This may lead to further modification of the process.
Stereoselective synthesis: Most complex natural products are chiral, and the bioactivity of chiral molecules varies with the enantiomer. Historically, total syntheses targeted racemic mixtures, mixtures of both possible enantiomers, after which the racemic mixture might then be separated via chiral resolution. Stereoselective synthesis: In the later half of the twentieth century, chemists began to develop methods of stereoselective catalysis and kinetic resolution whereby reactions could be directed to produce only one enantiomer rather than a racemic mixture. Early examples include stereoselective hydrogenations (e.g., as reported by William Knowles and Ryōji Noyori) and functional group modifications such as the asymmetric epoxidation of Barry Sharpless; for these specific achievements, these workers were awarded the Nobel Prize in Chemistry in 2001. Such reactions gave chemists a much wider choice of enantiomerically pure molecules to start from, where previously only natural starting materials could be used. Using techniques pioneered by Robert B. Woodward and new developments in synthetic methodology, chemists became more able to take simple molecules through to more complex molecules without unwanted racemisation, by understanding stereocontrol, allowing final target molecules to be synthesised as pure enantiomers (i.e., without need for resolution). Such techniques are referred to as stereoselective synthesis. Synthesis design: Elias James Corey brought a more formal approach to synthesis design, based on retrosynthetic analysis, for which he won the Nobel Prize for Chemistry in 1990. In this approach, the synthesis is planned backwards from the product, using standard rules. The steps "breaking down" the parent structure into achievable component parts are shown in a graphical scheme that uses retrosynthetic arrows (drawn as ⇒, which, in effect, means "is made from"). Synthesis design: More recently, and less widely accepted, computer programs have been written for designing a synthesis based on sequences of generic "half-reactions".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Data-driven testing** Data-driven testing: Data-driven testing (DDT), also known as table-driven testing or parameterized testing, is a software testing methodology that is used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs, as well as the process where test environment settings and control are not hard-coded. In the simplest form, the tester supplies the inputs from a row in the table and expects the outputs that occur in the same row. The table typically contains values that correspond to boundary or partition input spaces. In the control methodology, test configuration is "read" from a database. Introduction: In the testing of software or programs, several methodologies are available for implementing this testing. These methods co-exist because they differ in the effort required to create and subsequently maintain. The advantage of data-driven testing is the ease of adding additional inputs to the table when new partitions are discovered or added to the product or system under test. Also, in the data-driven testing process, the test environment settings and control are not hard-coded. The cost aspect makes DDT cheap for automation but expensive for manual testing. Methodology overview: Data-driven testing is the creation of test scripts to run together with their related data sets in a framework. The framework provides re-usable test logic to reduce maintenance and improve test coverage. Input and result (test criteria) data values can be stored in one or more central data sources or databases; the actual format, organization and tools can be implementation specific. Methodology overview: The data comprises variables used for both input values and output verification values. In advanced (mature) automation environments, data can be harvested from a running system using a purpose-built custom tool or sniffer; the DDT framework then performs playback of the harvested data, producing a powerful automated regression-testing tool. Methodology overview: Automated test suites contain the user's interactions through the system's GUI, for repeatable testing. Each test begins with a copy of the "before" image reference database. The "user interactions" are replayed through the "new" GUI version and result in the "post test" database. The reference "post test" database is compared to the actual "post test" database using a tool. Differences reveal probable regressions. Methodology overview: Navigation through the program, reading of the data sources, and logging of test status and information are all coded in the test script. Data driven: Anything that has a potential to change (also called "variability", which includes elements such as environment, end points, test data, locations, etc.) is separated out from the test logic (scripts) and moved into an 'external asset'. This can be a configuration or test dataset. The logic executed in the script is dictated by the data values. Keyword-driven testing is similar, except that the logic for the test case itself is encoded as data values in the form of a set of "action words", and not embedded or "hard-coded" in the test script. The script is simply a "driver" (or delivery mechanism) for the data that is held in the data source. The databases used for data-driven testing can include: Data pools DAO objects ADO objects
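As a minimal illustration of the table-driven idea described above, here is a hedged sketch using pytest's parametrize decorator: the test logic is written once, and each table row supplies inputs and the expected output. The function under test and the row values are invented for the example; in a full DDT framework the table would typically be read from an external data source (CSV file, spreadsheet, or database) rather than hard-coded.

```python
# Hedged sketch of table-driven (data-driven) testing with pytest.
# The function under test and the table values are illustrative only.
import pytest

def classify_triangle(a: int, b: int, c: int) -> str:
    """Toy function under test."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# The data table: each row is (inputs..., expected output), typically chosen
# around boundaries and equivalence partitions.
CASES = [
    (3, 4, 5, "scalene"),
    (2, 2, 2, "equilateral"),
    (2, 2, 3, "isosceles"),
    (1, 2, 3, "invalid"),    # boundary: degenerate triangle
]

@pytest.mark.parametrize("a,b,c,expected", CASES)
def test_classify_triangle(a, b, c, expected):
    # One test function, many table rows: the framework reports each row
    # as a separate test case.
    assert classify_triangle(a, b, c) == expected
```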
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quality storyboard** Quality storyboard: A quality storyboard is a method of illustrating the quality control process of a manufactured good (referred to as the QC story) which involves the creation of a storyboard. Examples of Quality storyboards: At Yokogawa-Hewlett-Packard in Japan, the QC story is told using a flip chart of size 6 x 6 feet (2 x 2 meters). The project team uses colored markers to show the PDSA cycle (Shewhart cycle) and the SDSA cycle (Standardize, Do, Study, Act). After each manager writes an interpretation of the policy statement, the interpretation is discussed with the next manager above to reconcile differences in understanding and direction. In this way, they play "catchball" with the policy and develop a consensus. Worker participation in managerial diagnostics: When the management attempts to make a managerial diagnosis, it is important that the people whose work is being diagnosed be properly prepared to enter the discussion. For this purpose, it is very helpful if everyone knows how to tell the QC story. Telling the story properly requires seven steps. Worker participation in managerial diagnostics: 1. Problem definition: This step includes an explanation of why the problem is important (which will tie it to the priority statements of the top management or to a problem that is essential as seen at the lower levels). Normally, this step includes a discussion of the losses that occur because of the problem and the team that will make an estimate of what should be done and work on it. A target is often specified, though it is understood that reaching such a target cannot be guaranteed. A schedule is proposed. Worker participation in managerial diagnostics: 2. Data collection: This step involves observing the time, place, type, and symptoms of the problem. It involves data gathering and displays an attempt to understand the important aspects of the problem. 3. Analysis: In this step the various tools for quality analysis are used, such as control charts, Pareto charts, cause-and-effect diagrams, scatter diagrams, histograms, etc. 4. Action: Based on the analysis, an action is taken. 5. Study: The results are studied to see if they conform to what was expected and to learn from what was not expected. Data is taken to confirm the action. 6. Act/Standardize: Appropriate steps are taken to see that the gains are secured. New standard procedures are introduced. 7. Plans for the future/Continuity: As a result of solving this problem, other problems will have been identified and other opportunities recognized. Worker participation in managerial diagnostics: These seven steps do not describe how a problem is solved. Problem-solving requires a great deal of iteration, and it is often necessary to go back to a previous step as new data is found and better analysis is made. However, when the time comes to report on what was done, the above format provides the basis for telling the story in a way that makes it comprehensible to the upper levels of management. Questions to guide constructing a Quality storyboard: Definition of the problem: Does the Problem definition contain these three parts: Direction, Measure, Reference? Did you avoid words like "improve" and "lack of"? Have you avoided using "and" to address more than one issue in the Problem definition? Why Selected: Have you explained how you know this is the most important issue to work on? Have you shown how the issue relates to the customer or customer satisfaction, or how it will benefit the customer?
Have you explained the method used to select the issue? Initial state: Have you described, in numerical terms, the status of the measure in the Problem definition? Have you collected time series data? Have you provided some historical information about the status of the measure? Is data displayed in a visual, graphical format? Is there a flow chart or other explanation of the status of the process at the beginning of the project? Have you included other facts that would help the reader understand the initial situation? Analysis of Causes: Is there a clear statement of the major cause(s) of the issue? Have you explained how the possible causes were theorized? Is data included showing how the main causes were identified? Is data displayed in such a way that the connection between the issue and the cause(s) is clear? Have you explained how the data were collected and over what time-period they were collected? Plans: Is there a complete Purpose Statement and objectives designed to move toward the purpose: direction, measure, reference, target, time frame, and owner? Is it clear how the target was derived from the analysis? Is it clear that the actions in the plan are aimed at correcting root cause(s)? Have you indicated what alternative solutions were considered, and how they were evaluated to select the best improvement theory? Have you included a copy of the planning documents? Have you indicated whether the plan was implemented on schedule? Study: Is there a comparison of the target in the improvement theory and the actual results? Are the results displayed in the same graphical format as the information in "Initial state(s)" or "Analysis/analyses"? Have you indicated whether the results were achieved in the expected time frame? If the results did not match the objectives or were achieved outside the expected time, have you provided an analysis of the differences? Have you included any other related results, good or bad? Acts and Standardization: Have you explained the actions taken to hold the gain and updated all related documentation, training in the new process, skills training, physical reorganization, sharing, or process monitoring? Future Plans: Have you included a list of possible next projects? Have you indicated which of the possible projects will be the next issue for improvement? It is believed to have been first developed by a Japanese tractor company, Komatsu. Questions to guide constructing a Quality storyboard: Quality storyboards were also used by Florida Power & Light as part of their quality drive during the 1980s to win the Deming Prize.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Streptonigrin** Streptonigrin: Streptonigrin is an aminoquinone antitumor and antibacterial antibiotic produced by Streptomyces flocculus. Streptonigrin was a successful target of total synthesis in 2011. Notes: Antitumor antibiotic streptonigrin and its derivatives as inhibitors of nitric oxide-dependent activation of soluble guanylyl cyclase
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paraconsistent mathematics** Paraconsistent mathematics: Paraconsistent mathematics, sometimes called inconsistent mathematics, represents an attempt to develop the classical infrastructure of mathematics (e.g. analysis) based on a foundation of paraconsistent logic instead of classical logic. A number of reformulations of analysis can be developed, for example functions which both do and do not have a given value simultaneously. Paraconsistent mathematics: Chris Mortensen claims (see references): One could hardly ignore the examples of analysis and its special case, the calculus. There prove to be many places where there are distinctive inconsistent insights; see Mortensen (1995) for example. (1) Robinson's non-standard analysis was based on infinitesimals, quantities smaller than any real number, as well as their reciprocals, the infinite numbers. This has an inconsistent version, which has some advantages for calculation in being able to discard higher-order infinitesimals. The theory of differentiation turned out to have these advantages, while the theory of integration did not. (2)
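As a generic illustration (not specific to Mortensen's formulation) of why being able to discard higher-order infinitesimals simplifies calculation, consider differentiating f(x) = x² with an infinitesimal increment ε:

```latex
f(x) = x^2, \qquad
\frac{f(x+\varepsilon) - f(x)}{\varepsilon}
  = \frac{2x\varepsilon + \varepsilon^{2}}{\varepsilon}
  = 2x + \varepsilon .
```

Discarding the leftover infinitesimal term ε (and, for other functions, the higher-order terms ε², ε³, …) at the end of the computation leaves the familiar derivative f′(x) = 2x.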
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OmcS oxidoreductase** OmcS oxidoreductase: OmcS nanowires (Geobacter nanowires) are conductive filaments found in some species of bacteria, including Geobacter sulfurreducens, where they catalyze the transfer of electrons. They are multiheme c-type cytochromes localized outside the cell of some exoelectrogenic bacterial species, serving as mediators of extracellular electron transfer from cells to Fe(III) oxides and other extracellular electron acceptors. OmcS (3D structure) has a core of six low-spin bis-histidinyl hexacoordinated heme groups inside a sinusoidal filament ~5-7.4 nm in diameter, with a 46.7 Å rise per subunit and 4.3 subunits per turn. The six-heme packing motif of OmcS is identical to that seen in a ~3 nm diameter cytochrome nanowire, OmcE (3D structure), even though OmcE and OmcS share no sequence similarity. The OmcS gene can be one of the most highly up-regulated genes in the Geobacter sulfurreducens KN400 strain when cultivated in a microbial fuel cell, as compared to the PCA strain, although a role for OmcS in electron transfer to electrodes has never been demonstrated.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**1/100 Regularity Rally** 1/100 Regularity Rally: The 1/100 regularity rally is a typically European format of regularity rally, often for classic cars. As with other regularity rallies, the aim is not to be the fastest but rather to stay on the prescribed times across all timed checkpoints. Accordingly, 1/100 regularity rallies carry a negligible risk of damage to the vehicles and participants. 1/100 Regularity Rally: 1/100 regularity rallies are generally conducted on open, public roads alongside regular traffic, without the contestants knowing the route in advance. Teams usually take off at fixed intervals, creating a field that is spread along the course. The route is described in a roadbook sign by sign, to be deciphered by the navigator. In addition to having to adhere to the prescribed arrival times at the timed checkpoints, the route is also sprinkled with 1/100 challenges, from which the rally takes its name. 1/100 Regularity Rally: 1/100 challenges are special tasks between ordinary stages of the rally, timed to an accuracy of 1/100 second (0.01 s). The roadbook will usually include a chart with the layout of each challenge and its prescribed completion time, and competitors receive penalty points for every 0.01 s they are too early or too late across the finish line. Characteristics: A typical 1/100 regularity rally may run for a few hours or it may run over a series of stages over a few days. Competitors are usually briefed about the event at the start, and may be required to submit their cars for inspection. Each team is given a roadbook and a timecard prior to departure. This timecard will record departure and arrival times at all timed checkpoints. Teams' scores are determined by adding all penalty points from timed checkpoints, 1/100 challenges, missing stamps or other control measures, and route errors. In addition, some events apply a multiplier to the overall score of participants based on the year of manufacture of their vehicle, to offset any potential advantage of more modern technology. The team with the lowest number of penalties wins. Equipment: Most 1/100 regularity rallies require a stopwatch to complete. The rules of each event determine what kind of devices are permitted. Some common aids include: Odometer: Odometers can range from the odometer included on the dashboard of most cars to specially manufactured rally odometers. Speedometer: As with odometers, speedometers used by rallyists range from those built into the vehicle to specially manufactured rally speedometers. Stopwatch: Accurate time is essential in regularity rallying - in 1/100 rallies a mechanical stopwatch is preferred. Notable 1/100 Regularity Rallies: Austria's famous Silvretta Classic has been held since 1998 with 200 cars attending every year. The hosting Motor Klassik magazine organises several similar rallies, such as the annual Paul Pietsch Classic and the Sachsen Classic. The Hungarian Oldtimer Supercup has comprised 4-8 rallies for classic cars every year since 2002, based on the 1/100 regularity format. Since 2012, Australia has hosted its own 1/100 regularity series called the Australia Classic. In 2009, in episode 6 of season 13 of Top Gear, Jeremy Clarkson and his team participated in the Rally Clásico Isla Mallorca, a 1/100 regularity rally in Mallorca.
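The scoring described above can be sketched in code. The following is a minimal illustration, assuming a simple rule of one penalty point per 0.01 s of deviation at each timed control and an optional age-based multiplier; actual penalty schemes vary from event to event:

```python
def stage_penalty(target_s: float, actual_s: float) -> int:
    """Penalty points for one timed control: one point per 0.01 s early or late."""
    return round(abs(actual_s - target_s) * 100)

def total_score(results, age_multiplier: float = 1.0) -> float:
    """Sum the per-control penalties and apply an age-based multiplier (assumed rule)."""
    return age_multiplier * sum(stage_penalty(t, a) for t, a in results)

# Example: three timed controls, prescribed vs. actual times in seconds.
results = [(120.00, 120.07), (95.50, 95.38), (210.00, 210.00)]
print(total_score(results, age_multiplier=0.95))  # 0.95 * (7 + 12 + 0) = 18.05
```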
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Columnar jointing** Columnar jointing: Columnar jointing is a geological structure where sets of intersecting closely spaced fractures, referred to as joints, result in the formation of a regular array of polygonal prisms, or columns. Columnar jointing occurs in many types of igneous rocks and forms as the rock cools and contracts. Columnar jointing can occur in cooling lava flows and ashflow tuffs (ignimbrites), as well as in some shallow intrusions. Columnar jointing also occurs rarely in sedimentary rocks if they have been heated by nearby hot magma. Columnar jointing: The columns can vary from 3 meters to a few centimeters in diameter, and can be as much as 30 meters tall. They are typically parallel and straight, but can also be curved and vary in diameter. An array of regular, straight, and larger-diameter columns is called a colonnade; an irregular, less-straight, and smaller-diameter array is termed an entablature. The number of sides of the individual columns can vary from 3 to 8, with 6 sides being the most common. Places: Some famous locations in the United States where columnar jointing can be found are the Devils Tower in Wyoming, the Devils Postpile in California and the Columbia River flood basalts in Oregon, Washington and Idaho. Other famous places include the Giant's Causeway in Northern Ireland and Fingal's Cave on the island of Staffa, Scotland. Places: Devils Tower The Devils Tower in Wyoming in the United States is about 40 million years old and 382 meters (1,253 feet) high. Geologists agree that the rock forming the Devils Tower solidified from an intrusion but it has not been established whether the magma from this intrusion ever reached the surface. Most columns are 6-sided, but 4, 5, and 7-sided ones can also be found. Places: Giant's Causeway The Giant's Causeway (Irish: Clochán An Aifir) on the north Antrim coast of Northern Ireland was created by volcanic activity 60 million years ago, and consists of over 40,000 columns. According to a legend, the giant Finn McCool created the Giant's Causeway, as a causeway to Scotland. Sōunkyō Gorge Sōunkyō Gorge, a part of the town of Kamikawa, Hokkaido, Japan, features a 24-kilometer stretch of columnar jointing, which is the result of an eruption of the Daisetsuzan Volcanic Group 30,000 years ago. Deccan Traps The late Cretaceous Deccan Traps of India constitute one of the largest volcanic provinces of Earth, and examples of columnar jointing can be found in St. Mary's Island in the state of Karnataka. High Island Reservoir Formed in Cretaceous, the columnar rocks are found around the reservoir and the islands nearby in Sai Kung, Hong Kong. It is special that the rocks are not mafic, but felsic tuff instead. Makhtesh Ramon The columnar jointed sandstone of the HaMinsara (Carpentry Shop) in the makhtesh (erosion cirque) of Makhtesh Ramon, Negev desert, Israel. Cerro Kõi There are several examples of columnar jointed sandstones in the greater Asunción region of Paraguay. The best known is Cerro Kõi in Areguá, but there are also several quarries in Luque. Mars Several exposures of columnar jointing have been discovered on the planet Mars by the High Resolution Imaging Science Experiment (HiRISE) camera, which is carried by the Mars Reconnaissance Orbiter (MRO). Sawn Rocks Sawn Rocks, in Mount Kaputar National Park close to Narrabri, New South Wales, Australia, features 40 meters of columnar jointing above the creek and 30 meters below the surface. 
Basaltic Prisms of Santa María Regla Alexander von Humboldt documented the prisms located in Huasca de Ocampo, in the Mexican state of Hidalgo. Columnar basalt of Tawau (Batu Bersusun) At Kampung Balung Cocos, Tawau, Malaysia, a river flows through an area of columnar basalt. One section stands vertically high on the river bank, while the rest lies along the bank, and the water descends from the lowest section as a waterfall.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ramanujan machine** Ramanujan machine: The Ramanujan machine is a specialised software package, developed by a team of scientists at the Technion – Israel Institute of Technology, to discover new formulas in mathematics. It has been named after the Indian mathematician Srinivasa Ramanujan because it supposedly imitates the thought process of Ramanujan in his discovery of hundreds of formulas. Ramanujan machine: The machine has produced several conjectures in the form of continued fraction expansions of expressions involving some of the most important constants in mathematics, such as e and π (pi). Some of the conjectures produced by the Ramanujan machine have subsequently been proved true; the others remain conjectures. The software was conceptualised and developed by a group of undergraduates of the Technion under the guidance of Ido Kaminer, an electrical engineering faculty member at the Technion. The details of the machine were published online on 3 February 2021 in the journal Nature. According to George Andrews, an expert on the mathematics of Ramanujan, even though some of the results produced by the Ramanujan machine are amazing and difficult to prove, they are not of the caliber of Ramanujan's own results, and so calling the software the Ramanujan machine is slightly outrageous. Doron Zeilberger, an Israeli mathematician, has opined that the Ramanujan machine is a harbinger of a new methodology of doing mathematics. Formulas discovered by the Ramanujan machine: Among the formulas discovered by the Ramanujan machine that have later been proved true is the continued fraction expansion e/(e−2) = 4 − 1/(5 − 2/(6 − 3/(7 − 4/(8 − ⋱)))). Many other formulas conjectured by the Ramanujan machine have not yet been proved or refuted, including an expansion for an expression involving a logarithm in which the partial denominators 4, 14, 30, 52, ... are defined by the sequence an = 3n² + 7n + 4 for n = 0, 1, 2, 3, … and the partial numerators 8, 72, 288, 800, ... are generated using the formula bn = 2n²(n+1)² for n = 1, 2, 3, …
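The proved expansion above can be checked numerically. The following is a minimal sketch (not part of the Ramanujan machine software itself) that evaluates finite truncations of the continued fraction and compares them with e/(e−2):

```python
import math

def cf_approx(depth: int) -> float:
    """Evaluate 4 - 1/(5 - 2/(6 - 3/(7 - ...))) truncated after `depth` levels."""
    t = depth + 4                      # innermost retained denominator a_N = N + 4
    for n in range(depth - 1, 0, -1):  # work outwards: t <- a_n - b_{n+1} / t
        t = (n + 4) - (n + 1) / t
    return 4 - 1 / t                   # leading term: 4 - b_1 / t with b_1 = 1

target = math.e / (math.e - 2)
for depth in (2, 5, 10, 20):
    approx = cf_approx(depth)
    print(f"depth {depth:2d}: {approx:.12f}  (error {abs(approx - target):.2e})")
```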
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cleaver (Stone Age tool)** Cleaver (Stone Age tool): In archaeology, a cleaver is a type of biface stone tool of the Lower Palaeolithic. Cleavers resemble hand axes in that they are large and oblong or U-shaped tools meant to be held in the hand. But, unlike hand axes, they have a wide, straight cutting edge running at right angles to the axis of the tool. Acheulean cleavers resemble handaxes but with the pointed end truncated away. Flake cleavers have a cutting edge created by a tranchet flake being struck from the primary surface. Differences between cleavers and hand axes: Cleavers, found in many Acheulean assemblages, particularly in Africa, were similar to hand axes in size and manner of manufacture. The difference between a hand axe and a cleaver is that a hand axe has a more pointed tip, while a cleaver will have a more transverse "bit" that consists of an untrimmed portion of the edge oriented perpendicular to the long axis of the tool. Both belong to the same lithic technology. It is unclear whether cleavers were used for heavy digging or not; experiments have shown that the African cleaver was used more as a butchering instrument. They were also helpful in skinning large game and breaking bones. Cleavers in Africa: In Africa, cleavers first appeared from the late Oldowan to the Acheulean. In Peninj, on the western shore of Lake Natron, cleavers constitute 16% of all findings. One type of cleaver made in Africa was formed by the Tavelbala Tachengit technique. This procedure was used in the north-west Sahara desert. It was done by detachment of large flakes for biface manufacture. This was a significant technological advance because the cleavers normally produced were made on large flakes, which were impossible to produce using the nodular flint of the size available on the European continent. Once the large flakes were detached, only minor modifications were needed to get the desired end product, now known as the flake-cleaver. There are collections of these flakes in the Institute of Human Paleontology in Paris. They include larger and smaller hand axes ranging from 6 to 26 cm, and large flake cleavers that were commonly found in this area. Cleavers in Africa: Another way to make the tools was known as the "Victoria West" technique, termed by Goodwin in 1934. This process was used in the Middle Pleistocene of South Africa, and was the chosen method of producing the large flakes used to create cleavers. Production dates range between about 285 ka BP and 510 ka BP. Cleavers in Europe: Outside Africa, where cleavers were most abundant, cleavers also appear in southwestern Europe. In these regions, they are more abundant than hand axes where raw material occurs in the form of large quartzite cobbles that do not need extensive decortication and shaping prior to the removal of large flakes. Cleavers can also be found made from different raw materials such as flint or limestone, but these are not nearly so common. Cleavers in Europe: There is also a chronological gap between the two lithic assemblages, from the early core-and-flake techniques to the later Acheulian; central France shows that both of these existed. One cleaver found in central France, in the Mazières/Creuse valley, is 20 cm long and looks slightly different from the cleavers found in Africa, based solely on the technique used in this period. Cleavers in Asia: This tool type is known from various lower Paleolithic sites across western Asia and the Indian subcontinent. 
There are a few examples found in China too.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polyketone** Polyketone: Polyketones are a family of high-performance thermoplastic polymers. The polar ketone groups in the polymer backbone of these materials give rise to a strong attraction between polymer chains, which increases the material's melting point (255 °C for the carbon monoxide–ethylene copolymer, 220 °C for the carbon monoxide–ethylene–propylene terpolymer). Trade names include Poketone, Carilon, Karilon, Akrotek, and Schulaketon. Such materials also tend to resist solvents and have good mechanical properties. Unlike many other engineering plastics, aliphatic polyketones such as Shell Chemicals' Carilon are relatively easy to synthesize and can be derived from inexpensive monomers. Carilon is made with a palladium(II) catalyst from ethylene and carbon monoxide. A small fraction of the ethylene is generally replaced with propylene to reduce the melting point somewhat. Shell Chemical commercially launched Carilon thermoplastic polymer in the U.S. in 1996, but discontinued it in 2000. SRI International offers Carilon thermoplastic polymers. Hyosung announced that they would launch production in 2015. Industrial production: The ethylene-carbon monoxide co-polymer is most significant. Industrially, this polymer is synthesized either as a methanol slurry, or via a gas phase reaction with immobilized catalysts. Polymerization mechanism: Initiation and termination Where external initiation is not employed for the methanol system, initiation can take place via methanolysis of the palladium(II) precursor, giving either a methoxide or a hydride complex. Termination also occurs by methanolysis. Depending on the end of the growing polymer chain, this results in either an ester or a ketone end group, regenerating the palladium methoxide or hydride catalyst respectively. Polymerization mechanism: Propagation A mechanism for the propagation of this reaction using a palladium(II)-phenanthroline catalyst has been proposed by Brookhart. Polyketones are noted for having extremely low defects (double ethylene insertions or double carbonyl insertions). The activation barrier to give double carbonyl insertions is very high, so these do not occur. Brookhart's mechanistic studies show that the concentration of the alkyl-ethylene palladium complex required to give double ethylene insertions is very low at any one point. Additionally, the Gibbs energy of activation of the alkyl-ethylene insertion is ~ 3 kcal/mol higher than the corresponding activation barrier for the alkyl-carbon monoxide insertion. As a result, defects occur at an extremely low rate (~ 1 part per million). The industrially-relevant palladium-dppp catalyst has also been investigated. Polymerization mechanism: Importance of bidentate ligands Where palladium(II) pre-catalysts bearing monodentate phosphine ligands are used in methanol, a relatively high fraction of methyl propionate is produced. In comparison, where chelating diphosphine ligands are used, this side-product is absent. This observation is rationalized: the bis(phosphine) complex can undergo cis-trans isomerization to give the sterically favored trans isomer. The propionyl ligand is now trans- to the open coordination site or ethylene ligand, and is unable to undergo migratory insertion. Instead, solvolysis by methanol occurs, which gives the undesired methyl propionate side-product. 
Polymerization mechanism: However, it has also been shown that the production of high-molecular weight, strictly-alternating ethylene/carbon monoxide copolymers can be straightforwardly generated without the use of any mono- or polydentate ligands. In this case, the medium used is neither methanol nor is it gas-phase, but instead a combination of selected strongly Lewis acidic media in combination with various transition metals or metal compounds. Also, in the case of Lewis Acidic solvents, Palladium is the preferred metal catalyst, and does not require the use of other ligand-precursors to form the copolymer. Palladium metal was actually found to be an acceptable precursor for the active catalyst.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perfluoromethylcyclohexane** Perfluoromethylcyclohexane: Perfluoromethylcyclohexane is a fluorocarbon liquid—a perfluorinated derivative of the hydrocarbon methylcyclohexane. It is chemically and biologically inert. Manufacture: Perfluoromethylcyclohexane can be manufactured from toluene by the Fowler process, in which the action of elemental fluorine is moderated by cobalt fluoride in the gas phase. Toluene is preferred over methylcyclohexane as the starting material because less fluorine is required. Properties: Perfluoromethylcyclohexane is chemically inert and thermally stable (to over 400 °C). It is non-toxic. It is a clear, colorless liquid with a relatively high density, low viscosity, and low surface tension, and it evaporates rapidly. It is a relatively good solvent for gases, but a poor solvent for solids and liquids. In common with other cyclic perfluorocarbons, perfluoromethylcyclohexane can be detected at extremely low concentrations, making it ideal as a tracer. Applications: Heat transfer agent Dielectric fluid Perfluorocarbon tracer
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Auxiliary particle filter** Auxiliary particle filter: The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling (SIR) algorithm when dealing with tailed observation densities. Motivation: Particle filters approximate a continuous random variable by M particles with discrete probability mass πt , say 1/M for a uniform distribution. The randomly sampled particles can be used to approximate the probability density function of the continuous random variable as M→∞ . The empirical prediction density is produced as the weighted summation of these particles: f^(αt+1|Yt)=∑j=1Mf(αt+1|αtj)πtj , and we can view it as the "prior" density. Note that the particles are assumed to have the same weight πtj=1/M . Combining the prior density f^(αt+1|Yt) and the likelihood f(yt+1|αt+1) , the empirical filtering density can be produced as: f^(αt+1|Yt+1)=f(yt+1|αt+1)f^(αt+1|Yt)/f(yt+1|Yt)∝f(yt+1|αt+1)∑j=1Mf(αt+1|αtj)πtj , where f(yt+1|Yt)=∫f(yt+1|αt+1)dF(αt+1|Yt) . On the other hand, the true filtering density which we want to estimate is f(αt+1|Yt+1)=f(yt+1|αt+1)f(αt+1|Yt)/f(yt+1|Yt) . The prior density f^(αt+1|Yt) can be used to approximate the true filtering density f(αt+1|Yt+1) : the particle filter draws R samples from the prior density f^(αt+1|Yt) , and each sample is drawn with equal probability. Motivation: Assign each sample the weight πj=ωj/∑i=1Rωi , where ωj=f(y|αj) . The weights represent the likelihood function f(yt+1|αt+1) . As R→∞ , the samples converge to the desired true filtering density. Motivation: The R particles are resampled to M particles with the weights πj . A weakness of these particle filters is that, if the weights { ωj } have a large variance, the sample amount R must be large enough for the samples to approximate the empirical filtering density. In other words, when the weights are widely distributed, the SIR method is imprecise and adaptation is difficult. The auxiliary particle filter was therefore proposed to solve this problem. Auxiliary particle filter: Auxiliary variable Compared with the empirical filtering density, which has f^(αt+1|Yt+1)∝f(yt+1|αt+1)∑j=1Mf(αt+1|αtj)πtj , we now define f^(αt+1,k|Yt+1)∝f(yt+1|αt+1)f(αt+1|αtk)πk , where k=1,...,M . Since f^(αt+1|Yt+1) is formed by the summation of M particles, the auxiliary variable k represents one specific particle. With the aid of k , we can form a set of samples which has the distribution g(αt+1,k|Yt+1) . We then draw from this sample set g(αt+1,k|Yt+1) instead of directly from f^(αt+1|Yt+1) . In other words, the samples are drawn from f^(αt+1|Yt+1) with different probabilities. The samples are ultimately utilized to approximate f(αt+1|Yt+1) . Taking the SIR method as an example: the particle filter draws R samples from g(αt+1,k|Yt+1) and assigns each sample the weight πj=ωj/∑i=1Rωi , where ωj=f(yt+1|αt+1j)f(αt+1j|αtk)/g(αt+1j,kj|Yt+1) . By controlling yt+1 and αtk , the weights are adjusted to be even. Auxiliary particle filter: Similarly, the R particles are resampled to M particles with the weights πj . The original particle filter draws samples from the prior density, while the auxiliary filter draws from the joint distribution of the prior density and the likelihood. In other words, the auxiliary particle filter avoids the circumstance in which particles are generated in regions of low likelihood. As a result, the samples can approximate f(αt+1|Yt+1) more precisely. 
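The sequential importance resampling step described above can be sketched as follows. This is a minimal NumPy illustration for a generic state-space model with a user-supplied transition sampler and likelihood, not an implementation from Pitt and Shephard; the toy linear-Gaussian model at the end is purely illustrative:

```python
import numpy as np

def sir_step(particles, y, transition_sample, likelihood, rng):
    """One sequential importance resampling (SIR) update.

    particles         -- array of shape (M,), equally weighted samples of alpha_t
    y                 -- the new observation y_{t+1}
    transition_sample -- function(alpha, rng) -> draw from f(alpha_{t+1} | alpha_t)
    likelihood        -- function(y, alpha) -> f(y | alpha)
    """
    # Propagate each particle through the state transition (draws from the "prior").
    proposed = np.array([transition_sample(a, rng) for a in particles])
    # Weight each propagated particle by the likelihood of the new observation.
    w = np.array([likelihood(y, a) for a in proposed])
    w /= w.sum()
    # Resample M particles in proportion to their weights.
    idx = rng.choice(len(proposed), size=len(particles), p=w)
    return proposed[idx]

# Example: a toy linear-Gaussian model (assumed purely for illustration).
rng = np.random.default_rng(0)
transition = lambda a, rng: 0.9 * a + rng.normal(0.0, 1.0)
lik = lambda y, a: np.exp(-0.5 * (y - a) ** 2)
particles = rng.normal(0.0, 1.0, size=500)
particles = sir_step(particles, y=1.2, transition_sample=transition, likelihood=lik, rng=rng)
print(particles.mean())
```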
Auxiliary particle filter: Selection of the auxiliary variable The selection of the auxiliary variable affects g(αt+1,k|Yt+1) and controls the distribution of the samples. A possible selection of g(αt+1,k|Yt+1) can be: g(αt+1,k|Yt+1)∝f(yt+1|μt+1k)f(αt+1|αtk)πk , where k=1,...,M and μt+1k is the mean. Auxiliary particle filter: We sample from g(αt+1,k|Yt+1) to approximate f(αt+1|Yt+1) by the following procedure: First, we assign probabilities to the indexes of f(αt+1|αtk) . We name these probabilities the first-stage weights λk , which are proportional to g(k|Yt+1)∝πkf(yt+1|μt+1k) . Then, we draw R samples from f(αt+1|αtk) with the weighted indexes. By doing so, we are actually drawing the samples from g(αt+1,k|Yt+1) . Moreover, we reassign the second-stage weights πj=ωj/∑i=1Rωi as the probabilities of the R samples, where ωj=f(yt+1|αt+1j)/f(yt+1|μt+1j) . The weights aim to compensate for the effect of μt+1k . Finally, the R particles are resampled to M particles with the weights πj . Following the procedure, we draw the R samples from g(αt+1,k|Yt+1) . Since g(αt+1,k|Yt+1) is closely related to the mean μt+1k , it has a high conditional likelihood. As a result, the sampling procedure is more efficient and the value R can be reduced. Auxiliary particle filter: Other point of view Assume that the filtered posterior is described by the following M weighted samples: p(xt|z1:t)≈∑i=1Mωt(i)δ(xt−xt(i)). Auxiliary particle filter: Then, each step in the algorithm consists of first drawing a sample of the particle index k which will be propagated from t−1 into the new step t . These indexes are auxiliary variables only used as an intermediary step, hence the name of the algorithm. The indexes are drawn according to the likelihood of some reference point μt(i) which in some way is related to the transition model xt|xt−1 (for example, the mean, a sample, etc.): k(i)∼P(i=k|zt)∝ωt(i)p(zt|μt(i)) . This is repeated for i=1,2,…,M , and using these indexes we can now draw the conditional samples: xt(i)∼p(x|xt−1k(i)). Auxiliary particle filter: Finally, the weights are updated to account for the mismatch between the likelihood at the actual sample and the predicted point μtk(i) : ωt(i)∝p(zt|xt(i))/p(zt|μtk(i)). Sources: Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association. American Statistical Association. 94 (446): 590–591. doi:10.2307/2670179. JSTOR 2670179. Archived from the original on 2007-10-16. Retrieved 2008-05-06.
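A minimal NumPy sketch of the two-stage weighting described in this section, using the conditional mean of the transition as the reference point μ (one common choice), is shown below; the toy linear-Gaussian model is purely illustrative and not taken from the cited source:

```python
import numpy as np

def apf_step(particles, weights, y, transition_mean, transition_sample, likelihood, rng):
    """One auxiliary particle filter update with two-stage weights."""
    M = len(particles)
    # First stage: score each particle by the likelihood at its predicted mean mu.
    mu = np.array([transition_mean(a) for a in particles])
    lam = weights * np.array([likelihood(y, m) for m in mu])
    lam /= lam.sum()
    # Draw auxiliary indices k with the first-stage weights, then propagate those particles.
    k = rng.choice(M, size=M, p=lam)
    new_particles = np.array([transition_sample(particles[i], rng) for i in k])
    # Second stage: correct for the mismatch between each sample and its reference point.
    w = np.array([likelihood(y, a) for a in new_particles]) / \
        np.array([likelihood(y, mu[i]) for i in k])
    w /= w.sum()
    return new_particles, w

# Example with a toy linear-Gaussian model (illustrative assumption).
rng = np.random.default_rng(1)
tmean = lambda a: 0.9 * a
tsample = lambda a, rng: 0.9 * a + rng.normal(0.0, 1.0)
lik = lambda y, a: np.exp(-0.5 * (y - a) ** 2)
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
particles, weights = apf_step(particles, weights, 1.2, tmean, tsample, lik, rng)
print(np.sum(weights * particles))
```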
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ITPKC** ITPKC: ITPKC is one of three human genes that encode an inositol-trisphosphate 3-kinase. The gene has been associated with Kawasaki disease. Kawasaki disease is an acute febrile illness that involves the inflammation of blood vessels throughout the body. The majority of diagnosed cases involve children under the age of 5. Among untreated children, 15 to 25 percent of cases develop coronary artery aneurysms. The overproduction of T cells may be correlated with the immune hyperactivity in Kawasaki disease. ITPKC: This gene is located at chromosome 19q13.1 and codes for one of three isoenzymes, the other two being ITPKA and ITPKB. ITPKC is involved in the Ca(2+)/NFAT pathway, negatively regulating T cell activation. A mutation in this gene occurs through a single-nucleotide polymorphism. When the mutation occurs, the gene does not produce a functioning enzyme, so it is no longer effective in negatively regulating T cells. With this reduced expression of the enzyme ITPKC, there is a higher amount of IP3, which leads to calcium channels being opened and more calcium being released. This produces overly active T cells, and having this mutation in ITPKC is correlated with an increased risk of developing symptoms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**New York lunar sample displays** New York lunar sample displays: The New York lunar sample displays are two commemorative plaques consisting of small fragments of Moon specimens brought back by the Apollo 11 and Apollo 17 lunar missions and given in the 1970s to the people of the state of New York by United States President Richard Nixon as goodwill gifts. Description: Apollo 11 At the request of Nixon, NASA had about 250 presentation plaques made following Apollo 11 in 1969. Each included about four rice-sized particles of Moon dust from the mission totaling about 50 mg. The Apollo 11 lunar sample display has an acrylic plastic button containing the Moon dust mounted with the recipient's country or state flag that had been to the Moon and back. The displays went to 135 countries, the 50 states of the United States and the U.S. provinces, and the United Nations. The plaques were given as gifts by Nixon in 1970. Description: Apollo 17 The sample Moon rock collected during the Apollo 17 mission was later named lunar basalt 70017, and dubbed the Goodwill rock. Pieces of the rock weighing about 1.14 grams were placed inside a piece of acrylic lucite and mounted along with a flag, flown on Apollo 17, of the country or state the plaque would be distributed to. In 1973, Nixon had the plaques sent to 135 countries, and to the United States with its territories, as a goodwill gesture. History: The New York Apollo 17 "goodwill Moon rocks" plaque display is located at the New York State Museum in secure storage. The online publication collectSpace has no record of the location of the New York Apollo 11 lunar sample display as of 2012.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hills Hoist** Hills Hoist: A Hills Hoist is a height-adjustable rotary clothes line, designed to permit the compact hanging of wet clothes so that their maximum area can be exposed for wind drying by rotation. They are considered one of Australia's most recognisable icons, and are used frequently by artists as a metaphor for Australian suburbia in the 1950s and 1960s.For decades, beginning in 1945, the devices were mainly manufactured in Adelaide, South Australia, using designs and patents purchased by Lance Hill. The local emphasis led to Hills Hoist becoming the generic term for rotary clothes lines in Australia. The manufacturer soon became nationally market-dominant and rotary washing (clothes) lines have become common across much of the world. Direct successors to his product are now mostly manufactured in China. History: As early as 1895, Colin Stewart and Allan Harley of Sun Foundry in Adelaide applied for a patent for an 'Improved Rotary and Tilting Clothes Drying Rack'. In their design the uppermost part tilted to allow access to the hanging lines.Gilbert Toyne of Geelong patented, manufactured and marketed four rotary clothes hoists designs between 1911 and 1946. Toyne's first patented clothes hoist was sold through the Aeroplane Clothes Hoist Company established in 1911, prior to the First World War. History: After returning from World War I, Toyne continued to perfect his designs, despite his own troubles stemming from injuries suffered from the war. In 1925, he patented an all-metal rotary clothes hoist with its enclosed crown wheel-and-pinion winding mechanism and began selling them the following year.Prolific South Australian inventor Gerhard ‘Pop’ Kaesler also designed a modern rotary clothesline, two decades before they went into commercial production in Adelaide. Lance Hill bought the metre-high wooden prototype model and plans from Kaesler. In 1945, he began to manufacture the Hills rotary clothes hoist in his backyard. His wife apparently wanted an inexpensive replacement to the line and prop she had for drying clothes, as she had no room on the line due to her growing lemon tree.Lance Hill's brother-in-law Harold Ling returned from the war and joined him to form a partnership in 1946. Ling became the key figure in expanding the production and marketing of the Hills hoist and seem to have dropped any idea of a possessive apostrophe from the outset. In 1947, Hills Hoists began manufacturing a windable clothes hoist which was identical to Toyne's expired 1925 patent with the crown wheel-and-pinion winding mechanism.Initially the clothes hoists were constructed and sold from Lance Hill's home on Bevington Road, Glenunga. Soon production moved to a nearby site on Glen Osmond Road and within a decade the factory had relocated to a larger site at Edwardstown. The company Hills Hoists became Hills Industries in 1958. History: In 1974, a Darwin family reported that the only thing left standing after Cyclone Tracy was their Hills Hoist.In January 2017, Hills Industries sold the manufacturing and sale rights of its Hills Home Living brands to AMES Australasia, a subsidiary of the American Griffon Corporation. As of 2018 Austral ClothesHoists and Daytek Australia are the only Australian manufacturers of rotary clotheslines. Cultural impacts: Hills Hoists are considered one of Australia's most recognisable icons, and are used frequently by artists as a metaphor for Australian suburbia in the 1950s and 1960s. 
The Hills Hoist is listed as a National Treasure by the National Library of Australia. The opening ceremony of the 2000 Sydney Olympics featured giant roaming Hills Hoist robots.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematicism** Mathematicism: Mathematicism is 'the effort to employ the formal structure and rigorous method of mathematics as a model for the conduct of philosophy'. or else it is the epistemological view that reality is fundamentally mathematical. The term has been applied to a number of philosophers, including Pythagoras and René Descartes although the term is not used by themselves. The role of mathematics in Western philosophy has grown and expanded from Pythagoras onwards. It is clear that numbers held a particular importance for the Pythagorean school, although it was the later work of Plato that attracts the label of mathematicism from modern philosophers. Furthermore it is René Descartes who provides the first mathematical epistemology which he describes as a mathesis universalis, and which is also referred to as mathematicism. Pythagoras: Although we don't have writings of Pythagoras himself, good evidence that he pioneered the concept of mathematicism is given by Plato, and summed up in the quotation often attributed to him that "everything is mathematics". Aristotle says of the Pythagorean school: The first to devote themselves to mathematics and to make them progress were the so-called Pythagoreans. They, devoted to this study, believed that the principles of mathematics were also the principles of all things that be. Now, since the principles of mathematics are numbers, and they thought they found in numbers, more than in fire and earth and water, similarities with things that are and that become (they judged, for example, that justice was a particular property of numbers, the soul and mind another, opportunity another, and similarly, so to say, anything else), and since furthermore they saw expressed by numbers the properties and the ratios of harmony, since finally everything in nature appeared to them to be similar to numbers, and numbers appeared to be first among all there is in nature, they thought that the elements of numbers were the elements of all that there is, and that the whole world was harmony and number. And all the properties they could find in numbers and in musical chords, corresponding to properties and parts of the sky, and in general to the whole cosmic order, they gathered and adapted to it. And if something was missing, they made an effort to introduce it, so that their tractation be complete. To clarify with an example: since ten seems to be a perfect number and to contain in itself the whole nature of numbers, they said that the bodies that move in the sky are also ten: and since one can only see nine, they added as tenth the anti-Earth. Pythagoras: Further evidence for the views of Pythagoras and his school, although fragmentary and sometimes contradictory, comes from Alexander Polyhistor. Alexander tells us that central doctrines of the Pythagorieans were the harmony of numbers and the ideal that the mathematical world has primacy over, or can account for the existence of, the physical world.According to Aristotle, the Pythagoreans used mathematics for solely mystical reasons, devoid of practical application. They believed that all things were made of numbers. The number one (the monad) represented the origin of all things and other numbers similarly had symbolic representations. 
Nevertheless, modern scholars debate whether this numerology was taught by Pythagoras himself or whether it was original to the later philosopher of the Pythagorean school, Philolaus of Croton. Walter Burkert argues in his study Lore and Science in Ancient Pythagoreanism that the only mathematics the Pythagoreans ever actually engaged in was simple, proofless arithmetic, but that these arithmetic discoveries did contribute significantly to the beginnings of mathematics. Plato: The Pythagorean school influenced the work of Plato. Mathematical Platonism is the metaphysical view that (a) there are abstract mathematical objects whose existence is independent of us, and (b) there are true mathematical sentences that provide true descriptions of such objects. The independence of the mathematical objects is such that they are non-physical and do not exist in space or time. Neither does their existence rely on thought or language. For this reason, mathematical proofs are discovered, not invented. The proof existed before its discovery, and merely became known to the one who discovered it. In summary, therefore, Mathematical Platonism can be reduced to three propositions: Existence. There are mathematical objects. Plato: Abstractness. Mathematical objects are abstract. Independence. Mathematical objects are independent of intelligent agents and their language, thought, and practices. It is again not clear to what extent Plato held these views himself, but they were associated with the Platonist school. Nevertheless, this was a significant progression in the ideas of mathematicism. Plato: Markus Gabriel refers to Plato in his Fields of Sense: A New Realist Ontology, and in so doing provides a definition for mathematicism. He says: Ultimately, set-theoretical ontology is a remainder of Platonic mathematicism. Let mathematicism from here on be the view that everything that exists can be studied mathematically either directly or indirectly. It is an instance of theory-reduction, that is, a claim to the effect that every vocabulary can be translated into that of mathematics such that this reduction grounds all derivative vocabulary and helps us understand it significantly better. Plato: He goes on, however, to show that the term need not be applied merely to the set-theoretical ontology that he takes issue with, but to other mathematical ontologies. Set-theoretical ontology is just one instance of mathematicism. Depending on one’s preferred candidate for the most fundamental theory of quantifiable structure, one can wind up with a graphtheoretical mathematicism, a set-theoretical, category-theoretical, or some other (maybe hybrid) form of mathematicism. However, mathematicism is metaphysics, and metaphysics need not be associated with ontology. René Descartes: Although mathematical methods of investigation have been used to establish meaning and analyse the world since Pythagoras, it was Descartes who pioneered the subject as epistemology, setting out Rules for the Direction of the Mind. He proposed that method, rather than intuition, should direct the mind, saying: So blind is the curiosity with which mortals are possessed that they often direct their minds down untrodden paths, in the groundless hope that they will chance upon what they are seeking, rather like someone who is consumed with such a senseless desire to discover treasure that he continually roams the streets to see if he can find any that a passerby might have dropped [...] 
By 'a method' I mean reliable rules which are easy to apply, and such that if one follows them exactly, one will never take what is false to be true or fruitlessly expend one’s mental efforts, but will gradually and constantly increase one’s knowledge till one arrives at a true understanding of everything within one’s capacity In the discussion of Rule Four, Descartes' describes what he calls mathesis universalis: Rule Four We need a method if we are to investigate the truth of things.[...] I began my investigation by inquiring what exactly is generally meant by the term 'mathematics' and why it is that, in addition to arithmetic and geometry, sciences such as astronomy, music, optics, mechanics, among others, are called branches of mathematics. [...] This made me realize that there must be a general science which explains all the points that can be raised concerning order and measure irrespective of the subject-matter, and that this science should be termed mathesis universalis — a venerable term with a well-established meaning — for it covers everything that entitles these other sciences to be called branches of mathematics. [...] The concept of mathesis universalis was, for Descartes, a universal science modeled on mathematics. It is this mathesis universalis that is referred to when writers speak of Descartes' mathematicism. René Descartes: Following Descartes, Leibniz attempted to derive connections between mathematical logic, algebra, infinitesimal calculus, combinatorics, and universal characteristics in an incomplete treatise titled "Mathesis Universalis", published in 1695. Following on from Leibniz, Benedict de Spinoza and then various 20th century philosophers, including Bertrand Russell, Ludwig Wittgenstein, and Rudolf Carnap have attempted to elaborate and develop Leibniz's work on mathematical logic, syntactic systems and their calculi and to resolve problems in the field of metaphysics. Gottfried Leibniz: Leibniz attempted to work out the possible connections between mathematical logic, algebra, infinitesimal calculus, combinatorics, and universal characteristics in an incomplete treatise titled "Mathesis Universalis" in 1695. In his account of mathesis universalis, Leibniz proposed a dual method of universal synthesis and analysis for the ascertaining truth, described in De Synthesi et Analysi universale seu Arte inveniendi et judicandi (1890). Ludwig Wittgenstein: One of the perhaps most prominent critics of the idea of mathesis universalis was Ludwig Wittgenstein and his philosophy of mathematics. As Anthropologist Emily Martin notes: Tackling mathematics, the realm of symbolic life perhaps most difficult to regard as contingent on social norms, Wittgenstein commented that people found the idea that numbers rested on conventional social understandings "unbearable". Bertrand Russell and Alfred North Whitehead: The Principia Mathematica is a three-volume work on the foundations of mathematics written by the mathematicians Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. 
According to its introduction, this work had three aims: to analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms, and inference rules; to precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; to solve the paradoxes that plagued logic and set theory at the turn of the 20th century, like Russell's paradox. There is no doubt that Principia Mathematica is of great importance in the history of mathematics and philosophy: as Irvine has noted, it sparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness. Indeed, the work was in part brought about by an interest in logicism, the view on which all mathematical truths are logical truths. It was in part thanks to the advances made in Principia Mathematica that, despite its defects, numerous advances in meta-logic were made, including Gödel's incompleteness theorems. Michel Foucault: In The Order of Things, Michel Foucault discusses mathesis as the conjunction point in the ordering of simple natures and algebra, paralleling his concept of taxinomia. Though omitting explicit references to universality, Foucault uses the term to organise and interpret all of human science, as is evident in the full title of his book: "The Order of Things: An Archaeology of the Human Sciences". Tim Maudlin: Tim Maudlin's mathematical universe hypothesis attempts to construct "a rigorous mathematical structure using primitive terms that give a natural fit with physics" and to investigate why mathematics should provide such a powerful language for describing the physical world. According to Maudlin, "the most satisfying possible answer to such a question is: because the physical world literally has a mathematical structure".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Osteonectin** Osteonectin: Osteonectin (ON) also known as secreted protein acidic and rich in cysteine (SPARC) or basement-membrane protein 40 (BM-40) is a protein that in humans is encoded by the SPARC gene. Osteonectin is a glycoprotein in the bone that binds calcium. It is secreted by osteoblasts during bone formation, initiating mineralization and promoting mineral crystal formation. Osteonectin also shows affinity for collagen in addition to bone mineral calcium. A correlation between osteonectin over-expression and ampullary cancers and chronic pancreatitis has been found. Gene: The human SPARC gene is 26.5 kb long, and contains 10 exons and 9 introns and is located on chromosome 5q31-q33. Structure: Osteonectin is a 40 kDa acidic and cysteine-rich glycoprotein consisting of a single polypeptide chain that can be broken into 4 domains: 1) a Ca2+ binding domain near the glutamic acid-rich region at the amino terminus (domain I), 2) a cysteine-rich domain (II), 3) a hydrophilic region (domain III), and 4) an EF hand motif at the carboxy terminus region (domain IV). Function: Osteonectin is an acidic extracellular matrix glycoprotein that plays a vital role in bone mineralization, cell-matrix interactions, and collagen binding. Osteonectin also increases the production and activity of matrix metalloproteinases, a function important to invading cancer cells within bone. Additional functions of osteonectin beneficial to tumor cells include angiogenesis, proliferation and migration. Overexpression of osteonectin is reported in many human cancers such as breast, prostate, colon and pancreatic.This molecule has been implicated in several biological functions, including mineralization of bone and cartilage, inhibiting mineralization, modulation of cell proliferation, facilitation of acquisition of differentiated phenotype and promotion of cell attachment and spreading. Function: A number of phosphoproteins and glycoproteins are found in bone. The phosphate is bound to the protein backbone through phosphorylated serine or threonine amino acid residues. The best characterized of these bone proteins is osteonectin. It binds collagen and hydroxyapatite in separate domains, is found in relatively large amounts in immature bone, and promotes mineralization of collagen. Tissue distribution: Fibroblasts, including periodontal fibroblasts, synthesize osteonectin. This protein is synthesized by macrophages at sites of wound repair and platelet degranulation, so it may play an important role in wound healing. SPARC does not support cell attachment, and like tenascin, is anti-adhesive and an inhibitor of cell spreading. It disrupts focal adhesions in fibroblasts. It also regulates the proliferation of some cells, especially endothelial cells, mediated by its ability to bind to cytokines and growth factors. Osteonectin has also been found to decrease DNA synthesis in cultured bone.High levels of immunodetectable osteonectin are found in active osteoblasts and marrow progenitor cells, odontoblasts, periodontal ligament and gingival cells, and some chondrocytes and hypertrophic chondrocytes. Osteonectin is also detectable in osteoid, bone matrix proper, and dentin. Osteonectin has been localized in a variety of tissues, but is found in greatest abundance in osseous tissue, tissues characterized by high turnover (such as intestinal epithelium), basement membranes, and certain neoplasms. 
Osteonectin is expressed by a wide variety of cells, including chondrocytes, fibroblasts, platelets, endothelial cells, epithelial cells, Leydig cells, Sertoli cells, luteal cells, adrenal cortical cells, and numerous neoplastic cell lines (such as SaOS-2 cells from human osteosarcoma). Model organisms: Model organisms have been used in the study of SPARC function. A conditional knockout mouse line, called Sparctm1a(EUCOMM)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists.Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty six tests were carried out on mutant mice and six significant abnormalities were observed. Homozygous mutant animals had unusually white incisors, decreased bone mineral density, abnormal lens morphology, cataracts and a decreased length of long bones.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Material conditional** Material conditional: The material conditional (also known as material implication) is an operation commonly used in logic. When the conditional symbol → is interpreted as material implication, a formula P→Q is true unless P is true and Q is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum. Material implication is used in all the basic systems of classical logic as well as some nonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in many programming languages. However, many logics replace material implication with other operators such as the strict conditional and the variably strict conditional. Due to the paradoxes of material implication and related problems, material implication is not generally considered a viable analysis of conditional sentences in natural language. Notation: In logic and related fields, the material conditional is customarily notated with an infix operator → . The material conditional is also notated using the infixes ⊃ and ⇒ . In the prefixed Polish notation, conditionals are notated as Cpq . In a conditional formula p→q , the subformula p is referred to as the antecedent and q is termed the consequent of the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula (p→q)→(r→s). History: In Arithmetices Principia: Nova Methodo Exposita (1889), Peano expressed the proposition “If A then B ” as A Ɔ B with the symbol Ɔ, which is the opposite of C. He also expressed the proposition A⊃B as A Ɔ B . Hilbert expressed the proposition “If A then B” as A→B in 1918. Russell followed Peano in his Principia Mathematica (1910–1913), in which he expressed the proposition “If A then B” as A⊃B . Following Russell, Gentzen expressed the proposition “If A then B” as A⊃B . Heyting expressed the proposition “If A then B” as A⊃B at first but later came to express it as A→B with a right-pointing arrow. Bourbaki expressed the proposition “If A then B” as A⇒B in 1954. Definitions: Semantics From a semantic perspective, material implication is the binary truth functional operator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown in a truth table. Truth table: The truth table of p → q is as follows: when p is true and q is true, p → q is true; when p is true and q is false, p → q is false; when p is false and q is true, p → q is true; and when p is false and q is false, p → q is true. The 3rd and 4th logical cases of this truth table, where the antecedent p is false and p → q is true, are called vacuous truths. Deductive definition Material implication can also be characterized deductively in terms of the following rules of inference. Definitions: Modus ponens; conditional proof; classical contraposition; classical reductio ad absurdum. Unlike the semantic definition, this approach to logical connectives permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated. For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, (p → q) ⇒ ¬p ∨ q is not a propositional theorem, but the material conditional is used to define negation. 
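The truth-functional definition can be made concrete in a few lines of code; the following sketch simply enumerates the truth table of p → q:

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5}  p->q={implies(p, q)}")
```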
Formal properties: When disjunction, conjunction and negation are classical, material implication validates the following equivalences: Contraposition: P→Q≡¬Q→¬P Import-Export: P→(Q→R)≡(P∧Q)→R Negated conditionals: ¬(P→Q)≡P∧¬Q Or-and-if: P→Q≡¬P∨Q Commutativity of antecedents: (P→(Q→R))≡(Q→(P→R)) Distributivity: (R→(P→Q))≡((R→P)→(R→Q)) Similarly, on classical interpretations of the other connectives, material implication validates the following entailments: Antecedent strengthening: P→Q⊨(P∧R)→Q Vacuous conditional: ¬P⊨P→Q Transitivity: (P→Q)∧(Q→R)⊨P→R Simplification of disjunctive antecedents: (P∨Q)→R⊨(P→R)∧(Q→R) Tautologies involving material implication include: Reflexivity: ⊨P→P Totality: ⊨(P→Q)∨(Q→P) Conditional excluded middle: ⊨(P→Q)∨(P→¬Q) Discrepancies with natural language: Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication. In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account. In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional. On their accounts, conditionals denote material implication but end up conveying additional information when they interact with conversational norms such as Grice's maxims. Recent work in formal semantics and philosophy of language has generally eschewed material implication as an analysis for natural-language conditionals. In particular, such work has often rejected the assumption that natural-language conditionals are truth functional in the sense that the truth value of "If P, then Q" is determined solely by the truth values of P and Q. Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such as modal logic, relevance logic, probability theory, and causal models. Similar discrepancies have been observed by psychologists studying conditional reasoning. For instance, in the notorious Wason selection task study, fewer than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws.
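The classical equivalences listed above can be verified mechanically by checking every valuation; the following sketch does this for a few of them:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# Each check should hold under every classical valuation of p, q, r.
equivalences = {
    "contraposition":      lambda p, q, r: implies(p, q) == implies(not q, not p),
    "import-export":       lambda p, q, r: implies(p, implies(q, r)) == implies(p and q, r),
    "negated conditional": lambda p, q, r: (not implies(p, q)) == (p and not q),
    "or-and-if":           lambda p, q, r: implies(p, q) == ((not p) or q),
}

for name, check in equivalences.items():
    ok = all(check(p, q, r) for p, q, r in product((True, False), repeat=3))
    print(f"{name}: {ok}")
```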
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fiddler's Green** Fiddler's Green: Fiddler's Green is an after-life where there is perpetual mirth, a fiddle that never stops playing, and dancers who never tire. In 19th-century English maritime folklore, it was a kind of after-life for sailors who had served at least fifty years at sea. In literature: Fiddler's Green appears in Frederick Marryat's novel The Dog Fiend; Or, Snarleyyow, published in 1856, as lyrics to a sailors' song: Herman Melville describes a Fiddler's Green as a sailors' term for the place on land "providentially set apart for dance-houses, doxies, and tapsters" in his posthumous novella Billy Budd, Sailor. In literature: Fiddler's Green is the title of a 1950 novel by Ernest K. Gann, about a fugitive criminal who works as a seaman after stowing away.The author Richard McKenna wrote a story, first published in 1967, titled "Fiddler's Green,” in which he considers the power of the mind to create a reality of its own choosing, especially when a number of people consent to it. The main characters in this story are also sailors, and have known of the legend of Fiddler's Green for many years.Fiddler's Green is an extrasolar colony mentioned in Robert A. Heinlein's novels The Cat Who Walks Through Walls and Friday. In literature: In Neil Gaiman's The Sandman comic book series, Fiddler's Green is a place located inside of the Dreaming, a place that sailors have dreamed of for centuries. Fiddler's Green is also personified as a character as well as a location in the fictional world, the former largely based upon casual associations of G. K. Chesterton. In the 2022 TV adaption of the books, the personification is played by Stephen Fry. From November 12 to 14, 2004, a comic book convention promoted as "Fiddler's Green, A Sandman Convention" was held at the Millennium Hotel in Minneapolis, Minnesota. Author Neil Gaiman and several Sandman series artists, and others involved in the series' publication, participated in the convention, with profits benefiting the Comic Book Legal Defense Fund. In literature: In Patrick O'Brian's novel Post Captain, the character Jack Aubrey describes several seamen living together on land by saying, "We'll lay in beer and skittles – it will be Fiddler's Green!". In music: A song called "Fiddler's Green", or more often "Fo'c'sle Song", was written by John Conolly in 1966, a Lincolnshire songwriter. It has been recorded by Tim Hart and Maddy Prior for their album Folk Songs of Olde England Vol. 2 (1968), by The Dubliners for their album Plain and Simple (1973), by The Yetties for their album All at Sea (1973), and by The Irish Rovers for their album Upon a Shamrock Shore: Songs of Ireland & the Irish (2000). The American sailor band Schooner Fare credits the song for bringing together their band. The song is sung worldwide in nautical and Irish traditional circles, and is often mistakenly thought to be a traditional song. In music: "Fiddler's Green" is a song from the album Road Apples by Canadian rock group The Tragically Hip, written for lead singer Gord Downie's young nephew Charles Gillespie, who died before the album was released. The track was covered by Welsh band Stereophonics on their 1999 Deluxe album Performance and Cocktails "Fiddler's Green" is a song from Marley's Ghost's album Four Spacious Guys (1996). In music: Fiddler's Green is the title track and name of Tim O'Brien's Grammy Award-winning 2005 album. Fiddler's Green is a German folk-rock band, formed in 1990. 
"Fiddler on the Green" is a song by German-American power metal supergroup Demons & Wizards, from their self-titled album released in 1999. Fiddler's Green is mentioned in the Archie Fisher song "The Final Trawl" from the album Windward Away, about fishermen whose livelihoods are passing away. Fiddler's Green is also mentioned in the extended version of the song "Hoist the Colors" from the Pirates of the Caribbean films. Friends of Fiddler's Green is a folk music group form Canada, founded in 1971. Fiddler's Green is an outdoor amphitheatre in Greenwood Village, Colorado. In art: Statue by Ray Lonsdale, installed in 2017 on Fish Quay in North Shields, England. In film: In the George Romero movie Land Of The Dead, the human holdout, surrounded by water, is a former luxury development called Fiddler's Green. In the United States military: The Cavalrymen's Poem, also entitled "Fiddlers' Green" was published in the US Army's Cavalry Journal in 1923 and became associated with the 1st Cavalry Division. The name has had other military uses. Many places associated with the US military have been named Fiddler's Green: The US Marine Corps operated Firebase Fiddler's Green in the Helmand River Valley, in Helmand Province, Afghanistan. An artillery Fire Support Base in Military Region III in Vietnam in 1972, occupied principally by elements of 2nd Squadron, 11th Armored Cavalry The US Navy's enlisted men's club in Sasebo, Japan from 1952 to 1976 The cavalryman's poem about Fiddler's Green is the regimental poem of the US 2nd Cavalry Regiment. In the United States military: The enlisted men's club at United States Naval Training Center Bainbridge An informal bar at the Fort Sill Officers' Open Mess The stable and pasture used by Parsons Mounted Cavalry, a cadet group at Texas A&M University in College Station, Texas A bar at the Saber & Quill in Fort Knox, Kentucky The larger of the two bars at the Leader's Club at Fort Benning, Georgia Building 2805 at Fort Cavazos Texas, the former officer's club A small enlisted club on the Marine Corps Base Camp Pendleton in California The base pub at the Joint Forces Training Base in Los Alamitos, California Former dining facility used by 2nd Cavalry Regiment at Fort Polk, Louisiana An artillery-only pub for the 10th Marine Regiment, Marine Corps Base Camp Lejeune in North Carolina A privately-owned restaurant in San Diego, California adjacent to Naval Base Point Loma and Marine Corps Recruit Depot San Diego
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Topo (DC Comics)** Topo (DC Comics): Topo is a fictional character appearing in American comic books published by DC Comics, notable as a loyal sidekick to Aquaman who often aids him and his allies in combat. Publication history: Topo first appears in Adventure Comics vol. 1 #229 and was created by Ramona Fradon. Publication history: As of current continuity, there have been three different versions of the character bearing the name Topo. The first version is an intelligent octopus who is usually seen babysitting Aquababy to the best of his ability in the Pre-Crisis continuity. There is a second Topo who becomes an ally to the second Aquaman. He is an anthropomorphic squid-boy from Dyss, who helps Aquaman open portals throughout the ocean. All versions of the character have assisted Aquaman in his adventures and have also appeared assisting other heroes. Fictional character history: The original Topo was born in or near the undersea continent of Atlantis, where he became a favored pet of Aquaman. The creature appears to be gifted with an exceptional intelligence compared to that of an average octopus, and possesses superior dexterity and problem-solving skills as well. Topo once demonstrated his skill with a bow and arrow, and was even known to have developed a keen ear for music; supposedly he was able to play several musical instruments simultaneously. Fictional character history: The second version of Topo appears in Aquaman: Sword of Atlantis. This version is more humanoid in form, but still has many octopus-like abilities. His skin is grey with spots, and he has three fingers on each hand. His lower facial features, including his mouth, are hidden behind six short tentacles. When Mera, Tempest and Cal Durham need to return to Sub Diego, he leads the group to hidden hatches that act as portals. Aquaman soon joins the group and Topo offers to lead them on a trip, but they are surprised by Baron Gargos, who had been sent by the Deep Church to kill them. After the fight with Gargos, they finally reach Sub Diego and discover that the city is dominated by Black Manta, who has killed the local police and taken Alonzo Malrey hostage to lure Orin. Realizing that it is not the original Aquaman, Black Manta orders his goons to shoot them all; Topo takes position and squirts ink as a distraction so they have a chance to escape. Fictional character history: In The New 52, Topo is reintroduced as a fearsome sea monster, a gigantic creature that is part octopus and part crab that only Aquaman can summon with a special conch. Aquaman summons it to deal with the Scavenger, and uses his full telepathic power to unleash the creature on the Scavenger's fleet. However, this version of Topo is found to be too intelligent to be controlled by Aquaman's telepathy; while the creature manages to destroy the enemy submarines, the strain of mentally commanding Topo causes Aquaman to suffer a nosebleed before losing consciousness. Other versions: Topo appears in DC Super Friends, Tiny Titans and Scooby-Doo Team-Up comics. In other media: Topo appears in the Young Justice episode "Downtime", voiced by James Arnold Taylor. He appears in semi-humanoid form as an Atlantean sorcery student. He returns in Young Justice: Phantoms. Topo appears in the film Aquaman, providing drum accompaniment during Arthur and Orm's Ring of Fire combat.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZP3** ZP3: Zona pellucida sperm-binding protein 3, also known as zona pellucida glycoprotein 3 (Zp-3) or the sperm receptor, is a ZP module-containing protein that in humans is encoded by the ZP3 gene. ZP3 is the glycoprotein in the zona pellucida most important for inducing the acrosome reaction of sperm cells at the beginning of fertilization. Function: The zona pellucida (ZP) is a specialized extracellular matrix that surrounds the oocyte and early embryo. It is composed of three or four glycoproteins (ZP1-4) with various functions during oogenesis, fertilization and preimplantation development. The protein encoded by this gene is a major structural component of the ZP and functions in primary binding and stimulation of the sperm acrosome reaction. The nascent protein contains an N-terminal signal peptide sequence, a conserved "ZP domain" module, a consensus furin cleavage site (CFCS), a polymerization-blocking external hydrophobic patch (EHP), and a C-terminal transmembrane domain. Cleavage at the CFCS separates the mature protein from the EHP, allowing it to incorporate into nascent ZP filaments. A variation in the last exon of this gene has previously served as the basis for an additional ZP3 locus; however, sequence and literature review reveals that there is only one full-length ZP3 locus in the human genome. Another locus encoding a bipartite transcript designated POMZP3 contains a duplication of the last four exons of ZP3, including the above-described variation, and maps closely to this gene. Orthologs of these genes are found throughout Vertebrata. The western clawed frog appears to have two orthologs, and the sea lamprey has seven. 3D Structure: X-ray crystallographic studies of the N-terminal half of mammalian ZP3 (PDB: 3D4C, 3D4G, 3EF7, 5OSQ) as well as its full-length avian homolog (PDB: 3NK3, 3NK4) revealed that the protein's ZP module consists of two immunoglobulin-like domains, ZP-N and ZP-C. The latter, which contains the EHP as well as a ZP3-specific subdomain, interacts with the ZP-N domain of a second molecule to generate an antiparallel homodimeric arrangement required for protein secretion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strainer bar** Strainer bar: A strainer bar is used to construct a wooden stretcher frame used by artists to mount their canvases. Such frames are traditionally wooden framework supports on which an artist fastens a piece of canvas. They are also used for small-scale embroidery to provide steady tension: the edges of the fabric are affixed with push-pins or a staple gun before sewing begins, and the fabric is removed from the frame when the work is complete. Strainer bar frames are usually in the shape of a rectangle, although shaped canvases are also possible. Strainer bar: A stretcher frame constructed from strainer bars should not be confused with one constructed from stretcher bars. Strainer bars are fixed to one another with wood glue, nails or staples, often in combination. Strainer bar frames are often reinforced with other fixed elements such as corner and cross braces. These frames are not built to accommodate the insertion of tightening keys into their corners to further tighten the canvas stretched upon them, as a stretcher bar frame would.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Econometric Theory** Econometric Theory: Econometric Theory is an economics journal specialising in econometrics, published by Cambridge Journals. Its current editor is Peter Phillips. It is one of the main econometrics journals. The journal was founded in 1985 against a backdrop of strong growth in econometrics research. At the time of its foundation, a main goal was to support theoretical developments in econometrics. Whereas many early articles focused exclusively on theory, disregarding practical applications, in recent decades it has become standard practice to include empirical illustrations or simulations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Podao** Podao: Podao or pudao (Chinese: 朴刀; pinyin: pōdāo) is a Chinese single-edged infantry weapon that is still used primarily for training in various Chinese martial arts. The blade of the weapon is shaped like a Chinese broadsword, but the weapon has a longer handle, usually around one to two meters (about three to six feet) long, which is circular in cross-section. It looks somewhat similar to the guandao. Podao: The pudao is sometimes called a "horse-cutter sword", since it is speculated to have been used to slice the legs out from under a horse during battle (like the zhanmadao). It is somewhat analogous to the Japanese nagamaki, although the nagamaki sword may have been developed independently. The pudao also resembles the Korean hyeopdo. Popular culture: Shang-Chi and the Legend of the Ten Rings features locations in Ta Lo, as well as Razor Fist using podaos made of dragon scales to fight the Dweller-in-Darkness.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laminotomy** Laminotomy: A laminotomy is an orthopaedic neurosurgical procedure that removes part of the lamina of a vertebral arch in order to relieve pressure in the vertebral canal. A laminotomy is less invasive than conventional vertebral column surgery techniques, such as laminectomy, because it leaves more ligaments and muscles attached to the spinous process intact and it requires removing less bone from the vertebra. As a result, laminotomies typically have a faster recovery time and result in fewer postoperative complications. Nevertheless, possible risks can occur during or after the procedure, such as infection, hematomas, and dural tears. Laminotomies are commonly performed as treatment for lumbar spinal stenosis and herniated disks. MRI and CT scans are often used pre- and post-surgery to determine if the procedure was successful. Anatomy overview: The spinal cord is housed in a bony hollow tube called the vertebral column. The vertebral column is composed of many ring-like bones called vertebrae (singular: vertebra) and it spans from the skull to the sacrum. Each vertebra has a hole in the center called the vertebral foramen, through which the spinal cord passes. Laminae (singular: lamina) are the anatomical structures of primary importance in a laminotomy. Laminae are part of the vertebral arch, which is the region of bone on the back side of each vertebra that forms a protective covering for the back side of the spinal cord. The vertebral arch is composed of several anatomical features in addition to laminae that must be taken into account when performing a laminotomy. In the center of the vertebral arch is a bony projection called the spinous process. The spinous process is located on the posterior or back side of the vertebra and serves as the attachment point for ligaments and muscles which support and stabilize the vertebral column. Each vertebra has two lateral bony projections called the transverse processes, which are located on either side of the vertebral arch. Transverse processes come into contact with the ribs and serve as attachment points for muscles and ligaments that stabilize the vertebral column. The lamina is the segment of bone that connects the spinous process to the transverse process. Each vertebra has two laminae, one on each side of the spinous process. Types: Different types of laminotomy are defined by the type of instrument used to visualize the procedure, which vertebra the procedure is performed on, and whether both laminae of a vertebra are operated on or just one. Common types of laminotomy: Microscopic/Microdecompression laminotomy uses an operating microscope in order to magnify the area being operated on. The operating microscope is typically mounted to the surgery table and held over the area of operation. Endoscopic/Microendoscopic decompression laminotomy uses an endoscope, a small tube-shaped camera inserted into the patient in order to visualize the procedure internally.
For example, an endoscopic unilateral lumbar laminotomy is the removal of bone from only one lamina of a lumbar vertebra using an endoscope. Procedure: The procedure of a laminotomy remains largely the same regardless of the instrument used or the level of vertebrae operated on. Laminotomies require general or spinal anesthesia and frequently require a hospital stay following the procedure—although the duration of the stay depends on the physical condition of the individual and their reason for having a laminotomy. A laminotomy takes about 70–85 minutes depending on the type of procedure used. Unilateral laminotomies typically require less time because bone is removed from only one lamina, whereas bilateral laminotomies usually take more time because bone is removed from both laminae. The level of the vertebrae that the laminotomy is performed on and what instrument is used produce no significant differences in the length of the procedure. Both unilateral and bilateral laminotomies are performed in a shorter time period compared to a conventional laminectomy, which takes over 100 minutes on average. During a laminotomy, the individual lies on his or her stomach with the back facing up towards the physician. An initial incision is made down the middle of the back, exposing the vertebrae on which the laminotomy will be performed. In this procedure, the spinous process and the ligaments of the vertebral column are kept intact, but the muscles adjacent to the vertebral column, known as the paraspinous muscles (example: spinalis muscle), must be separated from the spinous process and vertebral arch. In a unilateral laminotomy, these muscles are detached only from the side on which the laminotomy is being performed. During a bilateral laminotomy, these muscles must be detached on both sides of the vertebra. The ligaments connecting the laminae of upper and lower vertebrae, known as the ligamenta flava, are often removed or remodeled in this procedure to adjust for the small amount of bone lost. Using either a microscope or an endoscope to provide a view of the procedure, a small surgical drill is used to remove a part of bone from one or both laminae of the vertebrae. Laminotomies can be performed on multiple vertebrae during the same surgery; this is known as a multi-level laminotomy. A slightly different, but commonly used, laminotomy procedure is the unilateral laminotomy for bilateral spinal decompression. This minimally invasive procedure is often used to treat patients with excessive pressure in the vertebral column that must be relieved. In this procedure, the same spinal ligaments are kept intact and the paraspinous muscles must still be detached. A unilateral laminotomy is performed on one lamina of a vertebra. This removal of bone from one lamina provides an opening into the spinal canal. Using a microscope or an endoscope to visualize the procedure, surgical tools are inserted through this opening into the spinal canal. The surgical tools are then navigated underneath the spinous process and across the spinal canal to reach the other lamina on the opposite side of the vertebra to perform a second laminotomy. The incision for this procedure is smaller because doctors need only access one lamina yet can perform a bilateral laminotomy—removing bone from both laminae of a single vertebra. The unilateral laminotomy with bilateral spinal decompression procedure was developed almost 20 years ago and is a common and successful surgical treatment for lumbar spinal stenosis.
Reasons for performing a laminotomy: A laminotomy is typically used to relieve pressure from the spinal canal. Excessive pressure in the spinal canal causes the spinal canal and spinal nerves to be compressed, which can be very painful and can impair motor control and/or sensation. A common disorder that causes increased pressure in the spinal canal is lumbar spinal stenosis. Lumbar spinal stenosis is formally defined as a decline in the diameter of the neural foramina, lateral recess, or spinal canal. Stenosis is classified as a degenerative condition because it causes the canal to become progressively narrower, which can cause pain or loss of function. Common symptoms of lumbar stenosis are pain, fatigue, muscle weakness and numbness. Stenosis can be caused by old age or an injury to the vertebral column and usually requires a CT scan or MRI to diagnose. Performing a laminotomy can relieve pressure in the spinal canal caused by lumbar stenosis and therefore alleviate symptoms. Laminotomies are also performed to create a window into the spinal canal. Laminotomies are frequently used as a way to surgically repair a spinal disc herniation at any level of the vertebral column (cervical, thoracic, lumbar). A herniated disc can compress spinal nerves and cause intense pain and impaired sensation. Removing a portion of the lamina allows physicians to access and repair the herniated disc. Laminotomies may also be used to treat intraspinal lesions such as spinal tumors or problems with the blood vessels supplying the spinal cord. In any scenario where the inside of the spinal canal must be accessed or there is an increase in pressure in the spinal canal, laminotomy may be used to treat the disorder or alleviate symptoms. Reasons for performing a laminotomy: Benefits The laminotomy procedure is often preferred because it is less invasive than other spinal procedures such as a laminectomy or a spinal fusion. Once a laminotomy procedure is done, patients typically have a great improvement in their pain and mobility. Laminotomies are usually safer than other surgeries that are open or invasive. This surgery is usually shorter than other spinal decompression procedures, with an average duration of 70–85 minutes, whereas other decompression surgeries can have a duration anywhere from 90 to 109 minutes. Laminotomies are usually more cost-efficient than other surgical decompression procedures; in 2007, laminotomies cost around $10,000, whereas other surgical procedures were around $24,000. Smaller skin incisions and scarring, as well as less surgical trauma, are also benefits of laminotomy. With this procedure there is usually a faster recovery time, and a shorter hospital stay if one is necessary at all. During the surgery there is also the benefit of minimizing injury to muscles, ligaments, and bones in the spine, since more invasive surgeries have a greater risk of damaging them. General anesthesia is usually required, but postoperative spinal instability is typically limited. Reasons for performing a laminotomy: Risks and potential complications Since this procedure is a surgical technique, there are many complications that can occur either during or after the surgery. Some major complications that can occur are cerebrospinal fluid leaks, dural tears, infection, or epidural hematomas. Death is also a risk; however, it occurs only about once per thousand surgeries.
Other potential complications are nerve root damage, which can lead to nerve injury or paraplegia, and a significant amount of blood loss that may require blood transfusions. Laminotomy versus laminectomy: Historically, laminectomies have been the primary way to treat lumbar spinal stenosis. A laminectomy is a more invasive method with the aim of decreasing the total amount of pain and numbness associated with lumbar spinal stenosis. It is a surgery that eliminates the entire lamina to allow the nerves around this region to function properly. Laminectomies also often produce a longer recovery time as well as a greater risk of post-operative complications. A laminectomy typically causes more damage to the surrounding muscle tissue. Since a laminectomy involves the excision of the entire lamina, a laminectomy will usually cause more spinal instability than a laminotomy. A laminotomy, by contrast, reduces the total amount of muscle severed. Because a laminotomy does not damage the spinous process and critical ligaments, there is not as much muscle weakness, pain, and lumbar instability as is seen with laminectomies. Laminotomies are fairly new compared to laminectomies, and they involve less invasive methods with precise instruments to minimize the risk of tissue damage. Radiographic imaging: X-rays For radiographic imaging, an x-ray is the least effective way to collect information when observing a patient with lumbar spinal stenosis. A CT scan provides a 360-degree compiled view of the vertebrae that is more precise than an x-ray. Radiographic imaging: MRI Since an MRI provides excellent imaging of blood vessels and tissues, it is recognized as the best type of imaging to observe signs associated with lumbar compression. The precise measurement of the diameter of the spinal canal is a particularly important component when determining the severity of the stenosis itself. High-strength 3-tesla MRI machines are being utilized due to their increased vascular imaging capabilities. Better resolution capacity allows for more detailed observations by the healthcare provider. The sharp contrast of the high-power MRI outlines details in the vertebra that are critical when examining a patient with lumbar spinal stenosis who may need a laminotomy. MRI scanning after invasive surgery is used to assess the quality of the surgery itself, yet the appropriate postoperative time that should elapse before conducting an MRI is a debated topic. Radiographic imaging: CT scans A CT scan is not the most effective imaging technique when observing lumbar abnormalities; however, it can supplement an MRI by detecting certain degenerative processes. When determining whether or not a laminotomy will be beneficial for the patient, a healthcare provider must assess the severity of the possible abnormalities. Out of all the potential reasons to have a laminotomy performed, lumbar spinal stenosis is the chief reason. CT scans are used specifically to pinpoint a buckled lumbar ligamentum flavum as well as facet hypertrophy, which are some of the main pathophysiological changes indicative of lumbar spinal stenosis. Even though a CT scan can reveal these pertinent signs of lumbar spinal stenosis, it can sometimes give a cloudy image due to the shadowing of the tissue contrast. When this occurs, an intrathecal myelography contrast study is conducted with the CT scan to correct for the abnormal contrast.
A CT scan can also reveal an increase in the cross sectional area of the L3 vertebra, which ultimately decreases the cross sectional area of the spinal canal. As an increase in the size of the L3 vertebra occurs, pressure builds up on the cauda equina, commonly causing pain in the lower back and lower extremities. Cauda equina compression can also be due to stenosis of the L4–L5 region as well. Even though the CT scan allows for intensive image study, the fixed nature of the image collection process alone is not enough to reach a definitive diagnosis of lumbar spinal stenosis. The outcome of the CT scan can help compile physiological evidence that the patient has lumbar spinal stenosis, and that the patient may potentially benefit from a laminotomy to improve his or her quality of life. Other than static imaging processes, a CT scan can also be used for observing changes in spinal canal features before and after a laminotomy. One of the main signs of lumbar spinal stenosis is the thickening of the ligamentum flavum, causing it to expand towards the spinal canal. When observing the cross sectional area of the spinal canal of a human cadaver, it was found that the area had decreased due to ligamentum flavum thickening. The ligamentum flavum did not appear to alter the dynamic alterations in the dimensions of the spinal cord. Even after the intervertebral disc was removed, the ligamentum flavum did not appear to be a factor in the change in the dimensions of the spinal canal. By understanding the magnitude of the role that ligamentum flavum hypertrophy plays in lumbosacral stenosis, the necessity of an invasive lumbar spinal procedure can be accurately measured. Alternative minimally invasive procedures: Minimally invasive procedures are a more common alternative due to the decreased risk of damaging significant muscle tissue. The difference between invasive and minimally invasive spinal surgeries is that minimally invasive procedures involve a series of small incisions. Minimally invasive procedures can be performed anywhere along the spine, and have been used to treat various abnormalities. The percutaneous pedicle screw fixation technique allows for a procedure that presents minimal risk to the patient. Fluoroscopic image-guided navigation through these portals allows surgeons to perform more efficient procedures. Minimally invasive procedures often yield a much faster recovery time than fully invasive surgeries, making them more appealing to patients. Laminectomies have always been the gold standard when treating lumbar spinal stenosis, but recently, less invasive surgeries have emerged as a safer alternative treatment that helps maintain the postoperative structural integrity of the spine. Alternative minimally invasive procedures: Spinal microsurgery Spinal microsurgery is a minimally invasive unilateral laminotomy used to correct bilateral lumbar spinal compression. Spinal microsurgery is the most common and effective microsurgical decompression treatment for patients who present with moderate to severe spinal stenosis. Spinal microsurgeries are performed with high-magnification 3-D imaging of the fixated area of the spine, reducing the potential risk of harming the architecture of the spine itself. Alternative minimally invasive procedures: Endoscopic spine surgery Endoscopic spine surgeries can be used to treat thoracic lesions, and have been proven to be a much safer option than a thoracotomy.
However, an endoscopic spine surgery can also be performed to treat other spinal conditions, such as a herniated lumbar disc. Recovery time from this type of surgical treatment is often very quick, with patients ambulating within a few hours of the procedure. Alternative minimally invasive procedures: Spinal fusion Spinal fusion involves fusing two vertebrae together using a spacer, and is intended to prohibit movement at that particular segment. Screws are typically inserted to ensure that the spacer is held in place. The most common lumbar spinal fusion occurs between L4 and L5. A lumbar spinal fusion may be recommended when non-surgical treatment options for severe degenerative disc disease are ineffective. A laminotomy would not be effective in this case, since the problem is a degenerated disc that needs to be removed in order to relieve symptoms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Evolution of schizophrenia** Evolution of schizophrenia: The evolution of schizophrenia refers to the theory that natural selection has worked in favor of traits characteristic of the disorder. Positive symptoms are features that are not present in healthy individuals but appear as a result of the disease process. These include visual and/or auditory hallucinations, delusions, paranoia, and major thought disorders. Negative symptoms refer to features that are normally present but are reduced or absent as a result of the disease process, including social withdrawal, apathy, anhedonia, alogia, and behavioral perseveration. Cognitive symptoms of schizophrenia involve disturbances in executive functions, working memory impairment, and inability to sustain attention. Given the high numbers of individuals diagnosed with schizophrenia (nearly 1% of modern-day populations), it is unlikely that the disorder has arisen solely from random mutations. Instead, it is believed that, despite its maladaptive nature, schizophrenia has been either selected for throughout the years or exists as a selective by-product. Hypotheses: Balancing Selection and Positive Selection Hypothesis The balancing selection hypothesis suggests that balancing selection, an evolutionary mechanism, has allowed for the persistence of certain schizophrenia genes. This mechanism is defined as maintaining multiple alleles of a gene in the gene pool of a population despite selective pressures. Heterozygote advantage, a mechanism of balancing selection, occurs when the presence of both the dominant and recessive alleles for a particular gene allows for greater fitness in an individual than if the individual expressed only one type of allele. This mechanism can be seen in the carriers of the schizophrenia gene who express both the dominant and recessive allele. These carriers may express certain advantageous traits that would allow the schizophrenia gene to be selected for. Evidence has suggested a carrier of the schizophrenia gene could experience a selective advantage due to their expression of advantageous traits as compared to those who do not express the schizophrenia gene. Studies have shown that some of the carriers of the schizophrenia gene may express adaptive benefits such as a decreased frequency of viral infections. Additional beneficial traits may include a higher IQ, increased creativity, and mathematical reasoning. Due to the presence of these beneficial traits, the schizophrenia gene has not been selected against and has remained prevalent in human development over numerous generations. While the balancing selection hypothesis sounds plausible, there is no substantial evidence in its support. Within the studies that found a positive correlation between specific favorable characteristics and the schizophrenia gene, only a few carriers were tested, meaning that there is insufficient evidence to assume a direct correlation between these advantageous traits and the carriers of schizophrenia. Although this hypothesis has not yet been substantiated, the advantageous traits that these carriers express could provide a reasonable explanation for why the genes for schizophrenia have not been eliminated. Positive selection is another mechanism that has allowed for the selection of genes contributing to the presence of schizophrenia. Positive selection is a mechanism of natural selection in which beneficial traits are selected for and become prevalent over time in a population.
In a study conducted using phylogeny-based maximum likelihood (PAML), a method used to test for positive selection, significant evidence of positive selection was found in the genes associated with schizophrenia. An example of a beneficial trait that has been selected for through positive selection is creativity. Three allelic variants of creativity genes that are also associated with schizophrenia are SLC6A4, TPH1 and DRD2. The high inheritance of creative and cognitive characteristics by these allelic variants in individuals expressing schizophrenia supports evidence of positive selection within some schizophrenia genes. Additional studies conducted using SNP analysis on the SLC39A8 gene, a gene associated with schizophrenia, found that the T-allele on the gene was associated with reduced blood pressure and a decreased risk of hypertension. These beneficial traits associated with schizophrenia genes provide an explanation for selection of these genes in human development. While this evidence is promising, other evidence suggests that positive selection may not play a significant role in the presence of schizophrenia. Studies conducted through the use of FST and methods based on the sample frequency spectrum (SFS) failed to find convincing signals of positive selection on the CGC-type of the ST8SIA2 gene, another gene associated with schizophrenia. Hypotheses: Social brain hypothesis A social brain refers to the higher cognitive and affective systems of the brain, evolving as a result of social selection and serving as the basis for social interaction; it is the basis of the complexity of social interactions of which humans are capable. Mechanisms comprising the social brain include emotional processing, theory of mind, self-referencing, prospection and working memory. Patients display defects in various regions of the social brain, such as an inability to grasp social goals, which serves as an indication of a defect in theory of mind. This defect can be caused by the rapid selection for genes associated with language and cognitive ability within the human species. These rapid evolutionary changes, in some cases, may impede normal development within the social brain. As schizophrenia is foremost a disorder of consciousness, it has been suggested that schizophrenia exists as an unwanted byproduct of the evolution of the prefrontal cortex and other brain regions constituting the social brain. Under increasing selective pressure induced by increasingly complex social living, these regions of the brain have grown as a means of accommodation and in turn have given rise to vulnerable neural systems. One hypothesis suggests this vulnerability in neural systems has made possible changes in genes associated with the social brain that affect neurogenesis, neuronal migration, arborisation, or apoptosis. Although it is unclear which of these factors have exhibited gene changes, it is likely that these changes have contributed to the defect in neurodevelopment seen in schizophrenia patients. A second hypothesis suggests that disturbance in the brain's frontal circuits, a region that largely constitutes the social brain, can lead to a lack of regulation in cognitive control and processing. This defect in regulation could increase the susceptibility to a social disorder like schizophrenia.
Hypotheses: Social advantage hypothesis This hypothesis refers to the worship of psychics and seers in early civilizations; the hallucinatory behavior and delusions brought on by schizophrenia may have been highly regarded and allowed the individual to be conferred the title of saint or prophet, raising their social standing and allowing social selection to act on behalf of the disorder. This hypothesis lacks evidence and has not aided in explaining the continued persistence of schizophrenia in modern-day society, where people showing symptoms of schizophrenia are typically not identified as saints or prophets. Hypotheses: Physiological advantage hypothesis This hypothesis maintains that people with schizophrenia possess a physiological advantage in the form of disease or infection resistance, a theory that has found basis in diseases such as sickle-cell anemia. In one particular study, NAD, an energy carrier found in animals and yeast, was found to be capable of diminishing the infectivity of tuberculosis when present in large quantities; this is done by repressing gene expression. However, the M. tuberculosis bacterium has been shown to be capable of acting as a drain on the NAD supply. Studies of kynurenine pathway activation reveal that M. tuberculosis infection of the pathway causes niacin receptors in the pathway to indicate high levels of niacin, a precursor to NAD that makes de novo synthesis of NAD from tryptophan unnecessary. This change creates the illusion that NAD levels are adequate and that tryptophan conversion is unnecessary. Coevolution with M. tuberculosis has resulted in attempts to overcome this illusion in a variety of ways, including the up-regulation of niacin receptors and up-regulation of de novo synthesis of NAD from tryptophan via the kynurenine pathway. An enzyme implicated in the initiation of the kynurenine pathway, tryptophan 2,3-dioxygenase (TDO2), is found to activate during niacin-deficient conditions and is also found at increased levels in schizophrenic brains. In the postmortem brain tissue of people with schizophrenia, the protein for the high-affinity niacin receptor was significantly decreased and, as a result, would allow for the up-regulation of the mRNA transcript for the niacin receptor. Hypotheses: Shamanistic hypothesis This hypothesis purports that schizophrenia is a vestigial behaviour that was once adaptive to hunting and gathering tribes. Psychosis prompts shamans to communicate with the spirit world, which results in the formation of religious myths. The shamanistic theory posits that the universal presence of shamanism in all hunting and gathering societies is likely due to heritable factors – the same heritable factors that support the worldwide distribution of schizophrenia. One modern version of the theory has invoked the evolutionary mechanism of group selection in order to explain the apparent genetically based task specialization of shamanism. Hypotheses: Immune system hypothesis Perinatal exposure It has been suggested that acute neuroinflammation during early fetal development may contribute to schizophrenia pathogenesis. The risk of schizophrenia is higher among those who experienced prenatal maternal viral infections such as influenza, rubella, measles, and polio, as well as bacterial or reproductive infections. The brain is highly sensitive to environmental insults during early development.
Factors common to the immune response to a variety of pathogens are mediators in linking the commonalities between prenatal/perinatal infection and neurodevelopmental disorders. Hypotheses: One hypothesis suggests that enhanced expression of proinflammatory cytokines and other mediators of inflammation in the maternal, fetal, and neonatal compartments may interfere with brain development, thereby increasing the risk for long-term brain dysfunction later in life. Increased pro-inflammatory cytokines: Another hypothesis seeking to explain why schizophrenia occurs aims at understanding the activation of the immune system. The activation of the inflammatory response system mediated by cytokines may play a key role in the pathogenesis of schizophrenia. Evidence suggests that serum levels of IL-2, IL-6, IL-8, and TNF-α are significantly elevated in patients with chronic treatment-resistant schizophrenia. Nuclear factor-kappa B regulates the expression of cytokines, and an increase in NF-κB levels leads to an increase in proinflammatory cytokine levels. Brain-derived neurotrophic factor: Individuals with schizophrenia have lower levels of brain-derived neurotrophic factor, or BDNF. BDNF is responsible for promoting the proliferation, regeneration, and survival of neurons. It is also important for the regulation of cognitive function, something individuals with schizophrenia have trouble with. Lower BDNF expression is associated with increased IL-6 expression and increased cortisol levels. The more pro-inflammatory cytokines in circulation, the more BDNF production decreases. This implies that an excess amount of pro-inflammatory cytokines negatively affects BDNF production. This, in turn, affects the presence and severity of psychosis in individuals with schizophrenia. Hypotheses: Self-domestication hypothesis The theory of self-domestication asserts that during the late Pleistocene period, archaic humans split from their hominid ancestors and underwent behavioral changes that led to a reduction of aggression and an increase in "tameness". As a result of this transformation, changes to humans' biological, morphological, physiological, and genetic development occurred, leading to anatomical changes in size, craniofacial structure, and brain structural differences, as well as changes in behavior related to reduced levels of stress hormones and delayed maturation of the adrenal glands. The self-domestication hypothesis for the evolution of schizophrenia emphasizes the importance of our self-domesticated evolution, with emphasis on its contribution to the altered genetic development of the neural crest and our relaxed socio-cultural niche. Adaptations related to these domesticated changes favored the emergence of complex cognitive abilities, including advanced linguistic cognition. The self-domestication hypothesis suggests that schizophrenia results from hypofunction of neural crest development, triggered by the selection for domesticated "tameness", and emphasizes the domesticated characteristics that make up the clinical phenotype of schizophrenia. Deficits related to language production and processing are prevalent in both positive and negative symptoms of schizophrenia.
In addition, schizophrenic patients often demonstrate more marked domesticated traits at the morphological, physiological, and behavioral levels, including craniofacial abnormalities, a desensitized cortical response to stress, and disorganized speech. A study published in 2017 targeted various candidate genes (FOXD3, RET, SOX9, SOX10, GDNF) with overlapping function in relation to schizophrenia, domestication, and neural crest development, and found the brain areas with the largest number of expressions to be the frontal cortex, the associative striatum, and the hippocampus. Although the results do not reflect the molecular events that occurred during early neural development or evolution, they provide insight into the molecular network that underlies the impaired cognitive and social processes at work in the schizophrenic brain, and further suggest that self-domestication, language processing, and schizophrenia have an intimately intertwined relationship. Hypotheses: Sexual selection hypothesis This hypothesis builds upon Crespi and Badcock's imprinted brain hypothesis of autism and psychosis by suggesting that the behavioral traits associated with autism and schizophrenia have been beneficial for individual reproductive, mating, and parental strategies and, therefore, have been maintained throughout the human population via sexual selection. Under this hypothesis, autistic- and schizotypy-like traits exist as diametric opposites joined on the same spectrum of normal cognition, and most people display moderate degrees of one or both types of traits. When the spectrum of traits intertwines with the dynamics of genomic imprinting and the principles of sexual selection within the context of biparental investment patterns, traits act as ornaments of mating behavior. Whereas autistic-like traits are selected for based on their display of mechanistic and practical intelligence for obtaining resources, which indicates support for a long-term relationship, schizotypy-like traits demonstrate verbal and artistic creativity that indicates strong genetic fitness for a short-term mating strategy. Therefore, variation in different cognitive traits remains adaptive as life-history, reproductive, and parental strategies according to local ecological conditions and personal characteristics. Although the hypothesis proposes that the cognitive traits do not originate by means of sexual selection and likely evolved for reasons unrelated to mating, the behavioral effects dictated by the genetic autistic- and schizotypy-like traits remain varied in the environment and remain under selection; only extreme variants of either of the traits result in their respective clinical condition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NFIL3** NFIL3: Nuclear factor, interleukin 3 regulated, also known as NFIL3 or E4BP4 is a protein which in humans is encoded by the NFIL3 gene. Function: Expression of interleukin-3 (IL-3) is restricted to activated T cells, natural killer (NK) cells, and mast cell lines. Transcription initiation depends on the activating capacity of specific protein factors, such as NFIL3, that bind to regulatory regions of the gene, usually upstream of the transcription start site.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flosequinan** Flosequinan: Flosequinan is a quinolone vasodilator that was discovered and developed by Boots UK and was sold for about a year under the trade name Manoplax. It had been approved in 1992 in the US and UK to treat people with heart failure who could not tolerate ACE inhibitors or digitalis. Boots initiated a clinical trial called PROFILE to see if the drug could be useful in a wider population. The study was terminated early in 1993 due to increased mortality in the drug arm of the trial; preliminary results were published in a conference abstract by the principal investigator Milton Packer and others, which promised that the data and analysis would be forthcoming in a future paper; that paper was finally published in 2017. Boots withdrew the drug from the market in July 1993.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abnormal cannabidiol** Abnormal cannabidiol: Abnormal cannabidiol (Abn-CBD) is a synthetic regioisomer of cannabidiol, which unlike most other cannabinoids produces vasodilator effects, lowers blood pressure, and induces cell migration, cell proliferation and mitogen-activated protein kinase activation in microglia, but without producing any psychoactive effects. Receptor activity: It has been shown that the actions of abnormal cannabidiol are mediated through a site separate from the CB1 and CB2 receptors, which responds to abnormal cannabidiol, O-1602, and the endogenous ligands anandamide (AEA), N-arachidonoyl glycine (NAGly) and N-arachidonoyl L-serine. Multiple lines of evidence support the proposed identification of this novel target in microglia as the previously "orphan" receptor GPR18. Another possible target of abnormal cannabidiol is GPR55, which has also received much attention as a putative cannabinoid receptor, although a growing body of evidence points to lysophosphatidylinositol (LPI) as the endogenous ligand for GPR55. Further research suggests there are yet more additional cannabinoid receptors. Pharmacodynamics: Research on the effects of abnormal cannabidiol in mice has indicated that atypical cannabinoids have therapeutic potential in a variety of inflammatory conditions, including those of the gastrointestinal tract. After colitis was induced by means of trinitrobenzene sulfonic acid, wound healing of both human umbilical vein endothelial and epithelial cells was enhanced by Abn-CBD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jolliffeite** Jolliffeite: Jolliffeite is a rare selenide mineral with formula NiAsSe or (Ni,Co)AsSe. It is the selenium analogue of the sulfide mineral gersdorffite, NiAsS, with a common impurity of cobalt, CoAsSe. It is named for its discoverer, Alfred Jolliffe (1907–1988), a Canadian geologist of Queen's University, Kingston, Ontario. Crystallography: Jolliffeite has cubic symmetry and is therefore isometric, meaning that crystallographically it has three mutually perpendicular axes of equal length. It has four three-fold axes, all inclined at the same angle to the crystallographic axes. Optically, jolliffeite is isotropic. Isotropic minerals have a single refractive index and are not birefringent. The single refractive index of jolliffeite can be determined by its relief. Discovery and Importance: Jolliffeite was discovered in 1991 within a fracture zone near a contact of an intrusive peridotite with dolomite from the Fish Hook Bay area, Shirley Peninsula, north shore of Lake Athabasca, Saskatchewan, Canada, in a geological environment with a high concentration of platinum-group elements. The area of the Shirley Peninsula consists of rocks mainly Archean in age. The rare finding of jolliffeite in this region is helping geologists piece together more of the area's history. Its composition of nickel and arsenic makes it important for mineral resources, as the two have many common uses. Jolliffeite's presence may also lead to the finding of other important minerals, as it was found in the drill core with chalcopyrite, pyrite, hematite, arsenopyrite, bornite, calcite, dolomite, quartz, coffinite, pitchblende, native gold, native silver, cobaltite, and micas.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wye (rail)** Wye (rail): In railroad structures, and rail terminology, a wye (like the 'Y' glyph) or triangular junction (often shortened to just triangle) is a triangular joining arrangement of three rail lines with a railroad switch (set of points) at each corner connecting to the incoming lines. A turning wye is a specific case. Where two rail lines join, or where a spur diverges from a railroad's mainline, wyes can be used at a mainline rail junction to allow incoming trains to travel in either direction. Wye (rail): Wyes can also be used for turning railway equipment, and generally cover less area than a balloon loop doing the same job, but at the cost of two additional sets of points to construct and then maintain. These turnings are accomplished by performing the railway equivalent of a three-point turn through successive junctions of the wye. The direction of travel and the relative orientation of a locomotive or railway vehicle thus can be reversed. Where a wye is built specifically for equipment reversing purposes, one or more of the tracks making up the junction will typically be a stub siding. Wye (rail): Tram or streetcar tracks also make use of triangular junctions and sometimes have a short triangle or wye stubs to turn the car at the end of the line. Considerations: At junctions The use of triangular junctions allows flexibility in routing trains from any line to either of the two other paths, without the need to reverse the train. For this reason they are common across most rail networks. A slower train may be signaled to temporarily enter a wye (as a refuge siding in lieu of a passing loop) for a meet with an oncoming train, or to allow a faster one to overtake, and then reverse out to continue in the original direction. Considerations: Where one or more of the lines forming the junction are multi-track, the presence of a triangular junction does introduce a number of potential conflicting moves. For this reason, where traffic is heavy the triangle may incorporate flying junctions on some of the legs. Considerations: For turning equipment From time to time it is necessary to turn both individual pieces of railroad equipment or whole trains. This may be because the piece of equipment is not directionally symmetrical, for example, most steam locomotives and some diesel locomotives, or where the consist has a dedicated tail end car such as an observation car. Even where equipment is symmetrical, periodic turning may still be necessary in order to equalize wear (e.g., on the London Underground's Circle Line). Several different techniques can be used to achieve such turning. Turntables require the least space, but can generally only deal with a single piece of equipment at a time. Balloon or turning loops can turn trains of any length — up to the total length of the loop — in a single operation, but require far more space than wyes. Rail wyes can be constructed on sites where a loop would not be possible, and can turn trains up to the length of the stub tracks at the end of the wye. Considerations: Railroad systems in North America and Australia have tended to have more wyes than railroads elsewhere. North American locomotives and cars (such as observation cars) are more likely to be directional than those found on other continents. In Canada and the United States, the railroad often was built before other structures, and railway builders had much more freedom to lay down tracks where they wished. 
Similarly, when not constrained by space limitations many early Australian railways made use of wyes (particularly in rural locations) for their lower installation and maintenance costs; however, their necessity and use diminished from the 1960s onwards with the major trend in most states toward bidirectional locomotives and railcars. Considerations: In Europe, although some use was made of bi-directional tank locomotives and push–pull trains, most steam locomotives were uni-directional. Because of land usage considerations, turntables were normally used to turn such locomotives, and most terminal stations and locomotive depots were so equipped. Over time, most diesel and electric locomotives ordered in Europe have been designed to be fully bi-directional and normally with two driving cabs. Thus most rail wyes, where they existed, and turntables have been taken out of use. Considerations: Streetcar or tram systems Similar considerations as for mainline rail systems apply to the use of triangular junctions and reversing wyes on streetcar and tram systems. Many, although by no means all, streetcar and tram systems use single ended vehicles that have doors on one side only, and that must be turned at each end of the route. Considerations: However, the vehicles used on such systems tend to have much smaller minimum curvature requirements than heavy rail equipment. This renders the use of a balloon loop more practical in a small amount of space, and with street-running vehicles such a loop may be able to use side streets or street squares. However, although turning loops are the most common way of turning such vehicles, wye tracks are also sometimes used. Considerations: Disadvantages A triangle may have a situational disadvantage in train operations when space constraints of the local geography cause one leg of triangle to bypass a main station. In tight city environments, this can happen easily, as it did, for example, at Cootamundra West, Australia and Tecuci, Romania, where extra passenger stations had to be built to serve trains taking the shortcut. Considerations: In contrast, the engineering of a terminus station such as Woodville Railway Station, New Zealand avoided this problem by building a balloon loop (reversing loop) so that trains can serve the main station in either direction without the need to reverse. In a midline station where it is desired to reverse a consist or locomotive, a double-track and turning wye arrangement is far more common. Considerations: Land usage The land within a triangle is cut off from the adjacent area (and normally fenced off) and has marginal commercial value, so will be purposed mainly for the railway's exclusive use – generally being used for maintenance depots, storage, or vehicle parking. On electrified lines substations tend to be located inside triangles, in part because the land is cheap, and also because it provides the most convenient and flexible sectioning arrangements. Earliest examples: The earliest British (and possibly worldwide) example is the double-tracked triangle within Earlestown railway station on the Liverpool and Manchester Railway, which was completed by the Grand Junction Railway in 1837. The triangle has two passenger platform faces on each of its three sides and five of the six platforms are in frequent (half-hourly, etc.) use by passenger trains. When steam engines were in regular use the triangle (which is of course also traversed by freight trains) was also used to turn locomotives, and can still be so used. 
Earliest examples: An earlier example may be on the Cromford and High Peak Railway, which had been opened in 1831 as a horse-drawn railway. This appears to have been used for reversing trains of wagons with end doors that had just come up the rope-hauled inclines to the highest level of the railway before they proceeded down the remaining inclines. The site of this can still be seen near Hindlow, in Derbyshire. Examples by country: Australia Sefton railway station in Sydney lies on one corner of a triangular junction, which allows trains to branch off in either direction without the need to terminate or change ends. One train a day from Birrong to Sefton does terminate and reverse at Regents Park station (in order to clean the rust off the crossover rails). There is a goods branch from Chullora and, in the future, the possibility of a separate single-track freight line. The three passenger stations at the vertices of the triangle have island platforms, making it convenient to change trains. The sharp curves of the triangle, and especially the turnouts on those sharp curves, restrict train speeds to between 10 and 50 km/h (6.2 and 31.1 mph). Examples by country: Near Hamilton station on the Central Coast and Newcastle line there is a wye for freight trains and regional trains; this puts them directly on the main northern line. A number of triangular junctions were built on the Victorian Railways network, both at major junctions, and for turning locomotives and train consists in places where the provision of a turntable was impractical or unnecessarily expensive. These included: Wodonga – Built on the junction of the Cudgewa Line and used to turn the consists of the Sydney Limited and Spirit of Progress trains. Examples by country: Ararat; and North Geelong – built to allow trains to travel directly between Geelong, Ballarat or Melbourne without using a run-around or turntable. It was also used to turn trains, such as during the demonstration run of the Spirit of Progress in 1937. A triangular junction is used to turn tramcars on the Portland Cable Tram line in Portland, Victoria. 
Originally constructed to allow direct Belfast–Newcastle trains to bypass Downpatrick station, the triangle forms the basis of a heritage railway, the only heritage railway of this type in the British Isles. There is one station at each end of the triangle and another in the southernmost corner. Examples by country: Historical triangular junctions in Ireland include Moyasta Junction on the West Clare line, the Monkstown/Greenisland/Bleach Green triangle on the Northern Counties Committee and Bundoran Junction on the Great Northern Railway. Though two sides of the Monkstown/Greenisland/Bleach Green triangle are still in mainline use, the "back line" between Monkstown and Greenisland has been removed, whilst Bundoran Junction was closed altogether in 1957. Additionally, the Great Northern's largest locomotive yard at Adelaide never had a turntable, using a dedicated turning triangle instead. Examples by country: The Luas tram system has a triangular junction on the Red Line between the stations of Busáras, Connolly and George's Dock. The line that goes between George's Dock and Connolly is never used, as no trams operate between The Point Luas stop and Connolly. Italy: Railways in Italy used a number of "inversion stars" for turning locomotives. This uses a pentagram layout, requiring four movements and five turnouts to reverse. It allows a smaller layout, without excessively tight curve radii, compared to a triangle. Examples by country: Some of these still survive, such as at the original terminus of Carbonia in Sardinia and at Mals or Malles Venosta in Val Venosta in the South Tyrol. Inversion star or convoluted wye In addition to small terminal stations such as Carbonia and Malles Venosta, inversion stars were also installed at some principal stations such as Verona Porta Nuova and Brenner at the summit of the Brenner Pass. Examples by country: Namibia Tsumeb railway station in Namibia has two triangles. The first and smaller one is for turning engines and is near the station. The second and larger one is to bypass the dead-end station at Tsumeb for trains travelling directly between the new extension towards Angola and Windhoek. This direct bypass line can save an hour of shunting time, particularly if the train is longer than the loops in the station. Examples by country: Switzerland There is a turning triangle partly tunnelled into the mountain at Kleine Scheidegg at the summit of the 800 mm gauge Wengernalpbahn in the Bernese Oberland, Switzerland. Kleine Scheidegg is reached from two lower termini, Lauterbrunnen and Grindelwald, located on opposite sides of the col. Trains normally descend in the direction they have arrived from and are designed accordingly, with the power unit at the lower end and seating angled to compensate for the gradient. They therefore have to be turned at the summit should it be necessary to make a through journey. Whilst limitations of space dictated that the triangle had to be partly constructed in tunnels, this also ensures that in winter it is snow-free and thus readily available in emergencies. Examples by country: United Kingdom In Britain, triangular layouts that could be used for turning locomotives were usually the result of junctions of two or more lines. There are many examples, including the one known as the Maindee triangle in Newport, South Wales. Here the ex-GWR South Wales mainline from London to Swansea is joined by another GWR line from Shrewsbury via Hereford. The significance of it is that steam-hauled trains can run to Newport and their engines be turned using the triangle. 
Its National Grid location is grid reference ST316887. Shrewsbury also has a triangular route formation that was used to turn steam locomotives, and is still available. A triangle, grid reference SH294789, was provided in 1989 adjacent to the transfer sidings for Wylfa Nuclear Power Station, near Valley on Anglesey in Wales. This enables the North Wales Coast Line to be used by steam-hauled excursions. The turntable at Holyhead has long been removed and the area re-developed; the sidings at Valley, some 4 miles (6.4 km) from the terminus, are the nearest suitable site. Examples by country: An unusual arrangement, unique in Britain, was constructed at Grantham. Its location was grid reference SK914349 and it is shown on the 1963 edition of OS 1 inch to 1 mile sheet 113. It was built in the 1950s after the turntable at the locomotive shed failed and expenditure on a replacement was no longer justified. Locomotives that needed to be turned had to travel to Barkston Junction to traverse the triangular layout there (this was where Mallard, with a dynamometer car attached, was turned before starting out south on its record-breaking run on 3 July 1938). The journey to Barkston Junction and back was a time-consuming business involving a round trip of some 8 miles (13 km) along the busy East Coast Main Line. Eventually authority was given to construct a turning arrangement on a strip of spare land to the west of the main line, just south of Grantham station. There was insufficient space for a conventional triangle, but this was overcome by constructing an "inside-out" triangle whereby the approach tracks intersected in a scissors crossing. Examples by country: United States Many North American passenger terminals in large cities had wye tracks to allow the turning and backing of directional passenger trains onto a main line. Freight traffic could bypass the terminal through the wye. Notable examples include the Los Angeles Union Station, which has a double wye, the Saint Paul Union Depot, and the Memphis Union Station. Examples by country: A typical use for a stub-end passenger station would be as follows: A wye was incorporated at the "throat" where the rows of tracks converged from the station to facilitate the turning of trains. An arriving train came to a stop on the main line after passing the wye. Once the switches on the wye were aligned, the train reversed, with the brakeman at the rear of the last car regulating the speed with the brake lever upon approach to the platform. After coming to a complete stop at the end of the track, passengers were allowed to disembark safely. Examples by country: Meanwhile, the locomotives could be uncoupled from the train and sent to the engine terminal to be serviced for their next assignment. Then, the head-end cars could be uncoupled from the rest of the train and spotted by a station switcher at the parcel facility where mail and express packages were handled. The departing train was reassembled, freshly cleaned and serviced for the next journey. A steam pipe from the station's steam generator could be attached to the train's steam line from the rear to supply heat until the locomotives were coupled up front to supply steam. Examples by country: The train was announced for boarding with a list of destinations. With switches aligned, the train slowly departed to the main line, continuing on its journey or returning toward the direction from which it arrived by rounding the opposite leg from the one it reversed on upon arrival. 
The Keddie Wye in Keddie, California, was built by the Western Pacific Railroad and is a remarkable engineering feat. Two sides of the wye are built on tall trestles and one side is in a tunnel bored through solid rock. The town of Wyeville, Wisconsin, is named after the nearby Union Pacific Railroad (formerly Chicago and North Western Transportation Company) wye and crossover. Examples by country: A primary feature of the Bay Area Rapid Transit system is the Oakland Wye. Located beneath Downtown Oakland, California, the wye carries the vast majority of the system's trains, primarily to and from San Francisco, with some services running north and south along the East Bay. This section of track is considered a bottleneck for system-wide capacity based on speed restrictions and timing difficulties from distant branch lines. Examples by country: The southern terminus of the Amtrak Auto Train in Sanford, Florida, uses a wye to turn the locomotives around for the return trip north. A road that crosses the eastern side of the wye allows access to the inner part of the wye, where there is a rock supply company. In Arizona, the Grand Canyon Railway (GCRY) has a wye at both the Williams and South Rim/Grand Canyon Village termini of its line. The train is turned around at the South Rim/Grand Canyon Village wye with the passengers on board. At the Williams end, the train is turned around after the passengers disembark. The Chowchilla Wye is a primary feature of the planned California High-Speed Rail System. It will allow for transfers from feeder services on the third leg and facilitate more routing options as future phases are completed. Convoluted wye: Convoluted wye, turning star or reversing star (Italian: Stella di inversione) is a special wye layout used in places where space is tight. It has a pentagram-like form and consists of five turnouts (versus three for a wye) and three, four or five diamond crossings. Because of this, a reversing star is more expensive to build and service. It takes four changes of direction of movement to turn a piece of rolling stock on a reversing star. There was a "star" layout at the summit of the Brenner Pass, on the Austrian–Italian border. It was still there in 1991, covered over with gravel so that market stalls could operate on top.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hoop (magazine)** Hoop (magazine): HOOP is an official NBA publication, produced by Professional Sports Publications. The magazine features in-depth interviews with players, and also highlights the players' lives off the court. Other popular sections include celebrity interviews and Dance Life. Los Angeles Lakers guard Steve Nash answers readers' questions in his "Straight Shooter" column. Golden State Warriors guard Nate Robinson is the player video game editor and Miami Heat forward Shane Battier serves as Tech Editor and reviews products online for hoopmag.com. Hoop (magazine): HOOP also publishes international editions such as HOOP Japan, which features basketball English lessons from English, baby!. NBA player contributors: Current columnists: Miami Heat forward Shane Battier (Tech editor); Indiana Pacers forward Danny Granger (Movie editor); Atlanta Hawks guard Devin Harris (Car editor); New Orleans Hornets forward Carl Landry (Music editor); Los Angeles Lakers guard Steve Nash (Straight Shooter); Atlanta Hawks center Zaza Pachulia (Fashion editor); Chicago Bulls guard Nate Robinson (Video Game editor); former Indiana Pacers guard Jalen Rose (Fab 5 column); Minnesota Lynx guard Candice Wiggins (Fashion editor); Philadelphia 76ers forward Thaddeus Young (Music editor). Former columnists: Morris Almond (Rookie columnist, 2008); NBA Hall of Famer Rick Barry; former San Antonio Spurs forward Bruce Bowen (Defensive editor, 2007–08); former Orlando Magic center Adonal Foyle (Book reviewer); Phoenix Suns center Channing Frye (Straight Shooter); former Orlando Magic forward Pat Garrity (Straight Shooter, 2007–08); former Houston Rockets guard Kenny Smith (Fashion editor, 2007); New Jersey Nets guard Deron Williams (Car editor, 2008).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Low-voltage detect** Low-voltage detect: A low-voltage detect (LVD) is a microcontroller or microprocessor peripheral that generates a reset signal when the Vcc supply voltage falls below a reference voltage Vref. It is sometimes combined with a power-on reset (POR), in which case it is called POR-LVD.
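The behaviour can be illustrated with a small simulation. The following is a minimal sketch, not vendor firmware: the threshold, hysteresis value and function name are illustrative assumptions, chosen only to show how an LVD comparator holds the part in reset while Vcc is below the trip point and releases it once the supply recovers.

```python
def lvd_reset(vcc_samples, vref=2.9, hysteresis=0.1):
    """Yield True while a (hypothetical) LVD would assert reset for each Vcc sample."""
    in_reset = False
    for vcc in vcc_samples:
        if vcc < vref:
            in_reset = True                 # supply dipped below the trip point
        elif vcc > vref + hysteresis:
            in_reset = False                # supply recovered past the release point
        yield in_reset

# Brown-out and recovery: reset asserted for the two low samples only
print(list(lvd_reset([3.3, 3.0, 2.8, 2.85, 3.1])))   # [False, False, True, True, False]
```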
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Entropy of entanglement** Entropy of entanglement: The entropy of entanglement (or entanglement entropy) is a measure of the degree of quantum entanglement between two subsystems constituting a two-part composite quantum system. Given a pure bipartite quantum state of the composite system, it is possible to obtain a reduced density matrix describing knowledge of the state of a subsystem. The entropy of entanglement is the von Neumann entropy of the reduced density matrix for any of the subsystems. If it is non-zero, i.e. the subsystem is in a mixed state, it indicates the two subsystems are entangled. Entropy of entanglement: More mathematically: if a state describing two subsystems A and B, $|\Psi_{AB}\rangle = |\phi_A\rangle|\phi_B\rangle$, is a separable state, then the reduced density matrix $\rho_A = \operatorname{Tr}_B |\Psi_{AB}\rangle\langle\Psi_{AB}| = |\phi_A\rangle\langle\phi_A|$ is a pure state. Thus, the entropy of the state is zero. Similarly, the density matrix of B would also have zero entropy. A reduced density matrix having a non-zero entropy is therefore a signal of the existence of entanglement in the system. Bipartite entanglement entropy: Suppose that a quantum system consists of $N$ particles. A bipartition of the system is a partition which divides the system into two parts $A$ and $B$, containing $k$ and $l$ particles respectively, with $k + l = N$. Bipartite entanglement entropy is defined with respect to this bipartition. Bipartite entanglement entropy: Von Neumann entanglement entropy The bipartite von Neumann entanglement entropy $S$ is defined as the von Neumann entropy of either of its reduced states, since they are of the same value (as can be proved from the Schmidt decomposition of the state with respect to the bipartition); the result is independent of which one we pick. That is, for a pure state $\rho_{AB} = |\Psi\rangle\langle\Psi|_{AB}$, it is given by $S(\rho_A) = -\operatorname{Tr}[\rho_A \log \rho_A] = -\operatorname{Tr}[\rho_B \log \rho_B] = S(\rho_B)$, where $\rho_A = \operatorname{Tr}_B(\rho_{AB})$ and $\rho_B = \operatorname{Tr}_A(\rho_{AB})$ are the reduced density matrices for each partition. Bipartite entanglement entropy: The entanglement entropy can be expressed using the singular values of the Schmidt decomposition of the state. Any pure state can be written as $|\Psi\rangle = \sum_{i=1}^{m} \alpha_i |u_i\rangle_A \otimes |v_i\rangle_B$, where $|u_i\rangle_A$ and $|v_i\rangle_B$ are orthonormal states in subsystem $A$ and subsystem $B$ respectively. The entropy of entanglement is simply $-\sum_i \alpha_i^2 \log(\alpha_i^2)$. This form of writing the entropy makes it explicitly clear that the entanglement entropy is the same regardless of whether one computes the partial trace over the $A$ or $B$ subsystem. Bipartite entanglement entropy: Many entanglement measures reduce to the entropy of entanglement when evaluated on pure states. Among those are: distillable entanglement, entanglement cost, entanglement of formation, relative entropy of entanglement, and squashed entanglement. Some entanglement measures that do not reduce to the entropy of entanglement are: negativity, logarithmic negativity, and robustness of entanglement. Renyi entanglement entropies: The Rényi entanglement entropies $S_\alpha$ are also defined in terms of the reduced density matrices, and a Rényi index $\alpha \geq 0$. They are defined as the Rényi entropy of the reduced density matrices: $S_\alpha(\rho_A) = \frac{1}{1-\alpha} \log \operatorname{tr}(\rho_A^{\alpha}) = S_\alpha(\rho_B)$. Note that in the limit $\alpha \to 1$, the Rényi entanglement entropy approaches the von Neumann entanglement entropy. 
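The Schmidt-coefficient form above maps directly onto a numerical computation: reshape the state vector into a matrix, take its singular values, and apply the entropy formulas. The sketch below is illustrative only; the function name and the tolerance are the editor's assumptions, not drawn from the sources above.

```python
import numpy as np

def entanglement_entropies(psi, dim_a, dim_b, alpha=2.0, tol=1e-12):
    """Von Neumann and Renyi entanglement entropies of a pure bipartite state."""
    coeffs = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(coeffs, compute_uv=False)    # alpha_i of the Schmidt form
    p = schmidt**2
    p = p[p > tol]                                       # drop numerically zero weights
    s_vn = -np.sum(p * np.log(p))                        # -sum alpha_i^2 log(alpha_i^2)
    s_renyi = np.log(np.sum(p**alpha)) / (1.0 - alpha)   # (1/(1-alpha)) log tr(rho_A^alpha)
    return s_vn, s_renyi

# Bell state (|00> + |11>)/sqrt(2): both entropies equal log 2 ~ 0.693
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropies(bell, 2, 2))
```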
Example with coupled harmonic oscillators: Consider two coupled quantum harmonic oscillators, with positions $q_A$ and $q_B$, momenta $p_A$ and $p_B$, and system Hamiltonian $H = \tfrac{1}{2}(p_A^2 + p_B^2) + \tfrac{1}{2}\omega_1^2(q_A^2 + q_B^2) + \tfrac{1}{2}\omega_2^2(q_A - q_B)^2$. With $\omega_\pm^2 = \omega_1^2 + \omega_2^2 \pm \omega_2^2$, the system's pure ground-state density matrix is $\rho_{AB} = |0\rangle\langle 0|$, which in the position basis is proportional to $\exp\!\left(-\omega_+(q_A+q_B)^2/2 - \omega_-(q_A-q_B)^2/2 - \omega_+(q_A'+q_B')^2/2 - \omega_-(q_A'-q_B')^2/2\right)$. Tracing out subsystem B then gives $\rho_A(q_A, q_A') \propto \exp\!\left(\frac{2(\omega_+-\omega_-)^2 q_A q_A' - \left(8\omega_+\omega_- + (\omega_+-\omega_-)^2\right)\left(q_A^2 + q_A'^2\right)}{8(\omega_+ + \omega_-)}\right)$. Since $\rho_A$ happens to be precisely equal to the density matrix of a single quantum harmonic oscillator of frequency $\omega \equiv \sqrt{\omega_+\omega_-}$ at thermal equilibrium with temperature $T$ (such that $\omega/k_B T = \cosh^{-1}\!\left(\frac{8\omega_+\omega_- + (\omega_+-\omega_-)^2}{(\omega_+-\omega_-)^2}\right)$, where $k_B$ is the Boltzmann constant), the eigenvalues of $\rho_A$ are $\lambda_n = \left(1 - e^{-\omega/k_B T}\right) e^{-n\omega/k_B T}$ for nonnegative integers $n$. The von Neumann entropy is thus $S(\rho_A) = \frac{\omega/k_B T}{e^{\omega/k_B T} - 1} - \ln\!\left(1 - e^{-\omega/k_B T}\right)$. Similarly, the Rényi entropy is $S_\alpha(\rho_A) = \frac{1}{1-\alpha}\ln\frac{\left(1 - e^{-\omega/k_B T}\right)^{\alpha}}{1 - e^{-\alpha\omega/k_B T}}$. Area law of bipartite entanglement entropy: A quantum state satisfies an area law if the leading term of the entanglement entropy grows at most proportionally with the boundary between the two partitions. Area laws are remarkably common for ground states of local gapped quantum many-body systems. This has important applications, one such application being that it greatly reduces the complexity of quantum many-body systems. The density matrix renormalization group and matrix product states, for example, implicitly rely on such area laws.
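A quick numerical cross-check of the closed forms quoted in the oscillator example is possible by summing the spectrum $\lambda_n$ directly. The snippet below is an illustrative sketch; the value of $\omega/k_B T$ and the truncation length are arbitrary choices, and the partial sums should agree with the closed-form expressions to within truncation error.

```python
import numpy as np

def spectrum_entropies(beta_omega, nmax=200, alpha=2.0):
    """Entropies of lambda_n = (1 - e^{-beta*omega}) e^{-n*beta*omega}, summed directly."""
    x = np.exp(-beta_omega)
    lam = (1.0 - x) * x ** np.arange(nmax)               # thermal-oscillator eigenvalues
    s_vn = -np.sum(lam * np.log(lam))
    s_renyi = np.log(np.sum(lam ** alpha)) / (1.0 - alpha)
    return s_vn, s_renyi

bw = 0.7                                                 # omega / (k_B T), arbitrary test value
s_vn, s_r2 = spectrum_entropies(bw)
s_vn_closed = bw / np.expm1(bw) - np.log(1.0 - np.exp(-bw))
s_r2_closed = np.log((1.0 - np.exp(-bw))**2 / (1.0 - np.exp(-2.0 * bw))) / (1.0 - 2.0)
print(s_vn, s_vn_closed)    # both ~1.377
print(s_r2, s_r2_closed)    # both ~1.090
```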
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mn2+-dependent ADP-ribose/CDP-alcohol diphosphatase** Mn2+-dependent ADP-ribose/CDP-alcohol diphosphatase: Mn2+-dependent ADP-ribose/CDP-alcohol diphosphatase (EC 3.6.1.53, Mn2+-dependent ADP-ribose/CDP-alcohol pyrophosphatase, ADPRibase-Mn) is an enzyme with systematic name CDP-choline phosphohydrolase. This enzyme catalyses the following chemical reactions: (1) CDP-choline + H2O ⇌ CMP + phosphocholine; (2) ADP-ribose + H2O ⇌ AMP + D-ribose 5-phosphate. This enzyme requires Mn2+, which cannot be replaced by Mg2+.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grain whisky** Grain whisky: Grain whisky normally refers to any whisky made, at least in part, from grains other than malted barley. Frequently used grains include maize, wheat, and rye. Grain whiskies usually contain some malted barley to provide enzymes needed for mashing and are required to include it if they are produced in Ireland or Scotland. Whisky made only from malted barley is generally called "malt whisky" rather than grain whisky. Most American and Canadian whiskies are grain whiskies. Definition: Under the regulations governing the production of both Irish and Scotch whisky, malt whisky must be produced from a mash of 100% malted barley and must be distilled in a pot still. In Scotland, a whisky that uses other malted or unmalted cereals in the mash in addition to malted barley is termed a grain whisky. In Ireland, where regulations define "pot still whiskey" as one distilled from a specific mixed mash of at least 30% malted barley, at least 30% unmalted barley, and other unmalted cereals in a pot still, grain whisky refers to whisky produced from a mixed mash of no more than 30% malted barley in a column still.In both countries, grain whisky is typically distilled in a continuous column still in a manner that results in a higher percentage of alcohol by volume (ABV) but a less flavourful spirit than that derived from a pot still. As a result, grain whisky is seldom bottled by itself in either country but is instead used primarily for blending with malt or pot still whisky to create blended whiskies, which now account for more than 90% of both countries' whisky sales. The comparative lightness of the clearer, more-neutral-flavoured grain whisky is used in blends to smooth out the often harsher characteristics of single malts and single pot still whiskeys. Occasionally well-aged grain whiskies are released as single grain whisky if made at one distillery or blended grain whisky if combining spirits from multiple distilleries.Outside Ireland and Scotland, the use of continuous column stills and the use of a non-barley mash are not so closely associated with the production of "light" whisky (whisky with little flavour due to distillation at a very high ABV). For example, nearly all American whiskey is produced using column stills, and all American whiskey that is labelled as "straight whiskey" (including straight Bourbon and straight rye) is required to use a distillation level not exceeding 80% ABV. In the United States, whiskey produced at greater than 80% ABV is formally classified as "light whiskey" and cannot be labelled with the name of a grain or called malt, bourbon or straight.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Smokeless tobacco keratosis** Smokeless tobacco keratosis: Smokeless tobacco keratosis (STK) is a condition which develops on the oral mucosa (the lining of the mouth) in response to smokeless tobacco use. Generally it appears as a white patch, located at the point where the tobacco is held in the mouth. The condition usually disappears once the tobacco habit is stopped. It is associated with slightly increased risk of mouth cancer. Smokeless tobacco keratosis: There are many types of smokeless tobacco. Chewing tobacco is shredded, air-cured tobacco with flavoring. Dipping tobacco ("moist snuff") is air or fire-cured, finely cut tobacco. Dry snuff is ground or pulverised tobacco leaves. In the Indian subcontinent, the Middle-East and South-East Asia, tobacco may be combined in a quid or paan with other ingredients such as betel leaf, Areca nut and slaked lime. Use of Areca nut is associated with oral submucous fibrosis. An appearance termed Betel chewer's mucosa describes morsicatio buccarum with red-staining of mucosa due to betel quid ingredients. In Scandinavian countries, snus, a variant of dry snuff, is sometimes used. In the United States of America, the most common form of smokeless tobacco is dipping tobacco, although chewing tobacco is sometimes used by outdoor workers and dry snuff is common among females in the Southern states. The overall prevalence of smokeless tobacco use in the USA is about 4.5%, but this is higher in Mid-Western and Southern states. Signs and symptoms: STK typically occurs in the buccal sulcus (inside the cheek) or the labial sulcus (between the lips and the teeth) and corresponds to the site where the tobacco is held in the mouth. It is painless.The appearance of the lesion is variable depending upon the type of tobacco used, and the frequency and duration of use. It takes about 1-5 years of smokeless tobacco use for the lesion to appear. Early lesions may appear as thin, translucent and granular or wrinkled mucosa. The later lesion may appear thicker, more opaquely white and hyperkeratotic with fissures and folds. Oral snuff causes more pronounced changes in the oral mucosa than tobacco chewing. Snuff dipping is associated more with verrucous keratosis.As well as the white changes of the oral mucosa, there may be gingival recession (receding gums) and staining of tooth roots in the area where the tobacco is held. Diagnosis: Diagnosis is mainly clinical, based on the history and clinical appearance. The differential diagnosis includes other oral white lesions such as Leukoplakia, squamous cell carcinoma, oral candidiasis, lichen planus, white sponge nevus and contact stomatitis. In contrast to pseudomembraneous candidiasis, this white patch cannot be wiped off. Tissue biopsy is sometimes carried out to rule out other lesions, although biopsy is not routinely carried out for this condition. Treatment: Apart from stopping the habit, no other treatment is indicated. Long term follow-up is usually carried out. Some recommend biopsy if the lesions persists more than 6 weeks after giving up smokeless tobacco use, or if the lesion undergoes a change in appearance (e.g. ulceration, thickening, color changes, especially to speckled white and red or entirely red). Surgical excision may be carried out if the lesion does not resolve. Prognosis: Usually this lesion is reversible if the tobacco habit is stopped completely, even after many years of use. In one report, 98% of lesions disappeared within 2 weeks of stopping tobacco use. 
The risk of the lesion developing into oral cancer (generally squamous cell carcinoma and its variant verrucous carcinoma) is relatively low. Indeed, verrucous carcinoma is sometimes termed "snuff dipper's cancer". In most reported cases, malignant transformation has occurred in individuals with a very long history of chewing tobacco or who use dry snuff. Smokeless tobacco use is also accompanied by increased risk of other oral conditions such as dental caries (tooth decay), periodontitis (gum disease), attrition (tooth wear) and staining. Epidemiology: STK is extremely common among smokeless tobacco users. Given the association with smokeless tobacco use, this condition tends to occur in adults. A national USA survey estimated an overall prevalence of 1.5% of all types of smokeless tobacco lesions, with males affected more commonly than females.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subspace identification method** Subspace identification method: In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time-invariant (LTI) state space models from input-output data. SID does not require that the user parametrize the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results. History: SID methods are rooted in the work by the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function. History: In the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, like Ho and Kalman, Silverman and Youla and Tissi, to store the Markov parameters of an LTI system into a finite-dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned versus the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system, and the SVD of the Hankel matrix provides a basis of the column space of the observability matrix and of the row space of the controllability matrix of the LTI system. Knowledge of these key spaces allows the system matrices to be estimated via linear least squares. An extension to the stochastic realization problem, where only the auto-correlation (covariance) function of the output of an LTI system driven by white noise is known, was derived by researchers such as Akaike. A second generation of SID methods, developed in the decade 1985–1995, attempted to make SID methods operate directly on input-output measurements of the LTI system. One such generalization, presented under the name of the Eigensystem Realization Algorithm (ERA), made use of specific input-output measurements, namely impulse inputs. It has been used for modal analysis of flexible structures, like bridges, space structures, etc. Although these methods were demonstrated to work in practice for resonant structures, they did not work well for other types of systems or for inputs different from an impulse. A new impetus to the development of SID methods came from approaches that operate directly on generic input-output data, avoiding the need to first explicitly compute the Markov parameters or estimate samples of covariance functions prior to realizing the system matrices. Pioneers that contributed to these breakthroughs were Van Overschee and De Moor, who introduced the N4SID approach; Verhaegen, who introduced the MOESP approach; and Larimore, who presented SID in the framework of Canonical Variate Analysis (CVA).
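The classical Hankel-matrix realization step described above can be sketched in a few lines of linear algebra. The following is an illustrative implementation of a Ho–Kalman-style realization, not code from any of the cited authors; the function name, the choice of Hankel dimensions and the test system are assumptions made for the example. Given Markov parameters $G_k = CA^kB$, it forms a block Hankel matrix, truncates its SVD at the system order, and reads off estimates of (A, B, C) that match the true system up to a similarity transformation.

```python
import numpy as np

def ho_kalman(markov, n):
    """Realize (A, B, C) of order n from Markov parameters G_k = C A^k B, k = 0, 1, ..."""
    p, m = markov[0].shape
    r = (len(markov) - 1) // 2                        # block rows/columns of the Hankel matrix
    H0 = np.block([[markov[i + j] for j in range(r)] for i in range(r)])
    H1 = np.block([[markov[i + j + 1] for j in range(r)] for i in range(r)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n, :]             # rank-n truncation: n = system order
    sqrt_s = np.sqrt(s)
    Obs = U * sqrt_s                                  # observability matrix estimate
    Ctr = sqrt_s[:, None] * Vt                        # controllability matrix estimate
    A = (U / sqrt_s).T @ H1 @ (Vt.T / sqrt_s)         # S^{-1/2} U^T H1 V S^{-1/2}
    return A, Ctr[:, :m], Obs[:p, :]

# Illustrative check on a small known system: the eigenvalues of A are recovered
A_true = np.diag([0.9, 0.5, -0.3])
rng = np.random.default_rng(0)
B_true, C_true = rng.standard_normal((3, 1)), rng.standard_normal((2, 3))
G = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(11)]
A_est, B_est, C_est = ho_kalman(G, n=3)
print(np.sort(np.linalg.eigvals(A_est).real))         # ~ [-0.3, 0.5, 0.9]
```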
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Green nails** Green nails: Green nail syndrome is an infection that can develop in individuals whose hands are frequently submerged in water, resulting in green discolouration of the nails. It may also occur as transverse green stripes that are ascribed to intermittent episodes of infection. It is usually caused by the bacteria Pseudomonas aeruginosa and is linked to hands being constantly moist or exposed to chemicals, or in individuals who have damaged or traumatised nails. There are several activities and nail injuries or conditions that are linked to higher risk of contracting the condition. Symptoms and signs: Green nail syndrome (chloronychia or Goldman-Fox syndrome) is characterised by discolouration of the infected nail, inflammation of the skin around the nail known as paronychia, and an odour resembling fruit. The colour may range from light bluish-green or yellow-green to darker green or black. Nails may be separated from the nail bed (onycholysis) and may have green stripes from repeated infections. Chronic fungal infection (onychomycosis) may also be present. Causes: Green nail syndrome is caused when the nail is exposed to a bacterial organism, which leads to opportunistic infection. Pseudomonas aeruginosa, the most common cause but not the only one, is frequently found in nature, including in water sources, humans, animals and soil. These bacteria do not normally survive on dry, healthy skin, but can thrive in moist conditions. The seal between the nail and finger acts as a physical barrier to prevent infection; however, hyper-hydration or destruction of the epidermis can impair the barrier, allowing the bacteria to colonise. The nail turns green due to the bacteria secreting pigments such as pyocyanin and pyoverdin. Causes: Risk factors Green nail syndrome occurs rarely in healthy individuals, but can occur in the immunocompromised or those whose hands are frequently immersed in water or who have other nail problems. The elderly and people who have had trauma to a finger or nail are at greater risk of contracting green nail syndrome. Green nail syndrome has been linked to manicures, heat, dermatitis, ulcerations, occlusions and excess sweating. A higher risk of contracting the infection is also seen in soccer players and military personnel, owing to the prolonged periods in which they exercise while wearing tight-fitting shoes, as well as in immunosuppressed persons and those with a weakened epidermal barrier. Causes: Pseudomonas can be transferred among clients in a nail salon if appropriate hygiene standards are not practiced, allowing transfer of the bacteria to clients. Artificial nails may be a contributing factor, and their use can result in diagnostic delay. A man working in a job where he was regularly mixing chemicals developed green nails secondary to exposure to chemicals; he mostly wore latex gloves, but sometimes did not, and the type of gloves he used was inadequate, resulting in a constantly moist environment. Chloronychia may be transferred to patients in clinics by medical practitioners, even when they are wearing gloves. Diagnosis: Diagnosis can typically be made from a physical examination of the nail, although cultures are sometimes needed. Nail scrapings can be performed to rule out fungal infections. Differential diagnosis Green nails may also be seen with Proteus mirabilis infection, in psoriasis, or because of use of triphenylmethane dyes or other stains and lacquers or chemical solutions. 
Melanoma is an infrequent differential diagnosis, which must be ruled out in hard-to-treat cases. Other differential diagnoses include hematoma and fungal infections (onychomycosis). Prevention: Preventative measures should be implemented by those who are most at risk of contracting green nail syndrome due to their predisposition or lifestyle and workplace choices. Wearing waterproof gloves or rubber boots can be effective in preventing prolonged exposure of the nails to water. Avoiding trauma that could destroy the epidermal seal on the nails is a priority in preventing recurrences of green nail syndrome. Treatment: As of 2020, there have not been controlled, blinded studies on the treatment of green nail syndrome, and there are no treatment guidelines as of 2021. Keeping the nails dry and avoiding excessive immersion of the nails are key. In some cases, surgical removal of the infected nail may be required as a last resort. The patient is advised to avoid further trauma to the infected nail regardless of the treatment received. Treatment: Pharmaceutical Oral antibiotics are rarely necessary or helpful, and are not recommended by all practitioners. Moderate cases of green nail syndrome may be prescribed topical antibiotics (silver sulfadiazine, gentamicin, ciprofloxacin, bacitracin and polymyxin B). Oral antibiotics are sometimes used if other therapies fail. Tobramycin eye drops are sometimes used. Alternative The least invasive treatment includes soaking the nail in alcohol and regularly trimming the nail back, to dry out the area and prevent bacterial colonization. Some at-home treatments include soaking the nails in vinegar or a chlorine bleach solution (diluted with water 1:4) at regular intervals. History: Goldman–Fox syndrome was first described in 1944 by Leon Goldman, a dermatology professor at the University of Cincinnati, and Harry Fox.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Model-based enterprise** Model-based enterprise: Model-based enterprise (MBE) is a term used in manufacturing to describe a strategy where an annotated digital three-dimensional (3D) model of a product serves as the authoritative information source for all activities in that product's lifecycle. A key advantage of MBE is that it replaces digital drawings. In MBE, a single 3D model contains all the information typically found in an entire set of engineering drawings, including geometry, topology, dimensions, tolerances, materials, finishes, and weld call-outs. MBE was originally championed by the aerospace and defense industries, with the automotive industry following. It has been adopted by many manufacturers around the world, in a wide range of industries. Significant benefits for manufacturers include reduced time to market and savings in production costs from improved tool design and fabrication, fewer overall assembly hours, less rework, streamlined development and better collaboration on engineering changes. There are two prerequisites to implementing MBE: The first is the creation of annotated 3D models, known as model-based definitions (MBD). This requires the use of a CAD system capable of creating precise solid models, with product and manufacturing information (PMI), a form of 3D annotation which may include dimensions, GD&T, notes, surface finish, and material specifications. (The mechanical CAD systems used in aerospace, defense, and automotive industries generally have these capabilities.) The second prerequisite is transforming MBDs into a form where they can be used in downstream lifecycle activities. As a rule, CAD models are stored in proprietary data formats, so they must be translated to a suitable MBD-compatible standard format, such as 3D PDF, JT, STEP AP 242, or ANSI QIF. The core MBE tenet is that models are used to drive all aspects of the product lifecycle and that data is created once and reused by all downstream data consumers. Data reusability requires computer interpretability, where an MBD can be processed directly by downstream applications, and associativity of PMI to specific model features within the MBD. History: Historically, engineering and manufacturing activities have relied on hardcopy and/or digital documents (including 2D drawings) to convey engineering data and drive manufacturing processes. These documents required interpretation by skilled practitioners, often leading to ambiguities and errors. In the 1980s, improvements in 3D solid modeling made it possible for CAD systems to precisely represent the shape of most manufactured goods—however, even enthusiastic adopters of solid modeling technology continued to rely upon 2D drawings (often CAD generated) as the authority (or master) product representation. 3D models, even if geometrically accurate, lacked a method to represent dimensions, tolerances, and other annotative information required to drive manufacturing processes. History: In the early-to-middle 2000s, the ASME Y14.41-2003 Digital Product Data Definition Practices and ISO 16792:2006 Technical product documentation—Digital product definition data practices standards were released, providing support for PMI annotations in 3D CAD models, and introducing the concept of MBD (or, alternatively, digital product definition). The model-based enterprise concept first appeared about 2005. Initially it was construed broadly, referring to the pervasive use of modeling and simulation technologies (of almost any type) throughout an enterprise. 
In the late 2000s, an active community advocating the development of MBE grew, based on the collaborative efforts of the Office of the Secretary of Defense, Army Research Laboratory, Armament Research Development Engineering Center (ARDEC), Army ManTech, BAE Systems, NIST, and the NIST Manufacturing Extension Partnership (MEP). The "MBE Team" included industry participants such as General Dynamics, Pratt & Whitney Rocketdyne, Elysium, Adobe, EOS, ITI TranscenData, Vistagy, PTC, Dassault Systemes Delmia, Boeing, and BAE Systems. Over time, based on community feedback, MBE became more narrowly construed, referring to the use of MBD data to drive product lifecycle activities. In 2011, the MBE Team published these definitions: MBD: A 3D annotated model and its associated data elements that fully define the product definition in a manner that can be used effectively by all downstream customers in place of a traditional drawing. History: MBE: A fully integrated and collaborative environment founded on 3D product definition detail and shared across the enterprise, to enable rapid, seamless, and affordable deployment of products from concept to disposal. By 2015, with improvements to ASME Y14.41 and ISO 16792, and the development of open CAD data exchange standards capable of adequately representing PMI, MBE started to become more widely adopted by manufacturers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**L3enc** L3enc: Fraunhofer l3enc was the first public software able to encode pulse-code modulation (PCM) .wav files to the MP3 format. The first public version was released on July 13, 1994. This command-line tool was shareware and limited to 112 kbit/s. It was available for MS-DOS, Linux, Solaris, SunOS, NeXTstep and IRIX. A licence that allowed full use (encoding up to 320 kbit/s) cost 350 Deutsche Mark, or about $250 (US). Since the release in September 1995 of Fraunhofer WinPlay3, the first real-time MP3 software player, people were able to store and play back MP3 files on PCs. For full playback quality (stereo) one would have needed at least a 486DX4/100 processor. By the end of 1997, l3enc stopped being developed in favour of its successor MP3enc. Development of MP3enc stopped in late 1998 in favour of a parallel branch FhG had been developing for some time, called Fastenc. None of these programs are still marketed. An mp3 Surround encoder and mp3HD codec and Software Tools are now promoted on the Fraunhofer MP3 website.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HAT-trie** HAT-trie: The HAT-trie is a type of radix trie that uses array nodes to collect individual key–value pairs under radix nodes and hash buckets into an associative array. Unlike a simple hash table, HAT-tries store key–value in an ordered collection. The original inventors are Nikolas Askitis and Ranjan Sinha. Askitis & Zobel showed that building and accessing the HAT-trie key/value collection is considerably faster than other sorted access methods and is comparable to the array hash which is an unsorted collection. This is due to the cache-friendly nature of the data structure which attempts to group access to data in time and space into the 64 byte cache line size of the modern CPU. Description: A new HAT-trie starts out as a NULL pointer representing an empty node. The first added key allocates the smallest array node and copies into it the key/value pair, which becomes the first root of the trie. Each subsequent key/value pair is added to the initial array node until a maximum size is reached, after which the node is burst by re-distributing its keys into a hash bucket with new underlying array nodes, one for each occupied hash slot in the bucket. The hash bucket becomes the new root of the trie. The key strings are stored in the array nodes with a length encoding byte prefixed to the key value bytes. The value associated with each key can be stored either in-line alternating with the key strings, or placed in a second array, e.g., memory immediately after and joined to the array node.Once the trie has grown into its first hash bucket node, the hash bucket distributes new keys according to a hash function of the key value into array nodes contained underneath the bucket node. Keys continue to be added until a maximum number of keys for a particular hash bucket node is reached. The bucket contents are then re-distributed into a new radix node according to the stored key value's first character, which replaces the hash bucket node as the trie root (e.g. see Burstsort). The existing keys and values contained in the hash bucket are each shortened by one character and placed under the new radix node in a set of new array nodes. Description: Sorted access to the collection is provided by enumerating keys into a cursor by branching down the radix trie to assemble the leading characters, ending at either a hash bucket or an array node. Pointers to the keys contained in the hash bucket or array node are assembled into an array that is part of the cursor for sorting. Since there is a maximum number of keys in a hash bucket or array node, there is a pre-set fixed limit to the size of the cursor at all points in time. After the keys for the hash bucket or array node are exhausted by get-next (or get-previous) (see Iterator) the cursor is moved into the next radix node entry and the process repeats.
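The bursting behaviour described above can be illustrated with a toy sketch. The code below is not the authors' implementation and deliberately omits the cache-conscious array nodes and the intermediate hash-bucket stage that define a real HAT-trie; the class name and burst threshold are invented for the example. It shows only the core idea: keys accumulate in a small container node until a limit is reached, then the node bursts into a radix node whose children hold the keys with their first character stripped, while sorted iteration remains possible.

```python
BURST_LIMIT = 4  # toy threshold; real implementations use far larger, cache-sized nodes

class Node:
    def __init__(self):
        self.container = {}    # stands in for an array/bucket node: suffix -> value
        self.children = None   # radix children created on burst: first char -> Node
        self.eos_value = None  # value for a key that ends exactly at this node

    def insert(self, key, value):
        if self.children is None:                 # still an unburst container node
            self.container[key] = value
            if len(self.container) > BURST_LIMIT:
                self._burst()
        elif key == "":
            self.eos_value = value
        else:
            child = self.children.setdefault(key[0], Node())
            child.insert(key[1:], value)          # strip the leading character

    def _burst(self):
        self.children, items, self.container = {}, self.container, {}
        for key, value in items.items():          # redistribute under the new radix node
            self.insert(key, value)

    def items(self, prefix=""):
        """Yield (key, value) pairs in sorted key order."""
        if self.eos_value is not None:
            yield prefix, self.eos_value
        if self.children is None:
            for k in sorted(self.container):
                yield prefix + k, self.container[k]
        else:
            for ch in sorted(self.children):
                yield from self.children[ch].items(prefix + ch)

trie = Node()
for word in ["cat", "car", "cart", "dog", "do", "door", "ant"]:
    trie.insert(word, len(word))
print([k for k, _ in trie.items()])   # ['ant', 'car', 'cart', 'cat', 'do', 'dog', 'door']
```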
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alpha-1 antitrypsin deficiency** Alpha-1 antitrypsin deficiency: Alpha-1 antitrypsin deficiency (A1AD or AATD) is a genetic disorder that may result in lung disease or liver disease. Onset of lung problems is typically between 20 and 50 years of age. This may result in shortness of breath, wheezing, or an increased risk of lung infections. Complications may include chronic obstructive pulmonary disease (COPD), cirrhosis, neonatal jaundice, or panniculitis.A1AD is due to a mutation in the SERPINA1 gene that results in not enough alpha-1 antitrypsin (A1AT). Risk factors for lung disease include tobacco smoking and environmental dust. The underlying mechanism involves unblocked neutrophil elastase and buildup of abnormal A1AT in the liver. It is autosomal co-dominant, meaning that one defective allele tends to result in milder disease than two defective alleles. The diagnosis is suspected based on symptoms and confirmed by blood tests or genetic tests.Treatment of lung disease may include bronchodilators, inhaled steroids, and, when infections occur, antibiotics. Intravenous infusions of the A1AT protein or in severe disease lung transplantation may also be recommended. In those with severe liver disease liver transplantation may be an option. Avoiding smoking is recommended. Vaccination for influenza, pneumococcus, and hepatitis is also recommended. Life expectancy among those who smoke is 50 years while among those who do not smoke it is almost normal.The condition affects about 1 in 2,500 people of European descent. Severe deficiency occurs in about 1 in 5,000. In Asians it is uncommon. About 3% of people with COPD are believed to have the condition. Alpha-1 antitrypsin deficiency was first described in the 1960s. Signs and symptoms: Individuals with A1AD may develop emphysema, or chronic obstructive pulmonary disease during their thirties or forties even without a history of smoking, though smoking greatly increases the risk. Symptoms may include shortness of breath (on exertion and later at rest), wheezing, and sputum production. Symptoms may resemble recurrent respiratory infections or asthma.A1AD may cause several manifestations associated with liver disease, which include impaired liver function and cirrhosis. In newborns, alpha-1 antitrypsin deficiency can result in early onset jaundice followed by prolonged jaundice. Between 3% and 5% of children with ZZ mutations develop life-threatening liver disease, including liver failure. A1AD is a leading reason for liver transplantation in newborns. In newborns and children, A1AD may cause jaundice, poor feeding, poor weight gain, hepatomegaly and splenomegaly. Signs and symptoms: Apart from COPD and chronic liver disease, α1-antitrypsin deficiency has been associated with necrotizing panniculitis (a skin condition) and with granulomatosis with polyangiitis in which inflammation of the blood vessels may affect a number of organs but predominantly the lungs and the kidneys. Genetics: Serpin peptidase inhibitor, clade A, member 1 (SERPINA1) is the gene that encodes the protein alpha-1 antitrypsin. SERPINA1 has been localized to chromosome 14q32. Over 75 mutations of the SERPINA1 gene have been identified, many with clinically significant effects. The most common cause of severe deficiency, PiZ, is a single base-pair substitution leading to a glutamic acid to lysine mutation at position 342 (dbSNP: rs28929474), while PiS is caused by a glutamic acid to valine mutation at position 264 (dbSNP: rs17580). Other rarer forms have been described. 
Pathophysiology: A1AT is a glycoprotein mainly produced in the liver by hepatocytes, and, in some quantity, by enterocytes, monocytes, and macrophages. In a healthy lung, it functions as an inhibitor against neutrophil elastase, a neutral serine protease that controls lung elastolytic activity which stimulates mucus secretion and CXCL8 release from epithelial cells that perpetuate the inflammatory state. With A1AT deficiency, neutrophil elastase can disrupt elastin and components of the alveolar wall of the lung that may lead to emphysema, and hypersecretion of mucus that can develop into chronic bronchitis. Both conditions are the makeup of chronic obstructive pulmonary disease (COPD).Normal blood levels of alpha-1 antitrypsin may vary with analytical method but are typically around 1.0-2.7 g/L. In individuals with PiSS, PiMZ and PiSZ genotypes, blood levels of A1AT are reduced to between 40 and 60% of normal levels; this is usually sufficient to protect the lungs from the effects of elastase in people who do not smoke. However, in individuals with the PiZZ genotype, A1AT levels are less than 15% of normal, and they are likely to develop panlobular emphysema at a young age. Cigarette smoke is especially harmful to individuals with A1AD. In addition to increasing the inflammatory reaction in the airways, cigarette smoke directly inactivates alpha-1 antitrypsin by oxidizing essential methionine residues to sulfoxide forms, decreasing the enzyme activity by a factor of 2,000.With A1AT deficiency, the pathogenesis of the lung disease is different from that of the liver disease, which is caused by the accumulation of abnormal A1AT proteins in the liver, resulting in liver damage. As such, lung disease and liver disease of A1AT deficiency appear unrelated, and the presence of one does not appear to predict the presence of the other. Between 10 and 15% of people with the PiZZ genotype will develop liver fibrosis or liver cirrhosis, because the A1AT is not secreted properly and therefore accumulates in the liver. The mutant Z form of A1AT protein undergoes inefficient protein folding (a physical process where a protein chain achieves its final conformation). 85 percent of the mutant Z form are unable to be secreted and remain in the hepatocyte. Nearly all liver disease caused by A1AT is due to the PiZZ genotype, although other genotypes involving different combinations of mutated alleles (compound heterozygotes) may also result in liver disease. A liver biopsy in such cases will reveal PAS-positive, diastase-resistant inclusions within hepatocytes. Unlike glycogen and other mucins which are diastase sensitive (i.e., diastase treatment disables PAS staining), A1AT deficient hepatocytes will stain with PAS even after diastase treatment - a state thus referred to as "diastase resistant". The accumulation of these inclusions or globules is the main cause of liver injury in A1AT deficiency. However, not all individuals with PiZZ genotype develop liver disease (incomplete penetrance), despite the presence of accumulated mutated protein in the liver. Therefore, additional factors (environmental, genetic, etc.) likely influence whether liver disease develops. Diagnosis: The gold standard of diagnosis for A1AD consists of blood tests to determine the phenotype of the AAT protein or genotype analysis of DNA. Liver biopsy is the gold standard for determining the extent of hepatic fibrosis and assessing for the presence of cirrhosis.A1AT deficiency remains undiagnosed in many patients. 
Patients are usually labeled as having COPD without an underlying cause. It is estimated that about 1% of all COPD patients actually have an A1AT deficiency. Testing is recommended in those with COPD, unexplained liver disease, unexplained bronchiectasis, granulomatosis with polyangiitis or necrotizing panniculitis. American guidelines recommend that all people with COPD are tested, whereas British guidelines recommend this only in people who develop COPD at a young age with a limited smoking history or with a family history. The initial test performed is serum A1AT level. A low level of A1AT confirms the diagnosis and further assessment with A1AT protein phenotyping and A1AT genotyping should be carried out subsequently.As protein electrophoresis does not completely distinguish between A1AT and other minor proteins at the alpha-1 position (agarose gel), antitrypsin can be more directly and specifically measured using a nephelometric or immunoturbidimetric method. Thus, protein electrophoresis is useful for screening and identifying individuals likely to have a deficiency. A1AT is further analyzed by isoelectric focusing (IEF) in the pH range 4.5-5.5, where the protein migrates in a gel according to its isoelectric point or charge in a pH gradient. Diagnosis: Normal A1AT is termed M, as it migrates toward the center of such an IEF gel. Other variants are less functional and are termed A-L and N-Z, dependent on whether they run proximal or distal to the M band. The presence of deviant bands on IEF can signify the presence of alpha-1 antitrypsin deficiency. Since the number of identified mutations has exceeded the number of letters in the alphabet, subscripts have been added to most recent discoveries in this area, as in the Pittsburgh mutation described above. As every person has two copies of the A1AT gene, a heterozygote with two different copies of the gene may have two different bands showing on electrofocusing, although a heterozygote with one null mutant that abolishes expression of the gene will only show one band. In blood test results, the IEF results are notated as, e.g., PiMM, where Pi stands for protease inhibitor and "MM" is the banding pattern of that person.Other detection methods include use of enzyme-linked-immuno-sorbent-assays in vitro and radial immunodiffusion. Diagnosis: Alpha-1 antitrypsin levels in the blood depend on the genotype. Some mutant forms fail to fold properly and are, thus, targeted for destruction in the proteasome, whereas others have a tendency to polymerize, thereafter being retained in the endoplasmic reticulum. The serum levels of some of the common genotypes are: PiMM: 100% (normal) PiMS: 80% of normal serum level of A1AT PiSS: 60% of normal serum level of A1AT PiMZ: 60% of normal serum level of A1AT PiSZ: 40% of normal serum level of A1AT PiZZ: 10–15% (severe alpha-1 antitrypsin deficiency) Treatment: Treatment of lung disease may include bronchodilators, inhaled steroids, and, when infections occur, antibiotics. Intravenous infusions of the A1AT protein or in severe disease lung transplantation may also be recommended. In those with severe liver disease liver transplantation may be an option. Avoiding smoking and getting vaccinated for influenza, pneumococcus, and hepatitis is also recommended.People with lung disease due to A1AD may receive intravenous infusions of alpha-1 antitrypsin, derived from donated human plasma. This augmentation therapy is thought to arrest the course of the disease and halt any further damage to the lungs. 
Long-term studies of the effectiveness of A1AT replacement therapy are not available. It is currently recommended that patients begin augmentation therapy only after the onset of emphysema symptoms. As of 2015 there were four IV augmentation therapy manufacturers in the United States, Canada, and several European countries. IV therapies are the standard mode of augmentation therapy delivery. Liver disease due to A1AD has no specific treatment, beyond routine care for chronic liver disease. However, the presence of cirrhosis affects treatment in several ways. Individuals with cirrhosis and portal hypertension should avoid contact sports to minimize the risk of splenic injury. All people with A1AD and cirrhosis should be screened for esophageal varices, and should avoid all alcohol consumption. Nonsteroidal antiinflammatory drugs (NSAIDs) should also be avoided, as these medications may worsen liver disease in general, and may particularly accelerate the liver injury associated with A1AD. Augmentation therapy is not appropriate for people with liver disease. If progressive liver failure or decompensated cirrhosis develops, then liver transplantation may be necessary. Epidemiology: People of Northern European and Iberian ancestry are at the highest risk for A1AD. Four percent of them carry the PiZ allele; between 1 in 625 and 1 in 2000 are homozygous. Another study detected a frequency of 1 in 1550 individuals. The highest prevalence of the PiZZ variant was recorded in the northern and western European countries, with a mean gene frequency of 0.0140. Worldwide, an estimated 1.1 million people have A1AT deficiency and roughly 116 million are carriers of mutations. A1AD is one of the most common genetic diseases worldwide and the second most common metabolic disease affecting the liver. History: A1AD was discovered in 1963 by Carl-Bertil Laurell (1919–2001), at the University of Lund in Sweden. History: Laurell, along with a medical resident, Sten Eriksson, made the discovery after noting the absence of the α1 band on protein electrophoresis in five of 1500 samples; three of the five patients were found to have developed emphysema at a young age. The link with liver disease was made six years later, when Harvey Sharp et al. described A1AD in the context of liver disease. Research: Recombinant and inhaled forms of A1AT are being studied.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ICOM IC-7100** ICOM IC-7100: The ICOM IC-7100 is a multimode HF/VHF/UHF mobile amateur radio transceiver. The IC-7100 has support for a wide variety of commonly used amateur radio modes, including ICOM's proprietary digital voice mode D-STAR. Additionally, the radio offers 100 watts on HF, 50 watts on VHF, and 35 watts on UHF. The IC-7100 is unique in that it has a large detachable control head with a slanted display, so the transmitter can be installed elsewhere in a vehicle or home. The receiver used in the IC-7100 is a triple-conversion superheterodyne and has DSP and audio filters. The IC-7100 allows for connection to a computer over USB, which enables the radio to be used for popular digital modes such as FT8, Winlink, and packet operation. Locating nearby repeaters and sending APRS positions can be done with an optional GPS receiver attachment. A notable feature that the IC-7100 lacks is an internal antenna tuner. Specifications: Specifications of the ICOM IC-7100: Frequency range: Tx 1.8–450 MHz (amateur bands only); Rx 30 kHz–199.999 MHz and 400–470 MHz. Modes of emission: A1A (CW), A3E (AM), J3E (LSB, USB), F3E (FM). Antenna impedance: 50 ohms, unbalanced (SO-239 connector). Supply voltage: 13.8 VDC (external). Current consumption: Rx 1.5 A; Tx 22 A. Case size (W×H×D): 200×83.5×82 mm; 7.9×3.3×3.2 in. Weight (approx.): 2.3 kg; 5.0 lb. Output power: 100 watts on HF, 50 watts on VHF, and 35 watts on UHF
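As a rough illustration of what the supply-current figures above imply (a back-of-the-envelope calculation added here, not a published specification; it assumes the 22 A figure is the maximum transmit draw at full 100 W HF output), the DC input power and overall transmit efficiency can be estimated as follows:

```python
# Illustrative power-budget arithmetic from the specifications above.
supply_voltage = 13.8   # volts DC
tx_current = 22.0       # amperes, assumed maximum transmit draw
rf_output = 100.0       # watts on HF

dc_input = supply_voltage * tx_current          # ~304 W drawn from the supply
efficiency = rf_output / dc_input               # ~0.33 overall
print(f"DC input ~{dc_input:.0f} W, overall transmit efficiency ~{efficiency:.0%}")
```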
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SmartUse** SmartUse: SmartUse is collaborative construction software. The solution offers a touch-friendly mobile app that helps construction projects move to a paperless environment through features like field markups, photos, and issue tracking. SmartUse is a privately held company based in Montreal, Canada. Founded in 2012 by Dominic Sévigny, the company was sold to Newforma in 2014 and re-acquired in September 2017 by the original founder and Louis Dagenais. SmartUse: SmartUse has been available on Windows since 2012 and on iPad since 2016.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FBLN1** FBLN1: FBLN1 is the gene encoding fibulin-1, an extracellular matrix and plasma protein. Function: Fibulin-1 is a secreted glycoprotein that is found in association with extracellular matrix structures including fibronectin-containing fibers, elastin-containing fibers, and basement membranes. Fibulin-1 binds to a number of extracellular matrix constituents including fibronectin, nidogen-1, and the proteoglycan versican. Fibulin-1 is also a blood protein capable of binding to fibrinogen. Structure: Fibulin-1 has a modular domain structure and includes a series of nine epidermal growth factor-like modules followed by a fibulin-type module, a module found in all members of the fibulin gene family. The human fibulin-1 gene, FBLN1, encodes four splice variants designated fibulin-1A, B, C, and D, which differ in their carboxy-terminal regions. In mouse, chicken, and the nematode C. elegans, only two fibulin-1 variants are produced, fibulin-1C and fibulin-1D. Interactions: FBLN1 has been shown to interact with a number of binding partners, including fibronectin, nidogen-1, versican, and fibrinogen, as noted above.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bobby sock** Bobby sock: Bobby socks are a style of women's sock, white, ankle-length or collected at the ankle, instead of at full extension up the leg. The term "bobby sox" indicates the socks are "bobbed" instead of full-length. The term bobby soxer derives from this type of sock. They were initially popular in the United States in the 1940s and 1950s, later making a comeback in the 1980s.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neurogammon** Neurogammon: Neurogammon is a computer backgammon program written by Gerald Tesauro at IBM's Thomas J. Watson Research Center. It was the first viable computer backgammon program implemented as a neural net, and set a new standard in computer backgammon play. It won the 1st Computer Olympiad in London in 1989, handily defeating all opponents. Its level of play was that of an intermediate-level human player.Neurogammon contains seven separate neural networks, each with a single hidden layer. One network makes doubling-cube decisions; the other six choose moves at different stages of the game. The networks were trained by backpropagation from transcripts of 400 games in which the author played himself. The author's move was taught as the best move in each position. Neurogammon: In 1992, Tesauro completed TD-Gammon, which combined a form of reinforcement learning with the human-designed input features of Neurogammon, and played at the level of a world-class human tournament player.
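To make the architecture described above more concrete, here is a minimal sketch of a single-hidden-layer move-scoring network trained by backpropagation on expert-labeled positions. It is only an illustration of the general technique: the feature encoding, layer sizes, and synthetic training data below are placeholders, not Neurogammon's actual inputs, network sizes, or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 198, 40      # assumed sizes for illustration only

W1 = rng.normal(0.0, 0.1, (n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, n_hidden)
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(x):
    """Score a candidate position (feature vector x) between 0 and 1."""
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

def train_step(x, target, lr=0.1):
    """One backpropagation step; target is 1.0 for the expert's move, else 0.0."""
    global W1, b1, W2, b2
    y, h = score(x)
    grad_out = (y - target) * y * (1.0 - y)   # gradient of 0.5*(y - target)**2 at the output
    grad_h = grad_out * W2 * h * (1.0 - h)    # backpropagated into the hidden layer
    W2 = W2 - lr * grad_out * h
    b2 = b2 - lr * grad_out
    W1 = W1 - lr * np.outer(grad_h, x)
    b1 = b1 - lr * grad_h

# Toy training loop on synthetic data: for each "turn", several candidate
# positions are generated and one is arbitrarily designated the teacher's move.
for _ in range(200):
    candidates = rng.random((5, n_features))
    for i, x in enumerate(candidates):
        train_step(x, target=1.0 if i == 0 else 0.0)
```

At play time, such a network would score every legal move's resulting position and pick the highest-scoring one; Neurogammon used six such move-selection networks for different stages of the game plus one for doubling decisions.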
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CARKD** CARKD: Carbohydrate kinase domain containing protein (abbreviated as CARKD), encoded by the CARKD gene, is a human protein of unknown function. The CARKD gene encodes proteins with a predicted mitochondrial propeptide (mCARKD), a signal peptide (spCARKD), or neither of them (cCARKD). Confocal microscopy analysis of transfected CHO (Chinese-hamster ovary) cells indicated that cCARKD remains in the cytosol, whereas mCARKD and spCARKD are targeted to the mitochondria and the endoplasmic reticulum, respectively. The protein is conserved throughout many species, and has predicted orthologs in eukaryotes, bacteria, and archaea. Structure: Gene: The human CARKD gene has 10 exons and resides on chromosome 13 at q34. The following genes are near CARKD on the chromosome: COL4A2, the α2 subunit of type IV collagen; RAB20, a potential regulator of Connexin 43 trafficking. Structure: CARS2, mitochondrial cysteinyl-tRNA synthetase 2; and ING1, a tumor-suppressor protein. Protein: This protein is part of the phosphomethylpyrimidine kinase / ribokinase / pfkB superfamily, which is characterized by a domain shared by its members. CARKD contains a carbohydrate kinase domain (Pfam PF01256). This family is related to Pfam PF02210 and Pfam PF00294, implying that it also is a carbohydrate kinase. Structure: Predicted properties: The following properties of CARKD were predicted using bioinformatic analysis: molecular weight 41.4 kDa; isoelectric point 9.377. CARKD orthologs have highly variable isoelectric points. Post-translational modification: Three post-translational modifications are predicted: a modified phosphotyrosine residue and two N-linked glycosylation sites. A signal peptide and signal peptide cleavage site were also predicted. Function: Tissue distribution: CARKD appears to be ubiquitously expressed at high levels. Expression data for the human protein and the mouse ortholog indicate its expression in almost all tissues. One peculiar expression pattern of CARKD is its differential expression through the development of oligodendrocytes. Its expression is lower in oligodendrocyte progenitor cells than in mature oligodendrocytes. Function: Binding partners: The human protein apolipoprotein A-1 binding protein precursor (APOA1BP) was predicted to be a binding partner for CARKD. This prediction is based on co-occurrence across genomes and co-expression. In addition to these data, the orthologs of CARKD in E. coli contain a domain similar to APOA1BP. This indicates that the two proteins are likely to have originated from a common evolutionary ancestor and, according to Rosetta stone analysis theory, are likely interaction partners even in species such as humans where the two proteins are not produced as a single polypeptide. Clinical significance: Based on allele-specific expression of CARKD, CARKD may play a role in acute lymphoblastic leukemia. In addition, microarray data indicate that CARKD is up-regulated in glioblastoma multiforme tumors.
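Predicted properties such as the molecular weight and isoelectric point quoted above are typically computed directly from the protein sequence. The following minimal sketch uses Biopython's ProtParam module for that kind of calculation; the sequence shown is a placeholder, not the actual CARKD sequence, so the printed numbers will not match the values quoted in the article.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence -- substitute the real CARKD protein sequence
# (e.g. retrieved from UniProt) to reproduce the kind of predictions quoted above.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

analysis = ProteinAnalysis(sequence)
print(f"Molecular weight:  {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"Isoelectric point: {analysis.isoelectric_point():.3f}")
```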
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Superswell** Superswell: A superswell is a large area of anomalously high topography and shallow ocean regions. These areas of anomalous topography are byproducts of large upwellings of mantle material from the core–mantle boundary, referred to as superplumes. Two present-day superswells have been identified: the African superswell and the South Pacific superswell. In addition to these, the Darwin Rise in the south central Pacific Ocean is thought to be a paleosuperswell, showing evidence of being uplifted compared to surrounding ancient ocean topography. Superplume: Data show a dramatic increase in crustal production from 125–120 Ma (million years ago) to 70 Ma, largely in East Pacific Rise areas, although the marked increase in production rates of crustal material was also seen in the Gondwana ridges, as well as in oceanic plateaus. This period of increased crustal production is interpreted as a superplume event. This "pulse" of increased crustal production peaked soon after the initial plume (between 120 Ma and 100 Ma), and then declined over the next thirty million years. Along with the increase in crustal output from ridges, there is an extended period in the time frame from 125 Ma to 40 Ma where the Earth's magnetic field reversal frequency declines sharply. The last remnant of this superplume event is the South Pacific superswell, located underneath Tahiti. Superplume mechanism of action: Superplume/superswell creation involves a large upwelling of material. Normal upwellings in the mantle are a common occurrence, as it is generally accepted that these upwellings are the driving force behind mantle convection and subsequent plate motion. In the case of the upwelling in the mid-Cretaceous Period along the East Pacific Rise, its origin lies deep within the Earth, near the core–mantle boundary. This conclusion is taken from the fact that the Earth retained a constant field polarity at the same time that this upwelling occurred. Superplume mechanism of action: A more current superplume/superswell is in the southern and eastern region of Africa. Seismic analysis shows a large low-shear-velocity province, which coincides with a region of upwelling of semi-liquid material that is a poor conductor of seismic waves. While there are several processes at work in the formation of these high-topography zones, models of lithospheric thinning and lithospheric heating have been unable to predict the topographic upwelling on the African plate. Dynamic topography models have, on the other hand, been able to predict this upwelling utilizing calculations of the instantaneous flow of Earth's mantle. Evidence for mid-Cretaceous superswell: Isotopic samples taken from the Pacific-Antarctic ridge basalts have dispelled the long-held belief that there was a coherent geochemical province stretching from the Australian-Antarctic discordance to the Juan de Fuca plate. Instead, samples have shown that there are two distinct geochemical domains above and below the Easter microplate. Measurements of the average depth of ridge axes also show that this boundary line lies on the southeastern side of the Darwin Rise/Pacific Superswell. It was concluded that this upwelling was responsible for the disparity between the two geochemical regions. Volcanic island chain offsets by superswell activity: One of the many ways that plate motions are mapped is by using hotspot activity and volcanic island chains. 
It is assumed that hotspots are stable relative to the motion of the island chain, and the hotspot is therefore used as a point of reference. In the case of the Marquesas Islands, an island chain in the region of the South Pacific Superswell, the age progression of the island chain is much shorter than models have predicted. Also, the path that these island chains take does not coincide with the motion of the plate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Borel–Carathéodory theorem** Borel–Carathéodory theorem: In mathematics, the Borel–Carathéodory theorem in complex analysis shows that an analytic function may be bounded by its real part. It is an application of the maximum modulus principle. It is named for Émile Borel and Constantin Carathéodory. Statement of the theorem: Let a function f be analytic on a closed disc of radius R centered at the origin. Suppose that r < R. Then, we have the following inequality: ‖f‖_r ≤ (2r/(R − r)) sup_{|z| ≤ R} Re f(z) + ((R + r)/(R − r)) |f(0)|. Here, the norm on the left-hand side denotes the maximum value of f in the closed disc: ‖f‖_r = max_{|z| ≤ r} |f(z)| = max_{|z| = r} |f(z)| (where the last equality is due to the maximum modulus principle). Proof: Define A by A = sup_{|z| ≤ R} Re f(z). If f is constant, equal to c, the inequality reduces to Re c ≥ −|c|, which always holds, so we may assume f is nonconstant. First let f(0) = 0. Since Re f is harmonic, Re f(0) is equal to the average of its values around any circle centered at 0. That is, Re f(0) = (1/2π) ∫_0^{2π} Re f(Re^{iθ}) dθ. Proof: Since f is regular and nonconstant, we have that Re f is also nonconstant. Since Re f(0) = 0, we must have Re f(z) > 0 for some z on the circle |z| = R, so we may take A > 0. Now f maps into the half-plane P to the left of the line x = A. Roughly, our goal is to map this half-plane to a disk, apply Schwarz's lemma there, and obtain the stated inequality. Proof: The map w ↦ w/A − 1 sends P to the standard left half-plane. The map w ↦ R(w + 1)/(w − 1) sends the left half-plane to the disc of radius R centered at the origin. The composite, which maps 0 to 0, is the desired map: w ↦ Rw/(w − 2A). From Schwarz's lemma applied to the composite of this map and f, we have |R f(z)| / |f(z) − 2A| ≤ |z|. Take |z| ≤ r. The above becomes R|f(z)| ≤ r|f(z) − 2A| ≤ r|f(z)| + 2Ar, so |f(z)| ≤ 2Ar/(R − r), as claimed. In the general case, we may apply the above to f(z) − f(0): |f(z) − f(0)| ≤ (2r/(R − r)) sup_{|w| ≤ R} Re (f(w) − f(0)) ≤ (2r/(R − r)) (sup_{|w| ≤ R} Re f(w) + |f(0)|), which, when rearranged, gives the claim. Alternative result and proof: We start with the following result: Applications: Borel–Carathéodory is often used to bound the logarithm of derivatives, such as in the proof of the Hadamard factorization theorem. The following example is a strengthening of Liouville's theorem. Sources: Lang, Serge (1999). Complex Analysis (4th ed.). New York: Springer-Verlag, Inc. ISBN 0-387-98592-1. Titchmarsh, E. C. (1938). The theory of functions. Oxford University Press.
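As a quick numerical sanity check of the inequality as reconstructed above (an illustration added here, not part of the original article), one can sample an analytic function on circles of radius r and R; the example below uses f(z) = exp(z), for which the maximum of Re f on the closed disc is attained on the boundary, so sampling the boundary circle suffices.

```python
import numpy as np

# Numerical check of the Borel-Caratheodory bound for f(z) = exp(z),
# with R = 1 and r = 0.5 (illustrative values).
R, r = 1.0, 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 4000)
f = np.exp   # entire, hence analytic on the closed disc of radius R

lhs = np.max(np.abs(f(r * np.exp(1j * theta))))       # max |f| on |z| = r
A = np.max(np.real(f(R * np.exp(1j * theta))))        # sup Re f, attained on |z| = R
rhs = 2 * r / (R - r) * A + (R + r) / (R - r) * abs(f(0))

print(f"max|f| on |z|=r : {lhs:.4f}")   # about 1.65
print(f"right-hand bound: {rhs:.4f}")   # about 8.44
assert lhs <= rhs
```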
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reconstruction (architecture)** Reconstruction (architecture): Reconstruction in architectural conservation is the returning of a place to a known earlier state by the introduction of new materials. It is related to the architectural concepts of restoration (repairing existing building fabric) and preservation (the prevention of further decay), wherein the most extensive form of reconstruction is creating a replica of a destroyed building. More narrowly, such as under the Secretary of the Interior's Standards in the United States, "reconstruction" is "the act or process of depicting, by means of new construction, the form, features, and detailing of a non-surviving site, landscape, building, structure, or object for the purpose of replicating its appearance at a specific period of time and in its historic location". Reconstruction of buildings and structures: There may be several reasons for the reconstruction of a building or the creation of a replica building or structure. Reconstruction of buildings and structures: Sometimes, it is the result of the destruction of landmark monuments that is experienced as traumatic by the inhabitants of the region, such as through war, planning errors, and politically motivated destruction; other times, it is merely the result of natural disaster. Examples include Yongdingmen (a former Peking city gate temporarily sacrificed to traffic considerations), St Mark's Campanile in Venice (collapsed in 1902), the House of the Blackheads (Riga), the Iberian Gate and Chapel and the Cathedral of Christ the Saviour in Moscow (destroyed by order of Joseph Stalin), and the Dresden Frauenkirche and Semperoper in Dresden (bombed at the end of World War II). A particularly well-known example is the rebuilding of the historic city center of Warsaw after 1945. The Old Town and the Royal Castle had already been badly damaged at the outset of World War II. They were systematically razed to the ground by German troops after the Warsaw Uprising of 1944. The reconstruction of Warsaw's historic center (e.g., St. John's Cathedral, St. Kazimierz Church, Ujazdów Castle) and, e.g., the replica of the Stari Most built in Mostar (Bosnia and Herzegovina) have met with official approval by UNESCO. Reconstruction of buildings and structures: Other times, reconstructions are made in the case of sites whose historic and cultural significance was not recognized until long after their destruction, as is common in North America, especially with respect to its early history. Examples include the reconstruction of Colonial Williamsburg in Virginia, the rebuilding of numerous structures in Independence National Historical Park in Philadelphia, and Fort William Historical Park in Ontario, Canada. Types of reconstruction: There are different approaches to reconstruction, which differ in the degree of fidelity to the original and in the sensitivity of implementation. In architecture, Georg Mörsch describes reconstruction as a "scientific method of extracting sources to rebuild things that have gone under, regardless of the time that has passed since then". True-to-the-original reconstruction is a reconstruction carried out using, as far as possible, the same materials and the same methods, after extensive source research. Often existing original components are used. This type of reconstruction can be found above all in culturally and historically significant buildings, which then serve as objects for viewing and are used as museums. 
An example of this is Cologne Cathedral, which was finally completed in the late 19th century after the original construction plans were rediscovered and followed. Types of reconstruction: Modelled reconstruction is one that does not meet the requirements for fidelity to the original due to a lack of sources. Typical examples arise when only façade plans or image documentation of buildings are preserved - the rest of the necessary information is "reinvented" as much as possible by comparing it with similar contemporary objects. This type of "new creative" reconstruction, combined with a lot of imagination, had its heyday especially in historicism (with neo-Romanesque, neo-Gothic, neo-Renaissance, and neo-Baroque). Many neo-Gothic castles have been created from the remains of medieval castles, such as Hohenschwangau Castle, Eilean Donan Castle in Scotland, Hohenzollern Castle, and numerous others, a practice that continued into the first third of the 20th century. Types of reconstruction: Replicative reconstruction is a form of reconstruction which, for functionalist reasons, serves to imitate (not interpret), preserve, or reproduce a (historicized) style, but with a different use, and it no longer has anything to do with the original or old building. (Example: the Nikolaiviertel built in Berlin during the GDR era.) Types of reconstruction: Interpretive reconstruction creates a new design based on historical sources. Buildings or parts of buildings are created that correspond to the character and overall impression of the original, without attempting a one-to-one copy. Examples are the Prinzipalmarkt in Münster or the additions to the Frankfurt Römerberg. The facades and gables of the houses were partly redesigned, but the overall impression of the market was to be retained. This method is derived from the neutral retouching used in modern restoration. This fulfils the desire behind a reconstruction by restoring the overall impression of a place without the concerns over authenticity that surround replicas. Types of reconstruction: Didactic reconstructions: In connection with the development of archaeological excavation sites into didactic theme parks, striking ancient structures such as city walls, city gates, temples, villas, or forts are reconstructed. Types of reconstruction: Experimental reconstructions are part of experimental archaeology. An example of this is Guédelon Castle, which has been under construction since 1997 using only the techniques and materials of the 13th century in order to research the construction method and duration. The Campus Galli is another construction project to build a medieval monastery town based on the St. Gallen monastery plan. These are previously non-existent buildings; the focus is on the research aspect. Types of reconstruction: Challenges with reconstructions: Regardless of what type of reconstruction is done, there are some recurring challenges and questions. Original structures are often incompletely documented, so the missing parts have to be imagined, inferred, or made in a different way from the original. Building materials or construction techniques used to build the original are barely or not at all available, or not financially affordable. The same applies to craftspeople who still (or again) master the historical techniques and materials. The original would not correspond to the space requirements of the building's new use. The inside of the building will be restructured and subdivided. 
The replica would not meet today's static safety requirements, so the structure has to be changed. The original or replica with the same interior structure would not comply with the statutory safety regulations, such as those for fire protection or escape routes. The original or replica would not meet today's legal requirements. If implemented exactly, the original would no longer meet today's comfort requirements (air conditioning, electrical engineering, sanitary installations), so the original design is adapted accordingly. Reasons for and against reconstructions: Since the end of the Second World War, the reconstruction of buildings has been the subject of controversy, especially in cities destroyed by the war. Types of reconstruction: In the public debate around reconstruction it is mostly assumed that historical or historicising architecture is perceived by the average citizen as more appealing than contemporary architecture. The loss of the "beautiful old" is seen as an aesthetic diminution, and historically created, poorly closed building gaps are experienced as a permanent "flaw in the cityscape". The reconstruction of buildings is often controversial among architects and preservationists. There are different motives and values. Overall, the question of the reconstruction of prominent urban locations in the context of the cityscape proves to be significantly more conflict-prone than is the case with remote buildings or in the open, for example with experimental or didactic reconstructions. Types of reconstruction: Many reconstructions are new buildings with a historical façade design, but with modern construction technology and completely new uses. The original building fabric is often hardly preserved, and architects in particular argue against this approach, saying that it merely creates a historical impression in order to appeal to certain groups of buyers. However, there are also prominent examples of reconstructions with missing original substance. The reconstruction of Warsaw's completely destroyed Old Town is, even as a reconstruction, included in the UNESCO World Heritage list. Reconstructed buildings are generally not perceived as such by those who are unfamiliar with them, which makes the cityscape more attractive in the eyes of the beholder. Even in the awareness of the residents, the fact of the reconstruction of a building is mostly forgotten after a while, and the buildings are perceived again as an organic part of their environment. The desire for the original substance, which is usually put forward by monument conservationists, cannot be met in many old buildings either; one speaks of the Theseus paradox. Types of reconstruction: A crucial question in monument protection today is whether the value and originality lie in the materials or in the design. This does not only refer to the material erected at the time of construction, but also to the various later layers that are evidence of their times. The practice of both architectural and art history does not regard a certain version of an object as "the original", neither the first version nor the most splendid or most popular at the time, nor the last one that has been remembered. The Venice Charter of 1964 was an international guideline for dealing with the original building fabric for the preservation of monuments; it is the most important monument conservation text of the 20th century and defines central values and procedures for the conservation and restoration of monuments. 
Types of reconstruction: Opponents of reconstruction often point out that rebuilding could contribute to the transfiguration of the past. Reconstruction critics from the architectural profession and related professions assume that modern urban design and contemporary architecture are an expression of social identity that is continuously developing. According to this, it is important for a society to maintain its architecture, which meets its living conditions and needs and whose expression it is, through building projects, and not, on the other hand, to recreate old architecture. This consensus on what is contemporary is questioned by those in favor of reconstruction. From a cultural and historical perspective, critics see reconstruction as a phenomenon of the 19th and 20th centuries that had hardly any role models in history and is now outdated. Reconstruction can thus only be historically legitimized to a limited extent. On the other hand, the term cityscape - as an architectural unit extending beyond the individual building - only came into the field of vision of architecture in the course of modernity. Proponents of reconstruction, on the other hand, have little fear of contact with the harmonistic architectural conceptions of the 19th century and also point to the lasting popularity of the cathedrals that were completed in that era according to principles that are not permitted today. However, it is precisely the free access to the formal language of all earlier epochs that is considered one of the essential features of historicism as seen in postmodernism. In a different sense, the reconstruction fulfils the demand for an answer to the needs of the time and in this sense is an expression of contemporary building activity. How later historical epochs will judge the contemporary phase of architecture and its peculiarities cannot be said. Types of reconstruction: For architects it is often not desirable to create replicas instead of creating something new. In this sense, every new building is "more historically accurate" because the destroyed objects were an expression of their own time. On the one hand, the "idea of a building" is the actual work of an architect, and a reconstruction would represent an appreciation in this sense. On the other hand, every architect works in some way with the history of the building site. This reference to the previous buildings is to be seen as an appreciation, even if it is in explicit contrast. The building solutions of the historical architects compete with any new project. The fundamental question that remains is why something should be created again instead of a new building. Types of reconstruction: Prominent individual examples of reconstruction projects and executed reconstructions show that architecture remains a public issue that can polarize just as much as the well-known controversies from the history of architecture. From a global perspective, the entire discussion about the pros and cons of reconstruction is a problem rooted in Eurocentric sensitivities. Other cultures, both in the Anglo-American region and in Asia, deal with the topic differently: the regular, complete rebuilding of a Buddhist temple is part of a centuries-old tradition in Asian architecture, and the European concept of "true to the original" plays only a subordinate role to this day in a culture whose philosophy regards all material as a worthless shell. The 2000-year-old Ise Jingū shrines in Japan are ritually rebuilt in wood every 20 years according to exactly the same plans. 
In China, for example, while entire historic cities and city centers are being sacrificed to major urban and economic planning projects (Shanghai, the Three Gorges Dam), historicizing projects are conversely also being implemented - such as the old town project of Datong, a city rebuilt in the Ming style, or the restoration of sacred buildings destroyed in the Cultural Revolution. In the USA, too, the monument concept plays only a subordinate role today and relates much more to historic monuments that are significant in terms of time and culture than to those of architectural history. Types of reconstruction: Acceptance of reconstructions: In a representative survey by the Forsa Institute on behalf of the German Federal Building Culture Foundation, 80% of all participants were in favor of the reconstruction of historic buildings and 15% were against. The approval of reconstructions was particularly high among women (83%) and 18- to 29-year-olds (86%). When asked whether historical buildings should also be rebuilt for other uses, 80% of all participants answered with "yes" and 16% with "no". Examples of reconstructions: Prominent examples with worldwide attention that illuminate the diversity of reconstructive intentions and methods: Before 1945: Basilica of Saint Paul Outside the Walls in Rome: destroyed in a fire in 1823, rebuilt true to the original by 1840. St Mark's Campanile in Venice: The largely true-to-original copy of the building, which collapsed in 1902, was a trend-setting project for the beginning of the 20th century - the late Wilhelminian era was still completely entangled with the idea of complete urban redesign with the welcome removal of all outdated structures. Examples of reconstructions: Ypres Cloth Hall: destroyed in 1918, reconstructed until 1967, a UNESCO World Heritage Site since 1999. Stonehenge in southern England: Megalithic structures that were largely preserved in the 16th century, most of which were overturned by the 19th century, were re-erected by William Gowland around 1900. The altered positions of the structures from the reconstruction obscure the original alignment and intended purpose, which may have been astro-chronological in nature. Examples of reconstructions: Alcázar of Toledo: The fortress, which was destroyed in the Spanish Civil War of 1936–39, was subsequently rebuilt largely true to the original. Church of the Flagellation in Jerusalem: Duke Max Joseph in Bavaria financed the purchase of the long-dilapidated chapel by the Custody of the Holy Land and its restoration for worship in 1838. In 1927–1929, the building, which still exists today, was built in the style of the Middle Ages under the architect Antonio Barluzzi. Governor's Palace in Williamsburg, Virginia: The governor's palace, which was destroyed by fire in 1781, was rebuilt in 1927–1934 from the perspective of completing the tourist-museum cityscape of Colonial Williamsburg according to old models. 
Examples of reconstructions: After 1945 Australia St Kilda Pavilion (2006) Belgium Ypres Cloth Hall (after 1918) Bosnia and Herzegovina Stari Most, rebuilt with original materials (2004) Canada Cathedral-Basilica of Notre-Dame de Québec (1923) Montreal Biosphère (1995) Ontario Legislative Building (1912) Saint-Joachim de Pointe-Claire Church (1885) China Pavilion of Prince Teng (1989) Yellow Crane Tower (1981) Yongdingmen Gate, Beijing (2005) Porcelain Tower of Nanjing (2015) Croatia Church of Pentecost, Vinkovci Czech Republic Bethlehem Chapel, Prague, Czech Republic (1953) France Soissons Cathedral, Soissons (after 1918) Vendôme Column, Paris Germany Berlin City Palace Leibnizhaus, Hannover (1981) Falkenhaus, Würzburg Town Hall, Osnabrück St. Michael's Church, Hamburg Semperoper, Dresden Schloss Johannisburg, Aschaffenburg Roman limes Heilig-Geist-Spital, Nuremberg Hildesheim Cathedral Buddenbrookhaus, Lübeck Bauakademie, Berlin City Palace, Potsdam Münster Cathedral Butchers' Guild Hall, Hildesheim (1989) Old Castle (Stuttgart) Dresden Castle Dresden Cathedral Dresden Frauenkirche Munich Residence Munich Frauenkirche Dom-Römer Project, Frankfurt am Main Saarbrücken, Ludwigskirche Greece Stoa of Attalos, Athens (1956) Hungary Matthias Church, Budapest (1896) Buda Castle, Budapest (1952-1985: not true-to-original reconstruction of the building destroyed in 1944; since 2019: true-to-original reconstruction of some parts of the building) Royal Palace, Gödöllő (since 1994) India Daksheswara Mahadev Temple (1963) Tabo Monastery (1983) Iraq Babylon (1983) Israel Hurva Synagogue, Jerusalem (2010) Italy Abbey of Monte Cassino (1964) Opera house La Fenice, Venice (2003) St Mark's Campanile, Venice (1912) Basilica of Saint Paul Outside the Walls, Rome (1840) Japan Heijō Palace, Nara, Nara Prefecture. Examples of reconstructions: Latvia House of the Blackheads, Riga, Latvia Lithuania Palace of the Grand Dukes of Lithuania (2002-2009) Trakai Island Castle Malta Chapel of St Anthony of Padua in Fort Manoel (2009) Chapel of St Roche on St Michael's Counterguard (2014) Wignacourt Arch (2015)Plans are also being made for reconstructing the Birgu Clock Tower, which was destroyed in 1942. Examples of reconstructions: Poland Warsaw Old Town, Warsaw Sigismund's Column, Warsaw St. Kazimierz Church, Warsaw (1947-1953) Green Gate, Gdańsk, Poland Warsaw Barbican (1952–1954) St. Alexander's Church, Warsaw (1949-1952) Holy Cross Church, Warsaw (1953) Church of the Holy Spirit in Warsaw (1956) Malbork Castle (1960-1993) Royal Castle, Warsaw (1971-1974) St. Hyacinth's Church, Warsaw St. John's Archcathedral (Warsaw) St. Florian's Cathedral, Warsaw (1972) Russia Königsberg Cathedral King's Gate (Kaliningrad) Kazan Cathedral, Moscow (1993) Cathedral of Christ the Saviour, Moscow (2000) Slovakia Trenčín Castle Serbia Avala Tower Ukraine Golden Gate, Kiev (1982) St. Michael's Golden-Domed Monastery, Kiev (1999) United Kingdom Arbeia Roman Fort, South Shields, England Butser Ancient Farm, England Shakespeare's Globe, London Lunt Roman Fort, England United States Blennerhassett Mansion Colonial Williamsburg (mostly since 1920s) Governor's Palace (Williamsburg, Virginia) (1931–34) Nauvoo Temple (2002) Palace of Fine Arts, San Francisco (1965) White House Reconstruction (1949–52) Planned or under construction reconstructions: Buddhas of Bamiyan: After the destruction of the UNESCO World Heritage Site by the Taliban in 2001, there are vague plans to reconstruct the monumental statues of gods. 
Work has since begun on restoring the Buddhas using the process of anastylosis, where original elements are combined with modern material. The restoration of the caves and Buddhas has also involved training and employing local people as stone carvers. The work has come under some criticism. Planned or under construction reconstructions: Palmyra: After the destruction of the UNESCO World Heritage Site by the Islamic State, there are vague plans to restore the ancient oasis city and many other destroyed temples, churches, and mosques in Syria and Iraq. Old town hall in Halle: considered one of the most important secular buildings in Central Germany, badly damaged in an air raid in 1945, completely demolished by 1950. Donations are currently being collected for the reconstruction of the baroque entranceway. Saxon Palace in Warsaw: former residence of the kings of Poland, part of the Saxon Axis, redesigned in a classicist style in 1842, destroyed in 1944 under German occupation. In 2018 the Polish government announced that it would reconstruct the palace as a senate building. Notre-Dame Cathedral in Paris: After the cathedral was partially destroyed by a major fire in 2019, the French Parliament decided to reconstruct Notre-Dame true to the original. Reconstruction will begin in 2021 with the aim of completion by Spring 2024, in time for the opening of the 2024 Summer Olympics in Paris. Planned or under construction reconstructions: Mercator House in Duisburg: home of the cartographer Gerhard Mercator (1512–1594), destroyed in World War II, foundations uncovered during archaeological excavations in 2012, reconstruction as an educational facility planned by 2021. Town hall towers in Frankfurt am Main: colloquially known as "Langer Franz" and "Kleiner Cohn", destroyed in an air raid in 1944, then covered with emergency roofs. Fundraising is currently underway for the reconstruction of the tower roofs. Planned or under construction reconstructions: Garrison church in Potsdam: considered a major work of the European Baroque, built by Philipp Gerlach from 1730 to 1735, burned out in an air raid in 1945, blown up in 1968 for ideological reasons. The tower has been under reconstruction since 2017. Planned or under construction reconstructions: Old Market Square, Potsdam: once considered to be one of the most beautiful squares in Europe, especially in the time of Frederick the Great, when many copies of Italian palaces were built there. Burned in an air raid in 1945 and then demolished for ideological reasons in East Germany. Reconstruction of individual facades has taken place since 2013, including the Barberini Museum. Planned or under construction reconstructions: Berliner Bauakademie: considered the original building of modern architecture, erected from 1832 to 1836 by Karl Friedrich Schinkel, burned out in an air raid in 1945 and demolished in 1962. The building has been sold by the state of Berlin to the federal government, which passed a resolution in 2016 to reconstruct it. Planned or under construction reconstructions: The Temple of Bel, the Temple of Baalshamin, and the Monumental Arch in Palmyra, Syria, will be reconstructed using an anastylosis technique incorporating the original materials. The temples had been destroyed by the Islamic State of Iraq and the Levant in 2015. Following the recapture of Palmyra by the Syrian Army in March 2016, director of antiquities Maamoun Abdelkarim announced the plans for their reconstruction. 
Planned or under construction reconstructions: The Acropolis of Athens project began in 1975 with the goal of reversing the decay of centuries of attrition, pollution, destruction from military actions, and misguided past restorations. The project included collection and identification of all stone fragments, even small ones, from the Acropolis and its slopes, and the attempt was made to restore as much as possible using reassembled original material (anastylosis), with new marble from Mount Pentelicus used sparingly. Planned or under construction reconstructions: Shuri Castle: the fifth reconstruction of the former Ryukyu royal castle is underway after the 2019 fire. Lighthouse of Alexandria: since 1978, a number of proposals have been made to replace the lighthouse with a modern reconstruction. In 2015, the Egyptian government and the Alexandria governorate suggested building a skyscraper on the site of the lighthouse as part of the regeneration of the eastern harbour of Alexandria Port.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Joule (journal)** Joule (journal): Joule is a monthly peer-reviewed scientific journal published by Cell Press. It was established in 2017 as a sister journal to Cell. The editor-in-chief is Philip Earis. Abstracting and indexing: The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 41.248.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Delft tower experiment** Delft tower experiment: In 1586, scientists Simon Stevin and Jan Cornets de Groot conducted an early scientific experiment on the effects of gravity. The experiment, which established that objects of identical size and different mass fall at the same speed, was conducted by dropping lead balls from the Nieuwe Kerk in the Dutch city of Delft. The experiment is considered a foundational moment in the history of statics, which Stevin's work helped to codify. History: In the late 16th century, increasing interest in physics resulted in a number of European scientists conducting experiments into the intricacies of the scientific field. Many of these experiments presented, directly or indirectly, a challenge to the laws of physics formulated by Aristotle, whose theory was then the dominant school of thought in Europe. While most contemporaneous scientific experimentation was undertaken by Italian scholars, by the 1580s new ideas on physics had proliferated to the rest of Europe. One of the European scientists to embrace the new view of physics was Simon Stevin, a Flemish engineer and mathematician. Stevin was employed as a military adviser for the court of William the Silent, and as such resided in the city of Delft while William's government occupied the city; one of Stevin's main benefactors was Maurice, Prince of Orange, whose patronage allowed Stevin to further his scientific interests. While Stevin's primary concern at court was the design of defensive fortifications, he also took an interest in fluid dynamics, designing a series of improvements for Delft's windmills. To gain permission to tinker with Delft's mills, Stevin employed the services of Jan Cornets de Groot, a local lawyer and future father of the legal scholar Hugo de Groot. The elder De Groot and Stevin became friends, with the former eventually investing in several new mills built using Stevin's design. In 1586 Stevin and De Groot collaborated to perform an experiment intended to challenge Aristotle's theory that objects fall at a speed directly proportional to their mass. To conduct their experiment, the two carried a pair of identically sized lead balls up the Nieuwe Kerk in Delft before dropping them onto a wooden platform 30 feet below; of the pair, one ball was ten times heavier than the other. When the balls were dropped, both spheres hit the wooden platform below at substantially the same time, indicating that objects of the same size fall at the same speed regardless of mass. Stevin concluded that Aristotle's theory was therefore incorrect. While the Delft tower experiment had been a success, it was not conducted with the same scientific rigor as later experiments; Stevin lacked an instrument to accurately measure the speed of the falling spheres, and was forced to rely on audio feedback (caused by the spheres impacting the wooden platform below) and eyewitness accounts to deduce that the balls had fallen at the same speed. As such, the experiment staged at the Nieuwe Kerk was given less credence than similar experiments, namely the more substantive work of Galileo Galilei and his famous thought experiment at the Leaning Tower of Pisa in 1589. Stevin published his findings in his 1586 work De Beghinselen der Weeghconst, translatable as The Principles of the Art of Weighing (that is, The Principles of Statics). Stevin and De Groot's experiment is, along with those of their Italian contemporaries, considered to be one of the foundational experiments in the history of modern statics.
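For context (an illustrative modern calculation added here, not something Stevin computed or could have computed in this form), the time for an object to fall 30 feet under gravity, neglecting air resistance, does not depend on its mass at all:

```python
from math import sqrt

# Time to fall h = 30 ft (about 9.14 m) from rest, ignoring air resistance.
# The mass never appears in the formula t = sqrt(2h / g), which is the point
# the Delft experiment demonstrated empirically.
g = 9.81                 # m/s^2
h = 30 * 0.3048          # 30 feet in metres
t = sqrt(2 * h / g)
print(f"Fall time: {t:.2f} s")   # about 1.4 s for both the light and the heavy ball
```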
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Presentation of a monoid** Presentation of a monoid: In algebra, a presentation of a monoid (or a presentation of a semigroup) is a description of a monoid (or a semigroup) in terms of a set Σ of generators and a set of relations on the free monoid Σ* (or the free semigroup Σ+) generated by Σ. The monoid is then presented as the quotient of the free monoid (or the free semigroup) by these relations. This is an analogue of a group presentation in group theory. Presentation of a monoid: As a mathematical structure, a monoid presentation is identical to a string rewriting system (also known as a semi-Thue system). Every monoid may be presented by a semi-Thue system (possibly over an infinite alphabet). A presentation should not be confused with a representation. Construction: The relations are given as a (finite) binary relation R on Σ*. To form the quotient monoid, these relations are extended to monoid congruences as follows: First, one takes the symmetric closure R ∪ R⁻¹ of R. This is then extended to a symmetric relation E ⊂ Σ* × Σ* by defining x ~E y if and only if x = sut and y = svt for some strings u, v, s, t ∈ Σ* with (u,v) ∈ R ∪ R⁻¹. Finally, one takes the reflexive and transitive closure of E, which then is a monoid congruence. Construction: In the typical situation, the relation R is simply given as a set of equations, so that R = {u_1 = v_1, …, u_n = v_n}. Thus, for example, ⟨p,q | pq=1⟩ is the equational presentation for the bicyclic monoid, and ⟨a,b | aba=baa, bba=bab⟩ is the plactic monoid of degree 2 (it has infinite order). Elements of this plactic monoid may be written as a^i b^j (ba)^k for integers i, j, k, as the relations show that ba commutes with both a and b. Inverse monoids and semigroups: Presentations of inverse monoids and semigroups can be defined in a similar way using a pair (X;T) where (X ∪ X⁻¹)* is the free monoid with involution on X, and T ⊆ (X ∪ X⁻¹)* × (X ∪ X⁻¹)* is a binary relation between words. We denote by T^e (respectively T^c) the equivalence relation (respectively, the congruence) generated by T. We use this pair of objects to define an inverse monoid Inv1⟨X|T⟩. Let ρ_X be the Wagner congruence on X; we define the inverse monoid Inv1⟨X|T⟩ presented by (X;T) as Inv1⟨X|T⟩ = (X ∪ X⁻¹)* / (T ∪ ρ_X)^c. Inverse monoids and semigroups: In the previous discussion, if we replace everywhere (X ∪ X⁻¹)* with (X ∪ X⁻¹)+ we obtain a presentation (for an inverse semigroup) (X;T) and an inverse semigroup Inv⟨X|T⟩ presented by (X;T). A trivial but important example is the free inverse monoid (or free inverse semigroup) on X, usually denoted by FIM(X) (respectively FIS(X)) and defined by FIM(X) = Inv1⟨X|∅⟩ = (X ∪ X⁻¹)*/ρ_X, or FIS(X) = Inv⟨X|∅⟩ = (X ∪ X⁻¹)+/ρ_X.
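To make the quotient construction concrete, here is a small illustrative sketch (added here, not from the article) that treats the bicyclic presentation ⟨p,q | pq=1⟩ as a string rewriting system: repeatedly applying the rule pq → 1 (i.e., deleting the factor "pq") terminates, and every element of the bicyclic monoid is represented by a unique normal form of the shape q^a p^b.

```python
def reduce_bicyclic(word: str) -> str:
    """Normal form in the bicyclic monoid <p, q | pq = 1>.

    Repeatedly deletes occurrences of the factor "pq" (the rewriting rule
    pq -> 1) until none remain; the result is always of the form q^a p^b.
    """
    while "pq" in word:
        word = word.replace("pq", "", 1)
    return word

# A few sample reductions:
for w in ["pq", "qp", "pqq", "qppqp", "ppqqpq"]:
    print(f"{w} -> {reduce_bicyclic(w) or '1 (empty word)'}")
# "pq" reduces to the identity, while "qp" is already in normal form,
# illustrating that the bicyclic monoid is not a group: qp != 1.
```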
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ferroniobium** Ferroniobium: Ferroniobium is an important iron-niobium alloy, with a niobium content of 60-70%. It is the main source for niobium alloying of HSLA steel and covers more than 80% of the worldwide niobium production. The niobium is mined from pyrochlore deposits and is subsequently transformed into the niobium pentoxide Nb2O5. This oxide is mixed with iron oxide and aluminium and is reduced in an aluminothermic reaction to niobium and iron. The component metals can be purified in an electron beam furnace or the alloy can be used as it is. For alloying with steel the ferroniobium is added to molten steel before casting. The largest producers of ferroniobium are the same as for niobium and are located in Brazil and Canada.
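As an illustration of the aluminothermic step described above, the idealized overall reductions can be written as follows. This is a schematic decomposition with simplified stoichiometry, assuming the iron oxide charged is Fe2O3; in practice the niobium pentoxide, iron oxide, aluminium, and fluxes are reduced together in a single batch.

```latex
% Idealized aluminothermic reductions (schematic stoichiometry only):
3\,\mathrm{Nb_2O_5} + 10\,\mathrm{Al} \;\rightarrow\; 6\,\mathrm{Nb} + 5\,\mathrm{Al_2O_3}
\qquad
\mathrm{Fe_2O_3} + 2\,\mathrm{Al} \;\rightarrow\; 2\,\mathrm{Fe} + \mathrm{Al_2O_3}
```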
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Overhand throw** Overhand throw: The overhand (or overhead) throw is a single-handed throw of a projectile where the object is thrown above the shoulder. Overhand throw: The overhand throw is a complex motor skill that involves the entire body in a series of linked movements starting from the legs, progressing up through the pelvis and trunk, and culminating in a ballistic motion in the arm that propels a projectile forward. It is used almost exclusively in athletic events. The throwing motion can be broken down into three basic steps: cocking, accelerating, and releasing. Overhand throw: Desired qualities in the action produce a fast, accurate throw. These qualities are affected by the physical attributes of the thrower like height, strength, and flexibility. However, it is mainly the throwing motion mechanics and the thrower's ability to coordinate them that determine the quality of the throw. The desired qualities of the throwing motion are difficult to assess due to the extremely short amount of time that it takes professionals to perform the motion. The motion: In the overhead throwing motion the body is a kinetic chain, and the efficiency of the kinetic chain determines the quality of the throw (velocity and accuracy of the projectile). The thrower uses muscle segments throughout the whole body to transfer potential energy from the lower extremities to the upper extremities, where it is then transformed into kinetic energy as the projectile is released. This throwing motion is described based on the analysis of professional athletes, mainly baseball pitchers, as they are recognized as having mastered this skill. There are variations in the throwing motion unique to the thrower, but generally the throwing motion is performed as follows. The motion: Starting position: Proper technique for the start of the overhead throwing motion involves the thrower's body facing approximately 90 degrees from the intended target, with the throwing arm on the opposite side. The motion: Cocking: The first stage of the throwing motion includes the time from the start of the motion to when the shoulder has reached its maximum external rotation. The throwing motion is initiated by first taking a stride toward the target with the leg opposite the throwing arm. The stride foot should be in line with the thrower's stance foot and the target; placing the foot wide from the target creates a breakdown of the motion due to over-rotation of the pelvis, and placing the foot inward from the target forces the thrower to throw across his or her body. The purpose of the stride is to increase the distance over which linear and angular trunk motions occur, allowing more energy to be produced and transferred up the body. The stride step is performed while raising the throwing arm back to the point of maximum external shoulder rotation. At this point the arm is fully "cocked". It is important to note that the ball does not move forward during the cocking stage. The motion: Acceleration: The acceleration phase is initiated once the projectile begins its forward motion, which is also about the same time as the stride foot makes contact with the ground. The acceleration phase is the most explosive part of the overhead throwing motion, as the projectile's velocity increases from zero to its maximum velocity in this short amount of time. 
The ball is brought forward while the thrower's body rotates towards the target, starting from the stride foot, moving up to the pelvis, followed by the trunk and spinal rotation, and then up to the shoulders. Although not visibly obvious, trunk muscular control is an important factor in high-velocity throwing. During this phase the thrower's trunk will tilt to the side opposite the throwing arm to allow for a greater distance of acceleration, which transfers more energy to the projectile. The acceleration phase ends at the time of the projectile's release from the hand, at which point it has attained its maximum velocity. The motion: Release and follow-through: Where the ball is released depends on the distance of the thrower's target; a farther target requires a higher release point, and the same applies conversely. The purpose of the follow-through is to decelerate the throwing arm. Once the projectile is released, the throwing arm keeps moving across the body. This rapid deceleration is actually the most violent part of the throwing motion, as the greatest amount of joint loading occurs at this stage. For professional baseball pitchers, the leg opposite the stride leg also steps forward and squares the pitcher with his target. Uses: The main use of the overhead throwing motion is for competitive sports, including baseball, cricket, the quarterback position in American football, handball, volleyball (serving a ball uses similar overhead motions), water polo, the javelin throw, the shot put, dodgeball, and axe throwing. Related injuries: Frequent use of the overhead throwing motion at high performance levels, such as by professional athletes, can lead to injury. This is due to the large amount of stress placed on the elbow and shoulder, which are the most common areas injured. These injuries can include, but are not limited to: elbow injuries, such as a torn ulnar collateral ligament of the elbow joint (which requires Tommy John surgery) and injury to the common flexor tendon; shoulder injuries, such as injury to the rotator cuff, injury to the labrum, and scapular dyskinesia; and abdominal injuries, such as injury to the oblique muscles.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mammary analogue secretory carcinoma** Mammary analogue secretory carcinoma: Mammary analogue secretory carcinoma (MASC), also termed MASCSG (the "SG" subscript indicates salivary gland), is a salivary gland neoplasm. It is a secretory carcinoma which shares microscopic pathologic features with other types of secretory carcinomas, including mammary secretory carcinoma, secretory carcinoma of the skin, and salivary gland–type carcinoma of the thyroid. MASCSG was first described by Skálová et al. in 2010. The authors of this report found a chromosome translocation in certain salivary gland tumors, i.e. a (12;15)(p13;q25) fusion gene mutation. The other secretory carcinoma types also carry this fusion gene. Fusion gene: The translocation found in MASCSG occurs between the ETV6 gene located on the short arm (designated p) of chromosome 12 at position p13.2 (i.e. 12p13.2) and the NTRK3 gene located on the long arm (designated q) of chromosome 15 at position q25.3 (i.e. 15q25.3) to create the (12;15)(p13;q25) fusion gene, ETV6-NTRK3. This mutant fusion gene also occurs in congenital fibrosarcoma, congenital mesoblastic nephroma, secretory breast cancer (also termed juvenile breast cancer), acute myelogenous leukemia, ALK-negative inflammatory myofibroblastic tumour, and radiation-induced papillary thyroid carcinoma. The ETV6 gene codes for the transcription factor protein ETV6, which suppresses the expression of, and thereby regulates, various genes that in mice are required for normal hematopoiesis as well as the development and maintenance of the vascular network. The NTRK3 gene codes for tropomyosin receptor kinase C (TrkC), the receptor for neurotrophin-3. TrkC is an RTK class VII tyrosine kinase receptor. When bound to neurotrophin-3, it becomes active as a tyrosine kinase to phosphorylate cellular proteins and thereby stimulate cell signaling pathways that lead to cellular differentiation and growth while inhibiting cellular death. TrkC makes particularly important contributions to the development of the central and peripheral nervous systems. NTRK3 forms chromosomal translocation-mediated fusions with many other genes in addition to ETV6 to form fused genes that are associated with the induction of a wide range of cancers including those of the lung, thyroid gland, colon, rectum, and brain. ETV6-NTRK3 fusion genes in some MASCSG disease cases display atypical exon junctions and may be associated with more tissue-infiltrating disease and less favorable clinical outcomes. Fusion protein: The ETV6-NTRK3 fusion gene's product, the ETV6-NTRK3 protein, contains the N-terminus of ETV6, which is responsible for the dimerization/polymerization of ETV6, a step required for it to inhibit transcription. The protein's C-terminus contains the C-terminus of TrkC. The fusion protein lacks transcription-regulating activity but has dysregulated, i.e. continuously active, tyrosine kinase activity. In consequence of the latter effect, the fusion protein continuously stimulates pro-growth and pro-survival pathways and thereby the malignant growth of its parent cells. Clinical presentation and diagnosis: Mammary analogue secretory carcinoma occurs somewhat more commonly in men (male to female ratio of <1.5:1.0). Patients with this disease have a mean age of 46 years, although ~12% of cases occur in pediatric patients. 
Individuals typically present with symptomless tumors in the parotid salivary gland (68%), buccal mucosa salivary glands (9%), submandibular salivary gland (8%), or in the small salivary glands of the lower lip (5%), upper lip (4%), and hard palate (4%). Histologically, these tumors are described as having a morphology similar to secretory breast carcinoma; they typically have one or more of the following histological patterns: microcystic, papillary-cystic, follicular, and/or solid lobular. Other histological features of these tissues include: the presence of eosinophilic secretions as detected by staining strongly for eosin Y; positive staining with periodic acid-Schiff stain (often after diastase); the presence of vesicular oval nuclei with a single small but prominent nucleolus; and the absence of basophilic haematoxylin or zymogen granules (i.e. vesicles that store enzymes near the cell's plasma membrane). The cited histology features are insufficient to distinguish MASCSG from other salivary gland neoplasms such as acinic cell carcinoma, low-grade cribriform cystadenocarcinoma, and adenocarcinoma not otherwise specified. MASCSG can be distinguished from these and other histologically similar tumors by tissue identification of either a) the ETV6-NTRK3 fusion gene, using fluorescence in situ hybridization or reverse transcription polymerase chain reaction gene detection methods, or b) a specific pattern of marker proteins as registered using specific antibody-based detection methods, i.e. MASCSG tissue should have detectable S100 (a family of calcium binding proteins), mammaglobin (a breast cancer marker), keratin 7 (an intermediate filament found in epithelial cells), GATA3 (a transcription factor and breast cancer biomarker), SOX10 (a transcription factor important in neural crest origin cells and development of the peripheral nervous system), and STAT5A (a transcription factor), but lack antibody-detectable TP63 (a transcription factor in the same family as p53) and anoctamin-1 (a voltage-sensitive calcium-activated chloride channel). Clinical course and treatment: MASCSG is currently treated as a low-grade (i.e. Grade 1) carcinoma with an overall favorable prognosis. These cases are treated by complete surgical excision. However, the tumor does have the potential to recur locally and/or spread beyond surgically dissectible margins, as well as metastasize to regional lymph nodes and distant tissues, particularly in tumors with histological features indicating a high cell growth rate potential. One study found lymph node metastasis in 5 of 34 MASCSG patients at initial surgery for the disease; these cases, when evidencing no further spread of disease, may be treated with radiation therapy. The treatment of cases with disease spreading beyond regional lymph nodes has been variable, ranging from simple excision to radical resections accompanied by adjuvant radiotherapy and/or chemotherapy, depending on the location of disease. Mean disease-free survival for MASCSG patients has been reported to be 92 months in one study. The tyrosine kinase activity of NTRK3, as well as of the ETV6-NTRK3 protein, is inhibited by certain tyrosine kinase inhibitory drugs such as entrectinib and LOXO-101; this offers a potential medical intervention method using these drugs to treat aggressive MASCSG disease. Indeed, one patient with extensive head and neck MASCSG disease obtained an 89% fall in tumor size when treated with entrectinib. 
This suppression lasted only 7 months due to the tumor's acquisition of a mutation in the ETV6-NTRK3 gene; the newly mutated gene encoded an entrectinib-resistant ETV6-NTRK3 protein. Treatment of aggressive forms of MASCSG with NTRK3-inhibiting tyrosine kinase inhibitor drugs, perhaps with switching to another type of tyrosine kinase inhibitor drug if the tumor acquires resistance to the initial drug, is under study, for example in the STARTRK-2 clinical trial of entrectinib.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cast urethanes** Cast urethanes: Cast Urethanes are similar to injection molding. During the process of injection molding, a hard tool is created. The hard tool, made of an A side and a B side, forms a void within and that void is injected with plastics ranging in material property, durability, and consistency. Plastic cups, dishware, and toys are most commonly made using the process of injection molding because they are common consumer items that need to be produced on a mass scale, and injection molding (once the hard tool has been created) is designed for mass production. Cast urethanes: Casting urethanes is similar in that polyurethanes are injected into a tool. But with cast urethanes, the tool is a soft tool, typically made with a type of silicone mold. The mold is created via a master pattern. Master patterns for cast urethanes can be created with CNC machining (which is a common process for injection molding) but cast urethane master patterns are often created with additive manufacturing (or 3D Printing) and the reasons for this vary. Cast urethanes: Creating a cast urethane master pattern is different from the steps involved in creating hard tooling for injection molding. Hard tools for injection molding are going to be subjected to a lot of stress and heat during the injection process. They will see runs of thousands of parts per day. The care that goes into a hard tool involves intense machine programming which costs thousands of dollars alone. The price for hard tooling is balanced by the mass production the tooling brings, which is where cast urethanes begin to differ. Cast urethanes are suited for smaller runs of parts and prototyping. Because the cost for soft tooling is lower, down in the hundreds rather than hundreds of thousands, cast urethanes are excellent resources for creators still testing product design, for one-off products, or for testing market and consumer response to a new product. Master Patterns: Cast urethane master patterns can be produced using machining, additive manufacturing—even an already existing product. The master pattern is used to create an A side and a B side for a mold. The pattern is used to form a void within a mold. The mold material is one that easily picks up surface detail (such as silicones) because the mold will be responsible for reflecting the surface of the product. Applications: There are many types of cast urethane applications including: Urethane Molded Bearings Urethane Covered Rollers Urethane Cast Wheels FDA Approved Cast Urethane Parts Urethane seals and gasket covers Process: Cast urethane starts as a liquid that can be dispensed into a mold, post cured in ovens and where required, secondary machining operations can be added. Cast thermoset urethanes have better physical properties than most injection or extruded thermoplastics. Dispensing liquid urethane into open molds or compression tools makes it possible to cast just about any configuration from affordable tooling. Process: Steps include first printing a master pattern for an accurate silicone mold, which is then encased in liquid silicone. After the mold cures, it is cut into distinct sections and the pattern is removed. The cavity formed is used for casting the end product. The cavity or void is filled with a material, which will cure and be removed from the tool. 
Industries: The types of industries and applications that utilize cast urethane include: distribution centers; printing; wheels for skateboards, robots, and other rotating applications; and conveyor systems. Benefits of Cast Urethane: abrasion resistance; reduction of noise; excellent tear strength; resistance to high and low temperatures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Degradation of Mayan archeological sites** Degradation of Mayan archeological sites: Sites of the ancient Maya civilization deteriorate as a result of both environmental and human factors. Archaeologists consider a number of different factors in evaluating site formation processes, site preservation, and site destruction. Deterioration from looting and defacement can destroy vital information. The natural forces of reforestation and erosion can also degrade archeological sites. Causes: Biodegradation Limestone is the dominant material used in Maya architecture, in large part due to its abundance in the region. However, it is a relatively soft stone that can deteriorate easily. Microorganisms on the surface of the limestone create acids that slowly eat away at the stone, creating cracks, fissures and weak points. Over time, biofilms can completely erode the stone. Many sites are especially prone to the effects of the Yucatán's warm, humid and generally equatorial climate, which promotes biodegradation. Sites in the Guatemalan highlands and in Chiapas enjoy more alpine and subalpine habitats, with elevations of 2600–2800 meters above sea level. As a result, these sites are less likely to experience biodegradation to the degree that the hotter, wetter subtropical climates of the lowland sites do. Causes: Rising sea levels Rising sea levels have submerged a number of different sites, such as Stingray Lagoon and Wild Cane Cay in Belize. At Stingray Lagoon, artifacts related to the production of salt, including pottery used to boil down brine, a process known as sal cocida, were found underwater. Evidence suggests Wild Cane Cay was used as a trading post, with deposits dating from both the Classic and Post Classic periods submerged about a meter underwater. Similarly submerged remains extend about a meter to a meter and a half in depth at other sites in the Port Honduras, suggesting a dramatically different environment at the time of occupation. The speed at which a site becomes inundated can have major effects on the condition of the remains. A gradual rise in water levels partially exposes the remains to erosion through wave force, while a fast rise in water levels could leave a completely submerged site relatively undamaged. As a result, the condition of underwater remains and the amount of decay can offer insights into a location and the cause of its abandonment; however, every site is different, and what is true of one archeological site may not be true of another. Causes: Human Destruction There has been significant intentional destruction of sites through looting. Building materials used for the construction of Colonial-era churches and other buildings were taken from Maya sites. Degradation of Maya sites due to human effects has increased with tourism. In some cases, outright vandalism, including graffiti, has permanently altered remains. In other cases, erosion due to overuse by thousands of tourists damages sites.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Amine N-methyltransferase** Amine N-methyltransferase: In enzymology, an amine N-methyltransferase (EC 2.1.1.49) is an enzyme that is ubiquitously present in non-neural tissues and that catalyzes the N-methylation of tryptamine and structurally related compounds. The chemical reaction taking place is: S-adenosyl-L-methionine + an amine ⇌ S-adenosyl-L-homocysteine + a methylated amine. Thus, the two substrates of this enzyme are S-adenosyl methionine and an amine, whereas its two products are S-adenosylhomocysteine and a methylated amine. In the case of tryptamine and serotonin these then become the dimethylated indolethylamines N,N-dimethyltryptamine (DMT) and bufotenine, respectively. This enzyme belongs to the family of transferases, specifically those transferring one-carbon groups (methyltransferases). The systematic name of this enzyme class is S-adenosyl-L-methionine:amine N-methyltransferase. Other names in common use include nicotine N-methyltransferase, tryptamine N-methyltransferase, indolethylamine N-methyltransferase, and arylamine N-methyltransferase. This enzyme participates in tryptophan metabolism. Amine N-methyltransferase: A wide range of primary, secondary and tertiary amines can act as acceptors, including tryptamine, aniline, nicotine and a variety of drugs and other xenobiotics. Structural studies: As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2A14.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Page replacement algorithm** Page replacement algorithm: In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold. Page replacement algorithm: When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself. Page replacement algorithm: The page replacing problem is a typical online problem from the competitive analysis perspective in the sense that the optimal deterministic algorithm is known. History: Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s. History: That mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms: Size of primary storage has increased by multiple orders of magnitude. With several gigabytes of primary memory, algorithms that require a periodic check of each and every memory frame are becoming less and less practical. History: Memory hierarchies have grown taller. The cost of a CPU cache miss is far more expensive. This exacerbates the previous problem. History: Locality of reference of user software has weakened. This is mostly attributed to the spread of object-oriented programming techniques that favor large numbers of small functions, use of sophisticated data structures like trees and hash tables that tend to result in chaotic memory reference patterns, and the advent of garbage collection that drastically changed memory access behavior of applications.Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem. 
Local vs. global replacement: Replacement algorithms can be local or global. When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory. Local vs. global replacement: Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. Most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However global page replacement is more efficient on an overall system basis. Detecting which pages are referenced and modified: Modern general purpose computers and some embedded processors have support for virtual memory. Each process has its own virtual address space. A page table maps a subset of the process virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page in the page table. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means: By clearing the access bit in pages present in the process' page table. After some time, the OS scans the page table looking for pages that had the access bit set by the CPU. This is fast because the access bit is set automatically by the CPU and inaccurate because the OS does not immediately receive notice of the access nor does it have information about the order in which the process accessed these pages. Detecting which pages are referenced and modified: By removing pages from the process' page table without necessarily removing them from physical memory. The next access to that page is detected immediately because it causes a page fault. This is slow because a page fault involves a context switch to the OS, software lookup for the corresponding physical address, modification of the page table and a context switch back to the process and accurate because the access is detected immediately after it occurs. Detecting which pages are referenced and modified: Directly when the process makes system calls that potentially access the page cache like read and write in POSIX. Precleaning: Most replacement algorithms simply return the target page as their result. This means that if target page is dirty (that is, contains data that have to be written to the stable storage before page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to clean the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full duplex channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue. Precleaning: To deal with this situation, various precleaning policies are implemented. 
Precleaning is the mechanism that starts I/O on dirty pages that are (likely) to be replaced soon. The idea is that by the time the precleaned page is actually selected for replacement, the I/O will have completed and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced next. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement. The (h,k)-paging problem: The (h,k)-paging problem is a generalization of the model of the paging problem: Let h and k be positive integers such that h ≤ k. We measure the performance of an online algorithm with a cache of size k relative to the theoretically optimal page replacement algorithm with a cache of size h. If h < k, the optimal page replacement algorithm is provided with strictly fewer resources. The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically by separately parameterizing the cache size of the online algorithm and the optimal algorithm. Marking algorithms: Marking algorithms are a general class of paging algorithms. For each page, we associate with it a bit called its mark. Initially, we set all pages as unmarked. During a stage of page requests, we mark a page when it is first requested in this stage. A marking algorithm is an algorithm that never pages out a marked page. Marking algorithms: If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h, where h ≤ k, then ALG is k/(k − h + 1)-competitive. So every marking algorithm attains the k/(k − h + 1) competitive ratio. LRU is a marking algorithm while FIFO is not a marking algorithm. Conservative algorithms: An algorithm is conservative if, on any consecutive request sequence containing k or fewer distinct page references, the algorithm will incur k or fewer page faults. If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h ≤ k, then ALG is k/(k − h + 1)-competitive. So every conservative algorithm attains the k/(k − h + 1) competitive ratio. LRU, FIFO and CLOCK are conservative algorithms. Page replacement algorithms: There are a variety of page replacement algorithms: The theoretically optimal page replacement algorithm The theoretically optimal page replacement algorithm (also known as OPT, clairvoyant replacement algorithm, or Bélády's optimal page replacement policy) is an algorithm that works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds. Page replacement algorithms: This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is either known beforehand and is amenable to static analysis of its memory reference patterns, or only a class of applications allowing run-time analysis is admitted. Despite this limitation, algorithms exist that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. 
This algorithm can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs. Page replacement algorithms: Analysis of the paging problem has also been done in the field of online algorithms. Efficiency of randomized online algorithms for the paging problem is measured using amortized analysis. Page replacement algorithms: Not recently used The not recently used (NRU) page replacement algorithm is an algorithm that favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well. Page replacement algorithms: At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the operating system divides the pages into four classes: 3 (referenced, modified); 2 (referenced, not modified); 1 (not referenced, modified); and 0 (not referenced, not modified). Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest category for removal. So out of the above four page categories, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced. Page replacement algorithms: NRU is a marking algorithm, so it is k/(k − h + 1)-competitive. Page replacement algorithms: First-in, first-out The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name – the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly. Page replacement algorithms: In simple words, on a page fault, the frame that has been in memory the longest is replaced. The FIFO page replacement algorithm is used by the OpenVMS operating system, with some modifications. Partial second chance is provided by skipping a limited number of entries with valid translation table references, and additionally, pages are displaced from the process working set to a systemwide pool from which they can be recovered if not already re-used. FIFO is a conservative algorithm, so it is k/(k − h + 1)-competitive. Page replacement algorithms: Second-chance A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. 
It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their referenced bit cleared, then the second chance algorithm degenerates into pure FIFO. Page replacement algorithms: As its name suggests, Second-chance gives every page a "second-chance" – an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced. Page replacement algorithms: Clock Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly pushed to the back of the list, but it performs the same general function as Second-Chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and the process is repeated until a page is replaced. This algorithm was first described in 1969 by Fernando J. Corbató. Page replacement algorithms: Variants of clock GCLOCK: Generalized clock page replacement algorithm. Clock-Pro keeps a circular list of information about recently referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by ARC, helps it work better than LRU on large loops and one-time scans. WSclock. By combining the Clock algorithm with the concept of a working set (i.e., the set of pages expected to be used by that process during some time interval), the performance of the algorithm can be improved. In practice, the "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms. Clock with Adaptive Replacement (CAR) is a page replacement algorithm that has performance comparable to ARC, and substantially outperforms both LRU and CLOCK. The algorithm CAR is self-tuning and requires no user-specified magic parameters. CLOCK is a conservative algorithm, so it is k/(k − h + 1)-competitive.
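To make the clock mechanism described above concrete, the following is a minimal sketch of a second-chance/clock replacer in Python. It is an illustration only, not code from any particular operating system; the frame count and reference string are hypothetical values chosen for the example.

```python
# Minimal sketch of the clock (second-chance) policy described above.
# The frame count and reference string are hypothetical illustrative values.

def clock_replace(reference_string, num_frames):
    frames = [None] * num_frames      # page held in each frame
    ref_bits = [0] * num_frames       # simulated hardware "referenced" bits
    hand = 0                          # clock hand position
    faults = 0

    for page in reference_string:
        if page in frames:            # hit: hardware would set the R bit
            ref_bits[frames.index(page)] = 1
            continue
        faults += 1
        # Advance the hand, giving referenced pages a second chance.
        while ref_bits[hand] == 1:
            ref_bits[hand] = 0
            hand = (hand + 1) % num_frames
        frames[hand] = page           # victim found (or frame still empty)
        ref_bits[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

print(clock_replace([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```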
Page replacement algorithms: Least recently used The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible. Page replacement algorithms: The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference, which is a very time-consuming process. Page replacement algorithms: Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires the value equal to the counter at the time of page access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter and swaps it out. Page replacement algorithms: Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations. One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool. Page replacement algorithms: On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch into a suitable replacement algorithm, like Most Recently Used (MRU). Page replacement algorithms: Variants on LRU LRU-K evicts the page whose K-th most recent access is furthest in the past. For example, LRU-1 is simply LRU whereas LRU-2 evicts pages according to the time of their penultimate access. LRU-K improves greatly on LRU with regard to locality in time. The ARC algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access. It is particularly resistant to sequential scans. Page replacement algorithms: The 2Q algorithm improves upon the LRU and LRU/2 algorithms. By having two queues, one for hot-path items and the other for slow-path items, items are first placed in the slow-path queue and moved to the hot-path queue after a second access. Because references to added items are held longer than in the LRU and LRU/2 algorithms, the hot-path queue is of higher quality, which improves the hit rate of the cache. A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004. LRU is a marking algorithm, so it is k/(k − h + 1)-competitive. Page replacement algorithms: Random Random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy (Rhodehamel 1989). 
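The relative behaviour of the policies described above can be illustrated by counting page faults on a single reference string. The sketch below compares FIFO, LRU and the clairvoyant OPT (Bélády) policy; the reference string and frame count are made-up illustrative values, not measurements from the literature.

```python
# Count page faults for FIFO, LRU and OPT (Bélády) on one reference string.
# The reference string and frame count below are hypothetical examples.
from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                     # evict the oldest arrival
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # most recently used at the back
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict the least recently used
            mem[p] = True
    return faults

def opt_faults(refs, frames):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            # Evict the page whose next use is farthest in the future.
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            mem.remove(max(mem, key=next_use))
        mem.add(p)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]    # hypothetical reference string
for name, fn in [("FIFO", fifo_faults), ("LRU", lru_faults), ("OPT", opt_faults)]:
    print(name, fn(refs, 3))
```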
Page replacement algorithms: Not frequently used (NFU) The not frequently used (NFU) page replacement algorithm requires a counter, and every page has one counter of its own which is initially set to 0. At each clock interval, all pages that have been referenced within that interval will have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary. Page replacement algorithms: The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages which were heavily used during the first pass, but are not needed in the second pass will be favoured over pages which are comparably lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU will perform similarly, such as an OS boot-up. Thankfully, a similar and better algorithm exists, and its description follows. Page replacement algorithms: The not frequently used page-replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values. Page replacement algorithms: Aging The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen. Page replacement algorithms: Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price. Page replacement algorithms: The Python code below simulates the aging algorithm. Counters Vi are initialized with 0 and updated as described above via Vi ← (Ri ≪ (k − 1)) | (Vi ≫ 1), using shift operators, where k is the counter width in bits. For a given example of R-bits for 6 pages over 5 clock ticks, the function prints the R-bits for each clock tick t and the individual counter values Vi for each page in binary representation. 
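A minimal sketch of such a simulation is given below. The 8-bit counter width and the R-bit values for 6 pages over 5 clock ticks are hypothetical choices for illustration, not the data of any original example.

```python
# Minimal sketch of the aging simulation described above: counters V_i start
# at 0 and are updated each tick as V_i = (R_i << (k-1)) | (V_i >> 1).
# The R-bit values below are hypothetical; k is the counter width in bits.

def simulate_aging(r_bits_per_tick, k=8):
    num_pages = len(r_bits_per_tick[0])
    counters = [0] * num_pages
    for t, r_bits in enumerate(r_bits_per_tick):
        for i, r in enumerate(r_bits):
            counters[i] = (r << (k - 1)) | (counters[i] >> 1)
        print(f"t={t}  R={r_bits}  V=" +
              " ".join(format(v, f"0{k}b") for v in counters))
    return counters

# Hypothetical referenced bits for 6 pages over 5 clock ticks.
simulate_aging([
    [1, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
])
```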
Page replacement algorithms: Longest distance first (LDF) page replacement algorithm The basic idea behind this algorithm is locality of reference as used in LRU, but the difference is that in LDF, locality is based on distance, not on the references used. In LDF, the page that is at the longest distance from the current page is replaced. If two pages are at the same distance, then the page which is next to the current page in anti-clockwise rotation will be replaced. Implementation details: Techniques for hardware with no reference bit Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one. Implementation details: One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store will be placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold. Implementation details: Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the results of a LRU or NRU mechanism and the overall effect has similarities to the Second-Chance algorithm described earlier. Implementation details: Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables – the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table. Implementation details: Page cache in Linux Linux uses a unified page cache for brk and anonymous mmaped regions. This includes the heap and stack of user-space programs. It is written to swap when paged out. Non-anonymous (file-backed) mmaped regions. If present in memory and not privately modified, the physical page is shared with the file cache or buffer. Shared memory acquired through shm_open. The tmpfs in-memory filesystem; written to swap when paged out. The file cache; written to the underlying block storage (possibly going through the buffer, see below) when paged out. 
Implementation details: The cache of block devices, called the "buffer" by Linux (not to be confused with other structures also called buffers, like those used for pipes and buffers used internally in Linux); written to the underlying storage when paged out. The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB in ARMv8, x86 and x86-64) with some pages of the next larger size (2 MiB in x86-64) called "huge pages" by Linux. The pages in the page cache are divided into an "active" set and an "inactive" set. Both sets keep a LRU list of pages. In the basic case, when a page is accessed by a user-space program it is put in the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves the pages from the active set to the inactive set as needed so that the active set is smaller than the inactive set. When a page is moved to the inactive set it is removed from the page table of any process address space, without being paged out of physical memory. When a page is removed from the inactive set, it is paged out of physical memory. The sizes of the "active" and "inactive" lists can be queried from /proc/meminfo in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)". Working set: The working set of a process is the set of pages expected to be used by that process during some time interval. The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler).
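As a small practical illustration, the sizes of the active and inactive lists mentioned above can be read on a Linux system by parsing the listed fields of /proc/meminfo, as in the following sketch (Linux-only; assumes those fields are present as described).

```python
# Read the sizes of Linux's active/inactive page lists from /proc/meminfo.
# Assumes a Linux system; the fields are those named in the text above.

FIELDS = ("Active", "Inactive", "Active(anon)", "Inactive(anon)",
          "Active(file)", "Inactive(file)")

def read_lru_list_sizes(path="/proc/meminfo"):
    sizes = {}
    with open(path) as f:
        for line in f:
            name, _, value = line.partition(":")
            if name in FIELDS:
                sizes[name] = int(value.split()[0])   # values are in kB
    return sizes

if __name__ == "__main__":
    for name, kb in read_lru_list_sizes().items():
        print(f"{name:16s} {kb} kB")
```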
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clapper loader** Clapper loader: A clapper loader or second assistant camera (2nd AC) is part of a film crew whose main functions are that of loading the raw film stock into camera magazines, operating the clapperboard (slate) at the beginning of each take, marking the actors as necessary, and maintaining all records and paperwork for the camera department. The name "clapper loader" tends to be used in the United Kingdom and Commonwealth, while "second assistant camera" tends to be favored in the United States, but the job is essentially the same whichever title is used. The specific responsibilities and division of labor within the department will almost always vary depending on the circumstances of the shoot. Functions: Clapper loaders have a very important role as practically the only people on set who directly and physically oversee the state of the undeveloped negative. The loader – the only person who actually handles the negative between the manufacturer and the laboratory – thus can easily render an entire day's work useless if the film is handled improperly. Additionally, the loader usually controls all records with regard to the film stock – from when it is received until when it is sent out to the lab; if this information is miscommunicated or missing, this too can destroy an expensive shoot. Furthermore, the loader usually has much more to do in addition to these tasks. Noted director of photography Oliver Stapleton has written on his website: The LOADER loads the camera, oddly enough, with film made by either Kodak or Fuji. Loading may not sound like much of a job, but in actuality it is very important. If the wrong film is in the camera, or if it gets loaded twice, or lost, or put in the wrong can, then the scene which corresponded to: Scene 56 – The Army advanced over the hill, the jets dropped their bombs, and the volcano erupted... could be lost. When this happens the Loader can become deeply unpopular very quickly. Kubrick fired one loader I know on his first day of work for walking across the set holding a magazine upside down. Not Kubrick’s first day of work – the Loader's. This was a trifle harsh, but there is a right way to do the job, and the rules are there for a very good reason. If you screw up the minimum cost is about $20,000 and the max any figure you might care to imagine. 
Duties: A full description of the job duties includes the following (although different shoots may not always require all of these): generally assisting the rest of the camera crew (focus puller, camera operator, director of photography); utilizing the camera trainee, film loader, and/or camera runner if one has been brought onto the production; keeping inventory of all equipment, film, and expendables; requesting film stock as needed; securing the equipment; unloading/loading equipment off/on the camera truck daily if necessary; checking loading materials and spaces to prevent light leaks; cleaning and keeping clean the magazines and the loading environment; organizing and cleaning the equipment space; maintaining and cleaning the equipment; loading and unloading of film stock from and to the magazines; labelling of equipment, boxes, magazines, and storage spaces; marking actors and props (leaving a marker of their positions as the scene is blocked for the purpose of measuring distance from the camera so that its focus can be adjusted throughout the scene); marking and operating the clapperboard properly; keeping meticulous and accurate camera notes; writing negative report sheets in detail; interfacing with continuity in order to note which takes to print; charging of batteries for camera and accessories; preparation of film to be sent to the lab; keeping records of time, per diems, and expenses for the entire camera crew; liaising regularly with production, rental houses, editing, laboratories, and unions; recordkeeping of all camera-related paperwork, including negative reports, daily stock reports, film inventory reports, lab orders, rental contracts, and expendable orders; ensuring that all instructions from the director of photography are passed along properly to labs and post houses; and relaying reports from the lab about the rushes to the director of photography.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solar eclipse of July 14, 1749** Solar eclipse of July 14, 1749: An annular solar eclipse occurred on July 14, 1749. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Description: The eclipse was visible in much of South America except for Guayaquil, New Granada (now in Ecuador) and the area around it and the southern tip of the continent, Florida (then also known as Spanish Florida) and the Caribbean, the Atlantic, much of Africa except for the Ottoman lands of a part of Tunis including Philippeville (now Bizerte) and Carthage, Cyrenaica, Egypt and East Nubia as well as the Somali Peninsula, and much of Spain and Portugal except for the Pyrenees. It was also visible in a small part of the Indian Ocean and hundreds of miles (or kilometers) offshore from Antarctica. It was part of solar saros 132. The umbral portion, which was as far as 141 km (88 mi), included the northern Amazon, Roraima, British Guiana (now Guyana), Surinam (then also Dutch Guyana) and French Guiana in South America; in Africa it passed 85–95 miles (135–150 km) south of the island of Santiago in (Portuguese) Cape Verde, and on the mainland it included the Bijagós Archipelago, Conakry, present-day Ivory Coast and Ghana, Angola, Great Zimbabwe (now Zimbabwe) and Mozambique; off the mainland, it included the southernmost tip of Madagascar. The greatest eclipse occurred just over a mile (2 km) west of Koulikoro in Tonkpi in present-day Ivory Coast at 7.8 N, 7.2 W at 12:19 UTC and lasted for over 4 minutes. The eclipse showed up to 50% obscuration in Santo Domingo (now in the Dominican Republic), Rio de Oro, the Tuareg lands, the fringes of Mali, Songhai, Bornu, Wadai (now part of Chad), Kivu, Uganda, the Kilimanjaro and Serengeti regions (now part of Tanzania), and on the other side the edges of the Amazon, Tocantins and Recife in Brazil and the southernmost area of Africa. Description: The eclipse started in South America, with a bit of the Pacific Ocean around it, and ended at sunset east of the Mascarene Islands (comprising Réunion and Mauritius). The subsolar point was in Mali. In some areas that had monsoon rains, mainly within the rim areas of the eclipse, the eclipse was not seen.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knowledge Utilization Research Center** Knowledge Utilization Research Center: The Knowledge Utilization Research Center (Persian: مركز تحقيقات بهره برداري از دانش سلامت) is one of the Tehran University of Medical Sciences' research centers, working in the area of knowledge translation (KT). Background: In 2006 the Knowledge Utilization Research Center began its work at Tehran University of Medical Sciences under the title of the 'KTE Study Group' in the 'Center of Academic and Health Policy'. In less than two years the center succeeded in publishing more than 20 research papers in national and international journals. In these years the center also ran several research projects in the field of knowledge translation. In 2008 it was approved as a research center by the Ministry of Health and Medical Education (Iran). Mission statement: The Knowledge Utilization Research Center aims to produce knowledge, localize it, and promote policies, methods and activities that lead to the better utilization of health knowledge in Iran. The goal of this center is to create change in health decision makers' behavior, i.e. to encourage decisions made on the basis of scientific and research evidence, on the one hand, and to strengthen researchers' efforts in transferring research results and improve their communication environment, on the other. Mission statement: To achieve these aims, the center has defined strategies for its 2014 vision based on people, policy makers and managers, health service providers, and researchers. Achievements: Education Workshops The Knowledge Utilization Research Center has designed and executed different workshops, such as Knowledge Transfer and Exchange, Adaptation of Clinical Guidelines, Systematic Review and Meta Analysis, and Economic Evaluation, which have been held at the national and international level (sponsored by the Regional Office of the Eastern Mediterranean (EMRO)). Capacity Building Researchers employed at the Knowledge Utilization Research Center hold educational classes at the center once a week and discuss the latest articles and project results from around the world. Research Self Assessment Tool The self-assessment tool for knowledge translation has been designed by the center to assess the knowledge translation status in research organizations (research centers, faculties and universities). Website To strengthen and create an appropriate environment for promoting the culture of knowledge translation, the center's website has been designed in the three languages of Persian, English and Arabic and includes various sections such as online learning, a bank of ideas, a newsletter, etc. KTE Newsletter The Knowledge Translation newsletter is published monthly by the center to raise awareness among researchers and research users on knowledge translation. Publications: Books Knowledge Translation and Utilization of Research Results Published in 2008 as a Persian book, this is a review of knowledge translation models and theories and also an introduction to its methods. Research Studies in Zone 17 Presented as a Narrative Image This booklet has been published to inform the community of the applied research studies conducted in Zone 17 of Tehran Municipality and has been presented as a narrative image and in lay language. Articles The results of KURC's research projects have been published in the form of papers in various national and international journals. These projects are designed in different areas of KT: Knowledge Translation models, tools and methods. Knowledge Translation and researchers. 
Knowledge Translation and research organizations. Knowledge Translation and policy makers. Knowledge Translation and media.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pre-credit** Pre-credit: In film production, the pre-credit is the section of the film which is shown before the opening or closing credits are shown. Many films will by common convention have a short scene before the credits to introduce characters who may, or may not, become crucial to the film's plot. This sequence is normally an expositional scene with either an obvious important plot point or an event which is seemingly minor but whose significance will later in the film become apparent. Pre-credit: A characteristic of pre-credit scenes in the horror genre is a character (seemingly a main character) who is killed quickly, as a heralding "warning kill" of the antagonist. For example, Cube. Pre-credit: The James Bond franchise has become well known for elaborate high-concept pre-credit sequences, sometimes over ten minutes in length. Television series often have a pre-credit sequence, especially ones from the mid-1960s onward. (Such series as Captain Kangaroo, The Dick Van Dyke Show, The Andy Griffith Show, the first incarnation of The Twilight Zone, I Love Lucy, and the Disney anthology television series did not.) One series famous for its pre-credits is Law & Order. Their famous sound effect will close the pre-credit after each episode's victim is discovered.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Starblanket** Starblanket: Starblanket may refer to: A blanket (in Native culture) sewn together from different articles of fabric that form a star in the middle. It is typically given to people as a gift for special occasions (birth, graduation, marriage, etc.). People: Ahtahkakoop (Cree: Atāhkakohp, "Starblanket", c. 1816–1896), Cree chief Ahchuchhwahauhhatohapit, or Ahchacoosacootacoopits (Cree: Acāhkosa kā-otakohpit, "[One who has] Star[s for a ]blanket"), Cree chief Noel Starblanket (born 1946), First Nation leader
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flight instruments** Flight instruments: Flight instruments are the instruments in the cockpit of an aircraft that provide the pilot with data about the flight situation of that aircraft, such as altitude, airspeed, vertical speed, heading and much more other crucial information in flight. They improve safety by allowing the pilot to fly the aircraft in level flight, and make turns, without a reference outside the aircraft such as the horizon. Visual flight rules (VFR) require an airspeed indicator, an altimeter, and a compass or other suitable magnetic direction indicator. Instrument flight rules (IFR) additionally require a gyroscopic pitch-bank (artificial horizon), direction (directional gyro) and rate of turn indicator, plus a slip-skid indicator, adjustable altimeter, and a clock. Flight into instrument meteorological conditions (IMC) require radio navigation instruments for precise takeoffs and landings.: 3–1 The term is sometimes used loosely as a synonym for cockpit instruments as a whole, in which context it can include engine instruments, navigational and communication equipment. Many modern aircraft have electronic flight instrument systems. Flight instruments: Most regulated aircraft have these flight instruments as dictated by the US Code of Federal Regulations, Title 14, Part 91. They are grouped according to pitot-static system, compass systems, and gyroscopic instruments.: 3–1 Pitot-static systems: Instruments which are pitot-static systems use air pressure differences to determine speed and altitude. Pitot-static systems: Altimeter The altimeter shows the aircraft's altitude above sea-level by measuring the difference between the pressure in a stack of aneroid capsules inside the altimeter and the atmospheric pressure obtained through the static system. The most common unit for altimeter calibration worldwide is hectopascals (hPa), except for North America and Japan where inches of mercury (inHg) are used. The altimeter is adjustable for local barometric pressure which must be set correctly to obtain accurate altitude readings, usually in either feet or meters. As the aircraft ascends, the capsules expand and the static pressure drops, causing the altimeter to indicate a higher altitude. The opposite effect occurs when descending. With the advancement in aviation and increased altitude ceiling, the altimeter dial had to be altered for use both at higher and lower altitudes. Hence when the needles were indicating lower altitudes i.e. the first 360-degree operation of the pointers was delineated by the appearance of a small window with oblique lines warning the pilot that he or she is nearer to the ground. This modification was introduced in the early sixties after the recurrence of air accidents caused by the confusion in the pilot's mind. At higher altitudes, the window will disappear.: 3–3 Airspeed indicator The airspeed indicator shows the aircraft's speed relative to the surrounding air. Knots is the currently most used unit, but kilometers per hour is sometimes used instead. The airspeed indicator works by measuring the ram-air pressure in the aircraft's Pitot tube relative to the ambient static pressure. The indicated airspeed (IAS) must be corrected for nonstandard pressure and temperature in order to obtain the true airspeed (TAS). 
The instrument is color coded to indicate important airspeeds such as the stall speed, never-exceed airspeed, or safe flap operation speeds.: 3-7 to 3-8 Vertical speed indicator The VSI (also sometimes called a variometer, or rate of climb indicator) senses changing air pressure, and displays that information to the pilot as a rate of climb or descent in feet per minute, meters per second or knots.: 3-8 to 3-9 Compass systems: Magnetic compass The compass shows the aircraft's heading relative to magnetic north. Errors include Variation, or the difference between magnetic and true direction, and Deviation, caused by the electrical wiring in the aircraft, which requires a Compass Correction Card. Additionally, the compass is subject to Dip Errors. While reliable in steady level flight it can give confusing indications when turning, climbing, descending, or accelerating due to the inclination of the Earth's magnetic field. For this reason, the heading indicator is also used for aircraft operation, but periodically calibrated against the compass.: 3-9 to 3-13, 3–19 Gyroscopic systems: Attitude Indicator The attitude indicator (also known as an artificial horizon) shows the aircraft's relation to the horizon. From this the pilot can tell whether the wings are level (roll) and if the aircraft nose is pointing above or below the horizon (pitch).: 3-18 to 3-19  Attitude is always presented to users in the unit degrees (°). The attitude indicator is a primary instrument for instrument flight and is also useful in conditions of poor visibility. Pilots are trained to use other instruments in combination should this instrument or its power fail. Gyroscopic systems: Heading indicator The heading indicator (also known as the directional gyro, or DG) displays the aircraft's heading in compass points, and with respect to magnetic north when set with a compass. Bearing friction causes drift errors from precession, which must be periodically corrected by calibrating the instrument to the magnetic compass.: 3-19 to 3-20  In many advanced aircraft (including almost all jet aircraft), the heading indicator is replaced by a horizontal situation indicator (HSI) which provides the same heading information, but also assists with navigation. Gyroscopic systems: Turn indicator These include the Turn-and-Slip Indicator and the Turn Coordinator, which indicate rotation about the longitudinal axis. They include an inclinometer to indicate if the aircraft is in Coordinated flight, or in a Slip or Skid. Additional marks indicate a Standard rate turn.: 3-20 to 3-22  The turn rate is most commonly expressed in either degrees per second (deg/s) or minutes per turn (min/tr). Flight director systems: These include the Horizontal Situation Indicator (HSI) and Attitude Director Indicator (ADI). The HSI combines the magnetic compass with navigation signals and a Glide slope. The navigation information comes from a VOR/Localizer, or GPS. The ADI is an Attitude Indicator with computer-driven steering bars, a task reliever during instrument flight.: 3-22 to 3-23, 7–10 Navigational systems: Very-High Frequency Omnidirectional Range (VOR) The VOR indicator instrument includes a Course deviation indicator (CDI), Omnibearing Selector (OBS), TO/FROM indicator, and Flags. The CDI shows an aircraft's lateral position in relation to a selected radial track. 
It is used for orientation, tracking to or from a station, and course interception.: 7-8 to 7-11  On the instrument, the vertical needle indicates the lateral position of the selected track. A horizontal needle allows the pilot to follow a glide slope when the instrument is used with an ILS. Navigational systems: Nondirectional Radio Beacon (NDB) The Automatic direction finder (ADF) indicator instrument can be a fixed-card, movable card, or a Radio magnetic indicator (RMI). An RMI is remotely coupled to a gyrocompass so that it automatically rotates the azimuth card to represent aircraft heading.: 7-3 to 7-4  While simple ADF displays may have only one needle, a typical RMI has two, coupled to different ADF receivers, allowing for position fixing using one instrument. Layout: Most aircraft are equipped with a standard set of flight instruments which give the pilot information about the aircraft's attitude, airspeed, and altitude. Layout: T arrangement Most US aircraft built since the 1940s have flight instruments arranged in a standardized pattern called the "T" arrangement. The attitude indicator is in the top center, airspeed to the left, altimeter to the right and heading indicator under the attitude indicator. The other two, turn-coordinator and vertical-speed, are usually found under the airspeed and altimeter, but are given more latitude in placement. The magnetic compass will be above the instrument panel, often on the windscreen centerpost. In newer aircraft with glass cockpit instruments the layout of the displays conforms to the basic T arrangement. Layout: Early history In 1929, Jimmy Doolittle became the first pilot to take off, fly and land an airplane using instruments alone, without a view outside the cockpit. In 1937, the British Royal Air Force (RAF) chose a set of six essential flight instruments which would remain the standard panel used for flying in instrument meteorological conditions (IMC) for the next 20 years. They were: altimeter (feet), airspeed indicator (knots), turn and bank indicator (turn direction and coordination), vertical speed indicator (feet per minute), artificial horizon (attitude indication), and directional gyro / heading indicator (degrees). This panel arrangement was incorporated into all RAF aircraft built to official specification from 1938, such as the Miles Master, Hawker Hurricane, Supermarine Spitfire, and 4-engined Avro Lancaster and Handley Page Halifax heavy bombers, but not the earlier light single-engined Tiger Moth trainer, and minimized the type-conversion difficulties associated with blind flying, since a pilot trained on one aircraft could quickly become accustomed to any other if the instruments were identical. Layout: This basic six set, also known as a "six pack", was also adopted by commercial aviation. After the Second World War the arrangement was changed to: (top row) airspeed, artificial horizon, altimeter, (bottom row) turn and bank indicator, heading indicator, vertical speed. Layout: Further development In glass cockpits the flight instruments are shown on monitors. The primary flight display (PFD) is given a central place on the panel, superseding the artificial horizon, often with a horizontal situation indicator next to it or integrated into the PFD. The indicated airspeed, altimeter, and vertical speed indicator are displayed as moving "tapes" with the indicated airspeed to the left of the horizon and the altimeter and the vertical speed to the right in the same layout as in most older style "clock cockpits".
Layout: Different significance and some other instrumentation In good weather a pilot can fly by looking out the window. However, when flying in cloud or at night, at least one gyroscopic instrument is necessary to orient the aircraft: an artificial horizon, a turn and slip indicator, or a gyrocompass. Layout: The vertical speed indicator, or VSI, is more of "a good help" than absolutely essential. On jet aircraft it displays the vertical speed in thousands of feet per minute, usually in the range −6 to +6 (i.e., ±6,000 ft/min). The gyrocompass can be used for navigation, but it is indeed a flight instrument as well: it is needed to align the aircraft's heading with that of the landing runway. The indicated airspeed (IAS) display is the second most important instrument and shows the airspeed very accurately in the range of 45 to 250 kn (83 to 463 km/h). At higher altitudes a Machmeter is used instead, to prevent the aircraft from exceeding its speed limits (overspeed). A true airspeed (TAS) indicator exists on some aircraft; it shows airspeed in knots from 200 kn (370 km/h) upward (like the Machmeter, it is not really a flight instrument). The altimeter displays the altitude in feet, but must be corrected to local air pressure at the landing airport. The altimeter may be adjusted to show an altitude of zero feet on the runway, but it is far more common to adjust it to show the actual altitude when the aircraft has landed. In the latter case pilots must keep the runway elevation in mind. However, a radio altimeter (displaying the height above the ground when lower than around 2,000–2,500 ft (610–760 m)) has been standard for decades. This instrument is, however, not among the "big five", but must still be considered a flight instrument.
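How the altimeter reading depends on its barometric subscale setting can be sketched with the ISA barometric formula. This is a simplified illustration, not from the source: it treats the subscale (QNH) setting as the reference pressure and ignores the mechanical details of real altimeters, and the constants and function name are assumptions.

```python
def indicated_altitude_m(static_pressure_hpa: float, subscale_hpa: float = 1013.25) -> float:
    """Approximate altitude an ISA-calibrated altimeter indicates for a given
    static pressure and barometric subscale setting (simplified model)."""
    # ISA form: h = 44330.77 * (1 - (p / p_ref) ** 0.190263), in metres
    return 44_330.77 * (1.0 - (static_pressure_hpa / subscale_hpa) ** 0.190263)

# With the standard setting (1013.25 hPa), a static pressure of 850 hPa indicates
# about 1,457 m; raising the subscale to a local QNH of 1023 hPa raises the
# indication to about 1,535 m (roughly 27 ft, or 8 m, per hectopascal).
print(round(indicated_altitude_m(850.0)))
print(round(indicated_altitude_m(850.0, 1023.0)))
```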
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ribose-phosphate diphosphokinase** Ribose-phosphate diphosphokinase: Ribose-phosphate diphosphokinase (or phosphoribosyl pyrophosphate synthetase or ribose-phosphate pyrophosphokinase) is an enzyme that converts ribose 5-phosphate into phosphoribosyl pyrophosphate (PRPP). It is classified under EC 2.7.6.1. Ribose-phosphate diphosphokinase: The enzyme is involved in the synthesis of nucleotides (purines and pyrimidines), cofactors NAD and NADP, and amino acids histidine and tryptophan, linking these biosynthetic processes to the pentose phosphate pathway, from which the substrate ribose 5-phosphate is derived. Ribose 5-phosphate is produced by the HMP Shunt Pathway from Glucose-6-Phosphate. The product phosphoribosyl pyrophosphate acts as an essential component of the purine salvage pathway and the de novo synthesis of purines. Dysfunction of the enzyme would thereby undermine purine metabolism. Ribose-phosphate pyrophosphokinase exists in bacteria, plants, and animals, and there are three isoforms of human ribose-phosphate pyrophosphokinase. In humans, the genes encoding the enzyme are located on the X chromosome. Reaction mechanism: Ribose-phosphate diphosphokinase transfers the diphosphoryl group from Mg-ATP (Mg2+ coordinated to ATP) to ribose 5-phosphate. The enzymatic reaction begins with the binding of ribose 5-phosphate, followed by binding of Mg-ATP to the enzyme. In the transition state upon binding of both substrates, the diphosphate is transferred. The enzyme first releases AMP before releasing the product phosphoribosyl pyrophosphate. Experiments using oxygen 18 labelled water demonstrate that the reaction mechanism proceeds with the nucleophilic attack of the anomeric hydroxyl group of ribose 5-phosphate on the beta-phosphorus of ATP in an SN2 reaction. Structure: Crystallization and X-ray diffraction studies elucidated the structure of the enzyme, which was isolated by cloning, protein expression, and purification techniques. One subunit of ribose-phosphate diphosphokinase consists of 318 amino acids; the active enzyme complex consists of three homodimers (or six subunits, a hexamer). The structure of one subunit is a five-stranded parallel beta sheet (the central core) surrounded by four alpha helices at the N-terminal domain and five alpha helices at the C-terminal domain, with two short anti-parallel beta-sheets extending from the core. Structure: The catalytic site of the enzyme binds ATP and ribose 5-phosphate. The flexible loop (Phe92–Ser108), pyrophosphate binding loop (Asp171–Gly174), and flag region (Val30–Ile44 from an adjacent subunit) comprise the ATP binding site, located at the interface between two domains of one subunit. The flexible loop is so named because of its large variability in conformation. The ribose 5-phosphate binding site consists of residues Asp220–Thr228, located in the C-terminal domain of one subunit. Structure: The allosteric site, which binds ADP, consists of amino acid residues from three subunits. Function: The product of this reaction, phosphoribosyl pyrophosphate (PRPP), is used in numerous biosynthesis (de novo and salvage) pathways. PRPP provides the ribose sugar in de novo synthesis of purines and pyrimidines, used in the nucleotide bases that form RNA and DNA. PRPP reacts with orotate to form orotidylate, which can be converted to uridylate (UMP). UMP can then be converted to the nucleotide cytidine triphosphate (CTP). 
The reaction of PRPP, glutamine, and ammonia forms 5-Phosphoribosyl-1-amine, a precursor to inosinate (IMP), which can ultimately be converted to adenosine triphosphate (ATP) or guanosine triphosphate (GTP). PRPP plays a role in purine salvage pathways by reacting with free purine bases to form adenylate, guanylate, and inosinate. PRPP is also used in the synthesis of NAD: the reaction of PRPP with nicotinic acid yields the intermediate nicotinic acid mononucleotide. Regulation: Ribose-phosphate diphosphokinase requires Mg2+ for activity; the enzyme acts only on ATP coordinated with Mg2+. Ribose-phosphate diphosphokinase is regulated by phosphorylation and allostery. It is activated by phosphate and inhibited by ADP; it is suggested that phosphate and ADP compete for the same regulatory site. At normal concentrations, phosphate activates the enzyme by binding to its allosteric regulatory site. However, at high concentrations, phosphate is shown to have an inhibitory effect by competing with the substrate ribose 5-phosphate for binding at the active site. ADP is the key allosteric inhibitor of ribose-phosphate diphosphokinase. It has been shown that at lower concentrations of the substrate ribose 5-phosphate, ADP may inhibit the enzyme competitively. Ribose-phosphate pyrophosphokinase is also inhibited by some of its downstream biosynthetic products. Role in disease: Because its product is a key compound in many biosynthetic pathways, ribose-phosphate diphosphokinase is involved in some rare disorders and X-linked recessive diseases. Mutations that lead to super-activity (increased enzyme activity or de-regulation of the enzyme) result in purine and uric acid overproduction. Super-activity symptoms include gout, sensorineural hearing loss, weak muscle tone (hypotonia), impaired muscle coordination (ataxia), hereditary peripheral neuropathy, and neurodevelopmental disorder. Mutations that lead to loss-of-function in ribose-phosphate diphosphokinase result in Charcot-Marie-Tooth disease and Arts syndrome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Charlotte Froese Fischer** Charlotte Froese Fischer: Charlotte Froese Fischer (born 1929) is a Canadian-American applied mathematician and computer scientist noted for the development and implementation of the Multi-Configurational Hartree–Fock (MCHF) approach to atomic-structure calculations and its application to the description of atomic structure and spectra. The experimental discovery of the negative ion of calcium was motivated by her theoretical prediction of its existence. This was the first known anion of a Group 2 element. Its discovery was cited in Froese Fischer's election to Fellow of the American Physical Society. Early life: Charlotte Froese was born on September 21, 1929, in the village of Stara Mykolaivka (formerly Pravdivka, and Nikolayevka), in the Donetsk region, in the present-day Ukraine, to parents of Mennonite descent. Her parents immigrated to Germany in 1929 on the last train allowed to cross the border before its closure by Soviet authorities. After a few months in a refugee camp, her family was allowed to immigrate to Canada, where they eventually established themselves in Chilliwack, British Columbia. Education and research: She obtained both a B.A. degree, with honors, in Mathematics and Chemistry and an M.A. degree in Applied Mathematics from the University of British Columbia in 1952 and 1954, respectively. She then obtained her Ph.D. in Applied Mathematics and Computing at Cambridge University in 1957, pursuing coursework in quantum theory with Paul Dirac. She worked under the supervision of Douglas Hartree, whom she assisted in programming the Electronic Delay Storage Automatic Calculator (EDSAC) for atomic-structure calculations. Education and research: She served on the mathematics faculty of the University of British Columbia from 1957 till 1968, where she introduced numerical analysis and computer courses into the curriculum and was instrumental in the formation of the Computer Science Department. Froese Fischer spent 1963-64 at the Harvard College Observatory, where she extended her research on atomic-structure calculations. While at Harvard, she was the first woman scientist to be awarded an Alfred P. Sloan Fellowship. In 1991 she became a Fellow of the American Physical Society, in part for her contribution to the discovery of negative calcium. In 1995 she was elected a member of the Royal Physiographic Society in Lund, in 2004 a foreign member of the Lithuanian Academy of Sciences, and in 2015 she was awarded an Honorary Doctorate in Technology from Malmö University, Sweden. Contributions: Froese Fischer is the author of over 300 research articles on computational atomic theory, many of which have had far-reaching impact in the area of atomic-structure calculations. The early version of the MCHF program, published in the first volume of Computer Physics Communications, received two Citation Classics Awards in 1987. She authored an influential monograph on Hartree-Fock approaches to the first-principles calculation of atomic structure, and coauthored a substantial successor work. One of her largest efforts in the field is the calculation of the complete lower spectra of the beryllium-like to argon-like isoelectronic sequences, amounting to the publication of data covering 400 journal pages and a total of over 150 ions. Contributions: She also authored a scientific biography of her Ph.D. 
thesis advisor, Douglas Hartree. Froese Fischer is currently an emerita research professor of computer science at Vanderbilt University and a Guest Scientist in the Atomic Spectroscopy Group at NIST. She is the widow of Patrick C. Fischer, himself a noted computer scientist and former professor at Vanderbilt. An autobiographical account of her own life up to the year 2000 was published in Molecular Physics, and a biographical review of her scientific work up to 2019 has been published in Atoms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bipartite matroid** Bipartite matroid: In mathematics, a bipartite matroid is a matroid all of whose circuits have even size. Example: A uniform matroid of rank r on n elements is bipartite if and only if r is an odd number, because the circuits in such a matroid have size r + 1. Relation to bipartite graphs: Bipartite matroids were defined by Welsh (1969) as a generalization of the bipartite graphs, graphs in which every cycle has even size. A graphic matroid is bipartite if and only if it comes from a bipartite graph. Duality with Eulerian matroids: An Eulerian graph is one in which all vertices have even degree; Eulerian graphs may be disconnected. For planar graphs, the properties of being bipartite and Eulerian are dual: a planar graph is bipartite if and only if its dual graph is Eulerian. As Welsh showed, this duality extends to binary matroids: a binary matroid is bipartite if and only if its dual matroid is an Eulerian matroid, a matroid that can be partitioned into disjoint circuits. Duality with Eulerian matroids: For matroids that are not binary, the duality between Eulerian and bipartite matroids may break down. For instance, the rank-4 uniform matroid on six elements is non-bipartite, but its dual, the rank-2 uniform matroid on six elements, is Eulerian, as it can be partitioned into two circuits of size 3. The self-dual rank-3 uniform matroid on six elements is bipartite but not Eulerian. Computational complexity: It is possible to test in polynomial time whether a given binary matroid is bipartite. However, any algorithm that tests whether a given matroid is Eulerian, given access to the matroid via an independence oracle, must perform an exponential number of oracle queries, and therefore cannot take polynomial time.
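For the graphic case mentioned under "Relation to bipartite graphs", bipartiteness can be tested directly on the underlying graph: the circuits of a graphic matroid are the cycles of the graph, so the matroid is bipartite exactly when the graph has no odd cycle. The following is an illustrative sketch (not from the source; function and variable names are invented) using breadth-first 2-colouring.

```python
from collections import deque

def graphic_matroid_is_bipartite(n: int, edges: list[tuple[int, int]]) -> bool:
    """True if the graph on vertices 0..n-1 has no odd cycle, i.e. every cycle
    (circuit of the graphic matroid) has even size."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = [None] * n
    for start in range(n):
        if colour[start] is not None:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] is None:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False  # an odd cycle exists
    return True

# A 4-cycle has only even cycles, so its graphic matroid is bipartite;
# adding a chord creates a 3-cycle (an odd circuit).
print(graphic_matroid_is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))          # True
print(graphic_matroid_is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # False
```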
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Door furniture** Door furniture: Door furniture (British and Australian English) or door hardware (North American English) refers to any of the items that are attached to a door or a drawer to enhance its functionality or appearance. The design of door furniture is an issue for disabled persons who might have difficulty opening or using some kinds of door, for specialists in interior design, and for usability professionals, who often take their didactic examples from door furniture design and use. Items of door furniture fall into several categories, described below. Hinges: A hinge is a component that attaches one edge of a door to the frame, while allowing the other edge to swing from it. It usually consists of a pair of plates, each with a set of open cylindrical rings (the knuckles) attached to them. The knuckles of the two plates are offset from each other and mesh together. A hinge pin is then placed through the two sets of knuckles and usually fixed, to combine the plates and make the hinge a single unit. One door usually has about three hinges, but it can vary. Handles: Doors generally have at least one fixed handle, usually accompanied by a latch (see below). A typical "handle set" is composed of the exterior handle, escutcheon, an independent deadbolt, and the interior package (knob or lever). On some doors the latch is incorporated into a hinged handle that releases when pulled on. See also: Doorknob – A knob or lever on an axle that is rotated to release the bolt; Crash bar or Panic bar; Doorhandles for all types of doors (glass, wooden, etc.); Flush pull handle for sliding glass doors. Locks: A lock is a device that prevents access by those without a key or combination, generally by preventing one or more latches from being operated. It is often accompanied by an escutcheon. Some doors, particularly older ones, will have a keyhole accompanying the lock. Fasteners: Most doors make use of one or more fasteners to hold the door closed. Typical or common fasteners include: Latch – A device that allows one to fasten a door, but doesn't necessarily require an external handle. Bolt – A (nearly always) metal shaft usually internal to the door, attached by cleats or a specific form of bracket, that slides into the jamb to fasten a door. Fasteners: Latch bolt – A bolt that has an angled surface which acts as a ramp to push the bolt in while the door is being closed. By the use of a latch bolt, a door can be closed without having to operate the handle. Fasteners: Deadbolt – Deadbolts usually extend deeper into the frame and are not automatically retractable the way latch bolts are. They are typically manipulated with a lock on the outside and either a lock or a latch on the inside. Deadbolts are generally used for security purposes on external doors in case somebody tries to kick the door in or uses a tool such as a crowbar or a hammer and screwdriver. Fasteners: Strike plate – A plate with a hole in the middle made to receive a bolt. If the strike is for a latch bolt, it typically also includes a small ramped area to help the bolt move inward while the door is being closed. (Also known as just a "strike".) An electric strike variant allows the door to be opened even while the mechanical lock is engaged. Fasteners: Dust socket – A (nearly always) plastic socket that sits under the strike plate, concealing the rough wood of the mortise. Aldrop – A device to keep the two panels of a door in the closed position. It is a set of curved plates fixed on the two panels of the door. 
A rod then passes through the curved plates to keep the door locked. A padlock can be inserted in the rod to lock the door. Accessories: Numerous devices exist to serve specific purposes related to how a door should (or should not) be used. See: Door chain – a device that limits how far the door can be opened, for security; Door closer – mechanical or electromagnetic device to close an open door (for example, in the event of a fire); Door opener – automatic door opening device activated by motion sensors or pressure pads; Door damper – a hydraulic device employed to slow the door's closure; Door knocker; Door stop – used to prevent the door from opening too far or striking another object; Espagnolette (for a window); Fingerplate; Letter box or mail slot; Peephole. A number of items normally accompany doors but are not necessarily mounted on the door itself, such as doorbells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mark (Australian rules football)** Mark (Australian rules football): A mark in Australian rules football is the catch of a kicked ball which earns the catching player a free kick. The catch must be cleanly taken, or deemed by the umpire to have involved control of the ball for sufficient time. A tipped ball, or one that has touched the ground cannot be marked. Since 2002, in most Australian competitions, the minimum distance for a mark is 15 metres (16 yards or 49 feet). Mark (Australian rules football): Marking is one of the most important skills in Australian football. Aiming for a teammate who can mark their kick is the primary focus of any kicking player not kicking for goal. Marking can also be one of the most spectacular and distinctive aspects of the game, and the best mark of the AFL season is awarded with the Mark of the Year, with similar competitions running across smaller leagues. Mark (Australian rules football): The most prolific markers in the history of the Australian Football League, Nick Riewoldt, Matthew Richardson, Stewart Loewe and Gary Dempsey took an average of around eight marks per game. An AFL match between St Kilda and Port Adelaide in 2006 set a record of 303 marks in a single game. Rules: Upon taking a mark, the umpire will blow the whistle to signify the mark and a player is entitled to an unimpeded kick of the ball. The nearest opposition player stands on the spot where the player marked the ball, which is also known as 'the mark' and he becomes 'the man on the mark.' When awarded the free kick, the player can choose to forego their kick to play-on and run into space, with the defending players then allowed to tackle as normal. The player has 10 seconds to move the ball on after a mark unless they are taking a shot on goal in which case they have 30 seconds to take their kick. If the player takes too long to complete their free kick, the umpire will call play-on, rescinding the award of the free kick, which also allows the defenders to tackle as normal. Rules: A mark must be caught cleanly, with the player having complete control of the ball, even if only for a short time. As such, if the ball is punched out from between the player's hands after it is caught, or the ball is dislodged upon hitting the ground, a mark is still paid, even if the ball was held for only an instant. Rules: Although the rules make no provision for two players marking the ball simultaneously, by convention the umpire will award the mark to the man in front, i.e. the player who has the front position in the marking contest. If he cannot determine which player is in front, then a ball-up will result. The mark has been included in the compromise rules used in the International Rules Football series between teams from Australia and Ireland since 1984. Rules: Minimum distance requirement The current minimum distance the ball must travel for a mark to be awarded is 15 metres in any direction; a cleanly caught ball which travels a shorter distance is called 'play on'. This has been the case since 2002; for more than a century before that, the minimum distance was ten metres or ten yards. There are very few ground markings on an Australian rules football field which could be used to measure this distance precisely, leaving the decision on distance up to the best judgement of the umpire; a kick which is too short will typically be met with shouts of 'play on' or 'not fifteen' by the umpire. 
Rules: In the early decades of Australian rules football, the minimum distance was substantially shorter, resulting in a type of play called a "little mark", in which a team could earn a mark by kicking the ball a short distance into the hands of a team-mate standing almost immediately adjacent. Little marks were polarising: they were considered by supporters as one of the game's best features and an effective way for teams to clear scrimmages; and considered by detractors as too difficult to accurately adjudicate and sometimes exploited by crafty players who would disguise a hand-off as a little mark. The minimum distance for a mark changed many times over the early years: until 1877, no minimum; in 1877, six yards; in 1886, five yards; in 1887, two yards – but in practice by the start of the 1890s it was reported that most umpires would pay little marks over only a few inches. Little marking was effectively abolished with the introduction of the ten yard minimum distance in 1897. Rules: Standing the mark Only one player may stand the mark; this restriction was introduced in 1924. Since 2021, a player standing the mark must remain stationary upon taking up the mark until the kicker has disposed of the ball or played on; prior to this, the man on the mark was free to leave the mark or move laterally, provided he did not move over the mark towards the kicker. Breaking this rule is punishable by a 50-metre penalty. If the team chooses not to put a man on the mark, then players may defend the kick from five metres behind the mark; these players are allowed to move. There is a protected area around the kicker, which is a corridor which extends ten metres either side of the line between the man on the mark and the kicker, five metres behind the kicker, and five metres behind the mark. Opposing players may not enter the protected area unless following their direct opponent within two metres; and players who find themselves within the area need to make their best endeavours to leave it. Breaking the rule is also punishable by a 50-metre penalty. Origins of the mark: The combination of kick and mark as the primary means for advancing the ball has been a distinctive feature of Australian football ever since the first rules were created in 1859. The original rules of the game, which were published in The Footballer newspaper in 1859, included the phrase "A mark shall be considered to be a clean catch of the ball, on the full, without it touching the ground". This rule was included in the Victorian Football Association's rules in 1866, and was included in the Australian National Football Council's rules in 1897. Origins of the mark: Other forms of football descended from English public school football games of the 19th century have featured a fair catch, with similar rules to the mark. It was abolished early in the development of soccer and is only used occasionally in rugby union and American football. Origins of the mark: The origin of the term has a few possible sources. In rugby and the early days of soccer, a player would shout 'mark' and mark the ground with their foot. It was formerly a requirement in the Australian game to make such a mark but this is no longer the case. Sometimes a cap which formed part of the uniform was used to show where the fair catch was taken. Origins of the mark: Another source of the term may have been from the traditional Aboriginal game of Marn Grook, which is said to have influenced founder Tom Wills' development of the early forms of Australian football. 
It is claimed that in Marn Grook, jumping to catch the ball, called "mumarki", an Aboriginal word meaning "to catch", results in a free kick. These early influences may be limited in their relevance, as the term "catching the ball" was more commonly used throughout the early 20th century. The term "mark" only became widely used in the 1940s, and used by players and commentators alike by the 1950s. Origins of the mark: Evolution of the overhead mark Early forms of Australian football were characterised by low, short kicks and scrimmages. Marks were taken on the chest as all other marks were seen as dangerous or risky. One of the first players to attempt an overhead mark and high mark was Jack Kerley in 1883. Jack Worrall popularised the high mark between 1885 and 1887 and others followed, ushering in a new era of overhead marking in the sport. However players who leapt for the ball could be pushed in mid air, risking immediate dispossession, if not injury. At a meeting of the Australasian Football Council (AFC) in 1890 a motion was passed banning pushing in the back in a marking contest which was agreed to by its member leagues including Victoria. It was adopted by the newly formed VFL in 1897. While the rule encouraged high marking, players marking from behind were still often penalised. Origins of the mark: In 1907 the AFC introduced the concept of unintentional interference in a marking contest. Spectacular marks subsequently became more common. Types of marks: In Australian football, marks are often described in a combination of the following ways. Overhead mark: catching the ball with hands extended above the head. Contested mark: catching the ball against one or more opponents who are attempting to also mark or spoil the player attempting the mark. This skill is declining in the professional game as coaches prefer to avoid contests. Pack mark: catching the ball against one or more opponents and/or teammates all close to the fall of the ball. High mark: catching the ball whilst jumping up in the air. Stewart "Buckets" Loewe, Matthew Richardson and Simon Madden are notable exponents of the high mark. Types of marks: Spectacular mark: sometimes nicknamed 'specky', 'screamer' or 'hanger', this term is most often used when a mark is taken whilst jumping in the air. Additional elevation is achieved by using the legs to spring off the back or shoulders of one or more opponents and/or teammates. The movement of other players beneath the player marking can cause them to lose balance in mid air and land or fall awkwardly, enhancing the spectacle of the mark. Types of marks: Chest mark: catching the ball and drawing it in to the chest. This is considered the easiest mark to take, and is often used in wet weather. At professional level this skill is discouraged by coaches due to it giving opponents a much better chance of intercepting the ball from most directions. Types of marks: Out in front: catching the ball with arms extended forward from the body. This skill is extremely difficult, particularly with the ball travelling low and at high speeds. At professional level this skill is preferred by coaches, as it gives opponents less chance of spoiling from behind, and if the ball spills, it will be "front and centre" of the player, which makes it much easier for rovers to predict and to execute game strategy. Types of marks: One-handed mark: catching the ball with only one hand. 
Sometimes used in a contested situation where one player's arm is impeded by an opponent, or where the player uses upper body strength to physically fend off their opponent. While spectacular, this skill is discouraged by coaches due to a low percentage of success and is sometimes seen as "showing off" or "lairising". Types of marks: Diving mark: leaping horizontally to catch the ball before it hits the ground. Types of marks: With the flight of the ball: a mark taken running in the direction that the ball is travelling. In order to do this, the player must take their eyes off opposition players sometimes running at a fast pace in the opposite direction. This type of mark is often branded "courageous", because in attempting the mark, the player must ignore the danger of a high speed collision with oncoming players. Wayne Carey and Jonathan Brown were known for their ability to take courageous marks. Types of marks: Standing one's ground: a mark taken by a player who is standing still. These are particularly difficult, because the player must wait in a stationary position, making it much easier for moving opponents to take a better position. There is also a higher risk of collision with incoming players, meaning it requires courage. Backing into a pack: a mark taken by a player who is running or jogging backwards while facing the ball. These are particularly dangerous with an extremely high risk of collision from behind by players running at the ball at high speed. It is also difficult to keep eyes on the ball whilst expecting a high collision from behind. Half volley: technically not a mark. Sometimes players catch the ball so close to the ground that it is difficult to tell whether it hit or bounced off the ground. Sometimes a player is awarded a mark by the umpire's benefit of the doubt. Types of marks: Juggled mark: when a player takes two or more touches of the ball to claim the mark. The player must appear to have had control of the ball to be awarded the mark. Importantly, the mark must be completed within the field of play to be paid as a mark; it should not be paid if the first touch was inside the boundary line, but the last outside. Fingertip mark: when the player is only barely able to hold the ball with his/her fingers at full stretch. This type of mark carries a high risk of injury to fingers. Types of marks: Slips catch: a fingertip mark taken low to the ground, with terminology borrowed directly from cricket. Famous marks: While the Mark of the Year competition has identified many famous marks, other marks include: In the 1970 Grand Final before a record crowd of 121,696, Carlton full forward, and giant of the game, Alex Jesaulenko, took one of the most inspirational marks in the history of 'the Australian game.' Leaping high for a specky over Collingwood's Graeme Jenkin just before half time, the mark inspired a Carlton side that was behind by 44 points at the half. It was retroactively classified as the Mark of the Year. Famous marks: Sydney's Leo Barry leapt into history with his match-saving mark in the final seconds of the 2005 grand final against the West Coast Eagles to seal the game. His contested overhead mark was taken in a congested pack of three teammates and three opposition players. Shaun Smith's and Gary Ablett's marks share the title of Mark of the Century. St Kilda/South Melbourne player Roy Cazaly was renowned for his high marks, giving rise to the catchphrase and song "Up There Cazaly". 
Spoiling the mark: Spoiling is the technique typically employed by opposition defenders to legally stop a player from catching the ball. It is performed as a punching action by hand or fist just before the opposing player has caught the ball in their hands. Spoiling the mark: The rules are quite strict on defensive spoiling methods. Players are not allowed to push other players out of marking contests or make forceful front-on contact with an opponent in a marking contest, if they are not simultaneously attempting to mark or spoil the ball. Also, no high contact is allowed unless such contact is incidental to attempting to mark or spoil the ball. Spoiling the mark: Taking the arms Deliberately taking, hacking or chopping the arms is an infringement committed by players which will result in a free kick. Spoiling the mark: The arm interference free kick was introduced as a specific free kick in the AFL and its affiliates in 2005, although it was paid as a blocking, striking or holding free kick previously. The free kick was designed predominantly to make it easier for forwards to take contested marks by not allowing defending players to punch or pull a marking player's outstretched arms in a marking contest. The rule was introduced by the AFL amidst on-going calls from fans and commentators to take action against the defensive tactic of flooding. The rule does directly limit the effectiveness of defenders, but the AFL has never stated whether or not flooding was the reason for the change. Marking-related injuries: Marking can cause injuries to hands and fingers, including hyperextension, joint and tendon damage, dislocation and fractures. Over a long period of time and with re-injury there can be long-term effects such as chronic injury and debilitating arthritis. To overcome these injuries, some players will strap problem fingers together or strap whole hands, or wear splints or gloves. Some of these injuries require surgery and extended recovery, threatening professional careers. AFL players whose careers were threatened by such injuries include Robert Campbell, Fraser Gehrig, Brett Backwell and Daniel Chick. Some players, such as Backwell and Chick, have opted for amputation of digits in a bid to extend their playing careers and continue to mark the ball.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stichtite** Stichtite: Stichtite is a mineral, a carbonate of chromium and magnesium; formula Mg6Cr2CO3(OH)16·4H2O. Its colour ranges from pink through lilac to a rich purple. It is formed as an alteration product of chromite-bearing serpentine. It occurs in association with barbertonite (the hexagonal polymorph of Mg6Cr2CO3(OH)16·4H2O), chromite and antigorite. Discovered in 1910 on the west coast of Tasmania, Australia, it was first recognised by A.S. Wesley, a former chief chemist with the Mount Lyell Mining and Railway Company, and it was named after Robert Carl Sticht, the manager of the mine. It is observed in combination with green serpentine at Stichtite Hill near the Dundas Extended Mine, Dundas - east of Zeehan, as well as on the southern shore of Macquarie Harbour. It is exhibited in the West Coast Pioneers Museum in Zeehan. The only commercial mine for stichtite serpentine is located on Stichtite Hill. Stichtite has also been reported from the Barberton District, Transvaal; Darwendale, Zimbabwe; near Bou Azzer, Morocco; Cunningsburgh, the Shetland Islands of Scotland; Långban, Värmland, Sweden; the Altai Mountains, Russia; Langmuir Township, Ontario and the Megantic, Quebec; Bahia, Brazil; and the Keonjhar district, Orissa, India. It is sometimes used as a gemstone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nominal category** Nominal category: A nominal category or a nominal group is a group of objects or ideas that can be collectively grouped on the basis of a particular characteristic—a qualitative property. A variable that codes whether each one in a set of observations is in a particular nominal category is called a categorical variable. Valid data operations: A nominal group only has members and non-members. That is, nothing more can be said about the members of the group other than they are part of the group. Nominal categories cannot be numerically organized or ranked. The members of a nominal group cannot be placed in ordinal (sequential) or ratio form. Valid data operations: Nominal categories of data are often compared to ordinal and ratio data, to see if nominal categories play a role in determining these other factors. For example, the effect of race (nominal) on income (ratio) could be investigated by regressing the level of income upon one or more dummy variables that specify race. When nominal variables are to be explained, logistic regression or probit regression is commonly used. Examples: For example, citizenship is a nominal group. A person can either be a citizen of a country or not. One citizen of Canada does not have "more citizenship" than another citizen of Canada; therefore it is impossible to order citizenship according to any sort of mathematical logic. Another example would be "words that start with the letter 'a'". There are thousands of words that start with the letter 'a' but none have "more" of this nominal quality than others. Correlating two nominal categories is thus very difficult, because some relationships that occur are actually spurious, and thus unimportant. For example, trying to figure out whether proportionally more Canadians have first names starting with the letter 'a' than non-Canadians would be a fairly arbitrary, random exercise.
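The dummy-variable regression mentioned above can be sketched in a few lines. This is an illustrative example only: the data, group labels, and variable names are invented, and the point is simply that each level of the nominal variable except a reference level becomes a 0/1 indicator column on which the ratio variable is regressed.

```python
import numpy as np
import pandas as pd

# Invented data: a nominal variable ("group") and a ratio variable ("income").
df = pd.DataFrame({
    "group":  ["A", "B", "C", "A", "B", "C", "A", "B"],
    "income": [42_000, 51_000, 47_500, 39_800, 53_200, 46_100, 41_500, 50_400],
})

# One 0/1 indicator column per category, dropping one level ("A") as the reference.
dummies = pd.get_dummies(df["group"], prefix="group", drop_first=True).astype(float)

# Ordinary least squares: income ~ intercept + group indicators.
X = np.column_stack([np.ones(len(df)), dummies.to_numpy()])
beta, *_ = np.linalg.lstsq(X, df["income"].to_numpy(dtype=float), rcond=None)

# The intercept is the mean income of the reference group; each coefficient is
# the difference between that group's mean and the reference group's mean.
print(dict(zip(["intercept", *dummies.columns], np.round(beta, 1))))
```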
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dibenzopentalene** Dibenzopentalene: Dibenzopentalene (dibenzo[a,e]pentalene or dibenzo[b,f]pentalene) is an organic compound and a hydrocarbon with formula C16H10. It is of some scientific interest as a stable derivative of the highly reactive antiaromatic pentalene by benzannulation. The first derivative was synthesised in 1912 by Brand. The parent compound was reported in 1952. The NICS value for the 5-membered rings is estimated at 7.4 ppm and that of the six-membered rings -9.8 ppm. Aromatic dicationic salts can be obtained by reaction with antimony pentafluoride in sulfuryl chloride. The dianion forms by reduction with lithium metal or deprotonation of 5,10-dihydroindeno[2,1-a]indene with two equivalents of butyllithium. The aromatic nature of the dianion has been confirmed by X-ray analysis. Another isomer of this compound exists called dibenzo[a,f]pentalene with one of the benzene rings positioned on the other available pentalene face.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Easter seals (philately)** Easter seals (philately): An Easter seal is a form of charity label issued to raise funds for charitable purposes. They are issued by the Easterseals charity in the United States, and by the Canadian Easter Seals charities. Easter seals are applied to the front of mail to show support for particular charitable causes. They are distributed along with appeals to donate to the charities they support. Easter seals are a form of Cinderella stamp. They do not have any postal value.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Uterine artery embolization** Uterine artery embolization: Uterine artery embolization is a procedure in which an interventional radiologist uses a catheter to deliver small particles that block the blood supply to the uterine body. The procedure is done for the treatment of uterine fibroids and adenomyosis. This minimally invasive procedure is commonly used in the treatment of uterine fibroids and is also called uterine fibroid embolization. Medical uses: Uterine artery embolization is used to treat bothersome bulk-related symptoms or abnormal or heavy uterine bleeding due to uterine fibroids, or for the treatment of adenomyosis. Fibroid size, number, and location are three potential predictors of a successful outcome. Long-term patient satisfaction outcomes are similar to those of surgery. There is tentative evidence that traditional surgery may result in better fertility. Uterine artery embolization also appears to require more repeat procedures than if surgery was done initially. It has shorter recovery times. Uterine artery embolization is thought to work because uterine fibroids have abnormal vasculature together with aberrant responses to hypoxia (inadequate oxygenation to tissues). Uterine artery embolization can also be used to control heavy uterine bleeding for reasons other than fibroids, such as postpartum obstetrical hemorrhage and adenomyosis. Medical uses: According to the American Journal of Gynecology, uterine artery embolization costs 12% less than hysterectomy and 8% less than myomectomy. Adverse effects: The rate of serious complications is comparable to that of myomectomy or hysterectomy. The advantage of somewhat faster recovery time is offset by a higher rate of minor complications and an increased likelihood of requiring surgical intervention within two to five years of the initial procedure. Complications include the following: death from embolism or from sepsis (the presence of pus-forming or other pathogenic organisms, or their toxins, in the blood or tissues) resulting in multiple organ failure; infection from tissue death of fibroids, leading to endometritis (infection of the uterus) resulting in lengthy hospitalization for administration of intravenous antibiotics; misembolization from microspheres or polyvinyl alcohol particles flowing or drifting into organs or tissues where they were not intended to be, causing damage to other organs or other parts of the body such as ovaries, bladder, rectum, and rarely small bowel, uterus, vagina, and labia. 
Adverse effects: Loss of ovarian function, infertility, and loss of orgasm; failure – continued fibroid growth or regrowth within four months; menopause – iatrogenic, abnormal cessation of menstruation, with follicle stimulating hormones elevated to menopausal levels; post-embolization syndrome – characterized by acute and/or chronic pain, fevers, malaise, nausea, vomiting and severe night sweats; foul vaginal odor coming from infected, necrotic tissue which remains inside the uterus; hysterectomy due to infection, pain or failure of embolization; severe, persistent pain, resulting in the need for morphine or synthetic narcotics; hematoma (blood clot) at the incision site; vaginal discharge containing pus and blood; bleeding from the incision site; bleeding from the vagina; fibroid expulsion (fibroids pushing out through the vagina); unsuccessful fibroid expulsion (fibroids trapped in the cervix causing infection and requiring surgical removal); life-threatening allergic reaction to the contrast material; and uterine adhesions. Procedure: The procedure is performed by an interventional radiologist under conscious sedation. Access is commonly through the radial or femoral artery via the wrist or groin, respectively. After anesthetizing the skin over the artery of choice, the artery is accessed by a needle puncture using the Seldinger technique. An access sheath and guidewire are then introduced into the artery. In order to select the uterine vessels for subsequent embolization, a guiding catheter is commonly used and placed into the uterine artery under X-ray fluoroscopy guidance. Once at the level of the uterine artery, an angiogram with contrast is performed to confirm placement of the catheter, and the embolizing agent (spheres or beads) is released. Blood flow to the fibroid will slow significantly or cease altogether, causing the fibroid to shrink. This process can be repeated for as many arteries as are supplying the fibroid. This is done bilaterally from the initial puncture site, as unilateral uterine artery embolizations have a high risk of failure. With both uterine arteries occluded, abundant collateral circulation prevents uterine necrosis, and the fibroids decrease in size and vascularity as they receive the bulk of the embolization material. The procedure can be performed in a hospital, surgical center or office setting and commonly takes no longer than an hour to perform. Post-procedurally, if access was gained via a femoral artery puncture, an occlusion device can be used to hasten healing of the puncture site, and the patient is asked to remain with the leg extended for several hours; however, many patients are discharged the same day, with some remaining in the hospital for a single-day admission for pain control and observation. If access was gained via the radial artery, the patient will be able to get off the table and walk out immediately following the procedure. The procedure is not a surgical intervention, and allows the uterus to be kept in place, avoiding many of the associated surgical complications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unit injector** Unit injector: A unit injector (UI) is a high pressure integrated direct fuel injection system for diesel engines, combining the injector nozzle and the injection pump in a single component. The plunger pump used is usually driven by a shared camshaft. In a unit injector, the device is usually lubricated and cooled by the fuel itself. Unit injector: High pressure injection delivers power and fuel consumption benefits over earlier lower pressure fuel injection by injecting fuel as a larger number of smaller droplets, giving a much higher ratio of surface area to volume. This provides improved vaporization from the surface of the fuel droplets and so more efficient combining of atmospheric oxygen with vaporized fuel, delivering more complete and cleaner combustion. History: In 1911, a patent for a unit injector resembling those in use today was issued in Great Britain to Frederick Lamplough. History: Commercial usage of unit injectors in the U.S. began in the early 1930s on Winton engines powering locomotives, boats, and even US Navy submarines, and in 1934, Arthur Fielden was granted U.S. patent No. 1,981,913 on the unit injector design later used for the General Motors two-stroke diesel engines. Most mid-sized diesel engines used a single pump and separate injectors, but some makers, such as Detroit Diesel and Electro-Motive Diesel, became well known for favoring unit injectors, in which the high-pressure pump is contained within the injector itself. E.W. Kettering's 1951 ASME presentation goes into detail about the development of the modern unit injector. History: The Cummins PT (pressure-time) system is also a form of unit injection, in which the fuel injectors are on a common rail fed by a low pressure pump and are actuated by a third lobe on the camshaft. The pressure determines how much fuel the injectors get, and the time is determined by the cam. In 1994, Robert Bosch GmbH supplied the first electronic unit injector for commercial vehicles, and other manufacturers soon followed. In 1995, Electro-Motive Diesel converted its 710 diesel engines to electronic fuel injection, using an EUI which replaces the UI. Today, major manufacturers include Robert Bosch GmbH, CAT, Cummins, Delphi, Detroit Diesel, and Electro-Motive Diesel. Design and technology: The design of the unit injector eliminates the need for high-pressure fuel pipes, and with them their associated failures, while allowing much higher injection pressures. The unit injector system allows accurate injection timing and amount control, as in the common rail system. The unit injector is fitted into the engine cylinder head, where the fuel is supplied via integral ducts machined directly into the cylinder head. Each injector has its own pumping element and, in the case of electronic control, a fuel solenoid valve as well. The fuel system is divided into the low pressure (<500 kPa) fuel supply system and the high-pressure injection system (up to 2,000 bar). 
Design and technology: Technical characteristics: The special feature of the unit injector system is that an individual pump is assigned to each cylinder. The pump and nozzle are therefore combined in a compact assembly which is installed directly in the cylinder head. The unit injector system enables high injection pressures up to 2,200 bar. Advantages: high performance for a clean and powerful engine; high engine power balanced against low consumption and low engine emissions; a high degree of efficiency due to the compact design; a low noise level due to direct assembly in the engine block; and injection pressures up to 2,200 bar for an ideal air-fuel mixture. Operation principle: The basic operation can be described as a sequence of four separate phases: the filling phase, the spill phase, the injection phase, and the pressure reduction phase (a simplified code sketch of this control sequence appears at the end of this article). A low pressure fuel delivery pump supplies filtered diesel fuel into the cylinder head fuel ducts and into the fuel port of each injector; each injector contains a constant-stroke pump plunger operated by the overhead camshaft. Fill phase: On its way up, the constant-stroke pump element draws fuel from the supply duct into the chamber; as long as the electric solenoid valve remains de-energized, the fuel line is open. Spill phase: The pump element is on the way down, and as long as the solenoid valve remains de-energized, the fuel line is open and fuel flows back into the return duct. Injection phase: The pump element is still on the way down; the solenoid is now energized and the fuel line is closed. The fuel cannot pass back into the return duct and is compressed by the plunger until the pressure exceeds a specific "opening" pressure, at which point the injector nozzle needle lifts, allowing fuel to be injected into the combustion chamber. Operation principle: Pressure reduction phase: The plunger is still on its way down; the engine ECU de-energizes the solenoid when the required quantity of fuel has been delivered, the fuel valve opens, and fuel can flow back into the return duct, causing a pressure drop, which in turn causes the injector nozzle needle to shut, so no more fuel is injected. Summary: The start of an injection is controlled by the solenoid closing point, and the injected fuel quantity is determined by the closing time, which is the length of time the solenoid remains closed. The solenoid operation is fully controlled by the engine ECU. Additional functions: The use of electronic control allows for special functions, such as temperature-controlled injection timing, cylinder balancing (smooth idle), switching off individual cylinders under part load for further reduction in emissions and fuel consumption, and multi-pulse injection (more than one injection occurrence during one engine cycle). Usage & Applications: Unit injector fuel systems are used on a wide variety of vehicles and engines: commercial vehicles from manufacturers such as Volvo, Cummins, Detroit Diesel, CAT, and Navistar International; passenger vehicles from manufacturers such as Land Rover and Volkswagen Group, among others; and locomotives from Electro-Motive Diesel. The Volkswagen Group mainstream marques used unit injector systems (branded "Pumpe Düse", commonly abbreviated to "PD") in their Suction Diesel Injection (SDI) and Turbocharged Direct Injection (TDI) diesel engines; however, this fuel injection method has been superseded by a common-rail design, such as the new 1.6 TDI. 
Usage & Applications: In North America, the Volkswagen Jetta, Golf, and New Beetle TDI 2004–2006 are Pumpe Düse (available in both the MK4 and MK5 generations, with BEW and BRM engine codes respectively; older models use a timing-belt-driven injection pump). TDI engines incorporating PD unit injector systems manufactured by the Volkswagen Group were also installed on some cars sold in Europe and other markets where diesel fuel was conveniently priced; amongst those were some Chrysler/Dodge cars of the DaimlerChrysler era, e.g. the Dodge Caliber (MY07 BKD, MY08 BMR), Dodge Journey, Jeep Compass, and Jeep Patriot. Usage & Applications: Scania AB, the truck and diesel engine maker in which the Volkswagen Group holds a major interest, also uses the unit injector system, which it calls "Pumpe-Düse-Einspritzung", or "PDE". Hydraulically actuated electronic (HEUI) development and applications: In 1993, CAT and International Truck & Engine Corporation introduced "hydraulically actuated electronic unit injection" (HEUI), in which the injectors are no longer camshaft-operated and can pressurize fuel independently of engine RPM. It was first available on Navistar's 7.3 L (444 cu in) V8 diesel engine. HEUI uses engine oil pressure to power high-pressure fuel injection, whereas the usual method of unit injector operation is via the engine camshaft. Hydraulically actuated electronic (HEUI) development and applications: HEUI applications included the Ford 7.3L and 6.0L Powerstroke used between May 1993 and 2007. International also used the HEUI system for multiple engines including the DT 466E, DT 570, T-444E, DT-466–570, MaxxForce 5, 7, 9, 10, MaxxForce DT and VT365 engines. Caterpillar incorporated HEUI systems in the 3116, 3126, C7, and C9, among others, and the Daimler-Detroit Diesel Series 40 engine supplied by International also incorporated a HEUI fuel system. Isuzu fitted a HEUI system to its 3.0-litre 4JX1 engine used in the Trooper and its variants. The HEUI system has been replaced by many manufacturers with common rail injection solutions, a newer technology, to achieve better fuel economy and to meet new emissions standards.
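The four-phase cycle described under "Operation principle" above, in which the plunger's direction of travel and the ECU-controlled solenoid state determine what happens in the injector, can be caricatured as a tiny lookup. This is an illustrative sketch only, not a description of any real ECU: the names are invented, and a real controller meters fuel by precise solenoid timing rather than by a discrete table.

```python
from enum import Enum

class Phase(Enum):
    FILL = "fill"                               # plunger up, solenoid de-energized: chamber fills
    SPILL = "spill"                             # plunger down, solenoid de-energized: fuel returns
    INJECTION = "injection"                     # plunger down, solenoid energized: pressure builds, needle lifts
    PRESSURE_REDUCTION = "pressure reduction"   # solenoid released after injection: pressure drops, needle closes

def phase(plunger_moving_down: bool, solenoid_energized: bool, was_injecting: bool) -> Phase:
    """Map plunger direction and solenoid state to one of the four phases
    described in the text (highly simplified)."""
    if not plunger_moving_down:
        return Phase.FILL
    if solenoid_energized:
        return Phase.INJECTION
    return Phase.PRESSURE_REDUCTION if was_injecting else Phase.SPILL

# One simplified cam cycle: plunger rises (fill), descends (spill), the ECU energizes
# the solenoid (injection), then de-energizes it once the metered quantity is delivered.
for step in [(False, False, False), (True, False, False), (True, True, False), (True, False, True)]:
    print(phase(*step).value)
```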
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alpha Magnetic Spectrometer** Alpha Magnetic Spectrometer: The Alpha Magnetic Spectrometer (AMS-02) is a particle physics experiment module that is mounted on the International Space Station (ISS). The experiment is a recognized CERN experiment (RE1). The module is a detector that measures antimatter in cosmic rays; this information is needed to understand the formation of the Universe and search for evidence of dark matter. Alpha Magnetic Spectrometer: The principal investigator is Nobel laureate particle physicist Samuel Ting. The launch of Space Shuttle Endeavour flight STS-134 carrying AMS-02 took place on May 16, 2011, and the spectrometer was installed on May 19, 2011. By April 15, 2015, AMS-02 had recorded over 60 billion cosmic ray events, and over 90 billion after five years of operation following its installation in May 2011. In March 2013, Professor Ting reported initial results, saying that AMS had observed over 400,000 positrons, with the positron-to-electron fraction increasing over the energy range from 10 GeV to 250 GeV. (Later results have shown a decrease in the positron fraction at energies over about 275 GeV.) There was "no significant variation over time, or any preferred incoming direction. These results are consistent with the positrons originating from the annihilation of dark matter particles in space, but not yet sufficiently conclusive to rule out other explanations." The results have been published in Physical Review Letters. Additional data are still being collected. History: The alpha magnetic spectrometer was proposed in 1995 by the Antimatter Study Group, led by MIT particle physicist Samuel Ting, not long after the cancellation of the Superconducting Super Collider. The original name for the instrument was Antimatter Spectrometer, with the stated objective of searching for primordial antimatter, with a target resolution of antimatter/matter ≈ 10⁻⁹. The proposal was accepted and Ting became the principal investigator. History: AMS-01 An AMS prototype designated AMS-01, a simplified version of the detector, was built by the international consortium under Ting's direction and flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium, AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium-to-helium flux ratio and proved that the detector concept worked in space. This shuttle mission was the last shuttle flight to the Mir Space Station. History: AMS-02 After the flight of the prototype, the group, now labelled the AMS Collaboration, began the development of a full research system designated AMS-02. This development effort involved the work of 500 scientists from 56 institutions and 16 countries organized under United States Department of Energy (DOE) sponsorship. History: The instrument which eventually resulted from a long evolutionary process has been called "the most sophisticated particle detector ever sent into space", rivaling very large detectors used at major particle accelerators, and has cost four times as much as any of its ground-based counterparts. Its goals have also evolved and been refined over time. As built, it is a more comprehensive detector which has a better chance of discovering evidence of dark matter alongside its other goals. The power requirements for AMS-02 were thought to be too great for a practical independent spacecraft, so AMS-02 was designed to be installed as an external module on the International Space Station and to use power from the ISS. 
The post-Space Shuttle Columbia plan was to deliver AMS-02 to the ISS by space shuttle in 2005 on station assembly mission UF4.1, but technical difficulties and shuttle scheduling issues added more delays. AMS-02 successfully completed final integration and operational testing at CERN in Geneva, Switzerland, which included exposure to energetic proton beams generated by the CERN SPS particle accelerator. AMS-02 was then shipped by specialist haulier to ESA's European Space Research and Technology Centre (ESTEC) facility in the Netherlands, where it arrived on February 16, 2010. Here it underwent thermal vacuum, electromagnetic compatibility and electromagnetic interference testing. AMS-02 was scheduled for delivery to the Kennedy Space Center in Florida, United States, in late May 2010. This was, however, postponed to August 26, as AMS-02 underwent final alignment beam testing at CERN. History: A cryogenic, superconducting magnet system was developed for the AMS-02. When the Obama administration extended International Space Station operations beyond 2015, the decision was made by AMS management to exchange the AMS-02 superconducting magnet for the non-superconducting magnet previously flown on AMS-01. Although the non-superconducting magnet has a weaker field strength, its on-orbit operational time at the ISS is expected to be 10 to 18 years, versus only three years for the superconducting version. In December 2018 it was announced that funding for the ISS had been extended to 2030. In 1999, after the successful flight of AMS-01, the total cost of the AMS program was estimated to be $33 million, with AMS-02 planned for flight to the ISS in 2003. After the Space Shuttle Columbia disaster in 2003, and after a number of technical difficulties with the construction of AMS-02, the cost of the program ballooned to an estimated $2 billion. History: Installation on the International Space Station For several years it was uncertain if AMS-02 would ever be launched because it was not manifested to fly on any of the remaining Space Shuttle flights. After the 2003 Columbia disaster, NASA decided to reduce shuttle flights and retire the remaining shuttles by 2010. A number of flights were removed from the remaining manifest, including the flight for AMS-02. In 2006 NASA studied alternative ways of delivering AMS-02 to the space station, but they all proved to be too expensive. In May 2008 a bill was proposed to launch AMS-02 to the ISS on an additional shuttle flight in 2010 or 2011. The bill was passed by the full House of Representatives on June 11, 2008. The bill then went before the Senate Commerce, Science and Transportation Committee, where it also passed. It was then amended and passed by the full Senate on September 25, 2008, and was passed again by the House on September 27, 2008. It was signed by President George W. Bush on October 15, 2008. The bill authorized NASA to add another space shuttle flight to the schedule before the space shuttle program was discontinued. In January 2009 NASA restored AMS-02 to the shuttle manifest. On August 26, 2010, AMS-02 was delivered from CERN to the Kennedy Space Center by a Lockheed C-5 Galaxy. It was delivered to the International Space Station on May 19, 2011, as part of station assembly flight ULF6 on shuttle flight STS-134, commanded by Mark Kelly. It was removed from the shuttle cargo bay using the shuttle's robotic arm and handed off to the station's robotic arm for installation.
AMS-02 is mounted on top of the Integrated Truss Structure, on USS-02, the zenith side of the S3 element of the truss. History: Operations, condition and repairs By April 2017 only one of the four redundant coolant pumps for the silicon trackers was fully working, and repairs were being planned, despite AMS-02 not being designed to be serviced in space. By 2019, the last one was being operated intermittently. In November 2019, after four years of planning, special tools and equipment were sent to the ISS for in-situ repairs that were expected to require four or five EVAs. Liquid carbon dioxide coolant was also replenished. The repairs were conducted by the ISS crew of Expedition 61. The spacewalkers were the expedition commander and ESA astronaut Luca Parmitano, and NASA astronaut Andrew Morgan. Both of them were assisted by NASA astronauts Christina Koch and Jessica Meir, who operated the Canadarm2 robotic arm from inside the Station. The spacewalks were described as the "most challenging since [the last] Hubble repairs". The entire spacewalk campaign was a central feature of the Disney+ docuseries Among The Stars. History: First spacewalk The first spacewalk was conducted on November 15, 2019. The spacewalk began with the removal of the debris shield covering AMS, which was jettisoned to burn up in the atmosphere. The next task was to install three handrails in the vicinity of AMS to prepare for the next spacewalks and remove zip ties on the AMS' vertical support strut. This was followed by the "get ahead" tasks: Luca Parmitano removed the screws from a carbon-fibre cover under the insulation and passed the cover to Andrew Morgan to jettison. The spacewalkers also removed the vertical support beam cover. The duration of the spacewalk was 6 hours and 39 minutes. History: Second spacewalk The second spacewalk was conducted on November 22, 2019. Parmitano and Morgan cut a total of eight stainless steel tubes, including one that vented the remaining carbon dioxide from the old cooling pump. The crew members also prepared a power cable and installed a mechanical attachment device in advance of installing the new cooling system. The duration of the spacewalk was 6 hours and 33 minutes. History: Third spacewalk The third spacewalk was conducted on December 2, 2019. The crew completed the primary task of installing the upgraded cooling system, called the upgraded tracker thermal pump system (UTTPS), completed the power and data cable connections for the system, and connected all eight cooling lines from the AMS to the new system. The intricate connection work required making a clean cut for each existing stainless steel tube connected to the AMS, then connecting it to the new system through swaging. The astronauts also completed an additional task to install an insulating blanket on the nadir side of the AMS to replace the heat shield and blanket they removed during the first spacewalk to begin the repair work. The flight control team on Earth initiated power-up of the system and confirmed its reception of power and data. The duration of the spacewalk was 6 hours and 2 minutes. History: Fourth spacewalk The final spacewalk was conducted on January 25, 2020. The astronauts conducted leak checks for the cooling system on the AMS and opened a valve to pressurize the system. Parmitano found a leak in one of the AMS's cooling lines. The leak was fixed during the spacewalk.
Preliminary testing showed the AMS was responding as expected. Ground teams are working to fill the new AMS thermal control system with carbon dioxide, allow the system to stabilize, and power on the pumps to verify and optimize their performance. The tracker, one of several detectors on the AMS, began collecting science data again before the end of the week after the spacewalk. The astronauts also completed an additional task to remove degraded lens filters on two high-definition video cameras. The duration of the spacewalk was 6 hours and 16 minutes. Specifications: Mass: 7,500 kilograms (16,500 lb); structural material: stainless steel; power: 2,500 W; internal data rate: 7 Gbit/s; data rate to ground: 2 Mbit/s (typical, average); primary mission duration: 10 to 18 years; design life: 3 years. Specifications: Magnetic field intensity: 0.15 teslas, produced by a 1,200-kilogram (2,600 lb) permanent neodymium magnet; original superconducting magnet: two coils of niobium-titanium at 1.8 K producing a central field of 0.87 teslas (not used in the flown device); the AMS-02 flight magnet was changed to the non-superconducting AMS-01 version to extend experiment life and to solve reliability problems in the operation of the superconducting system. About 1,000 cosmic rays are recorded by the instrument per second, generating about 1 GB/s of data. This data is filtered and compressed to about 300 kbit/s for download to the operations center (POCC) at CERN. Design: The detector module consists of a series of detectors that are used to determine various characteristics of the radiation and particles as they pass through. Characteristics are determined only for particles that pass through from top to bottom. Particles that enter the detector at any other angle are rejected. From top to bottom the subsystems are identified as: the transition radiation detector, which measures the velocities of the highest-energy particles; the upper time-of-flight counter, which, along with the lower time-of-flight counter, measures the velocities of lower-energy particles; the star tracker, which determines the orientation of the module in space; the silicon tracker (nine disks at six locations), which measures the coordinates of charged particles in the magnetic field and has four redundant coolant pumps; the permanent magnet, which bends the path of charged particles so they can be identified; the anti-coincidence counter, which rejects stray particles that enter through the sides; the ring imaging Cherenkov detector, which measures the velocity of fast particles with extreme accuracy; and the electromagnetic calorimeter, which measures the total energy of the particles. Scientific goals: The AMS-02 will use the unique environment of space to advance knowledge of the Universe and lead to the understanding of its origin by searching for antimatter and dark matter and by measuring cosmic rays. Scientific goals: Antimatter Experimental evidence indicates that our galaxy is made of matter; however, scientists believe there are about 100–200 billion galaxies in the observable Universe and some versions of the Big Bang theory of the origin of the Universe require equal amounts of matter and antimatter. Theories that explain this apparent asymmetry violate other measurements. Whether or not there is significant antimatter is one of the fundamental questions of the origin and nature of the Universe. Any observation of an antihelium nucleus would provide evidence for the existence of antimatter in space. In 1999, AMS-01 established a new upper limit of 10⁻⁶ for the antihelium/helium flux ratio in the Universe.
AMS-02 was designed to search with a sensitivity of 10⁻⁹, an improvement of three orders of magnitude over AMS-01, sufficient to reach the edge of the expanding Universe and resolve the issue definitively. Scientific goals: Dark matter The visible matter in the Universe, such as stars, adds up to less than 5 percent of the total mass that is known to exist from many other observations. The other 95 percent is dark, either dark matter, which is estimated at 20 percent of the Universe by weight, or dark energy, which makes up the balance. The exact nature of both still is unknown. One of the leading candidates for dark matter is the neutralino. If neutralinos exist, they should be colliding with each other and giving off an excess of charged particles that can be detected by AMS-02. Any peaks in the background positron, antiproton, or gamma ray flux could signal the presence of neutralinos or other dark matter candidates, but would need to be distinguished from poorly known confounding astrophysical signals. Scientific goals: Strangelets Six types of quarks (up, down, strange, charm, bottom and top) have been found experimentally; however, the majority of matter on Earth is made up of only up and down quarks. It is a fundamental question whether there exists stable matter made up of strange quarks in combination with up and down quarks. Particles of such matter are known as strangelets. Strangelets might have extremely large mass and very small charge-to-mass ratios. They would be a totally new form of matter. AMS-02 may determine whether this extraordinary matter exists in our local environment. Scientific goals: Space radiation environment Cosmic radiation during transit is a significant obstacle to sending humans to Mars. Accurate measurements of the cosmic ray environment are needed to plan appropriate countermeasures. Most cosmic ray studies are done by balloon-borne instruments with flight times that are measured in days; these studies have shown significant variations. AMS-02 is operative on the ISS, gathering a large amount of accurate data and allowing measurements of the long-term variation of the cosmic ray flux over a wide energy range, for nuclei from protons to iron. In addition to understanding the radiation protection required for astronauts during interplanetary flight, this data will allow the interstellar propagation and origins of cosmic rays to be identified. Results: In July 2012, it was reported that AMS-02 had observed over 18 billion cosmic rays. In February 2013, Samuel Ting reported that in its first 18 months of operation AMS had recorded 25 billion particle events, including nearly eight billion fast electrons and positrons. The AMS paper reported the positron-electron ratio in the energy range of 0.5 to 350 GeV, providing evidence about the weakly interacting massive particle (WIMP) model of dark matter. Results: On March 30, 2013, the first results from the AMS experiment were announced by the CERN press office. The first physics results were published in Physical Review Letters on April 3, 2013. A total of 6.8×10⁶ positron and electron events were collected in the energy range from 0.5 to 350 GeV. The positron fraction (of the total electron plus positron events) steadily increased from energies of 10 to 250 GeV, but the slope decreased by an order of magnitude above 20 GeV, even though the fraction of positrons still increased. There was no fine structure in the positron fraction spectrum, and no anisotropies were observed.
The accompanying Physics Viewpoint said that "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays." These results are consistent with the positrons originating from the annihilation of dark matter particles in space, but not yet sufficiently conclusive to rule out other explanations. Ting said "Over the coming months, AMS will be able to tell us conclusively whether these positrons are a signal for dark matter, or whether they have some other origin." On September 18, 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. Results: AMS results were presented over three days at CERN in April 2015, covering new data on 300 million proton events and the helium flux. The collaboration revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is currently trying to rule out contamination. A study from 2019, using data from NASA's Fermi Gamma-ray Space Telescope, discovered a halo around the nearby pulsar Geminga. The accelerated electrons and positrons collide with nearby starlight. The collision boosts the light up to much higher energies. Geminga alone could be responsible for as much as 20% of the high-energy positrons seen by the AMS-02 experiment. The AMS-02 on the ISS has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3. As of 2023, the AMS-02 has collected more than 215 billion cosmic ray events.
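The data-handling figures quoted in the Specifications section above imply a very large on-board reduction before downlink. The following is a purely illustrative back-of-the-envelope check of those quoted numbers; the variable names are ours and do not correspond to any actual AMS-02 software.

```python
# Illustrative arithmetic only: the figures below are the ones quoted in the
# Specifications section above; nothing here reflects real AMS-02 software.

raw_rate_bits      = 1e9 * 8        # ~1 GB/s of raw detector data, in bit/s
downlink_rate_bits = 300e3          # ~300 kbit/s sent to the POCC at CERN
internal_bus_bits  = 7e9            # 7 Gbit/s internal data rate
ground_link_bits   = 2e6            # 2 Mbit/s typical rate to ground
events_per_second  = 1000           # ~1,000 cosmic rays recorded per second

# Reduction factor achieved by on-board filtering and compression
reduction = raw_rate_bits / downlink_rate_bits
print(f"on-board reduction factor: ~{reduction:,.0f}x")            # ~26,667x

# Average raw data volume per recorded event
per_event_bytes = 1e9 / events_per_second
print(f"raw data per event: ~{per_event_bytes / 1e6:.1f} MB")      # ~1 MB

# Ratio between the internal data rate and the typical ground link
print(f"internal/ground rate ratio: ~{internal_bus_bits / ground_link_bits:,.0f}x")  # ~3,500x
```

Under those assumptions, the on-board filtering and compression reduce the data volume by a factor of roughly 27,000 before it is sent to the ground.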
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RNA silencing** RNA silencing: RNA silencing or RNA interference refers to a family of gene silencing effects by which gene expression is negatively regulated by non-coding RNAs such as microRNAs. RNA silencing may also be defined as sequence-specific regulation of gene expression triggered by double-stranded RNA (dsRNA). RNA silencing mechanisms are conserved among most eukaryotes. The most common and well-studied example is RNA interference (RNAi), in which endogenously expressed microRNA (miRNA) or exogenously derived small interfering RNA (siRNA) induces the degradation of complementary messenger RNA. Other classes of small RNA have been identified, including piwi-interacting RNA (piRNA) and its subspecies repeat-associated small interfering RNA (rasiRNA). Background: RNA silencing describes several mechanistically related pathways which are involved in controlling and regulating gene expression. RNA silencing pathways are associated with the regulatory activity of small non-coding RNAs (approximately 20–30 nucleotides in length) that function as factors involved in inactivating homologous sequences, promoting endonuclease activity, translational arrest, and/or chromatin or DNA modification. In the context in which the phenomenon was first studied, small RNA was found to play an important role in defending plants against viruses. For example, these studies demonstrated that enzymes detect double-stranded RNA (dsRNA) not normally found in cells and digest it into small pieces that are not able to cause disease. While some functions of RNA silencing and its machinery are understood, many are not. For example, RNA silencing has been shown to be important in the regulation of development and in the control of transposition events. RNA silencing has been shown to play a role in antiviral protection in plants as well as insects. Also in yeast, RNA silencing has been shown to maintain heterochromatin structure. However, the varied and nuanced role of RNA silencing in the regulation of gene expression remains an ongoing scientific inquiry. A range of diverse functions have been proposed for a growing number of characterized small RNA sequences, e.g., regulation of development, neuronal cell fate, cell death, proliferation, fat storage, haematopoietic cell fate, and insulin secretion. RNA silencing functions by repressing translation or by cleaving messenger RNA (mRNA), depending on the degree of complementarity of base-pairing. RNA has been largely investigated in its role as an intermediary in the translation of genes into proteins. More active regulatory functions, however, only began to be addressed by researchers in the late 1990s. The landmark study providing an understanding of the first identified mechanism was published in 1998 by Fire et al., demonstrating that double-stranded RNA could act as a trigger for gene silencing. Since then, various other classes of RNA silencing have been identified and characterized. Presently, the therapeutic potential of these discoveries is being explored, for example, in the context of targeted gene therapy. While RNA silencing is an evolving class of mechanisms, a common theme is the fundamental relationship between small RNAs and gene expression. It has also been observed that the major RNA silencing pathways currently identified have mechanisms of action which may involve both post-transcriptional gene silencing (PTGS) as well as chromatin-dependent gene silencing (CDGS) pathways.
CDGS involves the assembly of small RNA complexes on nascent transcripts and is regarded as encompassing mechanisms of action which implicate transcriptional gene silencing (TGS) and co-transcriptional gene silencing (CTGS) events. This is significant, not least because the evidence suggests that small RNAs play a role in the modulation of chromatin structure and TGS. Despite early focus in the literature on RNA interference (RNAi) as a core mechanism which occurs at the level of messenger RNA translation, others have since been identified in the broader family of conserved RNA silencing pathways acting at the DNA and chromatin level. RNA silencing refers to the silencing activity of a range of small RNAs and is generally regarded as a broader category than RNAi. While the terms have sometimes been used interchangeably in the literature, RNAi is generally regarded as a branch of RNA silencing. To the extent it is useful to craft a distinction between these related concepts, RNA silencing may be thought of as referring to the broader scheme of small RNA related controls involved in gene expression and the protection of the genome against mobile repetitive DNA sequences, retroelements, and transposons to the extent that these can induce mutations. The molecular mechanisms for RNA silencing were initially studied in plants but have since broadened to cover a variety of subjects, from fungi to mammals, providing strong evidence that these pathways are highly conserved. At least three primary classes of small RNA have currently been identified, namely: small interfering RNA (siRNA), microRNA (miRNA), and piwi-interacting RNA (piRNA). Background: small interfering RNA (siRNA) siRNAs act in the nucleus and the cytoplasm and are involved in RNAi as well as CDGS. siRNAs come from long dsRNA precursors derived from a variety of single-stranded RNA (ssRNA) precursors, such as sense and antisense RNAs. siRNAs also come from hairpin RNAs derived from transcription of inverted repeat regions. siRNAs may also arise enzymatically from non-coding RNA precursors. The volume of literature on siRNA within the framework of RNAi is extensive. One of the potent applications of siRNAs is the ability to distinguish target from non-target sequences that differ by a single nucleotide. This approach has been considered therapeutically crucial for silencing dominant gain-of-function (GOF) disorders, where the disease-causing mutant allele differs from the wild-type allele by a single nucleotide (nt). siRNAs with the capability to distinguish a single-nucleotide difference are termed allele-specific siRNAs. Background: microRNA (miRNA) The majority of miRNAs act in the cytoplasm and mediate mRNA degradation or translational arrest. However, some plant miRNAs have been shown to act directly to promote DNA methylation. miRNAs come from hairpin precursors generated by the RNase III enzymes Drosha and Dicer. Both miRNA and siRNA form either the RNA-induced silencing complex (RISC) or the nuclear form of RISC known as the RNA-induced transcriptional silencing complex (RITS). The volume of literature on miRNA within the framework of RNAi is extensive. Background: Three prime untranslated regions and microRNAs Three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause RNA interference. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins.
By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA. Background: The 3'-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs. Background: As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biological species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs. Background: Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down-regulating DNA repair enzymes. The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders. Background: piwi-interacting RNA (piRNA) piRNAs represent the largest class of small non-coding RNA molecules expressed in animal cells, deriving from a large variety of sources, including repetitive DNA and transposons. However, the biogenesis of piRNAs is also the least well understood. piRNAs appear to act both at the post-transcriptional and chromatin levels. They are distinct from miRNAs, being at least larger and more complex. Repeat-associated small interfering RNAs (rasiRNAs) are considered to be a subspecies of piRNA. Mechanism: The most basic mechanistic flow for RNA silencing is as follows (for a more detailed explanation of the mechanism, refer to the RNAi: Cellular mechanism article): 1: RNA with inverted repeats (hairpin/panhandle constructs) --> 2: dsRNA --> 3: miRNAs/siRNAs --> 4: RISC --> 5: destruction of target mRNA. It has been discovered that the best precursor to good RNA silencing is single-stranded antisense RNA with inverted repeats, which in turn builds small hairpin RNA and panhandle constructs. The hairpin or panhandle constructs exist so that the RNA can remain independent and not anneal with other RNA strands. Mechanism: These small hairpin RNAs and/or panhandles are then transported from the nucleus to the cytosol through the nuclear export receptor called exportin-5 and are then processed into dsRNA, a double-stranded RNA, which, like DNA, is a double-stranded series of nucleotides. If the mechanism used only single strands rather than dsRNAs, there would be a higher chance of hybridizing to other "good" mRNAs. As a double strand, it can be kept on call for when it is needed.
Mechanism: The dsRNA is then cut up by Dicer into small (21–28 nt long) strands of miRNAs (microRNAs) or siRNAs (short interfering RNAs). Dicer is an endoribonuclease (RNase), which functions as a complex of protein together with strand(s) of RNA. Lastly, the double-stranded miRNAs/siRNAs separate into single strands; the antisense RNA strand of the two combines with another endoribonuclease enzyme complex called RISC (RNA-induced silencing complex), which includes the catalytic component Argonaute, and guides the RISC to break up the "perfectly complementary" target mRNA or viral genomic RNA so that it can be destroyed. Mechanism: This means that, on the basis of a short sequence-specific region, the corresponding mRNA will be cut; to ensure its destruction, it is cut in many other places as well. (If the mechanism only worked with a long stretch of sequence, there would be a higher chance that it would not have time to match its complementary long mRNA.) It has also been shown that the repeat-associated short interfering RNAs (rasiRNAs) have a role in guiding chromatin modification. For an animated explanation of the mechanism of RNAi by Nature Reviews, see the External Links section below. Biological functions: Immunity against viruses or transposons RNA silencing is the mechanism that our cells (and cells from all kingdoms) use to fight RNA viruses and transposons (which originate from our own cells as well as from other vehicles). In the case of RNA viruses, these get destroyed immediately by the mechanism cited above. In the case of transposons, it is a little more indirect. Since transposons are located in different parts of the genome, the different transcriptions from the different promoters produce complementary mRNAs that can hybridize with each other. When this happens, the RNAi machinery goes into action, debilitating the mRNAs of the proteins that would be required to move the transposons themselves. Biological functions: Down-regulation of genes For a detailed explanation of the down-regulation of genes, see RNAi: downregulation of genes. Up-regulation of genes For a detailed explanation of the up-regulation of genes, see RNAi: upregulation of genes. RNA silencing also gets regulated In the same way that RNA silencing regulates downstream target mRNAs, RNA silencing itself is regulated. For example, silencing signals get spread between cells by a group of enzymes called RdRPs (RNA-dependent RNA polymerases) or RDRs. Practical applications: Growing understanding of small RNA gene-silencing mechanisms involving dsRNA-mediated sequence-specific mRNA degradation has directly impacted the fields of functional genomics, biomedicine, and experimental biology. The following section describes various applications involving the effects of RNA silencing. These include uses in biotechnology, therapeutics, and laboratory research. Bioinformatics techniques are also being applied to identify and characterize large numbers of small RNAs and their targets. Practical applications: Biotechnology Artificial introduction of long dsRNAs or siRNAs has been adopted as a tool to inactivate gene expression, both in cultured cells and in living organisms. Structural and functional resolution of small RNAs as the effectors of RNA silencing has had a direct impact on experimental biology. For example, dsRNA may be synthesized to have a specific sequence complementary to a gene of interest.
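As a minimal, purely illustrative sketch of the Watson-Crick complementarity that underlies this kind of sequence-specific targeting, the snippet below derives an antisense guide strand as the reverse complement of a chosen target site and then locates that site within a longer transcript. The sequences and names are invented for illustration; this is not a real siRNA design tool or any published method.

```python
# Illustrative only: made-up sequences showing how a guide strand that is the
# reverse complement of a target site "finds" that site in a longer mRNA.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

# Hypothetical 21-nt region of a target mRNA (5'->3')
target_site = "AUGGCUAGCUAGGCUUACGGA"

# The antisense (guide) strand is complementary to that site
guide_strand = reverse_complement(target_site)

# A made-up mRNA containing the target site somewhere inside it
mrna = "GGGAAACCC" + target_site + "UUUCCCAAA"

# Lookup by complementarity: find where the guide's complement occurs in the mRNA
hit = mrna.find(reverse_complement(guide_strand))
print("guide strand:", guide_strand)
print("matched 21-nt site starts at position:", hit)
```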
Once such a dsRNA is introduced into a cell or biological system, it is recognized as exogenous genetic material and activates the corresponding RNA silencing pathway. This mechanism can be used to decrease expression of the targeted gene, which is useful for investigating loss of function for genes relative to a phenotype. That is, studying the phenotypic and/or physiologic effects of expression decreases can reveal the role of a gene product. The observable effects can be nuanced, such that some methods can distinguish between "knockdown" (decreased expression) and "knockout" (eliminated expression) of a gene. RNA interference technologies have been noted recently as one of the most widely utilized techniques in functional genomics. Screens developed using small RNAs have been used to identify genes involved in fundamental processes such as cell division, apoptosis and fat regulation. Practical applications: Biomedicine Since at least the mid-2000s, there has been intensifying interest in developing short interfering RNAs for biomedical and therapeutic applications. Bolstering this interest is a growing number of experiments which have successfully demonstrated the clinical potential and safety of small RNAs for combatting diseases ranging from viral infections to cancer as well as neurodegenerative disorders. In 2004, the first Investigational New Drug applications for siRNA were filed in the United States with the Food and Drug Administration; the compound was intended as a therapy for age-related macular degeneration. RNA silencing in vitro and in vivo has been accomplished by creating triggers (nucleic acids that induce RNAi) either via expression in viruses or synthesis of oligonucleotides. Optimistically, many studies indicate that small RNA-based therapies may offer novel and potent weapons against pathogens and diseases where small molecule/pharmacologic and vaccine/biologic treatments have failed or proved less effective in the past. However, it is also warned that the design and delivery of small RNA effector molecules should be carefully considered in order to ensure safety and efficacy. Practical applications: The role of RNA silencing in therapeutics, clinical medicine, and diagnostics is a fast-developing area, and it is expected that in the next few years some of the compounds using this technology will reach market approval. A report has been summarized below to highlight the many clinical domains in which RNA silencing is playing an increasingly important role, chief among them ocular and retinal disorders, cancer, kidney disorders, LDL lowering, and antiviral therapy. The following table displays a listing of RNAi-based therapies currently in various phases of clinical trials. The status of these trials can be monitored on the ClinicalTrials.gov website, a service of the National Institutes of Health (NIH). Of note are treatments in development for ocular and retinal disorders, which were among the first compounds to reach clinical development. AGN211745 (sirna027) (Allergan) and bevasiranib (Cand5) (Opko) underwent clinical development for the treatment of age-related macular degeneration, but trials were terminated before the compounds reached the market. Other compounds in development for ocular conditions include SYL040012 (Sylentis) and QPI-007 (Quark).
SYL040012 (bamosinan) is a drug candidate under clinical development for glaucoma, a progressive optic neurodegeneration frequently associated with increased intraocular pressure; QPI-007 is a candidate for the treatment of angle-closure glaucoma and non-arteritic anterior ischaemic optic neuropathy; both compounds are currently undergoing phase II clinical trials. Several compounds are also under development for conditions such as cancer and rare diseases. Practical applications: Main challenge As with conventional manufactured drugs, the main challenge in developing successful offshoots of the RNAi-based drugs is the precise delivery of the RNAi triggers to where they are needed in the body. The reason that the ocular treatment for macular degeneration was successful sooner than treatments for other diseases is that the eyeball is almost a closed system, and the serum can be injected with a needle exactly where it needs to be. The future successful drugs will be the ones that are able to reach the site where they are needed, probably with the help of nanobots. Below is a rendition of a table that shows the existing means of delivery of the RNAi triggers. Practical applications: Laboratory The scientific community has been quick to harness RNA silencing as a research tool. The strategic targeting of mRNA can provide a large amount of information about gene function and its ability to be turned on and off. Induced RNA silencing can serve as a controlled method for suppressing gene expression. Since the machinery is conserved across most eukaryotes, these experiments scale well to a range of model organisms. In practice, expressing synthetic short hairpin RNAs can be used to reach stable knock-down. If promoters can be made to express these designer short hairpin RNAs, the result is often potent, stable, and controlled gene knock-down in both in vitro and in vivo contexts. Short hairpin RNA vector systems can be seen as roughly analogous in scope to using cDNA overexpression systems. Overall, synthetic and natural small RNAs have proven to be an important tool for studying gene function in cells as well as animals. Bioinformatics approaches to identify small RNAs and their targets have returned several hundred, if not thousands of, small RNA candidates predicted to affect gene expression in plants, C. elegans, D. melanogaster, zebrafish, mouse, rat, and human. These methods are largely directed toward identifying small RNA candidates for knock-out experiments but may have broader applications. One bioinformatics approach evaluated sequence conservation criteria by filtering seed-complementary target-binding sites. The cited study predicted that approximately one third of mammalian genes were regulated by, in this case, miRNAs. Practical applications: Ethics & Risk-Benefit Analysis One aspect of RNA silencing to consider is its possible off-target effects, toxicity, and delivery methods. If RNA silencing therapies are to become conventional drugs, they must first pass the typical ethical scrutiny of biomedicine. Using risk-benefit analysis, researchers can determine whether RNA silencing conforms to ethical principles such as nonmaleficence, beneficence, and autonomy. There is a risk of creating infection-competent viruses that could infect non-consenting people. There is also a risk of affecting future generations based on these treatments. These two scenarios are, with respect to autonomy, possibly unethical.
At this moment, unsafe delivery methods and unintended effects of vector viruses add to the argument against RNA silencing. In terms of off-target effects, siRNA can induce innate interferon responses, inhibit endogenous miRNAs through saturation, and may have complementary sequences to other non-target mRNAs. These off-target effects could also include up-regulation of genes such as oncogenes and anti-apoptotic genes. The toxicity of RNA silencing is still under review, as there are conflicting reports. Practical applications: RNA silencing is developing quickly; because of that, the ethical issues need to be discussed further. With knowledge of general ethical principles, we must continuously perform risk-benefit analysis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Complex affine space** Complex affine space: Affine geometry, broadly speaking, is the study of the geometrical properties of lines, planes, and their higher-dimensional analogs, in which a notion of "parallel" is retained, but no metrical notions of distance or angle are. Affine spaces differ from linear spaces (that is, vector spaces) in that they do not have a distinguished choice of origin. So, in the words of Marcel Berger, "An affine space is nothing more than a vector space whose origin we try to forget about, by adding translations to the linear maps." Accordingly, a complex affine space, that is, an affine space over the complex numbers, is like a complex vector space, but without a distinguished point to serve as the origin. Complex affine space: Affine geometry is one of the two main branches of classical algebraic geometry, the other being projective geometry. A complex affine space can be obtained from a complex projective space by fixing a hyperplane, which can be thought of as a hyperplane of ideal points "at infinity" of the affine space. To illustrate the difference (over the real numbers), a parabola in the affine plane intersects the line at infinity, whereas an ellipse does not. However, any two conic sections are projectively equivalent. So a parabola and ellipse are the same when thought of projectively, but different when regarded as affine objects. Somewhat less intuitively, over the complex numbers, an ellipse intersects the line at infinity in a pair of points while a parabola intersects the line at infinity in a single point. So, for a slightly different reason, an ellipse and parabola are inequivalent over the complex affine plane but remain equivalent over the (complex) projective plane. Complex affine space: Any complex vector space is an affine space: all one needs to do is forget the origin (and possibly any additional structure such as an inner product). For example, the complex n-space Cn can be regarded as a complex affine space, when one is interested only in its affine properties (as opposed to its linear or metrical properties, for example). Since any two affine spaces of the same dimension are isomorphic, in some situations it is appropriate to identify them with Cn, with the understanding that only affinely-invariant notions are ultimately meaningful. This usage is very common in modern algebraic geometry. Affine structure: There are several equivalent ways to specify the affine structure of an n-dimensional complex affine space A. The simplest involves an auxiliary space V, called the difference space, which is a vector space over the complex numbers. Then an affine space is a set A together with a simply transitive action of V on A. (That is, A is a V-torsor.) Another way is to define a notion of affine combination, satisfying certain axioms. An affine combination of points p1, …, pk ∊ A is expressed as a sum of the form a1p1 + ⋯ + akpk, where the scalars ai are complex numbers that sum to unity. Affine structure: The difference space can be identified with the set of "formal differences" p − q, modulo the relation that formal differences respect affine combinations in an obvious way. Affine structure: Affine functions A function f : A → C is called affine if it preserves affine combinations. So f(a1p1 + ⋯ + akpk) = a1f(p1) + ⋯ + akf(pk) for any affine combination a1p1 + ⋯ + akpk in A. The space of affine functions A* is a linear space.
The dual vector space of A* is naturally isomorphic to an (n+1)-dimensional vector space F(A), which is the free vector space on A modulo the relation that affine combination in A agrees with affine combination in F(A). Via this construction, the affine structure of the affine space A can be recovered completely from the space of affine functions. Affine structure: The algebra of polynomials in the affine functions on A defines a ring of functions, called the affine coordinate ring in algebraic geometry. This ring carries a filtration, by degree in the affine functions. Conversely, it is possible to recover the points of the affine space as the set of algebra homomorphisms from the affine coordinate ring into the complex numbers. This is called the maximal spectrum of the ring, because it coincides with its set of maximal ideals. There is a unique affine structure on this maximal spectrum that is compatible with the filtration on the affine coordinate ring. Low-dimensional examples: One dimension A one-dimensional complex affine space, or complex affine line, is a torsor for a one-dimensional linear space over C. The simplest example is the Argand plane of complex numbers C itself. This has a canonical linear structure, and so "forgetting" the origin gives it a canonical affine structure. Low-dimensional examples: For another example, suppose that X is a two-dimensional vector space over the complex numbers. Let α : X → C be a linear functional. It is well known that the set of solutions of α(x) = 0, the kernel of α, is a one-dimensional linear subspace (that is, a complex line through the origin of X). If c is some non-zero complex number, then the set A of solutions of α(x) = c is an affine line in X, but it is not a linear subspace because it is not closed under arbitrary linear combinations. The difference space V is the kernel of α, because the difference of two solutions of the inhomogeneous equation α(x) = c lies in the kernel. Low-dimensional examples: An analogous construction applies to the solution of first-order linear ordinary differential equations. The solutions of the homogeneous differential equation y′(x) + μ(x)y(x) = 0 form a one-dimensional linear space, whereas the set of solutions of the inhomogeneous problem y′(x) + μ(x)y(x) = f(x) is a one-dimensional affine space A. The general solution is equal to a particular solution of the equation, plus a solution of the homogeneous equation. The space of solutions of the homogeneous equation is the difference space V. Low-dimensional examples: Consider once more the general case of a two-dimensional vector space X equipped with a linear form α. An affine space A(c) is given by the solutions of α(x) = c. Observe that, for two different non-zero values of c, say c1 and c2, the affine spaces A(c1) and A(c2) are naturally isomorphic: scaling by c2/c1 maps A(c1) to A(c2). So there is really only one affine space worth considering in this situation, call it A, whose points are the lines through the origin of X that do not lie on the kernel of α. Low-dimensional examples: Algebraically, the complex affine space A just described is the space of splittings of the exact sequence 0 → ker α → X → C → 0. Low-dimensional examples: Two dimensions A complex affine plane is a two-dimensional affine space over the complex numbers. An example is the two-dimensional complex coordinate space C2. This has a natural linear structure, and so inherits an affine structure under the forgetful functor.
Another example is the set of solutions of a second-order inhomogeneous linear ordinary differential equation (over the complex numbers). Finally, in analogy with the one-dimensional case, the space of splittings of an exact sequence 0 → C2 → C3 → C → 0 is an affine space of dimension two. Low-dimensional examples: Four dimensions The conformal spin group of the Lorentz group is SU(2,2), which acts on a four-dimensional complex vector space T (called twistor space). The conformal Poincaré group, as a subgroup of SU(2,2), stabilizes an exact sequence of the form 0 → Π → T → Ω → 0, where Π is a maximal isotropic subspace of T. The space of splittings of this sequence is a four-dimensional affine space: (complexified) Minkowski space. Affine coordinates: Let A be an n-dimensional affine space. A collection of n affinely independent affine functions z1, z2, …, zn : A → C is an affine coordinate system on A. An affine coordinate system on A sets up a bijection of A with the complex coordinate space Cn, whose elements are n-tuples of complex numbers. Conversely, Cn is sometimes referred to as complex affine n-space, where it is understood that it is its structure as an affine space (as opposed, for instance, to its status as a linear space or as a coordinate space) that is of interest. Such a usage is typical in algebraic geometry. Associated projective space: A complex affine space A has a canonical projective completion P(A), defined as follows. Form the vector space F(A), which is the free vector space on A modulo the relation that affine combination in F(A) agrees with affine combination in A. Then dim F(A) = n + 1, where n is the dimension of A. The projective completion of A is the projective space of one-dimensional complex linear subspaces of F(A). Structure group and automorphisms: The group Aut(P(A)) = PGL(F(A)) ≅ PGL(n + 1, C) acts on P(A). The stabilizer of the hyperplane at infinity is a parabolic subgroup, which is the automorphism group of A. It is isomorphic (but not naturally isomorphic) to a semidirect product of the group GL(V) and V. The subgroup GL(V) is the stabilizer of some fixed reference point o (an "origin") in A, acting as the linear automorphism group of the space of vectors emanating from o, and V acts by translation. Structure group and automorphisms: The automorphism group of the projective space P(A) as an algebraic variety is none other than the group of collineations PGL(F(A)). In contrast, the automorphism group of the affine space A as an algebraic variety is much larger. For example, consider the self-map of the affine plane defined in terms of a pair of affine coordinates by (z1, z2) ↦ (z1, z2 + f(z1)), where f is a polynomial in a single variable. This is an automorphism of the algebraic variety, but not an automorphism of the affine structure. The Jacobian determinant of such an algebraic automorphism is necessarily a non-zero constant. It is believed that if the Jacobian of a self-map of a complex affine space is a non-zero constant, then the map is an (algebraic) automorphism. This is known as the Jacobian conjecture. Complex structure: A function on complex affine space is holomorphic if its complex conjugate is Lie derived along the difference space V. This gives any complex affine space the structure of a complex manifold. Every affine function from A to the complex numbers is holomorphic. Hence, so is every polynomial in affine functions. Topologies: There are two topologies on a complex affine space that are commonly used.
The analytic topology is the initial topology for the family of affine functions into the complex numbers, where the complex numbers carry their usual Euclidean topology induced by the complex absolute value as norm. This is also the initial topology for the family of holomorphic functions. The analytic topology has a base consisting of polydiscs. Associated to any n independent affine functions z1, …, zn : A → C on A, the unit polydisc is defined by B(z1, …, zn) = {z ∈ A : |z1(z)| < 1, …, |zn(z)| < 1}. Any open set in the analytic topology is the union of a countable collection of unit polydiscs. Topologies: The Zariski topology is the initial topology for the affine complex-valued functions, but giving the complex line the finite-complement topology instead. So in the Zariski topology, a subset of A is closed if and only if it is the zero set of some collection of complex-valued polynomial functions on A. A subbase of the Zariski topology is the collection of complements of irreducible algebraic sets. Topologies: The analytic topology is finer than the Zariski topology, meaning that every set that is open in the Zariski topology is also open in the analytic topology. The converse is not true. A polydisc, for example, is open in the analytic topology but not the Zariski topology. A metric can be defined on a complex affine space, making it a Euclidean space, by selecting an inner product on V. The distance between two points p and q of A is then given in terms of the associated norm on V by d(p, q) = ‖p − q‖. The open balls associated to the metric form a basis for a topology, which is the same as the analytic topology. Sheaf of analytic functions: The family of holomorphic functions on a complex affine space A forms a sheaf of rings on it. By definition, such a sheaf associates to each (analytic) open subset U of A the ring O(U) of all complex-valued holomorphic functions on U. The uniqueness of analytic continuation says that given two holomorphic functions on a connected open subset U of Cn, if they coincide on a nonempty open subset of U, they agree on U. In terms of sheaf theory, the uniqueness implies that O, when viewed as an étalé space, is a Hausdorff topological space. Oka's coherence theorem states that the structure sheaf O of a complex affine space is coherent. This is the fundamental result in the function theory of several complex variables; for instance, it immediately implies that the structure sheaf of a complex-analytic space (e.g., a complex manifold) is coherent. Every complex affine space is a domain of holomorphy. In particular, it is a Stein manifold.
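To make the affine combinations and affine functions defined in the "Affine structure" section concrete, here is a small numeric sketch. Points of a two-dimensional complex affine space are modelled simply as pairs of complex numbers, and the particular points, coefficients, and function below are arbitrary choices made only for illustration.

```python
# Illustrative check of two facts stated above: affine combinations have
# coefficients summing to 1, and affine functions preserve affine combinations.

def affine_combination(coeffs, points):
    """a1*p1 + ... + ak*pk, with complex coefficients summing to unity."""
    assert abs(sum(coeffs) - 1) < 1e-12, "coefficients must sum to unity"
    dim = len(points[0])
    return tuple(sum(a * p[i] for a, p in zip(coeffs, points)) for i in range(dim))

def f(p):
    """An affine function A -> C: a linear form in the coordinates plus a constant."""
    z1, z2 = p
    return (2 + 1j) * z1 - 3j * z2 + (5 - 2j)

p1, p2, p3 = (1 + 1j, 0), (2j, 3 - 1j), (-1, 4 + 2j)
coeffs = (0.5 + 1j, 0.25 - 1j, 0.25)          # sums to 1

q = affine_combination(coeffs, (p1, p2, p3))

lhs = f(q)                                                  # f(a1 p1 + a2 p2 + a3 p3)
rhs = sum(a * f(p) for a, p in zip(coeffs, (p1, p2, p3)))   # a1 f(p1) + a2 f(p2) + a3 f(p3)
print(abs(lhs - rhs) < 1e-12)                               # True
```

Because the coefficients sum to unity, the constant term of the affine function cancels out of the comparison, which is exactly why affine functions preserve affine combinations.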
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vocal pedagogy** Vocal pedagogy: Vocal pedagogy is the study of the art and science of voice instruction. It is used in the teaching of singing and assists in defining what singing is, how singing works, and how proper singing technique is accomplished. Vocal pedagogy covers a broad range of aspects of singing, ranging from the physiological process of vocal production to the artistic aspects of interpretation of songs from different genres or historical eras. Typical areas of study include: human anatomy and physiology as it relates to the physical process of singing. Vocal pedagogy: breathing and air support for singing; posture for singing; phonation; vocal resonation or voice projection; diction, vowels and articulation; vocal registration; sostenuto and legato for singing; other singing elements, such as range extension, tone quality, vibrato, and coloratura; vocal health and voice disorders related to singing; vocal styles, such as learning to sing opera, belt, or art song; phonetics; and voice classification. All of these different concepts are a part of developing proper vocal technique. Not all voice teachers have the same opinions within every topic of study, which causes variations in pedagogical approaches and vocal technique. History: Within Western culture, the study of vocal pedagogy began in Ancient Greece. Scholars such as Alypius and Pythagoras studied and made observations on the art of singing. It is unclear, however, whether the Greeks ever developed a systematic approach to teaching singing, as little writing on the subject survives today. The first surviving record of a systematized approach to teaching singing was developed in the medieval monasteries of the Roman Catholic Church sometime near the beginning of the 13th century. As with other fields of study, the monasteries were the center of musical intellectual life during the medieval period, and many men within the monasteries devoted their time to the study of music and the art of singing. Highly influential in the development of a vocal pedagogical system were the monks Johannes de Garlandia and Jerome of Moravia, who were the first to develop a concept of vocal registers. These men identified three registers: chest voice, throat voice, and head voice (pectoris, guttoris, and capitis). Their concept of head voice, however, is much more similar to the modern pedagogical understanding of the falsetto register. Other concepts discussed in the monastic system included vocal resonance, voice classification, breath support, diction, and tone quality, to name a few. The ideas developed within the monastic system highly influenced the development of vocal pedagogy over the next several centuries, including the Bel Canto style of singing. With the onset of the Renaissance in the 15th century, the study of singing began to move outside of the church. The courts of rich patrons, such as the Dukes of Burgundy who supported the Burgundian School and the Franco-Flemish School, became secular centers of study for singing and all other areas of musical study. The vocal pedagogical methods taught in these schools, however, were based on the concepts developed within the monastic system. Many of the teachers within these schools had their initial musical training from singing in church choirs as children. The church also remained at the forefront of musical composition at this time and remained highly influential in shaping musical tastes and practices both in and outside the church.
It was the Catholic Church that first popularized the use of castrato singers in the 16th century, which ultimately led to the popularity of castrato voices in Baroque and Classical operas. It was not until the development of opera in the 17th century that vocal pedagogy began to break away from some of the established thinking of the monastic writers and develop deeper understandings of the physical process of singing and its relation to key concepts like vocal registration and vocal resonation. It was also during this time that noted voice teachers began to emerge. Giulio Caccini is an example of an important early Italian voice teacher. In the late 17th century, the bel canto method of singing began to develop in Italy. This style of singing had a huge impact on the development of opera and the development of vocal pedagogy during the Classical and Romantic periods. It was during this time that teachers and composers first began to identify singers by, and write roles for, more specific voice types. However, it was not until the 19th century that more clearly defined voice classification systems like the German Fach system emerged. Within these systems, more descriptive terms were used in classifying voices, such as coloratura soprano and lyric soprano. History: Voice teachers in the 19th century continued to train singers for careers in opera. Manuel Patricio Rodríguez García is often considered one of the most important voice teachers of the 19th century, and is credited with the development of the laryngoscope and the beginning of modern voice pedagogy. History: The field of voice pedagogy became more fully developed in the middle of the 20th century. A few American voice teachers began to study the science, anatomy, and physiology of singing, especially Ralph Appelman at Indiana University, Oren Brown at the Washington University School of Medicine and later the Juilliard School, and William Vennard at the University of Southern California. This shift in approach to the study of singing led to the rejection of many of the assertions of the bel canto singing method, most particularly in the areas of vocal registration and vocal resonation. As a result, there are currently two predominating schools of thought among voice teachers today: those who maintain the historical positions of the bel canto method and those who choose to embrace more contemporary understandings based in current knowledge of human anatomy and physiology. There are also those teachers who borrow ideas from both perspectives, creating a hybrid of the two. Appelman and Vennard were also part of a group of voice instructors who developed courses of study for beginning voice teachers, adding these scientific ideas to the standard exercises and empirical ways to improve vocal technique, and by 1980 the subject of voice pedagogy was beginning to be included in many college music degree programs for singers and vocal music educators. More recent works by authors such as Richard Miller and Johan Sundberg have increased the general knowledge of voice teachers, and scientific and practical aspects of voice pedagogy continue to be studied and discussed by professionals. In addition, the creation of organisations such as the National Association of Teachers of Singing (now an international organization of vocal instructors) has enabled voice teachers to establish more of a consensus about their work, and has expanded the understanding of what singing teachers do.
Topics of study: Pedagogical philosophy There are three major approaches to vocal pedagogy, all related to how the mechanistic and psychological controls are employed while singing. Some voice instructors advocate an extreme mechanistic approach that believes that singing is largely a matter of getting the right physical parts in the right places at the right time, and that correcting vocal faults is accomplished by calling direct attention to the parts which are not working well. At the other extreme is the school of thought that believes that attention should never be directed to any part of the vocal mechanism—that singing is a matter of producing the right mental images of the desired tone, and that correcting vocal faults is achieved by learning to think the right thoughts and by releasing the emotions through interpretation of the music. Most voice teachers, however, believe that the truth lies somewhere in between the two extremes and adopt a composite of those two approaches. Topics of study: The nature of vocal sounds Physiology of vocal sound production There are four physical processes involved in producing vocal sound: respiration, phonation, resonation, and articulation. These processes occur in the following sequence: breath is taken; sound is initiated in the larynx; the vocal resonators receive the sound and influence it; and the articulators shape the sound into recognizable units. Although these four processes are to be considered separately, in actual practice they merge into one coordinated function. With an effective singer or speaker, one should rarely be reminded of the process involved, as their mind and body are so coordinated that one only perceives the resulting unified function. Many vocal problems result from a lack of coordination within this process. Topics of study: Respiration In its most basic sense, respiration is the process of moving air in and out of the body—inhalation and exhalation. Sound is produced in the larynx. But producing the sound would not be possible without a power source: the flow of air from the lungs. This flow sets the vocal folds into motion to produce sound. Breathing for singing and speaking is a more controlled process than is the ordinary breathing used for sustaining life. The controls applied to exhalation are particularly important in good vocal technique. Topics of study: Phonation Phonation is the process of producing vocal sound by the vibration of the vocal folds that is in turn modified by the resonance of the vocal tract. It takes place in the larynx when the vocal folds are brought together and breath pressure is applied to them in such a way that vibration ensues, causing an audible source of acoustic energy, i.e., sound, which can then be modified by the articulatory actions of the rest of the vocal apparatus. The vocal folds are brought together primarily by the action of the interarytenoid muscles, which pull the arytenoid cartilages together. Topics of study: Resonation Vocal resonation is the process by which the basic product of phonation is enhanced in timbre and/or intensity by the air-filled cavities through which it passes on its way to the outside air. Various terms related to the resonation process include amplification, enrichment, enlargement, improvement, intensification, and prolongation, although in strictly scientific usage acoustic authorities would question most of them. 
The main point to be drawn from these terms by a singer or speaker is that the end result of resonation is, or should be, to make a better sound. There are seven areas that may be listed as possible vocal resonators. In sequence from the lowest within the body to the highest, these areas are the chest, the tracheal tree, the larynx itself, the pharynx, the oral cavity, the nasal cavity, and the sinuses. Research has shown that the larynx, the pharynx, and the oral cavity are the main resonators of vocal sound, with the nasal cavity only coming into play for nasal consonants or nasal vowels, such as those found in French. This main resonating space, from above the vocal folds to the lips, is known as the vocal tract. Many voice users experience sensations in the sinuses that may be misconstrued as resonance. However, these sensations are caused by sympathetic vibrations, and are a result, rather than a cause, of efficient vocal resonance. Topics of study: Articulation Articulation is the process by which the joint product of the vibrator and the resonators is shaped into recognizable speech sounds through the muscular adjustments and movements of the speech organs. These adjustments and movements of the articulators result in verbal communication and thus form the essential difference between the human voice and other musical instruments. Singing without understandable words limits the voice to nonverbal communication. In relation to the physical process of singing, vocal instructors tend to focus more on active articulation as opposed to passive articulation. There are five basic active articulators: the lips ("labial consonants"), the flexible front of the tongue ("coronal consonants"), the middle/back of the tongue ("dorsal consonants"), the root of the tongue together with the epiglottis ("pharyngeal consonants"), and the glottis ("glottal consonants"). These articulators can act independently of each other, and two or more may work together in what is called coarticulation. Topics of study: Unlike active articulation, passive articulation is a continuum without many clear-cut boundaries. The places linguolabial and interdental, interdental and dental, dental and alveolar, alveolar and palatal, palatal and velar, velar and uvular merge into one another, and a consonant may be pronounced somewhere between the named places. In addition, when the front of the tongue is used, it may be the upper surface or blade of the tongue that makes contact ("laminal consonants"), the tip of the tongue ("apical consonants"), or the under surface ("sub-apical consonants"). These articulations also merge into one another without clear boundaries. Topics of study: Interpretation Interpretation is sometimes listed by voice teachers as a fifth physical process even though strictly speaking it is not a physical process. The reason for this is that interpretation does influence the kind of sound a singer makes, which is ultimately achieved through a physical action the singer is doing. Although teachers may acquaint their students with musical styles and performance practices and suggest certain interpretive effects, most voice teachers agree that interpretation cannot be taught. Students who lack a natural creative imagination and aesthetic sensibility cannot learn it from someone else. Failure to interpret well is not a vocal fault, even though it may affect vocal sound significantly. 
Topics of study: Classification of vocal sounds Vocal sounds are divided into two basic categories—vowels and consonants—with a wide variety of sub-classifications. Voice teachers and serious voice students spend a great deal of time studying how the voice forms vowels and consonants, and studying the problems that certain consonants or vowels may cause while singing. The International Phonetic Alphabet is used frequently by voice teachers and their students. Topics of study: Problems in describing vocal sounds Describing vocal sound is an inexact science largely because the human voice is a self-contained instrument. Since the vocal instrument is internal, the singer's ability to monitor the sound produced is complicated by the vibrations carried to the ear through the Eustachian (auditory) tube and the bony structures of the head and neck. In other words, most singers hear something different in their ears/head than what a person listening to them hears. As a result, voice teachers often focus less on how it "sounds" and more on how it "feels". Vibratory sensations resulting from the closely related processes of phonation and resonation, and kinesthetic ones arising from muscle tension, movement, body position, and weight, serve as a guide to the singer on correct vocal production. Topics of study: Another problem in describing vocal sound lies in the vocal vocabulary itself. There are many schools of thought within vocal pedagogy, and different schools have adopted different terms, sometimes from other artistic disciplines. This has led to the use of a plethora of descriptive terms applied to the voice which are not always understood to mean the same thing. Some terms used to describe a quality of a voice's sound are: warm, white, dark, light, round, reedy, spread, focused, covered, swallowed, forward, ringing, hooty, bleaty, plummy, mellow, pear-shaped, and so forth. Topics of study: Body alignment The singing process functions best when certain physical conditions of the body exist. The ability to move air in and out of the body freely and to obtain the needed quantity of air can be seriously affected by the body alignment of the various parts of the breathing mechanism. A sunken chest position will limit the capacity of the lungs, and a tense abdominal wall will inhibit the downward travel of the diaphragm. Good body alignment allows the breathing mechanism to fulfill its basic function efficiently without any undue expenditure of energy. Good body alignment also makes it easier to initiate phonation and to tune the resonators, as proper alignment prevents unnecessary tension in the body. Voice instructors have also noted that when singers assume good body alignment, it often provides them with a greater sense of self-assurance and poise while performing. Audiences also tend to respond better to singers with good body alignment. Habitual good body alignment also ultimately improves the overall health of the body by enabling better blood circulation and preventing fatigue and stress on the body. Topics of study: Breathing and breath support All singing begins with breath. All vocal sounds are created by vibrations in the larynx caused by air from the lungs. Breathing in everyday life is a subconscious bodily function which occurs naturally; however, the singer must have control of the intake and exhalation of breath to achieve maximum results from their voice. 
Topics of study: Natural breathing has three stages: a breathing-in period, a breathing-out period, and a resting or recovery period; these stages are not usually consciously controlled. Within singing there are four stages of breathing: a breathing-in period (inhalation); a setting-up-controls period (suspension); a controlled-exhalation period (phonation); and a recovery period. These stages must be under conscious control by the singer until they become conditioned reflexes. Many singers abandon conscious controls before their reflexes are fully conditioned, which ultimately leads to chronic vocal problems. Topics of study: Voice classification In European classical music and opera, voices are treated like musical instruments. Composers who write vocal music must have an understanding of the skills, talents, and vocal properties of singers. Voice classification is the process by which human singing voices are evaluated and are thereby designated into voice types. These qualities include but are not limited to: vocal range, vocal weight, vocal tessitura, vocal timbre, and vocal transition points such as breaks and lifts within the voice. Other considerations are physical characteristics, speech level, scientific testing, and vocal registration. The science behind voice classification developed within European classical music and has been slow in adapting to more modern forms of singing. Voice classification is often used within opera to associate possible roles with potential voices. There are currently several different systems in use within classical music, including the German Fach system and the choral music system, among others. No system is universally applied or accepted. However, most classical music systems acknowledge seven different major voice categories. Women are typically divided into three groups: soprano, mezzo-soprano, and contralto. Men are usually divided into four groups: countertenor, tenor, baritone, and bass. When considering children's voices, an eighth term, treble, can be applied. Within each of these major categories there are several sub-categories that identify specific vocal qualities like coloratura facility and vocal weight to differentiate between voices. Within choral music, singers' voices are divided solely on the basis of vocal range. Choral music most commonly divides vocal parts into high and low voices within each sex (SATB). As a result, the typical choral situation affords many opportunities for misclassification to occur. Since most people have medium voices, they must be assigned to a part that is either too high or too low for them; the mezzo-soprano must sing soprano or alto and the baritone must sing tenor or bass. Either option can present problems for the singer, but for most singers there are fewer dangers in singing too low than in singing too high. Within contemporary forms of music (sometimes referred to as Contemporary Commercial Music), singers are classified by the style of music they sing, such as jazz, pop, blues, soul, country, folk, and rock styles. There is currently no authoritative voice classification system within non-classical music. Attempts have been made to adapt classical voice type terms to other forms of singing, but such attempts have been met with controversy. Voice categorizations were developed with the understanding that the singer would be using classical vocal technique within a specified range using unamplified (no microphones) vocal production. 
Since contemporary musicians use different vocal techniques and microphones, and are not forced to fit into a specific vocal role, applying such terms as soprano, tenor, baritone, etc. can be misleading or even inaccurate. Topics of study: Dangers of quick identification Many voice teachers warn of the dangers of quick identification. Premature concern with classification can result in misclassification, with all its attendant dangers. Vennard says: "I never feel any urgency about classifying a beginning student. So many premature diagnoses have been proved wrong, and it can be harmful to the student and embarrassing to the teacher to keep striving for an ill-chosen goal. It is best to begin in the middle part of the voice and work upward and downward until the voice classifies itself." Most voice teachers believe that it is essential to establish good vocal habits within a limited and comfortable range before attempting to classify the voice. When techniques of posture, breathing, phonation, resonation, and articulation have become established in this comfortable area, the true quality of the voice will emerge, and the upper and lower limits of the range can be explored safely. Only then can a tentative classification be arrived at, and it may be adjusted as the voice continues to develop. Many acclaimed voice instructors suggest that teachers begin by assuming that a voice is of a medium classification until it proves otherwise. The reason for this is that the majority of individuals possess medium voices, and therefore this approach is less likely to misclassify or damage the voice. Topics of study: Vocal registration Vocal registration refers to the system of vocal registers within the human voice. A register in the human voice is a particular series of tones, produced in the same vibratory pattern of the vocal folds, and possessing the same quality. Registers originate in laryngeal function. They occur because the vocal folds are capable of producing several different vibratory patterns. Each of these vibratory patterns appears within a particular range of pitches and produces certain characteristic sounds. The term register can be somewhat confusing, as it encompasses several aspects of the human voice. The term register can be used to refer to any of the following: a particular part of the vocal range, such as the upper, middle, or lower registers; a resonance area, such as chest voice or head voice; a phonatory process; a certain vocal timbre; a region of the voice which is defined or delimited by vocal breaks; or a subset of a language used for a particular purpose or in a particular social setting. In linguistics, a register language is a language which combines tone and vowel phonation into a single phonological system. Topics of study: Within speech pathology the term vocal register has three constituent elements: a certain vibratory pattern of the vocal folds, a certain series of pitches, and a certain type of sound. Speech pathologists identify four vocal registers based on the physiology of laryngeal function: the vocal fry register, the modal register, the falsetto register, and the whistle register. This view is also adopted by many teachers of singing. Some voice teachers, however, organize registers differently. There are over a dozen different constructs of vocal registers in use within the field. 
The confusion which exists concerning what a register is, and how many registers there are, is due in part to what takes place in the modal register when a person sings from the lowest pitches of that register to the highest pitches. The frequency of vibration of the vocal folds is determined by their length, tension, and mass. As pitch rises, the vocal folds are lengthened, tension increases, and their thickness decreases. In other words, all three of these factors are in a state of flux in the transition from the lowest to the highest tones. If a singer holds any of these factors constant and interferes with their progressive state of change, their laryngeal function tends to become static, and eventually breaks occur with obvious changes of tone quality. These breaks are often identified as register boundaries or as transition areas between registers. The distinct change or break between registers is called a passaggio or a ponticello. Vocal instructors teach that with study a singer can move from one register to the other with ease and a consistent tone. Registers can even overlap while singing. Teachers who like to use this theory of "blending registers" usually help students through the "passage" from one register to another by hiding their "lift" (where the voice changes). Topics of study: However, many voice instructors disagree with this distinction of boundaries, blaming such breaks on vocal problems which have been created by a static laryngeal adjustment that does not permit the necessary changes to take place. This difference of opinion has affected the different views on vocal registration. Topics of study: Coordination Singing is an integrated and coordinated act, and it is difficult to discuss any of the individual technical areas and processes without relating them to the others. For example, phonation only comes into perspective when it is connected with respiration; the articulators affect resonance; the resonators affect the vocal folds; the vocal folds affect breath control; and so forth. Vocal problems are often a result of a breakdown in one part of this coordinated process, which causes voice teachers to focus intensively on one area of the process with their student until that issue is resolved. However, some areas of the art of singing are so much the result of coordinated functions that it is hard to discuss them under a traditional heading like phonation, resonation, articulation, or respiration. Topics of study: Once the voice student has become aware of the physical processes that make up the act of singing and of how those processes function, the student begins the task of trying to coordinate them. Inevitably, students and teachers will become more concerned with one area of the technique than another. The various processes may progress at different rates, with a resulting imbalance or lack of coordination. The areas of vocal technique which seem to depend most strongly on the student's ability to coordinate various functions are: extending the vocal range to its maximum potential; developing consistent vocal production with a consistent tone quality; developing flexibility and agility; and achieving a balanced vibrato. Developing the singing voice: Some consider that singing is not a natural process but is a skill that requires highly developed muscle reflexes, while others consider that some ways of singing can be considered natural. Singing does not require much muscle strength, but it does require a high degree of muscle coordination. 
Individuals can develop their voices further through the careful and systematic practice of both songs and vocal exercises. Voice teachers instruct their students to exercise their voices in an intelligent manner. Singers should be thinking constantly about the kind of sound they are making and the kind of sensations they are feeling while they are singing. Topics of study: Exercising the singing voice There are several purposes for vocal exercises, including: warming up the voice; extending the vocal range; "lining up" the voice horizontally and vertically; acquiring vocal techniques such as legato, staccato, control of dynamics, rapid figurations, and comfortably singing wide intervals; and correcting vocal faults. Topics of study: Extending the vocal range An important goal of vocal development is to learn to sing to the natural limits of one's vocal range without any undesired changes of quality or technique. Voice instructors teach that a singer can only achieve this goal when all of the physical processes involved in singing (such as laryngeal action, breath support, resonance adjustment, and articulatory movement) are effectively working together. Most voice teachers believe that the first step in coordinating these processes is establishing good vocal habits in the most comfortable tessitura of the voice before slowly expanding the range beyond it. There are three factors which significantly affect the ability to sing higher or lower: The Energy Factor – In this usage the word energy has several connotations. It refers to the total response of the body to the making of sound. It refers to a dynamic relationship between the breathing-in muscles and the breathing-out muscles, known as the breath support mechanism. It also refers to the amount of breath pressure delivered to the vocal folds and their resistance to that pressure, and it refers to the dynamic level of the sound. Topics of study: The Space Factor – Space refers to the amount of space created by the moving of the mouth and the position of the palate and larynx. Generally speaking, a singer's mouth should be opened wider the higher they sing. The internal space or position of the soft palate and larynx can be widened by the relaxing of the throat. Voice teachers often describe this as feeling like the "beginning of a yawn". Topics of study: The Depth Factor – In this usage the word depth has two connotations. It refers to the actual physical sensations of depth in the body and vocal mechanism, and it refers to mental concepts of depth as related to tone quality. McKinney says, "These three factors can be expressed in three basic rules: (1) As you sing higher, you must use more energy; as you sing lower, you must use less. (2) As you sing higher, you must use more space; as you sing lower, you must use less. (3) As you sing higher, you must use more depth; as you sing lower, you must use less." General music studies: Some voice teachers will spend time working with their students on general music knowledge and skills, particularly music theory, music history, and musical styles and practices as they relate to the vocal literature being studied. If required, they may also spend time helping their students become better sight readers, often adopting solfège, which assigns certain syllables to the notes of the scale. Topics of study: Performance skills and practices Since singing is a performing art, voice teachers spend some of their time preparing their students for performance. 
This includes teaching their students the etiquette of on-stage behavior, such as bowing, how to manage stage fright, how to address problems like nervous tics, and how to use equipment such as microphones. Some students may also be preparing for careers in the fields of opera or musical theater, where acting skills are required. Many voice instructors will spend time on acting techniques and audience communication with students in these fields of interest. Students of opera also spend a great deal of time with their voice teachers learning foreign language pronunciations.
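As a side note not drawn from the article above, the dependence of vocal-fold vibration frequency on length, tension, and mass mentioned in the registration discussion can be illustrated with the textbook ideal-string approximation. The vocal folds are far from a uniform string, so this is only a qualitative sketch: pitch rises with tension and falls with vibrating length and with mass per unit length.

```latex
% Ideal-string approximation (illustrative only; the vocal folds are not a uniform string).
% f_0 : fundamental frequency
% L   : vibrating length of the folds
% T   : longitudinal tension
% \mu : mass per unit length
f_0 \approx \frac{1}{2L}\sqrt{\frac{T}{\mu}}
```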
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peekawoo** Peekawoo: Peekawoo was a dating application launched in September 2013. It was described as "a dating app made for women that emphasises fun and companionship - and nothing more." The peekawoo.com website appeared to be down as of July 16, 2018, and the app's Facebook page was last updated in February 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Olympus Zuiko Digital 40-150mm f/3.5-4.5** Olympus Zuiko Digital 40-150mm f/3.5-4.5: The Olympus Zuiko Digital 40-150mm F3.5-4.5 is an interchangeable lens for Four Thirds system digital single-lens reflex cameras announced by Olympus Corporation on September 28, 2004.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reductive Lie algebra** Reductive Lie algebra: In mathematics, a Lie algebra is reductive if its adjoint representation is completely reducible, hence the name. More concretely, a Lie algebra is reductive if it is a direct sum of a semisimple Lie algebra and an abelian Lie algebra: g = s ⊕ a; there are alternative characterizations, given below. Examples: The most basic example is the Lie algebra gl_n of n×n matrices with the commutator as Lie bracket, or more abstractly as the endomorphism algebra of an n-dimensional vector space, gl(V). This is the Lie algebra of the general linear group GL(n), and is reductive as it decomposes as gl_n = sl_n ⊕ k, corresponding to traceless matrices and scalar matrices. Any semisimple Lie algebra or abelian Lie algebra is a fortiori reductive. Over the real numbers, compact Lie algebras are reductive. Definitions: A Lie algebra g over a field of characteristic 0 is called reductive if any of the following equivalent conditions are satisfied: The adjoint representation (the action by bracketing) of g is completely reducible (a direct sum of irreducible representations). g admits a faithful, completely reducible, finite-dimensional representation. The radical of g equals the center: r(g) = z(g). The radical always contains the center, but need not equal it. g is the direct sum of a semisimple ideal s_0 and its center z(g): g = s_0 ⊕ z(g). Compare to the Levi decomposition, which decomposes a Lie algebra as its radical (which is solvable, not abelian in general) and a Levi subalgebra (which is semisimple). g is a direct sum of a semisimple Lie algebra s and an abelian Lie algebra a: g = s ⊕ a. g is a direct sum of prime ideals: g = ∑ g_i. Some of these equivalences are easily seen. For example, the center and radical of s ⊕ a is a, while if the radical equals the center the Levi decomposition yields a decomposition g = s_0 ⊕ z(g). Further, simple Lie algebras and the trivial 1-dimensional Lie algebra k are prime ideals. Properties: Reductive Lie algebras are a generalization of semisimple Lie algebras, and share many properties with them: many properties of semisimple Lie algebras depend only on the fact that they are reductive. Notably, the unitarian trick of Hermann Weyl works for reductive Lie algebras. The associated reductive Lie groups are of significant interest: the Langlands program is based on the premise that what is done for one reductive Lie group should be done for all. The intersection of reductive Lie algebras and solvable Lie algebras is exactly abelian Lie algebras (contrast with the intersection of semisimple and solvable Lie algebras being trivial).
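As a compact restatement of the gl_n example above (a sketch in standard notation, not additional material from the entry), every matrix splits into a traceless part and a scalar multiple of the identity, which exhibits the reductive decomposition explicitly:

```latex
% Reductive decomposition of gl_n in characteristic 0:
% traceless matrices (semisimple part) plus scalar matrices (the center).
\mathfrak{gl}_n = \mathfrak{sl}_n \oplus k\,I,
\qquad
X = \underbrace{\left(X - \tfrac{\operatorname{tr}(X)}{n}\, I\right)}_{\text{traceless part in } \mathfrak{sl}_n}
  + \underbrace{\tfrac{\operatorname{tr}(X)}{n}\, I}_{\text{scalar part in the center}}
```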
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anterior vagal trunk** Anterior vagal trunk: The anterior vagal trunk is one of the two divisions (the other being the posterior vagal trunk) into which the vagus nerve splits as it passes through the esophageal hiatus to enter the abdominal cavity. The anterior and posterior vagal trunks represent the inferior continuation of the esophageal nervous plexus inferior to the diaphragm. The majority of nerve fibres in the anterior vagal trunk are derived from the left vagus nerve. The anterior vagal trunk is responsible mainly for providing parasympathetic innervation to the lesser curvature of the stomach, pylorus, gallbladder, and biliary apparatus. Anatomy: Branches include the hepatic branch, which supplies the liver, gallbladder, and biliary apparatus; the celiac branch, which contributes parasympathetic afferents to the celiac plexus; the anterior gastric branches, which supply the stomach; and the anterior and posterior nerves of Latarjet, which innervate the pylorus and proximal duodenum. Clinical significance: The anterior vagal trunk and its branches are at risk of iatrogenic injury during surgeries of the distal oesophagus, stomach, proximal duodenum, gallbladder, and biliary tract.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cisco Catalyst 6500** Cisco Catalyst 6500: The Cisco Catalyst 6500 is a modular chassis network switch manufactured by Cisco Systems from 1999 to 2015, capable of delivering speeds of up to "400 million packets per second". A 6500 comprises a chassis, power supplies, one or two supervisors, line cards, and service modules. A chassis can have 3, 4, 6, 9, or 13 slots each (Catalyst model 6503, 6504, 6506, 6509, or 6513, respectively) with the option of one or two modular power supplies. The supervisor engine provides centralised forwarding information and processing; up to two of these cards can be installed in a chassis to provide active/standby or stateful failover. The line cards provide port connectivity, and service modules allow devices such as firewalls to be integrated within the switch. Supervisor: The 6500 Supervisor comprises a Multilayer Switch Feature Card (MSFC) and a Policy Feature Card (PFC). The MSFC runs all software processes, such as routing protocols. The PFC makes forwarding decisions in hardware. The supervisor has connections to the switching fabric and classic bus, as well as bootflash for the Cisco IOS software. The latest generation supervisor is 'Supervisor 2T'. This supervisor was introduced at Cisco Live Las Vegas in July 2011. It provides 80 gigabits per slot on all slots of 6500-E chassis. Operating systems: The 6500 currently supports three operating systems: CatOS, Native IOS, and Modular IOS. Operating systems: CatOS CatOS is supported for Layer 2 (switching) operations only. To be able to perform routing (Layer 3) operations, the switch must be run in hybrid mode. In this case, CatOS runs on the Switch Processor (SP) portion of the Supervisor, and IOS runs on the Route Processor (RP), also known as the MSFC. To make configuration changes, the user must then manually switch between the two environments. Operating systems: CatOS is missing some functionality and is generally considered 'obsolete' compared to running a switch in Native mode. Operating systems: Native IOS Cisco IOS can be run on both the SP and RP. In this instance, the user is unaware of where a command is being executed on the switch, even though technically two IOS images are loaded—one on each processor. This mode is the default shipping mode for Cisco products and enjoys the support of all new features and line cards. Operating systems: Modular IOS Modular IOS is a version of Cisco IOS that employs a modern UNIX-based kernel to overcome some of the limitations of IOS. It also adds the ability to patch processes without rebooting the device and to perform in-service upgrades. Methods of operation: The 6500 has five major modes of operation: Classic, CEF256, dCEF256, CEF720, and dCEF720. Methods of operation: Classic Bus The 6500 classic architecture provides 32 Gbit/s centralised forwarding performance. The design is such that an incoming packet is first queued on the line card and then placed onto the global data bus (dBus), where it is copied to all other line cards, including the supervisor. The supervisor then looks up the correct egress port, access lists, policing, and any relevant rewrite information on the PFC. The result is placed on the result bus (rBus) and sent to all line cards. Those line cards for which the data is not required terminate processing. The others continue forwarding and apply relevant egress queuing. 
Methods of operation: The speed of the classic bus is 32 Gbit/s half duplex (since it is a shared bus), and it is the only supported way of connecting a Supervisor 32 engine (or Supervisor 1) to a 6500. Methods of operation: CEF256 This method of forwarding was first introduced with the Supervisor 2 engine. When used in combination with a switch fabric module, each line card has an 8 Gbit/s connection to the switch fabric and additionally a connection to the classic bus. In this mode, assuming all line cards have a switch fabric connection, an ingress packet is queued as before and its headers are sent along the dBus to the supervisor. The headers are looked up in the PFC (including ACLs, etc.), and the result is placed on the rBus. The ingress line card takes this information and forwards the data across the switch fabric to the correct egress line card. The main advantage here is that there is a dedicated 8 Gbit/s connection between the line cards. The receiving line card queues the egress packet before sending it from the desired port. Methods of operation: The '256' is derived from a chassis using 2x8 Gbit/s fabric connections on 8 slots of a 6509 chassis: 16 * 8 = 128, and 128 * 2 = 256. The number is doubled because the switch fabric is full duplex. dCEF256 dCEF256 uses distributed forwarding. These line cards have 2x8 Gbit/s connections to the switch fabric and no classic bus connection. Only modules that have a DFC (Distributed Forwarding Card) can use dCEF. Methods of operation: Unlike the previous examples, the line cards hold a full copy of the supervisor's routing tables locally, as well as its L2 adjacency table (i.e. MAC addresses). This eliminates the need for any connection to the classic bus or the requirement to use the shared resource of the supervisor. In this instance, an ingress packet is queued, but its destination is looked up locally. The packet is then sent across the switch fabric and queued in the egress line card before being sent. Methods of operation: CEF720 This mode of operation acts identically to CEF256, except with 2x20 Gbit/s connections to the switch fabric, and there is no need for a switch fabric module (this is now integrated into the supervisor). This was first introduced with the Supervisor Engine 720. Methods of operation: The '720' is derived from a chassis using 2x20 Gbit/s fabric connections on 9 slots of a 6509 chassis: 40 * 9 = 360, and 360 * 2 = 720. The number is doubled because the switch fabric is full duplex. The reason 9 slots are used for the calculation instead of the 8 used for CEF256 is that it no longer needs to waste a slot with the switch fabric module. Methods of operation: dCEF720 This mode of operation acts identically to dCEF256, except with 2x20 Gbit/s connections to the switch fabric. Power supplies: The 6500 is able to deliver high densities of Power over Ethernet across the chassis. Because of this, power supplies are a key element of the configuration. Chassis support The following covers the various 6500 chassis and their supported power supplies and loads. 6503 The original chassis permits up to 2800W and uses rear-inserted power supplies that differ from the others in the series. 6504-E This chassis permits up to 5000W (119A @ 42V) of power and, like the 6503, uses rear-inserted power supplies. Power supplies: 6506, 6509, 6506-E and 6509-E The original chassis can support up to a maximum of 4000W (90A @ 42V) of power, because of backplane limitations. If a power supply above this is inserted, it will deliver at full power up to this limitation (i.e. 
a 6000W power supply is supported in these chassis, but will output a maximum of 4000W). Power supplies: The 6509-NEB-A supports a maximum of 4500W (108A @ 42V). With the introduction of the 6506-E and 6509-E series chassis, the maximum power supported has been increased to over 14500 W (350A @ 42V). 6513 This chassis can support a maximum of 8000W (180A @ 42V). However, to obtain this it must be run in combined mode; it is therefore suggested that it be run in redundant mode, which provides a maximum of 6000W (145A @ 42V). Power redundancy options The 6500 supports dual power supplies for redundancy. These may be run in one of two modes: redundant or combined mode. Power supplies: Redundant mode When running in redundant mode, each power supply provides approximately 50% of its capacity to the chassis. In the event of a failure, the unaffected power supply will then provide 100% of its capacity and an alert will be generated. Because sufficient power was already available, there is no interruption to service in this configuration. This is also the default and recommended way to configure power supplies. Power supplies: Combined mode In combined mode, each power supply provides approximately 83% of its capacity to the chassis. This allows for greater utilisation of the power supplies and potentially increased PoE densities. Power supplies: In systems that are equipped with two power supplies, if one power supply fails and the other power supply cannot fully power all of the installed modules, system power management will shut down devices in the following order: Power over Ethernet (PoE) devices—The system will power down PoE devices in descending order, starting with the highest numbered port on the module in the highest numbered slot. Power supplies: Modules—If additional power savings are needed, the system will power down modules in descending order, starting with the highest numbered slot. Slots containing supervisor engines or Switch Fabric Modules are bypassed and are not powered down. This shutdown order is fixed and cannot be changed. Online Insertion & Removal: OIR is a feature of the 6500 which allows hot swapping of most line cards without first powering down the chassis. The advantage of this is that one may perform an in-service upgrade. However, before attempting this, it is important to understand the process of OIR and how it may still require a reload. Online Insertion & Removal: To prevent bus errors, the chassis has three pins in each slot that correspond with the line card. Upon insertion, the longest of these makes first contact and stalls the bus (to avoid corruption). As the line card is pushed in further, the middle pin makes the data connection. Finally, the shortest pin removes the bus stall and allows the chassis to continue operation. Online Insertion & Removal: However, if any part of this operation is skipped, errors will occur (resulting in a stalled bus and ultimately a chassis reload). Common problems include: line cards being inserted incorrectly (making contact with only the stall and data pins and thus not releasing the bus); line cards being inserted too quickly (so the stall-removal signal is not received); and line cards being inserted too slowly (so the bus is stalled for too long, forcing a reload).
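As a quick back-of-the-envelope check of the '256' and '720' figures derived above, the sketch below (illustrative code, not a Cisco tool; the slot counts and per-slot fabric link speeds are the ones quoted in the text) reproduces the arithmetic: per-slot fabric capacity summed over usable slots, then doubled for the full-duplex fabric.

```python
# Reproduces the CEF256 / CEF720 fabric-bandwidth arithmetic quoted above.
# Function and variable names are illustrative only.

def fabric_bandwidth_gbps(slots: int, links_per_slot: int, gbps_per_link: int) -> int:
    """Aggregate fabric bandwidth: per-slot capacity times slots, doubled
    because the switch fabric is full duplex."""
    per_slot = links_per_slot * gbps_per_link
    return per_slot * slots * 2

if __name__ == "__main__":
    # CEF256: 2 x 8 Gbit/s per slot on 8 slots of a 6509
    # (one slot is consumed by the switch fabric module).
    print(fabric_bandwidth_gbps(slots=8, links_per_slot=2, gbps_per_link=8))   # 256
    # CEF720: 2 x 20 Gbit/s per slot on all 9 slots of a 6509
    # (the fabric is integrated into the supervisor).
    print(fabric_bandwidth_gbps(slots=9, links_per_slot=2, gbps_per_link=20))  # 720
```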
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DIN 41612** DIN 41612: DIN 41612 was a DIN standard for electrical connectors that are widely used in rack based electrical systems. Standardisation of the connectors is a pre-requisite for open systems, where users expect components from different suppliers to operate together. The most widely known use of DIN 41612 connectors is in the VMEbus and NuBus systems. The standard has been withdrawn in favor of international standards IEC 60603-2 and EN 60603-2. DIN 41612: DIN 41612 connectors are used in Pancon, STEbus, Futurebus, VMEbus, Multibus II, NuBus, VXI Bus, Eurocard TRAM motherboards, and Europe Card Bus, all of which typically use male DIN 41612 connectors on Eurocards plugged into female DIN 41612 connectors on the backplane in a 19-inch rack chassis. Mechanical details: The standard describes connectors which may have one, two or three rows of contacts, which are labelled as rows a, b and c. Two row connectors may use rows a+b or rows a+c. The connectors may have 16 or 32 columns, which means that the possible permutations allow 16, 32, 48, 64 or 96 contacts. The rows and columns are on a 0.1 inch (2.54 mm) grid pitch. Insertion and removal force are controlled, and three durability grades are available. Mechanical details: Often the female DIN 41612 connectors have press fit contacts rather than solder pin contacts, to avoid thermal shock to the backplane. Electrical details: The headline performance of the connectors is a 2 amp per pin current-carrying capacity and a 500 volt working voltage. Both these figures may need to be de-rated according to safety requirements or environmental conditions. Performance Classes: The DIN 41612 specification identifies three different classes or "levels"; it is more complicated than this, but essentially class 1 is good for 500 mating cycles, class 2 for 400 mating cycles, and class 3 for 50 mating cycles.
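The contact counts listed above follow directly from the row and column options. The short sketch below (illustrative only; names are not from the standard) enumerates the combinations of one, two, or three rows with 16 or 32 columns and recovers the 16, 32, 48, 64, and 96 contact variants.

```python
# Enumerate DIN 41612 contact counts from the row/column options described above.
from itertools import product

def din41612_contact_counts() -> set[int]:
    """Possible contact counts for 1-3 rows (a, b, c) and 16 or 32 columns."""
    rows = (1, 2, 3)
    columns = (16, 32)
    return {r * c for r, c in product(rows, columns)}

if __name__ == "__main__":
    print(sorted(din41612_contact_counts()))  # [16, 32, 48, 64, 96]
```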
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fosdagrocorat** Fosdagrocorat: Fosdagrocorat (developmental code names PF-04171327 and PF-4171327; also known as dagrocorat 2-(dihydrogen phosphate)) is a nonsteroidal but steroid-like selective glucocorticoid receptor modulator (SGRM) which was under development for the treatment of rheumatoid arthritis but was never marketed. It is the C2 dihydrogen phosphate ester of dagrocorat, and acts as a prodrug of dagrocorat with improved pharmacokinetics. The drug reached phase II clinical trials prior to the discontinuation of its development.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Single-wavelength anomalous dispersion** Single-wavelength anomalous dispersion: Single-wavelength anomalous diffraction (SAD) is a technique used in X-ray crystallography that facilitates the determination of the structure of proteins or other biological macromolecules by allowing the solution of the phase problem. In contrast to multi-wavelength anomalous diffraction, SAD uses a single dataset at a single appropriate wavelength. One advantage of the technique is the minimization of time spent in the beam by the crystal, thus reducing potential radiation damage to the molecule while collecting data. SAD is sometimes called "single-wavelength anomalous dispersion", but no dispersive differences are used in this technique since the data are collected at a single wavelength. Today, selenium-SAD is commonly used for experimental phasing due to the development of methods for selenomethionine incorporation into recombinant proteins.
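As background not stated in the entry above (standard crystallographic relations, sketched here for orientation), the anomalous signal that SAD exploits comes from the wavelength-dependent scattering factor of the anomalous scatterer, whose imaginary component breaks Friedel's law and produces measurable Bijvoet differences:

```latex
% Standard background, not from the entry above.
% Anomalous (resonant) scattering factor of the anomalous scatterer:
f(\lambda) = f_0 + f'(\lambda) + i\, f''(\lambda)
% A nonzero f'' breaks Friedel's law, so reflections hkl and -h-k-l differ:
|F(hkl)| \neq |F(\bar{h}\bar{k}\bar{l})|
% The measured Bijvoet (anomalous) differences
\Delta F^{\pm} = |F(hkl)| - |F(\bar{h}\bar{k}\bar{l})|
% are used to locate the anomalous substructure and estimate protein phases.
```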
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded