**Cognitive Surplus** Cognitive Surplus: Cognitive Surplus: How Technology Makes Consumers into Collaborators is a 2010 non-fiction book by Clay Shirky, originally published with the subtitle "Creativity and Generosity in a Connected Age". The book is an indirect sequel to Shirky's Here Comes Everybody, which covered the impact of social media. Cognitive Surplus focuses on describing the free time that individuals have to engage with collaborative activities within new media. Shirky's text seeks to show that global transformation can come from individuals committing their time to active engagement with technology. Overall response has been mixed, with some critics praising Shirky's insights and others decrying the shortcomings of his theory. Background: Clay Shirky has long been interested in, and has published works concerning, the Internet and its impact on society. He currently works at New York University, where he "has been making the case that the Internet is an inherently participatory and social medium". Shirky wrote this book two years after the publication of its predecessor, Here Comes Everybody, which deals with the Internet and the organization of people. In that earlier book, Shirky argues that "As the Internet radically reduces the costs of collective action for everyone, it will transform the relationship between ordinary individuals and the large, hierarchical institutions that were a dominant force in 20th-century societies". This transformation of relationships between individuals is a concept Shirky builds on in Cognitive Surplus. A central concern Shirky had in mind when writing it was illuminating the difference between communal and civic values, and how the Internet is a vehicle for both. In particular, he was interested in showing "effusions of people pooling their spare time and talent" and showing how we can create a culture "that celebrates the creation of civic value". Shirky has stated that he is interested in exploring "the changes in the way people collaborate" that are spurred on by technology and new media, and these changes are a large part of what Cognitive Surplus is devoted to examining. Topics that Shirky frequently writes about include network economics, media and community, Internet globalization, and open-source software. He has also been featured in many magazines and journals, including The New York Times, The Wall Street Journal, and the Harvard Business Review. Summary: Shirky argues that since the 1940s, people have been learning to use free time more constructively for creative acts rather than consumptive ones, particularly with the advent of online tools that allow new forms of collaboration. While Shirky acknowledges that the activities we use our cognitive surplus for may be frivolous (such as creating LOLcats), the trend as a whole is leading to valuable and influential new forms of human expression. These forms of human collaboration that he argues the Internet provides take the form of four categories of varying degrees of value: personal, communal, public, and civic. Shirky argues that while all of these are legitimate uses of "cognitive surplus", the civic value (the power to actually change society) that social media provides is what should be celebrated about the Internet. Summary: Chapter summaries Gin, Television, and Cognitive Surplus Shirky introduces the Gin Craze as an older version of a modern-day concern. Gin offered its consumers the ability to fall apart a little bit at a time. 
He relates this to how society today deals with free time in relation to technology. Since the Second World War, increases in GDP, educational attainment, and life span have forced the industrialized world to grapple with something it had never had to deal with on a national scale: free time. He claims that the sitcom, along with other technology and social media, has become the modern-day Gin Craze. Something that makes today remarkable is that we can treat free time as a general social asset that can be harnessed for large, communally created projects. Society never knows what to do with a surplus at first (hence "surplus"). The "Milkshake Mistake" is the idea that it is a mistake to judge the meaning or potential of something by looking at the history and original purpose of a product (such as drinking milkshakes for breakfast). Shirky says we do this when thinking about media. The use of technology is much less determined by the tool or medium itself; when we use a network, the most important asset we get is access to one another. Ushahidi was one of the first examples of a program that channeled this surplus of free time toward a greater good (cognitive surplus). It was a service developed to help citizens track outbreaks of ethnic violence in Kenya. It grew bigger and better at reporting over a wide geographical area as a result of cognitive surplus. It brought people together to carve out enough collective good-will from the community to create resources that no one could have imagined years ago. Along with acts of good-will, people with a surplus of free time also engage in creativity (ICanHasCheezburger). Cognitive surplus, newly forged from previously disconnected islands of time and talent, is just raw material. To get any value out of it, Shirky says, we have to make it mean something or be useful. We collectively aren't just a source of the surplus; we are also the people designing its use by our participation and by the things we expect of one another as we wrestle together with our new connectedness. Summary: Means In the second chapter, Shirky discusses the impact of media on the personal lives of civilians, companies, and governments. Throughout the text, Shirky references the 2008 beef protests in South Korea and how they came about, noting that they were led in large part by K-pop fan girls. Shirky also references PickupPal and the tension between it and the bus company Trentway-Wagar, which complained about PickupPal to the Ontario Highway Transport Board (OHTB). Shirky discusses the publishing button and whether or not it should exist now that anyone can publish. He references Naomi Wolf's The Beauty Myth and Harvey Swados and uses them as part of his argument. He also brings up ICanHasCheezburger, YouTube, and AOL, all of which are platforms that anyone can use. Shirky compares all of this to the fifteenth-century invention of the printing press and also lists other forms of life-changing media, including photographic plates, CDs, radio, and television. Summary: Motive In the third chapter, Shirky explains that the means are platforms, tools, or systems we use that allow us to connect, learn, and share. The combination of time, place, and people which enables us to share and take action is the opportunity. Shirky discusses the types of motivations that a person who shares would consider. Intrinsic motivations, Shirky summarizes, include the need for 1) increased competence, 2) autonomy over what we do, 3) membership of a group who share our values and beliefs, and 4) the sharing of things with that group. 
Extrinsic motivations, by contrast, include reward and recognition, or punishment for certain behaviors. These motivations can also be classified into personal and social motivations. Social motivations include membership and sharing, while personal motivations include competence and autonomy. With evidence from Benkler and Nissenbaum, Shirky concludes that social motivations reinforce personal ones. With the tools of today, we see many new groups, most of them large, public, and amateur. The goal for these groups is more about scope than size. This public-access media is used to reach audiences that are like the group. Sharers have always had the same interests, or motivations; it is the opportunity that has changed, along with the ability to connect, share, and learn easily. Summary: Opportunity In the fourth chapter, Shirky explores how means and motives alone cannot explain new uses for our cognitive surplus, explaining that we take the opportunities made available to us. He argues that we often discover unconventional routes to benefit society. Shirky cites examples such as the elderly adopting the Internet as a new means of communication and skateboarders adapting drained swimming pools into skating ramps. The story of the drought-stricken pools illustrates an important point: the intended capabilities of something do not necessarily determine its functions. He goes on to discuss the Ultimatum game, in which a proposer and a responder are given the task of splitting ten dollars. According to the classical rational-actor model, proposers should always propose a split that heavily favors themselves, and the responder should always accept it, because no matter how small their share is, it is still a gain. In practice, however, proposers tend to offer fair deals and responders tend to reject unfair proposals. On some level, we always feel we are in a social situation and will either treat each other fairly or punish those who do not. According to Shirky, this is why the kind of sharing he describes as public is so popular; it is designed to enrich society without any monetary incentive. In this public sphere, digital networks connect people worldwide and give amateurs the opportunity to share their thoughts and work when they normally couldn't. There are no gatekeepers on the Internet; innovation is actively encouraged, and younger generations in a changing environment can find their voice there. Summary: Culture The fifth chapter is broken up into four parts: Culture as a Coordinating Tool, The Economics of Sharing, College Professors and Brain Surgeons, and Patients Like Us. The first section of the chapter discusses an experiment that appeared in a paper published in the Journal of Legal Studies, written by Uri Gneezy and Aldo Rustichini. The experiment demonstrates how an attempt to regulate group collaboration can backfire: when monetary values are ascribed to people's time, a community begins to view people who provide services as the service itself rather than as individuals. This underscores Shirky's point that a group's collaborative culture is essential to its prosperity and functionality. Another example he uses is The Invisible College, an instance of collaboration that resulted in monumental scientific advancements because of a sense of shared purpose. Unlike the alchemists of their time, The Invisible College shared information with each other in order to further the field rather than claim individual advancements as their own. 
Later, Shirky emphasizes the sense of belonging that is an integral part of this group culture. In his brain surgeon analogy he discusses the value of a professional versus an amateur: while in the case of the brain surgeon people prefer the professional, this is not always so; in cases such as food criticism, people tend to prefer an outlet where ordinary people give their opinions. He likens this to the distinction between the services offered by a prostitute (a professional at their craft) and the intimacy between partners, demonstrating that a sense of belonging is often held in higher regard than skill. This sense of belonging opens up a new discussion with the example of patientslikeme.com, on which patients with similar diseases can openly discuss their ailments and feel comfort in knowing that they are not the only ones experiencing them. Furthermore, it allows researchers to collaborate with patients, demonstrating that social media platforms can be used to enhance this sense of belonging in culture and can also produce real civic value. Summary: Personal, Communal, Public, Civic In the sixth chapter, Shirky outlines the variations in the forms of sharing and the types of value that result. He argues that there are four main values: personal, communal, public, and civic. Personal value deals with the efforts of singular agents sharing ideas on a whim. Communal value is sharing in a small group that serves the interests of the group members collaborating. Public value deals with groups that share in order to produce projects that serve people outside of the group. Civic value is when groups collaborate on a project that serves to benefit society at large. The point is that value is determined by who is involved, who the intended beneficiaries of their efforts are, and the level of cooperation upheld within the group, and that cognitive surplus is ultimately the driving factor behind such efforts. The impact of these efforts ranges from something as minuscule (personal) as sharing a selfie with the world to hashtags created to garner large followings and support for issues that can change the world. Summary: Looking for the Mouse In the seventh and final chapter, Shirky works towards his conclusion about how cognitive surplus is a resource and how society may utilize it. He references the post-World War II era and the transformation that society underwent. Shirky notes from Steven Weber's book The Success of Open Source that neither lower costs nor technical quality adequately explains why someone would collaborate on an open-source project. Shirky states that there is a paradox in revolution: it is the result of someone being able to change the future of a previously existing society. He also references other media that revolutionized human conversation, stating that PLATO, a computer system that existed in the 1960s, was the ancestor of today's electronic devices. He also tells his readers that SixDegrees was the first social networking website, not Facebook or Friendster as many had previously thought. He makes several recommendations to his readers, among them: recognize that it is impractical to start complex, so start small; ask questions; and note that behavior is in pursuit of opportunity. Critical reception: The negative criticisms largely address the issue of negative uses of cognitive surplus. 
For example, Shirky discusses LOLcats in the book, but this is a relatively innocuous example of negative or trite uses of cognitive surplus, especially considering the reality of cybercrime and other far more drastically negative uses. The main criticism of Shirky is that he is not realistic about the many possible ways we might waste this cognitive surplus, or worse, the many ways it can be and is being used for destructive and criminal activities, for example by the global jihadist movement. On the positive side, Shirky is praised for explaining the potential opportunities we can harness. He shows effectively that we can not only make better use of our time, but also that technology enables us to do so in a way that maximizes our ability to share and communicate. Critical reception: Positive One critic, Russell Davies, writes, "There are revealing thoughts in every chapter and they're particularly important for people trying to do business on the internet, because they shed light on some fundamental motivations and forces that we often miss or misconstrue". Sorin Adam Matei of Purdue University, West Lafayette, writes, "Despite shortcomings, Cognitive Surplus remains overall a very well-written and generally well-informed contribution to our discussion about the social effects of social media. The academic research that shapes some of its assumptions and conclusions is well translated in everyday language." Davies describes Shirky as "the best and most helpful writer about the internet and society there is." He praised the book for elucidating the power of new technology for business. Davies says Shirky elucidated the personal/public media distinction: "That explains a lot to me. It's obvious when you read it but failing to grasp the fused state of public/personal media is responsible for a lot of the things we get wrong online. We often take it to be a commercial, public media space (and we always seem to be looking for another small group of professionals out there to deal with)—but it's not just that. Things that are perfectly appropriate in public media just don't work in personal media. You wouldn't steam open people's letters and insert magazine ads, but that's sometimes how we seem to behave." Upon its release, Cognitive Surplus was praised by Tom Chatfield of The Guardian and James Harkin of the Financial Times, both of whom are complimentary of Shirky's depiction of the Internet and its effect on society. Critical reception: Negative His approach has been criticized by Farhad Manjoo in The New York Times for being too academic and for cheerleading positive examples of the online use of cognitive surplus. Similarly, the critic Lehmann describes it as "the latest, monotonous installment in the sturdy tradition of exuberant web yay-saying." On the contradictions in Shirky's argument about quality being democratized, Lehmann writes that hailing "a cascade of unrefereed digital content as a breakthrough in creativity and critical thought is roughly akin to greeting news of a massive national egg recall by laying off the country's food inspectors." Moreover, he objects to Shirky's selective use of anecdotes to support his point. He also finds it disorienting and obscene to suggest that the web is heralding a new economy and abolishing class in the midst of financial distress and joblessness. He also questions Shirky's assumption that free time was squandered prior to the web and suggests instead that people did useful things with their time. 
Furthermore, he questions the intrinsic value of time spent online, as much of it may be devoted to things like gambling and pornography. There is nothing innately compassionate or generous about the web; for any good thing people do online, someone else could be doing something bad with the Internet. Lehmann also suggests that a cognitive surplus raises a question about what the baseline value of time spent was to begin with, "one", he claims, "that might be better phrased as either 'Surplus for what?' or 'Whose surplus, white man?'" In the same vein, Lehmann accuses Shirky of being myopic: Shirky treats LOLcats as the worst thing on the web, when in fact there are worse things, such as fake Obama birth certificates. Shirky says you cannot communicate with society on the basis of a web search, to which Lehmann responds, "The idea of society as a terminally unresponsive, nonconversant entity would certainly be news to the generations of labor and gender-equality advocates who persistently engaged the social order with demands for the ballot and the eight-hour workday. It would likewise ring strangely in the ears of the leaders of the civil rights movement, who used a concerted strategy of nonviolent protest as a means of addressing an abundance-obsessed white American public who couldn't find the time to regard racial inequality as a pressing social concern. The explicit content of such protests, meanwhile, indicted that same white American public on the basis of the civic and political standards—or rather double standards—of equality and opportunity that fueled the nation's chauvinist self-regard." Critical reception: Shirky bases many of his conclusions about generosity on the Ultimatum game experiment, to which Lehmann objects: "The utility of the Ultimatum Game for a new market-enabled theory of human nature thins out considerably when one realizes that the players are bartering with unearned money." He adds, "Consult virtually any news story following up on a lottery winner's post-windfall life—to say nothing of the well-chronicled implosion of the past decade's market in mortgage-backed securities—and you'll get a quick education in how playing games with other people's money can have a deranging effect on human behavior." Lehmann also criticizes Shirky's expectation that the web will change economics and governmental systems. For example, he criticizes Shirky's idealising of amateurism: As for crowdsourcing being a "labor of love" (Shirky primly reminds us that the term "amateur" "derives from the Latin amare—'to love'"), the governing metaphor here wouldn't seem to be digital sharecropping so much as the digital plantation. For all too transparent reasons of guilt sublimation, patrician apologists for antebellum slavery also insisted that their uncompensated workers loved their work, and likewise embraced their overseers as virtual family members. This is not, I should caution, to brand Shirky as a latter-day apologist for slavery but rather to note that it's an exceptionally arrogant tic of privilege to tell one's economic inferiors, online or off, what they do and do not love, and what the extra-material wellsprings of their motivation are supposed to be. To use an old-fashioned Enlightenment construct, it's at minimum an intrusion into a digital contributor's private life—even in the barrier-breaking world of Web 2.0 oversharing and friending. 
The just and proper rejoinder to any propagandist urging the virtues of uncompensated labor from an empyrean somewhere far above mere "society" is, "You try it, pal." The idea of crowdsourcing as a more egalitarian economic tool also draws the criticism that crowdsourcing is simply cost-cutting, much akin to outsourcing. The possibility of the web fundamentally changing government is also questioned; Lehmann writes that Cognitive Surplus "is already aging badly, with the WikiLeaks furor showing just how little web-based traffic in raw information, no matter how revelatory or embarrassing, has upended the lumbering agendas of the old nation-state on the global chessboard of realpolitik—a place where everything has a price, often measured in human lives. More than that, though, Shirky's book inadvertently reminds us of the lesson we should have absorbed more fully with the 2000 collapse of the high-tech market: the utopian enthusiasms of our country's cyber-elite exemplify not merely what the historian E.P. Thompson called 'the enormous condescension of posterity' but also a dangerous species of economic and civic illiteracy." Critical reception: Another criticism holds that the Western Cold War attitude has spawned a delusion about the power of information spreading to topple authoritarian regimes, and that this will not be the case in Eastern countries. Paul Barrett takes a similar though softer stance, claiming all of Shirky's examples are relatively tame and mildly progressive. Moreover, Barrett argues, Shirky presents everything as civic change when some things, such as carpooling services, really stretch the term. According to Matei, "A broader conclusion of the book is that converting 'cognitive surplus' into social capital and collective action is the product of technologies fueled by the passion of affirming the individual need for autonomy and competence." In Matei's view, Shirky's enthusiasm for social media and the Internet at times produces overdrawn statements. Author Jonah Lehrer criticized what he saw as Shirky's premise that forms of consumption, cultural consumption in particular, are inherently less worthy than producing and sharing.
**Image sharing** Image sharing: Image sharing, or photo sharing, is the publishing or transfer of digital photos online. Image sharing websites offer services such as uploading, hosting, managing and sharing of photos (publicly or privately). This function is provided through both websites and applications that facilitate the upload and display of images. The term can also be loosely applied to the use of online photo galleries that are set up and managed by individual users, including photoblogs. Sharing means that other users can view but not necessarily download images, and users can select different copyright options for their images. Image sharing: While photoblogs tend only to display a chronological view of user-selected medium-sized photos, most photo sharing sites provide multiple views (such as thumbnails and slideshows), the ability to classify photos into albums, and the ability to add annotations (such as captions or tags). Desktop photo management applications may include their own photo-sharing features or integration with sites for uploading images to them. There are also desktop applications whose sole function is sharing images, generally using peer-to-peer networking. Basic image sharing functionality can be found in applications that allow users to email photos, for example by dragging and dropping them into pre-designed templates. Photo sharing is not confined to the web and personal computers, but is also possible from portable devices such as camera phones, either directly or via MMS. Some cameras now come equipped with wireless networking and similar sharing functionality themselves. History: The first photo sharing sites originated during the mid to late 1990s, primarily from services providing online ordering of prints (photo finishing), but many more came into being during the early 2000s with the goal of providing permanent and centralized access to a user's photos, and in some cases video clips too. Webshots, SmugMug, Yahoo! Photos and Flickr were among the first. This has resulted in different approaches to revenue generation and functionality among providers. The first Windows application was invented by Jerry Ackerman at ICU Software in 1990, followed by a patent filed at Kodak by Philip Morris in 1994. Revenue models: Image sharing sites can be broadly broken up into two groups: sites that offer photo sharing for free and sites that charge consumers directly to host and share images. Of the sites that offer free photo sharing, most can be broken up into advertising-supported media plays and online photo finishing sites, where photo sharing is a vehicle to sell prints or other merchandise. These designations are not strict, and some subscription sites have a limited free version. Consumers can share their photos directly from their home computers over high speed connections through peer-to-peer photo sharing using applications. Peer-to-peer photo sharing often carries a small one-time cost for the software. Some sites allow users to post pictures online to be projected onto famous buildings during special events, while other sites let users insert photos into digital postcards, slide shows and photo albums and send them to others. Revenue models: Some free sites are owned by camera manufacturers, and only accept photos made with their hardware. Revenue models: Subscription-based In return for a fee, subscription-based photo sharing sites offer their services without the distraction of advertisements or promotions for prints and gifts. 
They may also have other enhancements over free services, such as guarantees regarding the online availability of photos, more storage space, the ability for non-account holders to download full-size, original versions of photos, and tools for backing up photos. Some offer user photographs for sale, splitting the proceeds with the photographer, while others may use a disclaimer to reserve the right to use or sell the photos without giving the photographer royalties or notice. Revenue models: Some image sharing sites have begun integrating video sharing as well. Sharing methods: Peer-to-peer With the introduction of high speed (broadband) connections directly to homes, it is feasible to share images and videos without going through a central service. The advantages of peer-to-peer sharing are reduced hosting costs and no loss of control to a central service. The downsides are that the consumer does not get the benefit of off-site backup, that consumer Internet service providers (ISPs) often prohibit the serving of content both by contract and through the implementation of network filtering, and that there are few quality guarantees for recipients. However, there are typically no direct consumer costs beyond the purchase of the initial software, provided the consumer already has a computer with the photos at home on a high speed connection. Applications like Tonido provide peer-to-peer photo sharing. Sharing methods: Peer-to-server Operating peer-to-peer solutions without a central server can create problems as some users do not leave their computers online and connected all the time. Using an always-on server like Windows Home Server, which acts as an intermediate point, it is possible to share images peer-to-peer with the reliability and security of a central server. Images are securely stored behind a firewall on the Windows Home Server and can be accessed only by those with appropriate permissions. Sharing methods: Peer-to-browser A variation on the peer-to-peer model is peer-to-browser, whereby images are shared on one PC with the use of a local (on the host computer) software service (much like peer-to-peer) but made available to the viewer through a standard web browser. Technically speaking, this may still be described as peer-to-peer (with the second peer being a web browser), but it is characteristically different as it assumes no need to download peer software for the viewer. Photos are accessed by regular URLs that standard web browsers understand natively, without any further software required. Consequently, photos shared in this way are accessible not only to users who have downloaded the correct peer software (compatible with the software in use by the sharer), but to anyone with a standard web browser. Peer-to-browser sharing has (similar to peer-to-peer) reduced hosting costs, no loss of control to a central service, and no waiting for files to upload to the central service. Furthermore, universal web browser access to shared files makes them more widely accessible and available for use in different ways, such as embedding in, or linking to, from within web pages. As with peer-to-peer, the downsides are lack of off-site backup, possible inhibition by some ISPs, and limitations in speed of serving. Sharing methods: Social networks With the emergence of social networks, image sharing has become a common online activity. For example, in Great Britain, 70% of online users engaged in image sharing in 2013; 64% of British users shared their photos through a social network. 
Facebook stated in 2015 that approximately two billion images were uploaded to its service daily. In terms of image sharing, Facebook is the largest social networking service. On Facebook, people can upload and share their photo albums individually and collaboratively with shared albums. This feature allows multiple users to upload pictures to the same album, and the album's creator has the ability to add or delete contributors. Twitter collaborated with Photobucket in developing a new photo sharing service so users can attach a picture to a tweet without depending on another application such as TwitPic or Yfrog. As of June 2016, there were more than 500 million monthly active Instagram users. Sharing methods: Link aggregation sites Image sharing on social news and image aggregation sites such as Reddit, Imgur, 4chan, Pinterest and Tumblr allows users to share images with a large community of users. Images are the most liked content on the aggregation and media sharing site Reddit, and according to data analyst Randy Olson, as of August 2014 nearly two-thirds of all successful posts on the site were links to an image hosted on Imgur. Sharing methods: Mobile Sharing images via mobile phones has become popular. Several networks and applications have sprung up offering capabilities to share captured photos directly from mobile phones to social networks. The most prominent of these is Instagram, which has quickly become the dominant image-sharing-centric social network with over 500 million members. Other applications and networks offering similar services and growing in popularity include Streamzoo, Path, PicsArt, Piictu, and Starmatic. Sharing methods: Apps Instagram, Snapchat and, in China, Nice (mobile app) are photo sharing apps with millions of users. Technologies: Web photo album generators Software can be found on the Internet to generate digital photo albums, usually to share photos on the web, using a home web server. In general, this is for advanced users who want better control over the appearance of their web albums and the actual servers they are going to run on. Technologies: Image classification Image sharing sites usually propose several ways to classify images. Most sites propose at least a taxonomy where images can be grouped within a directory-like structure in so-called "galleries". Some sites also allow users to classify images using tags to build a folksonomy. Depending on the restrictions on the set of users allowed to tag a single document and the set of tags available to describe the document, one speaks of narrow and broad folksonomies. A folksonomy is broad when there is no restriction on the set of taggers and available tags. When there are limitations, the folksonomy is called narrow. Another mechanism is coupling taxonomy and folksonomy, where tags associated with galleries and artists are cascaded to the galleries' and artists' pictures. Broad folksonomies have interesting properties like the power law. The use of artificial intelligence to classify uploaded photos by subject, theme, or location is a prominent feature of Raise, a service that Canon USA launched in early March 2019. Technologies: Photo tagging Photo tagging is the process that allows users to tag and group photos of an individual or individuals. With facial recognition software, tagging photos can become quicker and easier; the more an individual is tagged, the more accurate the software can become. This type of software is currently in use on Facebook. 
Photo tagging is a way of labeling photos so that viewers can know who is who in the picture. On most online photo sharing sites, such as Facebook, a tag can also be used as a link that, when clicked, takes the viewer to the profile of the person who was tagged. Most of the time, photos can only be tagged by the user who uploads the photo, but on some sites photos can be tagged by other users as well. These tags can be searched for across the entire Internet, on separate websites or in private databases. They can be used for crowdsourced classification (see the section on image classification) but can also play a socio-cultural role in that they can establish neologisms, Internet memes, snowclones, slogans, catch phrases, shared vocabularies and categorizations as well as producing comedic twists, contexts and perspectives of the presented images, and hence often play a significant role in the community building, identity formation, and entertainment of online communities that allow the creation of broad folksonomies. Technologies: Geotagging Geotagging a photo is the process in which a photo is marked with the geographical identification of the place it was taken. Most devices with photo-taking capabilities are equipped with GPS sensors that routinely geotag photos and videos. Crowdsourced data available from photo-sharing services have the potential to be used for tracking places. Geotagging can reveal the footprints and behaviors of travelers by utilizing the spatial proximity of geotagged photos that are shared online, making it possible to extract travel information relating to a particular location. Instagram, Flickr, and Panoramio are a few services that provide the option of geotagging images. Flickr has over 40 million geotagged photos uploaded by 400 thousand users, and the number is still growing at a rapid pace. Some sites, including Panoramio and Wikimedia Commons, show their geocoded photographs on a map, helping the user find pictures of the same or nearby objects from different directions. Criticism: Critics of image/photo sharing are concerned with the use of applications such as Instagram, because they believe that the behaviors portrayed on these sites could potentially be linked to the narcissism trait. Keen argues that "Self" is running digital culture, and he states that people use social-media platforms because they are interested in advertising themselves. Buffardi and Campbell (2008) also alleged that Instagram offers "a gateway for self-promotion via self-descriptions, vanity via photos, and a large amount of shallow relationships." However, they later said that the large number of users suggests the general psychology of the members is normative. Criticism: Privacy Privacy activists and researchers have noted that the sharing of images on social networks may compromise the privacy of people depicted in them. Further, most current social networks afford their users little control over content that they did not post themselves. In its privacy policy, Facebook states that any information posted using its service, including images, may be used to display relevant ads to its users. Facebook utilizes automatic facial recognition software that can recognize the face of another Facebook user in new photos and suggest that the user be tagged in the photo. A Ghent University study found that employers commonly search for prospective employees on Facebook, and may decide whether or not to grant an interview based on the person's profile picture. 
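The geotag extraction described in the Geotagging section above can be illustrated with a short script. This is a minimal sketch, assuming a recent version of the Pillow library and a JPEG that carries standard EXIF GPS tags; the file name is a placeholder.

```python
# Minimal sketch: read the GPS coordinates embedded in a geotagged photo's EXIF data.
# Assumes a recent Pillow release and a photo ("photo.jpg" is a placeholder) with GPSInfo tags.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    """Convert EXIF degree/minute/second rationals to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees

def read_geotag(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the EXIF GPSInfo IFD tag
    if not gps_ifd:
        return None  # photo carries no geotag
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

if __name__ == "__main__":
    print(read_geotag("photo.jpg"))  # e.g. (51.5007, -0.1246)
```

Aggregating such coordinates across many shared photos is what makes the travel-pattern extraction described above possible, and is also why privacy researchers treat embedded geotags as sensitive data.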
Purposes: The increasing ease of use has encouraged image sharing in insurance, including crop insurance. The insurance company and the farmer have a shared interest in the current state of a field. This method allows crop health to be monitored more quickly and easily than would otherwise be practical.
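As a rough illustration of the peer-to-browser model described earlier in this article (a local service on the sharer's PC exposing photos at ordinary URLs, so viewers only need a web browser), here is a minimal sketch using Python's standard library; the directory name and port are placeholders, and real sharing software would add access control and gallery pages.

```python
# Minimal peer-to-browser sketch: serve a local photo folder over HTTP so that
# any standard web browser can fetch the images by regular URLs.
# "shared_photos" and port 8080 are placeholders; real software would add
# authentication, thumbnails, and friendlier gallery pages.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

PHOTO_DIR = "shared_photos"
PORT = 8080

handler = partial(SimpleHTTPRequestHandler, directory=PHOTO_DIR)
with ThreadingHTTPServer(("", PORT), handler) as server:
    # A viewer on the same network can now open e.g. http://<host-ip>:8080/holiday.jpg
    print(f"Sharing {PHOTO_DIR!r} at http://0.0.0.0:{PORT}/")
    server.serve_forever()
```

The trade-offs noted in the article apply directly to this sketch: there is no off-site backup, the sharer's ISP and upstream bandwidth limit delivery speed, and the machine must stay online for the photos to remain reachable.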
**Aggregate (composite)** Aggregate (composite): Aggregate is the component of a composite material that resists compressive stress and provides bulk to the composite material. For efficient filling, aggregate should be much smaller than the finished item, but have a wide variety of sizes. For example, the particles of stone used to make concrete typically include both sand and gravel. Comparison to fiber composites: Aggregate composites tend to be much easier to fabricate, and much more predictable in their finished properties, than fiber composites. Fiber orientation and continuity can have an overwhelming effect, but can be difficult to control and assess. Fabrication aside, aggregate materials themselves also tend to be less expensive; the most common aggregates mentioned above are found in nature and can often be used with only minimal processing. Comparison to fiber composites: Not all composite materials include aggregate. Aggregate particles tend to have about the same dimensions in every direction (that is, an aspect ratio of about one), so that aggregate composites do not display the level of synergy that fiber composites often do. A strong aggregate held together by a weak matrix will be weak in tension, whereas fibers can be less sensitive to matrix properties, especially if they are properly oriented and run the entire length of the part (i.e., a continuous filament). Comparison to fiber composites: Most composites are filled with particles whose aspect ratio lies somewhere between oriented filaments and spherical aggregates. A good compromise is chopped fiber, where the performance of filament or cloth is traded off in favor of more aggregate-like processing techniques. Ellipsoid and plate-shaped aggregates are also used. Aggregate properties: In most cases, the ideal finished piece would be 100% aggregate. A given application's most desirable quality (be it high strength, low cost, high dielectric constant, or low density) is usually most prominent in the aggregate itself; all the aggregate lacks is the ability to flow on a small scale, and form attachments between particles. The matrix is specifically chosen to serve this role, but its abilities should not be abused. Aggregate properties: Aggregate size Experiments and mathematical models show that more of a given volume can be filled with hard spheres if it is first filled with large spheres, then the spaces between (interstices) are filled with smaller spheres, and the new interstices filled with still smaller spheres as many times as possible. For this reason, control of particle size distribution can be quite important in the choice of aggregate; appropriate simulations or experiments are necessary to determine the optimal proportions of different-sized particles. Aggregate properties: The upper limit to particle size depends on the amount of flow required before the composite sets (the gravel in paving concrete can be fairly coarse, but fine sand must be used for tile mortar), whereas the lower limit is due to the thickness of matrix material at which its properties change (clay is not included in concrete because it would "absorb" the matrix, preventing a strong bond to other aggregate particles). Particle size distribution is also the subject of much study in the fields of ceramics and powder metallurgy. Aggregate properties: Some exceptions to this rule include: Toughened composites Toughness is a compromise between the (often contradictory) requirements of strength and plasticity. 
In many cases, the aggregate will have one of these properties, and will benefit if the matrix can add what it lacks. Perhaps the most accessible examples of this are composites with an organic matrix and ceramic aggregate, such as asphalt concrete ("tarmac") and filled plastic (i.e., Nylon mixed with powdered glass), although most metal matrix composites also benefit from this effect. In this case, the correct balance of hard and soft components is necessary or the material will become either too weak or too brittle. Aggregate properties: Nanocomposites Many materials properties change radically at small length scales (see nanotechnology). In the case where this change is desirable, a certain range of aggregate size is necessary to ensure good performance. This naturally sets a lower limit to the amount of matrix material used. Unless some practical method is implemented to orient the particles in micro- or nano-composites, their small size and (usually) high strength relative to the particle-matrix bond allows any macroscopic object made from them to be treated as an aggregate composite in many respects. Aggregate properties: While bulk synthesis of such nanoparticles as carbon nanotubes is currently too expensive for widespread use, some less extreme nanostructured materials can be synthesized by traditional methods, including electrospinning and spray pyrolysis. One important aggregate made by spray pyrolysis is glass microspheres. Often called microballoons, they consist of a hollow shell several tens of nanometers thick and approximately one micrometer in diameter. Casting them in a polymer matrix yields syntactic foam, with extremely high compressive strength for its low density. Aggregate properties: Many traditional nanocomposites escape the problem of aggregate synthesis in one of two ways: Natural aggregates: By far the most widely used aggregates for nano-composites are naturally occurring. Usually these are ceramic materials whose crystalline structure is extremely directional, allowing it to be easily separated into flakes or fibers. The nanotechnology touted by General Motors for automotive use is in the former category: a fine-grained clay with a laminar structure suspended in a thermoplastic olefin (a class which includes many common plastics like polyethylene and polypropylene). The latter category includes fibrous asbestos composites (popular in the mid-20th century), often with matrix materials such as linoleum and Portland cement. Aggregate properties: In-situ aggregate formation: Many micro-composites form their aggregate particles by a process of self-assembly. For example, in high impact polystyrene, two immiscible phases of polymer (including brittle polystyrene and rubbery polybutadiene) are mixed together. Special molecules (graft copolymers) include separate portions which are soluble in each phase, and so are only stable at the interface between them, in the manner of a detergent. Since the number of this type of molecule determines the interfacial area, and since spheres naturally form to minimize surface tension, synthetic chemists can control the size of polybutadiene droplets in the molten mix, which harden to form rubbery aggregates in a hard matrix. Dispersion strengthening is a similar example from the field of metallurgy. In glass-ceramics, the aggregate is often chosen to have a negative coefficient of thermal expansion, and the proportion of aggregate to matrix adjusted so that the overall expansion is very near zero. 
Aggregate size can be reduced so that the material is transparent to infrared light.
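The successive sphere-filling idea from the Aggregate size section above can be made concrete with a small worked calculation. This is a rough sketch under an illustrative assumption (each generation of smaller spheres fills about 64% of whatever void remains, roughly the random close packing fraction); real mixes depend on the actual size ratios and packing method.

```python
# Rough illustration of multi-scale filling: if each successively smaller sphere
# size fills ~64% (random close packing) of the void left by the previous size,
# the remaining porosity shrinks geometrically. Numbers are illustrative only.
PACKING_FRACTION = 0.64  # assumed fill fraction per size class

porosity = 1.0  # start with an empty volume
for size_class in ["gravel", "coarse sand", "fine sand"]:
    porosity *= (1.0 - PACKING_FRACTION)
    print(f"after adding {size_class:11s}: solids = {1.0 - porosity:.1%}, voids = {porosity:.1%}")

# Expected output:
# after adding gravel     : solids = 64.0%, voids = 36.0%
# after adding coarse sand: solids = 87.0%, voids = 13.0%
# after adding fine sand  : solids = 95.3%, voids = 4.7%
```

The point of the sketch is only that each added size class shrinks the matrix-filled fraction geometrically, which is why a wide particle size distribution lets the finished piece approach the "ideally 100% aggregate" target discussed above.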
**Alpha mapping** Alpha mapping: Alpha Mapping is a technique in 3D computer graphics involving the use of texture mapping to designate the amount of transparency/translucency of areas in a certain object. Alpha mapping is used when the given object's transparency is not consistent: when the transparency amount is not the same for the entire object and/or when the object is not entirely transparent. If the object has the same level of transparency everywhere, one can either use a solid-color alpha texture or an integer value. The alpha map is often encoded in the alpha channel of an RGBA texture used for coloring instead of being a standalone greyscale texture.
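A minimal sketch of the idea, assuming the Pillow imaging library and two placeholder files: a colour texture and a greyscale alpha map of the same size. It simply copies the greyscale values into the texture's alpha channel, which is the per-pixel transparency control described above.

```python
# Minimal alpha-mapping sketch: use a greyscale map as the per-pixel opacity of a texture.
# Assumes Pillow; "texture.png" and "alpha_map.png" are placeholder file names of equal size.
from PIL import Image

texture = Image.open("texture.png").convert("RGBA")    # base colour texture
alpha_map = Image.open("alpha_map.png").convert("L")   # greyscale: 0 = transparent, 255 = opaque

texture.putalpha(alpha_map)                # write the map into the RGBA alpha channel
texture.save("texture_with_alpha.png")
```

A renderer sampling the resulting RGBA texture blends each fragment according to the stored alpha value, so holes, fades and cut-outs can be painted directly into the map rather than modelled in geometry.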
**MINK1** MINK1: Misshapen-like kinase 1 is an enzyme that in humans is encoded by the MINK1 gene. Function: Misshapen-like kinase 1 is a serine/threonine kinase belonging to the germinal center kinase (GCK) family. The protein is structurally similar to the kinases that are related to NIK and may belong to a distinct subfamily of NIK-related kinases within the GCK family. Studies of the mouse homolog indicate an up-regulation of expression in the course of postnatal mouse cerebral development and activation of the cJun N-terminal kinase (JNK) and the p38 pathways. Alternative splicing occurs at this locus and four transcript variants encoding distinct isoforms have been identified. Interactions: MINK1 has been shown to interact with NCK1.
**Sandpaper fig** Sandpaper fig: The sandpaper figs are so named for their leaves, which are rough and sandpaper-like in texture. The common name may refer to a number of species in the genus Ficus:
Australian species:
Ficus carpentariensis, possibly hybrid individuals
Ficus coronata, creek sandpaper fig
Ficus coronulata, crown, peach-leaf or river fig
Ficus copiosa, sandpaper fig of New Guinea and northern Australia
Ficus fraseri, white or shiny sandpaper fig
Ficus leptoclada, Atherton sandpaper fig
Ficus opposita, sweet sandpaper fig
Ficus podocarpifolia
Ficus scobina, sandpaper fig
Ficus virgata
Others:
Ficus capreifolia, river sandpaper fig
Ficus exasperata, sandpaper forest fig
**Differential graded category** Differential graded category: In mathematics, especially homological algebra, a differential graded category, often shortened to dg-category or DG category, is a category whose morphism sets are endowed with the additional structure of a differential graded Z-module. Differential graded category: In detail, this means that Hom(A, B), the morphisms from any object A to another object B of the category, is a direct sum ⊕_{n∈Z} Hom^n(A, B), and there is a differential d on this graded group, i.e., for each n there is a linear map d: Hom^n(A, B) → Hom^{n+1}(A, B), which has to satisfy d∘d = 0. This is equivalent to saying that Hom(A, B) is a cochain complex. Furthermore, the composition of morphisms Hom(B, C) ⊗ Hom(A, B) → Hom(A, C) is required to be a map of complexes, and for all objects A of the category, one requires d(id_A) = 0. Examples: Any additive category may be considered to be a DG-category by imposing the trivial grading (i.e. all Hom^n(−,−) vanish for n ≠ 0) and trivial differential (d = 0). Examples: A little bit more sophisticated is the category of complexes C(A) over an additive category A. By definition, Hom_{C(A)}^n(A, B) is the group of maps A → B[n] which do not need to respect the differentials of the complexes A and B, i.e., Hom_{C(A)}^n(A, B) = ∏_{l∈Z} Hom(A^l, B^{l+n}). The differential of such a morphism f = (f^l: A^l → B^{l+n}) of degree n is defined to be d(f) = f^{l+1}∘d_A + (−1)^{n+1} d_B∘f^l, where d_A, d_B are the differentials of A and B, respectively. This applies to the category of complexes of quasi-coherent sheaves on a scheme over a ring. A DG-category with one object is the same as a DG-ring. A DG-ring over a field is called a DG-algebra, or differential graded algebra. Further properties: The category of small dg-categories can be endowed with a model category structure such that weak equivalences are those functors that induce an equivalence of derived categories. Given a dg-category C over some ring R, there is a notion of smoothness and properness of C that reduces to the usual notions of smooth and proper morphisms in case C is the category of quasi-coherent sheaves on some scheme X over R. Relation to triangulated categories: A DG category C is called pre-triangulated if it has a suspension functor Σ and a class of distinguished triangles compatible with the suspension, such that its homotopy category Ho(C) is a triangulated category. A triangulated category T is said to have a dg enhancement C if C is a pretriangulated dg category whose homotopy category is equivalent to T. dg enhancements of an exact functor between triangulated categories are defined similarly. In general, there need not exist dg enhancements of triangulated categories or functors between them; for example, the stable homotopy category can be shown not to arise from a dg category in this way. However, various positive results do exist; for example, the derived category D(A) of a Grothendieck abelian category A admits a unique dg enhancement.
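As a small sketch of what the compatibility of composition with the differential amounts to, the condition can be restated in the usual graded Leibniz form for homogeneous morphisms (this is a rewriting of the definition above, not additional material):

```latex
% Composition being a map of complexes is the graded Leibniz rule:
d(f \circ g) \;=\; d(f) \circ g \;+\; (-1)^{m}\, f \circ d(g),
\qquad f \in \operatorname{Hom}^{m}(B, C),\quad g \in \operatorname{Hom}^{n}(A, B).
% In the trivial example of an additive category with Hom^n = 0 for n != 0 and d = 0,
% both sides vanish, so every additive category is indeed a dg-category.
```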
**Attaque à outrance** Attaque à outrance: Attaque à outrance (French: Attack to excess) was the expression of a military philosophy common to many armies in the period before and during the earlier parts of World War I. Attaque à outrance: This philosophy was a response to the increasing weight of defensive firepower that accrued to armies in the nineteenth century, as a result of several technological innovations, notably breech-loading rifled guns, machine guns, and light field artillery firing high-explosive shells. It held that the victor would be the side with the strongest will, courage, and dash/energy (élan), and that every attack must therefore be pushed to the limit. The lethality of artillery, combined with the lack of mobility of infantry, as well as the subsequent development of trench warfare, rendered this tactic extremely costly and usually ineffective. Attaque à outrance: The philosophy is particularly associated with the French, due to its adoption by Noël de Castelnau in the First Battle of Champagne (1914), and by Robert Nivelle in the Nivelle offensive (1917). Joseph Joffre, French chief of general staff from 1911 on, had originally adopted the doctrine for the French military and purged the army of 'defensively-minded' commanders. However, all sides launched large, costly and futile frontal offensives in this style: the British at the Battle of the Somme (1916), the Germans in the First Battle of Ypres (1914), the Russians in the Brusilov offensive (1916), and so on. Attaque à outrance: The origins of this doctrine are traced back to the increasingly militarized 'Warrior Culture' that most European nations developed during the 19th century, where the ideal citizen was the soldier employed by his homeland. This predisposed officers and soldiers towards narrow ideals focusing on blind courage in the face of war's adversity.
**Auricular branch of occipital artery** Auricular branch of occipital artery: The auricular branch of occipital artery supplies the back of the concha and frequently gives off a branch, which enters the skull through the mastoid foramen and supplies the dura mater, the diploë, and the mastoid cells; this latter branch sometimes arises from the occipital artery, and is then known as the mastoid branch.
**Aktashite** Aktashite: Aktashite is a rare arsenic sulfosalt mineral with formula Cu6Hg3As4S12. It is a copper mercury-bearing sulfosalt and is the only sulfosalt mineral with essential Cu and Hg yet known. It is of hydrothermal origin. It was published without approval of the IMA-CNMNC, but recognized as valid species by the IMA-CNMNC Sulfosalts Subcommittee (2008).
**Golden rice** Golden rice: Golden rice is a variety of rice (Oryza sativa) produced through genetic engineering to biosynthesize beta-carotene, a precursor of vitamin A, in the edible parts of the rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A. Vitamin A deficiency causes xerophthalmia, a range of eye conditions from night blindness to more severe clinical outcomes such as keratomalacia and corneal scars, and permanent blindness. Additionally, vitamin A deficiency increases the risk of mortality from measles and diarrhea in children. In 2013, the prevalence of deficiency was highest in sub-Saharan Africa (48%; 25–75) and South Asia (44%; 13–79). Although golden rice has met significant opposition from environmental and anti-globalisation activists, more than 100 Nobel laureates in 2016 encouraged use of genetically modified golden rice, which can produce up to 23 times as much beta-carotene as the original golden rice. History: Research for development of golden rice began as a Rockefeller Foundation initiative in 1982. In the 1990s, Peter Bramley discovered that a single phytoene desaturase gene (bacterial CrtI) can be used to produce lycopene from phytoene in GM tomato, rather than having to introduce the multiple carotene desaturases that are normally used by higher plants. Lycopene is then cyclized to beta-carotene by the endogenous cyclase in golden rice. The scientific details of the rice were first published in 2000, the product of an eight-year project by Ingo Potrykus of the Swiss Federal Institute of Technology and Peter Beyer of the University of Freiburg. The first field trials of golden rice cultivars were conducted by Louisiana State University Agricultural Center in 2004. Additional trials were conducted in the Philippines, Taiwan, and Bangladesh (2015). Field testing provided an accurate measurement of nutritional value and enabled feeding tests to be performed. Preliminary results from field tests showed that field-grown golden rice produces 4 to 5 times more beta-carotene than golden rice grown under greenhouse conditions. History: Crossbreeding As of 2018, breeders at the Philippine Rice Research Institute, the Bangladesh Rice Research Institute, and the Indonesian Centre for Rice Research were developing golden rice versions of existing rice varieties used by their local farmers, retaining the same yield, pest resistance, and grain qualities. Golden rice seeds may cost farmers the same as other rice varieties. History: Approvals In 2018, Canada and the United States approved golden rice, with Health Canada and the US Food and Drug Administration (FDA) declaring it safe for consumption. This followed a 2016 decision in which the FDA had ruled that the beta-carotene content in golden rice did not provide sufficient amounts of vitamin A for US markets. Health Canada declared that golden rice would not affect allergies, and that the nutrient contents were the same as in common rice varieties, except for the intended high levels of provitamin A. In 2019, golden rice was approved for use as human food and animal feed or for processing in the Philippines. On 21 July 2021, the Philippines became the first country to officially issue a biosafety permit for commercially propagating provitamin A-enriched golden rice. The approval was the first commercial propagation authorisation of genetically engineered rice in South and Southeast Asia. 
As a result of the permission, golden rice can be grown on a commercial scale in accordance with the terms and conditions specified by the Philippine government. In April 2023, however, the country's Supreme Court ordered the agriculture department to stop commercial propagation of golden rice in relation to a petition filed by MASIPAG (a group of farmers and scientists), who claimed that golden rice poses risks to the health of consumers and to the environment. Genetics: Golden rice was created by transforming rice with two beta-carotene biosynthesis genes: psy (phytoene synthase) from daffodil (Narcissus pseudonarcissus) and crtI (phytoene desaturase) from the soil bacterium Erwinia uredovora. (The insertion of an lcy (lycopene cyclase) gene was thought to be needed, but further research showed the enzyme is already produced in wild-type rice endosperm.) The psy and crtI genes were transferred into the rice nuclear genome and placed under the control of an endosperm-specific promoter, so that they are only expressed in the endosperm. The exogenous genes have a transit peptide sequence attached, so their products are targeted to the plastid, where geranylgeranyl diphosphate is formed. The bacterial crtI gene was an important inclusion to complete the pathway, since it can catalyse multiple steps in the synthesis of carotenoids up to lycopene, while these steps require more than one enzyme in plants. The end product of the engineered pathway is lycopene, but if the plant accumulated lycopene, the rice would be red. Recent analysis has shown that the plant's endogenous enzymes process the lycopene to beta-carotene in the endosperm, giving the rice the distinctive yellow colour for which it is named. The original golden rice was called SGR1, and under greenhouse conditions it produced 1.6 µg/g of carotenoids. Genetics: Golden Rice 2 In 2005, a team of researchers at Syngenta produced Golden Rice 2. They combined the phytoene synthase (psy) gene from maize with the crtI gene from the original golden rice. Golden Rice 2 produces 23 times more carotenoids than the original golden rice (up to 37 µg/g), because the maize psy gene is the most effective for carotenoid synthesis; Golden Rice 2 preferentially accumulates beta-carotene (up to 31 µg/g of the 37 µg/g of carotenoids). Vitamin A deficiency: The research that led to golden rice was conducted with the goal of helping children who suffer from vitamin A deficiency (VAD). Estimates show that around 1.02 billion people are severely affected by micronutrient deficiencies globally, with vitamin A being the most commonly deficient nutrient. In 2012, the World Health Organization reported that about 250 million preschool children are affected by VAD, and that providing those children with vitamin A could prevent about a third of all under-five deaths, which amounts to up to 2.7 million children who could be saved from dying unnecessarily. The World Health Organization has classified vitamin A deficiency as a public health problem affecting about one third of children aged 6 to 59 months in 2013, with the highest rates in sub-Saharan Africa (48 per cent) and South Asia (44 per cent). Vitamin A supplementation (VAS) programs began in the 1990s in response to evidence demonstrating the association between VAD and increased childhood mortality. Between 1990 and 2013, more than 40 efficacy studies of VAS in children 6–59 months of age were conducted, and two systematic reviews and meta-analyses have concluded that VA supplements can considerably reduce mortality and morbidity during childhood. 
As of 2017, more than 80 countries worldwide are implementing universal VA supplementation (VAS) programs targeted to children 6–59 months of age through semi-annual national campaigns. Periodic, high-dose vitamin A supplementation is a proven, low-cost intervention which has been shown to reduce all-cause mortality by 12 to 24 per cent, and is therefore an important program in support of efforts to reduce child mortality. However, UNICEF and a number of NGOs involved in supplementation note more frequent low-dose supplementation is preferable. Vitamin A deficiency: As many children in VAD-affected countries rely on rice as a staple food, genetic modification to make rice produce the vitamin A precursor beta-carotene was seen as a simple and less expensive alternative to ongoing vitamin supplements or an increase in the consumption of green vegetables or animal products. Initial analyses of the potential nutritional benefits of golden rice suggested consumption of golden rice would not eliminate the problems of vitamin A deficiency, but could complement other supplementation. Golden Rice 2 contains sufficient provitamin A to provide the entire dietary requirement via daily consumption of some 75 grams (3 oz) per day.Vitamin A deficiency is usually coupled to an unbalanced diet. Since carotenes are hydrophobic, sufficient fat must be present in the diet for golden rice (or most other vitamin A supplements) to alleviate vitamin A deficiency. Moreover, this claim referred to an early cultivar of golden rice; one bowl of the latest version provides 60% of RDA for healthy children. The RDA levels advocated in developed countries are far in excess of the amounts needed to prevent blindness. Research: In 2009, results of a clinical trial of golden rice with adult volunteers concluded that "beta-carotene derived from golden rice is effectively converted to vitamin A in humans". A summary for the American Society for Nutrition suggested that "Golden Rice could probably supply 50% of the Recommended Dietary Allowance (RDA) of vitamin A from a very modest amount – perhaps a cup – of rice, if consumed daily. This amount is well within the consumption habits of most young children and their mothers." Beta-carotene is found and consumed in many nutritious foods eaten around the world, including fruits and vegetables. Beta-carotene in food is a safe source of vitamin A.A 2012 study showed that the beta-carotene produced by golden rice is as effective as beta-carotene in oil at providing vitamin A to children. The study stated that "recruitment processes and protocol were approved". However, in 2015, the journal retracted the study, claiming that the researchers had acted unethically when providing Chinese children golden rice without their parents' consent.Golden rice improves vitamin A intake and may reduce vitamin A deficiency among women and children. Food derived from golden rice varieties is as safe as food derived from conventional rice varieties. Controversy: Critics of genetically engineered crops have raised various concerns. An early issue was that golden rice originally did not have sufficient beta-carotene content. This problem was solved by the advancing of GR2E event. The speed at which beta-carotene degrades once the rice is harvested, and how much remains after cooking are contested. 
However, a 2009 study concluded that beta-carotene from golden rice is effectively converted into vitamin A in humans.Greenpeace opposes the use of any patented genetically modified organisms in agriculture and opposes the cultivation of golden rice, claiming it will open the door to more widespread use of GMOs. The International Rice Research Institute (IRRI) has emphasised the non-commercial nature of their project, stating that "None of the companies listed ... are involved in carrying out the research and development activities of IRRI or its partners in Golden Rice, and none of them will receive any royalty or payment from the marketing or selling of golden rice varieties developed by IRRI."Vandana Shiva, an Indian anti-GMO activist, argued the problem was not the plant per se, but potential issues with loss of biodiversity. Shiva argued that golden rice proponents were obscuring the limited availability of diverse and nutritionally adequate food. Other groups argued that a varied diet containing foods rich in beta-carotene such as sweet potato, leaf vegetables and fruit would provide children with sufficient vitamin A. However, Keith West of Johns Hopkins Bloomberg School of Public Health has said that foodstuffs containing vitamin A are often unavailable, only available in certain seasons, or are too expensive for poor families to obtain.In 2008, WHO malnutrition expert Francesco Branca cited the lack of real-world studies and uncertainty about how many people will use golden rice, concluding "giving out supplements, fortifying existing foods with vitamin A, and teaching people to grow carrots or certain leafy vegetables are, for now, more promising ways to fight the problem". Author Michael Pollan, who had criticized the product in 2001, being unimpressed by the benefits, expressed support for the continuation of the research in 2013.In 2012, controversy surrounded a study published in The American Journal of Clinical Nutrition. The study, involving feeding GM rice to children from 6 to 8 years old in China, was later found to have violated human research rules of both Tufts University and the federal government. Subsequent reviews found no evidence of safety problems with the study, but found issues with insufficient consent forms, unapproved changes to study protocol, and lack of approval from a China-based ethics review board. Additionally, the GM rice used was brought into China illegally. Controversy: Support The Bill and Melinda Gates Foundation supports the use of genetically modified organisms in agricultural development and supports the International Rice Research Institute in developing golden rice. In June 2016, 107 Nobel laureates signed a letter urging Greenpeace and its supporters to abandon their campaign against GMOs, and against golden rice in particular.In May 2018, the U.S. Food and Drug Administration approved the use of golden rice for human consumption, stating: "Based on the information IRRI has presented to FDA, we have no further questions concerning human or animal food derived from GR2E rice at this time." 
This marks the fourth national health organisation to approve the use of golden rice in 2018, joining Australia, Canada and New Zealand, which issued their assessments earlier in the year. In December 2021, an opinion piece in Proceedings of the National Academy of Sciences of the United States of America called on regulators to "allow Golden Rice to save lives", arguing that its deployment has been delayed by "fear and false accusations", at a cost of an estimated 266,000 lives lost per year to vitamin A deficiency. Controversy: Protests On August 8, 2013, an experimental plot of golden rice being developed by IRRI and DA-PhilRice in Camarines Sur province of the Philippines was uprooted by protesters. British author Mark Lynas reported in Slate that the vandalism was carried out by a group of activists led by Kilusang Magbubukid ng Pilipinas (KMP) (literally 'Farmers' Movement of the Philippines'). Distribution: A recommendation was made that golden rice be distributed free to subsistence farmers. Free licenses for developing countries were granted quickly due to the positive publicity that golden rice received, particularly in Time magazine in July 2000. Monsanto Company was one of the companies to grant free licences for related patents owned by the company. The cutoff between humanitarian and commercial use was set at US$10,000. Therefore, as long as a farmer or subsequent user of golden rice genetics would not make more than $10,000 per year, no royalties would need to be paid. In addition, farmers would be permitted to keep and replant seed.
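As a rough, illustrative check on the intake figures quoted above (an illustration added here, not a calculation from any of the cited studies), the arithmetic can be sketched as follows. The portion size and beta-carotene content are the values mentioned earlier for Golden Rice 2; the conversion ratio and the child RDA are assumptions chosen for illustration, since real-world conversion depends on diet, cooking losses and the population considered.

```python
# Back-of-envelope sketch, not from any cited study. The conversion ratio and the
# child RDA below are assumptions for illustration only.
portion_g = 75                  # daily portion discussed in the text
beta_carotene_ug_per_g = 31     # upper beta-carotene figure reported for Golden Rice 2
conversion_ratio = 3.8          # assumed ug beta-carotene per ug retinol activity equivalent
child_rda_ug_rae = 300          # assumed RDA for a young child, in ug RAE

beta_carotene_ug = portion_g * beta_carotene_ug_per_g
retinol_equiv_ug = beta_carotene_ug / conversion_ratio
print(f"{beta_carotene_ug:.0f} ug beta-carotene in a portion")
print(f"~{retinol_equiv_ug:.0f} ug RAE, i.e. {100 * retinol_equiv_ug / child_rda_ug_rae:.0f}% "
      f"of the assumed child RDA")
```

Under these assumptions a 75 g portion would comfortably cover a young child's requirement, which is consistent with the claims quoted above; the point of the sketch is only to show how the published per-gram figures translate into daily intake.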
**Methylcyclohexane** Methylcyclohexane: Methylcyclohexane (cyclohexylmethane) is an organic compound with the molecular formula CH3C6H11. Classified as a saturated hydrocarbon, it is a colourless liquid with a faint odor. Methylcyclohexane is used as a solvent. It is mainly converted in naphtha reformers to toluene. Methylcyclohexane is also used in some correction fluids (such as White-Out) as a solvent. Production and use: It can also be produced by hydrogenation of toluene: CH3C6H5 + 3 H2 → CH3C6H11. Methylcyclohexane, as a component of a mixture, is usually dehydrogenated to toluene, which increases the octane rating of gasoline. It is also one of a host of substances used in jet fuel surrogate blends, e.g., for Jet A fuel. Solvent: Methylcyclohexane is used as an organic solvent, with properties similar to related saturated hydrocarbons such as heptane. It is also a solvent in many types of correction fluids. Structure: Methylcyclohexane is a monosubstituted cyclohexane because it has a single branch, a methyl group attached to one carbon of the cyclohexane ring. Like all cyclohexanes, it can interconvert rapidly between two chair conformers. The lowest-energy form of this monosubstituted methylcyclohexane occurs when the methyl group occupies an equatorial rather than an axial position. This equilibrium is embodied in the concept of the A value. In the axial position, the methyl group experiences steric crowding (steric strain) because of the presence of axial hydrogen atoms on the same side of the ring (the 1,3-diaxial interactions). There are two such interactions, together contributing approximately 7.6 kJ/mol of strain energy. The equatorial conformation experiences no such interaction, and so it is the energetically favored conformation. Flammability and toxicity: Methylcyclohexane is flammable. Furthermore, it is considered "very toxic to aquatic life". Note that while methylcyclohexane is a substructure of 4-methylcyclohexanemethanol (MCHM), it is distinct in its physical, chemical, and biological (ecologic, metabolic, and toxicologic) properties.
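To make the conformational preference described under Structure more concrete, the following minimal sketch applies the Boltzmann relation to a two-state (axial versus equatorial) model at room temperature. It assumes the roughly 7.6 kJ/mol strain figure quoted above is the full axial-equatorial free-energy difference (the A value) and ignores any entropy difference between the two chairs.

```python
# Two-state sketch of the methylcyclohexane chair equilibrium.
# Assumes delta_g is the total axial-equatorial free-energy difference (A value).
import math

R = 8.314e-3   # gas constant in kJ/(mol*K)
T = 298.0      # temperature in K
delta_g = 7.6  # kJ/mol favouring the equatorial chair (assumed A value)

K = math.exp(delta_g / (R * T))   # equilibrium ratio [equatorial]/[axial]
frac_eq = K / (1.0 + K)
print(f"K = {K:.1f}, equatorial fraction = {frac_eq:.1%}")  # roughly 95% equatorial
```

On these assumptions about 95% of molecules sit in the equatorial chair at room temperature, matching the statement that the equatorial conformer is the energetically favored one.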
**Tirapazamine** Tirapazamine: Tirapazamine (SR-4233) is an experimental anticancer drug that is activated to a toxic radical only at very low levels of oxygen (hypoxia). Such levels are common in human solid tumors, a phenomenon known as tumor hypoxia. Thus, tirapazamine is activated to its toxic form preferentially in the hypoxic areas of solid tumors. Cells in these regions are resistant to killing by radiotherapy and most anticancer drugs. Thus the combination of tirapazamine with conventional anticancer treatments is particularly effective. As of 2006, tirapazamine is undergoing phase III testing in patients with head and neck cancer and gynecological cancer, and similar trials are being undertaken for other solid tumor types. Chemically it is an aromatic heterocyclic di-N-oxide. Its full chemical name is 3-amino-1,2,4-benzotriazine 1,4-dioxide. Originally it was prepared in a program screening for new herbicides in 1972. Its clinical use was first described by Zeman et al. in 1986. While tirapazamine has had only limited effectiveness in clinical trials, it has been used as a lead compound to develop a number of newer compounds with improved anti-cancer properties. An update of a Phase III trial (Tirapazamine, cisplatin, and radiation versus cisplatin and radiation for advanced squamous cell carcinoma of the head and neck (TROG 02.02, HeadSTART): a phase III trial of the Trans-Tasman Radiation Oncology Group) found no evidence that the addition of TPZ to chemoradiotherapy, in patients with advanced head and neck cancer not selected for the presence of hypoxia, improved overall survival. Two possible molecular mechanisms by which TPZ generates the reactive oxygen species that cause DNA strand breaks have been widely considered. In hypoxia, under bioreductive conditions, it has been observed that TPZ primarily produces hydroxyl and/or benzotriazinyl radicals as the DNA-damaging reactive species. A new clinical phase I trial of tirapazamine combined with embolization in liver cancer was initiated in June 2014. This study will help to optimize the safe tolerable dose of TPZ when it is administered with embolization in liver cancer. Tirapazamine: Treatment of solid tumors is complicated by the fact that these are often poorly provided with blood vessels, thus limiting their exposure to cytotoxic agents. Attempts have, however, been made to take advantage of the resulting hypoxic environment by designing drugs that are nonreactive until they are reduced to reactive species in oxygen-deficient tissues. This, it is hoped, will lead to enhanced selectivity. The azaquinoxaline dioxide function on the antineoplastic agent tirapazamine, for example, has been shown to give reactive nitroxide radicals on reduction. Synthesis: The first step in the synthesis, condensation of 2-nitroaniline (1) with cyanamide, probably involves initial formation of a guanidine such as 2. This then cyclizes to the heterocycle 3. Oxidation with hydrogen peroxide then completes the preparation of tirapazamine (4).
**Angiotensin II receptor type 1** Angiotensin II receptor type 1: Angiotensin II receptor type 1 (AT1) is the best characterized angiotensin receptor. It is encoded in humans by the AGTR1 gene. AT1 has vasopressor effects and regulates aldosterone secretion. It is an important effector controlling blood pressure and volume in the cardiovascular system. Angiotensin II receptor blockers are drugs indicated for hypertension, diabetic nephropathy and congestive heart failure. Function: The AT1 receptor mediates the major cardiovascular effects of angiotensin II. Effects include vasoconstriction, aldosterone synthesis and secretion, increased vasopressin secretion, cardiac hypertrophy, augmentation of peripheral noradrenergic activity, vascular smooth muscle cell proliferation, decreased renal blood flow, renal renin inhibition, renal tubular sodium reuptake, modulation of central sympathetic nervous system activity, cardiac contractility, central osmocontrol and extracellular matrix formation. The main function of angiotensin II in the brain is to stimulate drinking behavior, an effect that is mediated by the AT1 receptor. Mechanism: The angiotensin receptor is activated by the vasoconstricting peptide angiotensin II. The activated receptor in turn couples to Gq/11 and thus activates phospholipase C and increases the cytosolic Ca2+ concentration, which in turn triggers cellular responses such as stimulation of protein kinase C. The activated receptor also inhibits adenylate cyclase in hepatocytes and activates various tyrosine kinases. Clinical significance: Due to the hemodynamic pressure and volume effects mediated by AT1 receptors, AT1 receptor antagonists are widely prescribed drugs in the management of hypertension and stable heart failure. Animal studies: Elements of the renin-angiotensin system have been widely studied in a large variety of vertebrate animals including amphibians, reptiles, birds, and mammals. AT1 receptor blockers have been shown to reduce fear memory recall in mice, but the reliability and relevance of this finding remain to be determined. Gene: It was previously thought that a related gene, denoted as AGTR1B, existed; however, it is now believed that there is only one type 1 receptor gene in humans. At least four transcript variants have been described for this gene. Additional variants have been described but their full-length nature has not been determined. The entire coding sequence is contained in the terminal exon and is present in all transcript variants. A large number of polymorphisms has been reported in databases for AT1R, providing an avenue to explore their implications for protein structure, function and drug efficacy. In one such study, all 10,234 SNPs reported in NCBI were analyzed, those important for protein structure and drug interactions were identified, structures of these polymorphic forms were modeled, and in silico drug interaction studies were carried out. The interaction results were consistent with previously reported cases: two SNP-mutated structures of AT1R, rs780860717 (G288T) and rs868647200 (A182C), showed considerably lower binding affinities for all angiotensin receptor blockers (ARBs). Interactions: Angiotensin II receptor type 1 has been shown to interact with Zinc finger and BTB domain-containing protein 16. The protein's mRNA has been reported to interact with Mir-132 microRNA as part of an RNA silencing mechanism that reduces receptor expression.
**Development informatics** Development informatics: Development informatics is a field of both research and practice focusing on the application of information systems in socio-economic development. Development informatics: The "informatics" terminology is intended to be a translation of the French "informatique". It indicates a broad and systemic view that encompasses four inter-linked levels: data, information and knowledge; information and communication technologies; processes of learning, decision-making and communication; and the wider human, organisational and national context. The terminology is therefore intended to indicate a broader approach than that taken by the more techno-centric definitions of either Information and Communication Technologies for Development (ICT4D), which focuses on use of ICTs for delivery of specific development goals, or Information and Communication Technologies and Development (ICTD), which looks at use of ICTs in developing countries. Development informatics: However, it is unclear whether these differences are understood or used in practice. The main network for those active in development informatics is the International Development Informatics Association, which organises conferences and publications in the field.
**Conical refiner** Conical refiner: The conical refiner is a machine used in the refining of pulp in the papermaking process. It may also be referred to as a Jordan refiner, after the American inventor Joseph Jordan who patented the device in 1858. Conical refiner: The conical refiner is a chamber with metal bars mounted around the inside of the container. The material to be refined is pumped into the chamber at high pressure in order to create an abrasive effect as the material is forced through the machine and abraded by the metal bars. At the opposite end of the chamber the resulting pulp is pumped out.
**Respiratory failure** Respiratory failure: Respiratory failure results from inadequate gas exchange by the respiratory system, meaning that the arterial oxygen, carbon dioxide, or both cannot be kept at normal levels. A drop in the oxygen carried in the blood is known as hypoxemia; a rise in arterial carbon dioxide levels is called hypercapnia. Respiratory failure is classified as either Type 1 or Type 2, based on whether there is a high carbon dioxide level, and can be acute or chronic. In clinical trials, the definition of respiratory failure usually includes increased respiratory rate, abnormal blood gases (hypoxemia, hypercapnia, or both), and evidence of increased work of breathing. Respiratory failure can cause an altered mental status due to ischemia in the brain. Respiratory failure: The typical partial pressure reference values are oxygen PaO2 more than 80 mmHg (11 kPa) and carbon dioxide PaCO2 less than 45 mmHg (6.0 kPa). Cause: Several types of conditions can potentially result in respiratory failure: Conditions that reduce the flow of air into and out of the lungs, including physical obstruction by foreign bodies or masses and reduced breathing due to drugs or changes to the chest. Conditions that impair the lungs' blood supply. These include thromboembolic conditions and conditions that reduce the output of the right heart, such as right heart failure and some myocardial infarctions. Conditions that limit the ability of the lung tissue to exchange oxygen and carbon dioxide between the blood and the air within the lungs. Any disease which can damage the lung tissue can fit into this category. The most common causes are (in no particular order) infections, interstitial lung disease, and pulmonary oedema. Diagnosis: Type 1 Type 1 respiratory failure is defined as a low level of oxygen in the blood (hypoxemia) with either a standard (normocapnia) or low (hypocapnia) level of carbon dioxide (PaCO2) but not an increased level (hypercapnia). It is typically caused by a ventilation/perfusion (V/Q) mismatch; the volume of air flowing in and out of the lungs is not matched with the flow of blood to the lungs. The fundamental defect in type 1 respiratory failure is a failure of oxygenation. This type of respiratory failure is caused by conditions that affect oxygenation, such as: low ambient oxygen (e.g. at high altitude); ventilation-perfusion mismatch (parts of the lung receive oxygen but not enough blood to absorb it, e.g. pulmonary embolism); alveolar hypoventilation (decreased minute volume due to reduced respiratory muscle activity, e.g. in acute neuromuscular disease), a form which can also cause type 2 respiratory failure if severe; a diffusion problem (oxygen cannot enter the capillaries due to parenchymal disease, e.g. in pneumonia or ARDS); or a shunt (oxygenated blood mixes with non-oxygenated blood from the venous system, e.g. a right-to-left shunt). Diagnosis: Type 2 Type 2 respiratory failure is defined as hypoxemia (PaO2 < 8 kPa, or a normal PaO2) with hypercapnia (PaCO2 > 6.0 kPa). The basic defect in type 2 respiratory failure is inadequate alveolar ventilation; both oxygen and carbon dioxide levels are affected. It is characterized by a buildup of carbon dioxide (PaCO2) that has been generated by the body but cannot be eliminated.
The underlying causes include: increased airways resistance (chronic obstructive pulmonary disease, asthma, suffocation); reduced breathing effort (drug effects, brain stem lesion, extreme obesity); a decrease in the area of the lung available for gas exchange (such as in chronic bronchitis); neuromuscular problems (Guillain–Barré syndrome, motor neuron disease); and a deformed (kyphoscoliosis), rigid (ankylosing spondylitis), or flail chest. Diagnosis: Type 3 Type 3 respiratory failure results from lung atelectasis. Because atelectasis occurs so commonly in the perioperative period, this form is also called perioperative respiratory failure. After general anesthesia, decreases in functional residual capacity lead to collapse of dependent lung units. Type 4 Type 4 respiratory failure results from hypoperfusion of respiratory muscles, as in patients in shock. Patients in shock often experience respiratory distress due to pulmonary edema (e.g., in cardiogenic shock). Lactic acidosis and anemia can also result in type 4 respiratory failure. However, types 1 and 2 are the most widely accepted. Treatment: Treatment of the underlying cause is required, if possible. The treatment of acute respiratory failure may involve medication such as bronchodilators (for airways disease), antibiotics (for infections), glucocorticoids (for numerous causes), and diuretics (for pulmonary oedema), amongst others. Respiratory failure resulting from an overdose of opioids may be treated with the antidote naloxone. In contrast, most benzodiazepine overdoses do not benefit from the antidote flumazenil. Respiratory therapy/respiratory physiotherapy may be beneficial in some cases of respiratory failure. Type 1 respiratory failure may require oxygen therapy to achieve adequate oxygen saturation. A lack of response to oxygen may indicate the need for other modalities such as heated humidified high-flow therapy, continuous positive airway pressure or (if severe) endotracheal intubation and mechanical ventilation. Type 2 respiratory failure often requires non-invasive ventilation (NIV) unless medical therapy can improve the situation. Mechanical ventilation is sometimes indicated immediately or otherwise if NIV fails. Respiratory stimulants such as doxapram are now rarely used. There is tentative evidence that in those with respiratory failure identified before arrival in hospital, continuous positive airway pressure can be helpful when started before transport to hospital.
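To illustrate how the Type 1 and Type 2 definitions above translate into a decision rule, the following simplified sketch classifies an arterial blood gas result using the kPa thresholds mentioned in the article (hypoxemia below about 8 kPa, hypercapnia above 6.0 kPa). It is an illustration of the definitions only, not clinical guidance, and it does not capture types 3 and 4, which depend on clinical context rather than blood gases alone.

```python
# Simplified illustration of the Type 1 / Type 2 definitions (not clinical guidance).
def classify_respiratory_failure(pao2_kpa: float, paco2_kpa: float) -> str:
    hypoxemia = pao2_kpa < 8.0      # low arterial oxygen
    hypercapnia = paco2_kpa > 6.0   # raised arterial carbon dioxide
    if hypercapnia:
        return "Type 2 (hypercapnic) respiratory failure"
    if hypoxemia:
        return "Type 1 (hypoxemic) respiratory failure"
    return "Not respiratory failure by these criteria"

print(classify_respiratory_failure(7.0, 5.0))    # low O2, normal CO2 -> Type 1
print(classify_respiratory_failure(7.5, 7.2))    # low O2 with high CO2 -> Type 2
print(classify_respiratory_failure(12.0, 5.3))   # both within reference ranges
```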
**Centre for Polar Observation &amp; Modelling** Centre for Polar Observation &amp; Modelling: The Centre for Polar Observation & Modelling (CPOM) is a Natural Environment Research Council (NERC) Centre of Excellence that studies processes in the Earth's polar environments. CPOM conducts research on sea ice, land ice, and ice sheets using satellite observations and numerical models.CPOM comprises research groups and scientists based at the Universities of Leeds, Bristol, Reading, Lancaster, Swansea, Edinburgh, and University College London. CPOM also has partnerships with several other institutions, including the British Antarctic Survey (BAS), the National Oceanography Centre (NOC), the National Centre for Earth Observation (NCEO), the European Space Agency, and the Met Office. History: The Centre for Polar Observation and Modelling was founded in 2000 by Professor Sir Duncan Wingham. History: Directors Professor Sir Wingham was director of CPOM from 2000 to 2005, and his expertise in the study of Earth's ice sheets led to high impact publications on the widespread mass loss on the west Antarctic Ice Sheet. He was also Project Scientist of the European Space Agency's CryoSat mission. He has since been appointed as NERC Chief Executive. Professor Sir Wingham was awarded a Knighthood in 2019 for services to Climate Science.Professor Sir Wingham was succeeded as CPOM Director by Professor Seymour Laxon. Professor Laxon was an expert on satellite radar altimetry, and his work pioneered the use of satellite altimetry to measure sea ice thickness and surface circulation in polar oceans. This work would lead to the successful development of the European Space Agency's CryoSat mission. Sadly, Professor Laxon died following an accident in 2013.The role of CPOM Director was succeeded by Professor Andrew Shepherd. Professor Shepherd is an expert in remote observations of the Cryosphere, and is Principal Scientific Advisor to the European Space Agency's CryoSat mission, and co-leader of the Ice Sheet Mass Balance Inter-comparison Exercise. Research: Recent notable publications from CPOM scientists that garnered significant media attention include: In 2022, researchers from CPOM and BAS found that mega iceberg A68 had released 152 billion tonnes of freshwater into the ocean around South Georgia. In 2021, CPOM researchers named glaciers in the Getz region of Antarctica after international climate conferences, including COP26 in Glasgow. CPOM researchers reported in 2021 that Arctic sea ice is thinning twice as fast as previously thought. In 2021, a review by CPOM researchers into the state of Earth's ice showed that between 1994 and 2017, Earth lost 28 trillion tonnes of ice. Researchers from CPOM in 2020 found that ice sheets in Greenland and Antarctica are melting at a rate which matches the worst-case scenario for sea level rise. In 2020, researchers from CPOM found a six-fold increase in polar ice losses since the 1990s.
**MOS Technology Agnus** MOS Technology Agnus: The MOS Technology "Agnus", usually called Agnus, is an integrated circuit in the custom chipset of the Amiga computer. The Agnus, Denise and Paula chips collectively formed the OCS and ECS chipsets. MOS Technology Agnus: The Agnus is the Address Generator Chip. Its main function, in chip area, is the RAM Address Generator and Register Address Encoder which handles all DMA addresses. The 8361 Agnus is made up of approximately 21000 transistors and contains DMA Channel Controllers. According to Jay Miner, original Agnus was fabricated in 5 μm manufacturing process like all OCS chipset. The Blitter and Copper are also contained here. MOS Technology Agnus: Agnus features: The Blitter, a bitmap manipulator. The Blitter is capable of copying blocks of display data, or any arbitrary data in the on-board memory, at high speed with various raster operations as well as drawing pixel perfect lines and filling outlined polygons, while freeing the CPU for concurrent tasks. MOS Technology Agnus: "Copper", a display synchronized co-processor 25 Direct Memory Access (DMA) channels, allowing graphics, sound and I/O to be used with minimal CPU intervention DRAM refresh controller Memory controller (memory that can be accessed by the processor and the chipset) Generates the system clock from the 28 MHz oscillator Video timingAgnus was replaced by Alice in the Amiga 4000 and Amiga 1200 when the AGA chipset was introduced in 1992. Chips by capability: OCS Agnus which can address up to 512 kB of Chip RAM (PLCC versions add 512 kB of pseudo-fast RAM) 8361 (DIP) - Amiga 1000 (NTSC); Amiga 2000 model A (NTSC) 8367 (DIP) - Amiga 1000 (PAL); Amiga 2000 model A (PAL) 8370 (PLCC) - Amiga 500 to Rev 5.x (NTSC); Amiga 2000 model B to Rev 4.5 (NTSC) 8371 (PLCC) - Amiga 500 to Rev 5.x (PAL); Amiga 2000 model B to Rev 4.5 (PAL) ECS Agnus which can address up to 1 MB of Chip RAM 8372 - no data* 8372A - Amiga 500 from Rev 6 (NTSC/PAL); Amiga 2000 model B from Rev 6.0 to Rev 6.3 (NTSC/PAL); Commodore CDTV 8375 (318069-16 only) (PAL) - Amiga 500 from Rev 6 (PAL); Amiga 2000 model B from Rev 6.4 (PAL) 8375 (318069-17 only) (NTSC) - Amiga 500 from Rev 6 (NTSC); Amiga 2000 model B from Rev 6.4 (NTSC) ECS Agnus which can address up to 2 MB of Chip RAM 8372AB - Amiga 3000 from Rev 6.1 to Rev 8.9 (NTSC/PAL) 8372B - Amiga 3000 Rev 9 (NTSC/PAL) 8375 (PAL) - Amiga 500 Plus; Amiga 600 (PAL) 8375 (NTSC) - Amiga 600 (NTSC)* Somewhere 8372A Agnus mentioned as simply "8372". Chips by package: 48-lead DIP Agnus (aka thin Agnus): 8361; 8367 84-contact PLCC Fat Agnus (named Fat Lady on most Amiga 2000 motherboards) 8370; 8371; 8372; 8372A; 8372AB; 8372B; 8375NotesFat Agnus 1MB and Fat Agnus 2MB usually known as Super Agnus; Super Fat Agnus; Fatter Agnus; Big Agnus; Big Fat Agnus, but these aren't official names. Pinout: PLCC Versions When replacing or upgrading chips, pinouts need to be taken care of. Types are just mentioned for reference; four-digit types and pinouts/usage are not consistent. References: A500 Service Training, A3000 Service Manual, A500+ Service Manual, A1200 schematics
**Quaternary cubic** Quaternary cubic: In mathematics, a quaternary cubic form is a degree 3 homogeneous polynomial in four variables. The zeros form a cubic surface in 3-dimensional projective space. Invariants: Salmon (1860) and Clebsch (1861, 1861b) studied the ring of invariants of a quaternary cubic, which is a ring generated by invariants of degrees 8, 16, 24, 32, 40, 100. The generators of degrees 8, 16, 24, 32, 40 generate a polynomial ring. The generator of degree 100 is a skew invariant, whose square is a polynomial in the other generators given explicitly by Salmon. Salmon also gave an explicit formula for the discriminant as a polynomial in the generators, though Edge (1980) pointed out that the formula has a widely copied misprint in it. Sylvester pentahedron: A generic quaternary cubic can be written as a sum of 5 cubes of linear forms, unique up to multiplication by cube roots of unity. This was conjectured by Sylvester in 1851, and proven 10 years later by Clebsch. The union of the 5 planes where these 5 linear forms vanish is called the Sylvester pentahedron.
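As a concrete illustration of the pentahedral form (a standard textbook example rather than anything taken from the sources above), the Clebsch diagonal cubic can be written with five coordinates subject to one linear relation, so that the five cubes of linear forms are simply the five coordinates:

```latex
% Sylvester's pentahedral form: a generic quaternary cubic is a sum of five cubes
% of linear forms l_0, ..., l_4 in the four variables,
%   F(x_0,x_1,x_2,x_3) = l_0^3 + l_1^3 + l_2^3 + l_3^3 + l_4^3 .
% The Clebsch diagonal cubic is the classical concrete instance:
\[
  x_0^3 + x_1^3 + x_2^3 + x_3^3 + x_4^3 = 0,
  \qquad x_0 + x_1 + x_2 + x_3 + x_4 = 0 ,
\]
% where the five planes x_i = 0 form the Sylvester pentahedron of the surface.
```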
**Datamatics** Datamatics: Datamatics is an Indian company that provides consulting, information technology (IT), data management, and business process management services. Its services use robotics, artificial intelligence and machine learning algorithms. Headquartered in Mumbai, the company has a presence across America, Australia, Asia and Europe. The company was incorporated in 1987, offering computer and electronic data processing linked services, and later added information technology enabled services with robotic process automation. History: On 3 November 1987, the company was incorporated as Interface Software Resources Private Ltd. The name of the company was then changed to Datamatics Technologies Private Ltd. on 18 December 1992. It then changed its name to Datamatics Technologies Ltd. when it was listed as a public company under the provisions of section 43A of the Companies Act on 13 January 2000. Its locations include India (Mumbai, Nashik, Chennai, Bangalore, Pune and Puducherry); Asia excluding India (Philippines and UAE); Australia; Europe (United Kingdom); and the United States (Michigan, New Jersey, Massachusetts and Missouri). Present: In February 2019, Datamatics Global Services and AEP Ticketing solutions SRL, Italy (AEP) were granted the letter of acceptance (LOA) by the Mumbai Metropolitan Region Development Authority (MMRDA) for implementing an automatic fare collection system for 52 stations of the Mumbai Metro Rail project for ₹160 crore. In May 2019, the company's shares rose as much as 19.99% to Rs 107.75, marking the biggest intraday percentage gain for Datamatics since December 2010.
**Gate valve** Gate valve: A gate valve, also known as a sluice valve, is a valve that opens by lifting a barrier (gate) out of the path of the fluid. Gate valves require very little space along the pipe axis and hardly restrict the flow of fluid when the gate is fully opened. The gate faces can be parallel but are most commonly wedge-shaped (in order to be able to apply pressure on the sealing surface). Typical use: Gate valves are used to shut off the flow of liquids rather than for flow regulation, which is frequently done with a globe valve. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low flow resistance. The size of the open flow path generally varies in a nonlinear manner as the gate is moved. This means that the flow rate does not change evenly with stem travel. Depending on the construction, a partially open gate can vibrate from the fluid flow.Gate valves are mostly used with larger pipe diameters (from 2" to the largest pipelines) since they are less complex to construct than other types of valves in large sizes. Typical use: At high pressures, friction can become a problem. As the gate is pushed against its guiding rail by the pressure of the medium, it becomes harder to operate the valve. Large gate valves are sometimes fitted with a bypass controlled by a smaller valve to be able to reduce the pressure before operating the gate valve itself. Gate valves without an extra sealing ring on the gate or the seat are used in applications where minor leaking of the valve is not an issue, such as heating circuits or sewer pipes. Valve construction: Common gate valves are actuated by a threaded stem that connects the actuator (e.g. handwheel or motor) to the gate. They are characterised as having either a rising or a nonrising stem, depending on which end of the stem is threaded. Rising stems are fixed to the gate and rise and lower together as the valve is operated, providing a visual indication of valve position. The actuator is attached to a nut that is rotated around the threaded stem to move it. Nonrising stem valves are fixed to, and rotate with, the actuator, and are threaded into the gate. They may have a pointer threaded onto the stem to indicate valve position, since the gate's motion is concealed inside the valve. Nonrising stems are used where vertical space is limited. Valve construction: Gate valves may have flanged ends drilled according to pipeline-compatible flange dimensional standards. Gate valves are typically constructed from cast iron, cast carbon steel, ductile iron, gunmetal, stainless steel, alloy steels, and forged steels. All-metal gate valves are used in ultra-high vacuum chambers to isolate regions of the chamber. Valve construction: Bonnet Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest, offering a durable, pressure-tight seal. A union bonnet is suitable for applications requiring frequent inspection and cleaning. It also gives the body added strength. A bolted bonnet is used for larger valves and higher pressure applications. Valve construction: Pressure seal bonnet Another type of bonnet construction in a gate valve is pressure seal bonnet. This construction is adopted for valves for high pressure service, typically in excess of 2250 psi (15 MPa). The unique feature of the pressure seal bonnet is that the bonnet ends in a downward-facing cup that fits inside the body of the valve. 
As the internal pressure in the valve increases, the sides of the cup are forced outward, improving the body-bonnet seal. Other constructions where the seal is provided by external clamping pressure tend to create leaks in the body-bonnet joint. Valve construction: Knife gate valve For plastic solids and high-viscosity slurries such as paper pulp, a specialty valve known as a knife gate valve is used to cut through the material to stop the flow. A knife gate valve is usually not wedge shaped and has a tapered, knife-like edge on its lower surface.
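The statement under Typical use that the open flow path varies nonlinearly with gate travel can be illustrated with a rough geometric sketch. The model below treats the bore as a circle that is progressively uncovered as the gate edge rises, so the open region is a circular segment; it ignores the wedge shape, seat geometry and real flow coefficients, and is only meant to show the nonlinearity.

```python
# Rough illustration: open flow area of a circular bore versus gate travel.
# Idealized model (circular segment uncovered by a flat gate edge); not a flow
# coefficient calculation.
import math

def open_area_fraction(lift_fraction: float, radius: float = 1.0) -> float:
    h = lift_fraction * 2.0 * radius                 # gate edge travel, from 0 to 2R
    segment = (radius**2 * math.acos((radius - h) / radius)
               - (radius - h) * math.sqrt(max(2.0 * radius * h - h**2, 0.0)))
    return segment / (math.pi * radius**2)           # fraction of bore area uncovered

for lift in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"{lift:.0%} of full travel -> {open_area_fraction(lift):.0%} of bore area open")
```

In this idealization, 10% of full travel opens only about 5% of the bore area while 50% of travel opens half of it, which is the kind of nonlinear stem-travel-to-flow relationship the article describes.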
**Common cardinal veins** Common cardinal veins: The common cardinal veins, also known as the ducts of Cuvier, are veins that drain into the sinus venosus during embryonic development. These drain an anterior cardinal vein and a posterior cardinal vein on each side. Each of the ducts of Cuvier receives an ascending vein. The ascending veins return the blood from the parietes of the trunk and from the Wolffian bodies, and are called cardinal veins. Part of the left common cardinal vein persists after birth to form the coronary sinus.
**Milnor number** Milnor number: In mathematics, and particularly singularity theory, the Milnor number, named after John Milnor, is an invariant of a function germ. If f is a complex-valued holomorphic function germ then the Milnor number of f, denoted μ(f), is either a nonnegative integer, or is infinite. It can be considered both a geometric invariant and an algebraic invariant. This is why it plays an important role in algebraic geometry and singularity theory. Algebraic definition: Consider a holomorphic complex function germ f : (C^n, 0) → (C, 0) and denote by O_n the ring of all function germs (C^n, 0) → (C, 0). Every level set of such a function is a complex hypersurface in C^n, therefore we will call f a hypersurface singularity. Algebraic definition: Assume it is an isolated singularity: in the case of holomorphic mappings we say that a hypersurface singularity f is singular at 0 ∈ C^n if its gradient ∇f is zero at 0, and a singular point is isolated if it is the only singular point in a sufficiently small neighbourhood. In particular, the multiplicity of the gradient, dim_C O_n/⟨∇f⟩ (the dimension of the quotient of O_n by the ideal generated by the partial derivatives of f), is finite by an application of Rückert's Nullstellensatz. This number μ(f) is the Milnor number of the singularity f at 0. Note that the multiplicity of the gradient is finite if and only if the origin is an isolated critical point of f. Geometric interpretation: Milnor originally introduced μ(f) in geometric terms in the following way. All fibers f^(−1)(c) for values c close to 0 are nonsingular manifolds of real dimension 2(n−1). Their intersection with a small open disc D_ε centered at 0 is a smooth manifold F called the Milnor fiber. Up to diffeomorphism F does not depend on c or ε if they are small enough. It is also diffeomorphic to the fiber of the Milnor fibration map. Geometric interpretation: The Milnor fiber F is a smooth manifold of dimension 2(n−1) and has the same homotopy type as a bouquet of μ(f) spheres S^(n−1). This is to say that its middle Betti number b_(n−1)(F) is equal to the Milnor number and it has the homology of a point in dimension less than n−1. For example, a complex plane curve near every singular point z_0 has its Milnor fiber homotopic to a wedge of μ_(z_0)(f) circles (the Milnor number is a local property, so it can have different values at different singular points). Geometric interpretation: Thus we have the equalities: Milnor number = number of spheres in the wedge = middle Betti number of F = degree of the map z ↦ ∇f(z)/‖∇f(z)‖ on the small sphere S_ε = multiplicity of the gradient ∇f. Another way of looking at the Milnor number is by perturbation. We say that a point is a degenerate singular point, or that f has a degenerate singularity, at z_0 ∈ C^n if z_0 is a singular point and the Hessian matrix of all second order partial derivatives has zero determinant at z_0, that is, det(∂²f/∂z_i∂z_j)(z_0) = 0. Geometric interpretation: We assume that f has a degenerate singularity at 0. We can speak about the multiplicity of this degenerate singularity by thinking about how many points are infinitesimally glued. If we now perturb the image of f in a certain stable way the isolated degenerate singularity at 0 will split up into other isolated singularities which are non-degenerate! The number of such isolated non-degenerate singularities will be the number of points that have been infinitesimally glued. Geometric interpretation: Precisely, we take another function germ g which is non-singular at the origin and consider the new function germ h := f + εg where ε is very small. When ε = 0 then h = f. The function h is called the morsification of f.
It is very difficult to compute the singularities of h, and indeed it may be computationally impossible. This number of points that have been infinitesimally glued, this local multiplicity of f, is exactly the Milnor number of f. Geometric interpretation: Further contributions give meaning to the Milnor number in terms of the dimension of the space of versal deformations, i.e. the Milnor number is the minimal dimension of a parameter space of deformations that carries all information about the initial singularity. Examples: Here we give some worked examples in two variables. Working with only one variable is too simple and does not give a feel for the techniques, whereas working with three variables can be quite tricky. Two is a nice number. Also we stick to polynomials. If f is only holomorphic and not a polynomial, then we could have worked with the power series expansion of f. Examples: Consider a function germ with a non-degenerate singularity at 0, say f(x,y) = x^2 + y^2. The Jacobian ideal is just ⟨2x, 2y⟩ = ⟨x, y⟩. We next compute the local algebra: A_f = O/⟨x, y⟩ = ⟨1⟩. Examples: To see why this is true we can use Hadamard's lemma, which says that we can write any function h ∈ O as h(x,y) = k + x·h_1(x,y) + y·h_2(x,y) for some constant k and functions h_1 and h_2 in O (where either h_1 or h_2 or both may be exactly zero). So, modulo functional multiples of x and y, we can write h as a constant. The space of constant functions is spanned by 1, hence A_f = ⟨1⟩. It follows that μ(f) = 1. It is easy to check that for any function germ g with a non-degenerate singularity at 0 we get μ(g) = 1. Examples: Note that applying this method to a non-singular function germ g we get μ(g) = 0. Let f(x,y) = x^3 + xy^2, then A_f = O/⟨3x^2 + y^2, xy⟩ = ⟨1, x, y, x^2⟩. So in this case μ(f) = 4. One can show that if f(x,y) = x^2y^2 + y^3 then μ(f) = ∞. This can be explained by the fact that f is singular at every point of the x-axis. Versal Deformations: Let f have finite Milnor number μ, and let g_1, …, g_μ be a basis for the local algebra, considered as a vector space. Then a miniversal deformation of f is given by F : (C^n × C^μ, 0) → (C, 0), F(z, a) := f(z) + a_1·g_1(z) + ⋯ + a_μ·g_μ(z), where a = (a_1, …, a_μ) ∈ C^μ. These deformations (or unfoldings) are of great interest in much of science. Invariance: We can collect function germs together to construct equivalence classes. One standard equivalence is A-equivalence. We say that two function germs f, g : (C^n, 0) → (C, 0) are A-equivalent if there exist diffeomorphism germs φ : (C^n, 0) → (C^n, 0) and ψ : (C, 0) → (C, 0) such that f∘φ = ψ∘g: there exists a diffeomorphic change of variable in both domain and range which takes f to g. If f and g are A-equivalent then μ(f) = μ(g). Invariance: Nevertheless, the Milnor number does not offer a complete invariant for function germs, i.e. the converse is false: there exist function germs f and g with μ(f) = μ(g) which are not A-equivalent. To see this consider f(x,y) = x^3 + y^3 and g(x,y) = x^2 + y^5. We have μ(f) = μ(g) = 4 but f and g are clearly not A-equivalent since the Hessian matrix of f is equal to zero while that of g is not (and the rank of the Hessian is an A-invariant, as is easy to see).
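For readers who want to reproduce the worked examples above by machine, the following sketch counts the dimension of the quotient by the Jacobian ideal using a Gröbner basis. It is a global polynomial computation, so it agrees with the local Milnor number only under the assumption (true for the examples above) that the origin is the only critical point of f in C^2; the bound parameter simply caps the monomial search and is ample for these small cases.

```python
# Sketch: Milnor number of a polynomial germ as dim_C C[x,y]/(df/dx, df/dy),
# counted via the standard monomials of a Groebner basis of the Jacobian ideal.
# Valid here because the origin is the only critical point of these examples.
from itertools import product
from sympy import symbols, diff, groebner

x, y = symbols('x y')

def milnor_number(f, bound=12):
    jacobian = [diff(f, v) for v in (x, y)]
    G = groebner(jacobian, x, y, order='grevlex')
    leading = [p.monoms(order='grevlex')[0] for p in G.polys]  # leading exponent tuples
    def reducible(m):
        return any(all(mi >= li for mi, li in zip(m, lead)) for lead in leading)
    # standard monomials = monomials divisible by no leading monomial of the basis
    return sum(1 for m in product(range(bound), repeat=2) if not reducible(m))

print(milnor_number(x**2 + y**2))     # 1, the non-degenerate example above
print(milnor_number(x**3 + x*y**2))   # 4, matching the example with A_f = <1, x, y, x^2>
```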
**Dezaemon 3D** Dezaemon 3D: Dezaemon 3D (Japanese: デザエモン3D) is a video game and game editor for Nintendo 64. It was released only in Japan in 1998.The game editor allows players to design their own shooting levels similar to those shown in Star Soldier: Vanishing Earth. The game has many options, such as creating the stage boss or adding a custom soundtrack for each level. It was originally developed alongside an ultimately unreleased accompanying expansion disk title for the 64DD.It includes two sample games: "SOLID GEAR", and "USAGI-san" (Mr. Rabbit). Reception: N64 Magazine noted the difficulty of use in English "without any English instructions", but that "as Solid Gear ably demonstrates, Dezaemon [sic] is perfectly capable of producing a commercial-standard shooter", and that "given an English translation...we'd buy it just for the music editor." While IGN64 did not give it a full review, their coverage called it a "high quality creativity app" and placed it second on their list of "Top Nintendo 64 Imports" after Sin & Punishment, lamenting that Nintendo did not give it a US release.
**Axiom of union** Axiom of union: In axiomatic set theory, the axiom of union is one of the axioms of Zermelo–Fraenkel set theory. This axiom was introduced by Ernst Zermelo. The axiom states that for each set x there is a set y whose elements are precisely the elements of the elements of x. Formal statement: In the formal language of the Zermelo–Fraenkel axioms, the axiom reads: ∀A∃B∀c(c∈B⟺∃D(c∈D∧D∈A)) or in words: Given any set A, there is a set B such that, for any element c, c is a member of B if and only if there is a set D such that c is a member of D and D is a member of A. Or, more simply: for any set A, there is a set ⋃A which consists of just the elements of the elements of that set. Relation to Pairing: The axiom of union allows one to unpack a set of sets and thus create a flatter set. Together with the axiom of pairing, this implies that for any two sets, there is a set (called their union) that contains exactly the elements of the two sets. Relation to Replacement: The axiom of replacement allows one to form many unions, such as the union of two sets. However, in its full generality, the axiom of union is independent from the rest of the ZFC axioms: Replacement does not prove the existence of the union of a set of sets if the result contains an unbounded number of cardinalities. Together with the axiom schema of replacement, the axiom of union implies that one can form the union of a family of sets indexed by a set. Relation to Separation: In the context of set theories which include the axiom of separation, the axiom of union is sometimes stated in a weaker form which only produces a superset of the union of a set. For example, Kunen states the axiom as ∀F∃A∀Y∀x[(x∈Y∧Y∈F)⇒x∈A], which is equivalent to ∀F∃A∀x[[∃Y(x∈Y∧Y∈F)]⇒x∈A]. Compared to the axiom stated at the top of this section, this variation asserts only one direction of the implication, rather than both directions. Relation to Intersection: There is no corresponding axiom of intersection. If A is a nonempty set containing E, it is possible to form the intersection ⋂A using the axiom schema of specification as ⋂A = {c∈E : ∀D(D∈A ⇒ c∈D)}, so no separate axiom of intersection is necessary. (If A is the empty set, then trying to form the intersection of A as {c : for all D in A, c is in D} is not permitted by the axioms. Moreover, if such a set existed, then it would contain every set in the "universe", but the notion of a universal set is antithetical to Zermelo–Fraenkel set theory.)
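A small concrete illustration (added here, not part of the article's sources) may help: applying the axiom to a three-element set of sets, and recovering binary union via pairing.

```latex
% Concrete illustration of the axiom of union: for A = {{1,2},{2,3},{4}}, the set B
% guaranteed by the axiom is exactly the union of the members of A:
\[
  \bigcup A \;=\; \{\, c : \exists D\,(c \in D \wedge D \in A) \,\} \;=\; \{1,2,3,4\}.
\]
% Combined with the axiom of pairing, the binary union of two sets a and b is
% recovered as a \cup b = \bigcup \{a, b\}.
```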
**Gaia hypothesis** Gaia hypothesis: The Gaia hypothesis (), also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet. Gaia hypothesis: The Gaia hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. Following the suggestion by his neighbour, novelist William Golding, Lovelock named the hypothesis after Gaia, the primordial deity who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth. Gaia hypothesis: The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence. Overview: Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, Ages of Gaia, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life. Overview: A reduced version of the hypothesis has been called "influential Gaia" in "Directed Evolution of the Biosphere: Biogeochemical Selection or Gaia?" by Andrei G. Lapenis, which states the biota influence certain aspects of the abiotic world, e.g. temperature and atmosphere. This is not the work of an individual but a collective of Russian scientific research that was combined into this peer reviewed publication. It states the coevolution of life and the environment through "micro-forces" and biogeochemical processes. An example is how the activity of photosynthetic bacteria during Precambrian times completely modified the Earth atmosphere to turn it aerobic, and thus supports the evolution of life (in particular eukaryotic life). Overview: Since barriers existed throughout the twentieth century between Russia and the rest of the world, it is only relatively recently that the early Russian scientists who introduced concepts overlapping the Gaia paradigm have become better known to the Western scientific community. These scientists include Piotr Alekseevich Kropotkin (1842–1921) (although he spent much of his professional life outside Russia), Rafail Vasil’evich Rizpolozhensky (1862 – c. 1922), Vladimir Ivanovich Vernadsky (1863–1945), and Vladimir Alexandrovich Kostitzin (1886–1963). 
Overview: Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change. Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one at the end of the Archaean and the beginning of the Proterozoic periods. Overview: Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms and maintain those conditions through homeostasis. In some versions of Gaia philosophy, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. Overview: The Gaia paradigm was an influence on the deep ecology movement. Details: The Gaia hypothesis posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrospheres and the pedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.Gaia evolves through a cybernetic feedback system operated by the biota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes in the Earth's surface, essential for the conditions of life, depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system.The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is being investigated also in other fields like Earth system science. The originality of the Gaia hypothesis relies on the assessment that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events menace them. Details: Regulation of global surface temperature Since life started on Earth, the energy provided by the Sun has increased by 25% to 30%; however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a situation similar to that found in petrochemical smog, similar in some respects to the atmosphere on Titan. This, he suggests, helped to screen out ultraviolet light until the formation of the ozone layer, maintaining a degree of homeostasis. However, the Snowball Earth research has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger Ice Ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the pre Phanerozoic biosphere to fully self-regulate. 
Details: Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of the Earth temperature within the limits of habitability. The CLAW hypothesis, inspired by the Gaia hypothesis, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate. The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere. Currently the increase in human population and the environmental impact of their activities, such as the multiplication of greenhouse gases may cause negative feedbacks in the environment to become positive feedback. Lovelock has stated that this could bring an extremely accelerated global warming, but he has since stated the effects will likely occur more slowly. Details: Daisyworld simulations In response to the criticism that the Gaia hypothesis seemingly required unrealistic group selection and cooperation between organisms, James Lovelock and Andrew Watson developed a mathematical model, Daisyworld, in which ecological competition underpinned planetary temperature regulation.Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies, which are assumed to occupy a significant portion of the surface. The colour of the daisies influences the albedo of the planet such that black daisies absorb more light and warm the planet, while white daisies reflect more light and cool the planet. The black daisies are assumed to grow and reproduce best at a lower temperature, while the white daisies are assumed to thrive best at a higher temperature. As the temperature rises closer to the value the white daisies like, the white daisies outreproduce the black daisies, leading to a larger percentage of white surface, and more sunlight is reflected, reducing the heat input and eventually cooling the planet. Conversely, as the temperature falls, the black daisies outreproduce the white daisies, absorbing more sunlight and warming the planet. The temperature will thus converge to the value at which the reproductive rates of the plants are equal. Details: Lovelock and Watson showed that, over a limited range of conditions, this negative feedback due to competition can stabilize the planet's temperature at a value which supports life, if the energy output of the Sun changes, while a planet without life would show wide temperature changes. The percentage of white and black daisies will continually change to keep the temperature at the value at which the plants' reproductive rates are equal, allowing both life forms to thrive. Details: It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired. Details: Regulation of oceanic salinity Ocean salinity has been constant at about 3.5% for a very long time. Salinity stability in oceanic environments is important as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, and emerging as hot water vents on mid-ocean ridges. 
Details: Regulation of oceanic salinity Ocean salinity has been constant at about 3.5% for a very long time. Salinity stability in oceanic environments is important, as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, which emerges as hot-water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes. In Earth's biogeochemical cycles, the movement of elements is described in terms of sources and sinks. The composition of salt ions within our oceans and seas is: sodium (Na+), chloride (Cl−), sulfate (SO42−), magnesium (Mg2+), calcium (Ca2+) and potassium (K+). The elements that comprise salinity do not readily change and are a conservative property of seawater. There are many mechanisms that change salinity from a particulate form to a dissolved form and back. The known sources of sodium, i.e. salts, are the weathering, erosion, and dissolution of rocks, the products of which are transported into rivers and deposited into the oceans. Details: The suggestion that the Mediterranean Sea functions as one of Gaia's "kidneys" was made by Kenneth J. Hsü in 2001. Hsü suggests the "desiccation" of the Mediterranean is evidence of a functioning Gaia "kidney". In this and earlier suggested cases, it is plate movements and physics, not biology, which perform the regulation. Earlier "kidney functions" were performed during the "deposition of the Cretaceous (South Atlantic), Jurassic (Gulf of Mexico), Permo-Triassic (Europe), Devonian (Canada), and Cambrian/Precambrian (Gondwana) saline giants." Regulation of oxygen in the atmosphere The Gaia theory states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life. The atmospheric composition provides the conditions that contemporary life has adapted to. All atmospheric gases other than the noble gases are either made by organisms or processed by them. Details: The stability of the atmosphere on Earth is not a consequence of chemical equilibrium. Oxygen is a reactive gas, and should eventually combine with gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event. Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume. Traces of methane (at an amount of 100,000 tonnes produced per year) should not exist, as methane is combustible in an oxygen atmosphere. Details: Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. This mechanism, however, would not raise oxygen levels if they became too low. If plants can be shown to robustly over-produce O2, then perhaps only the high-oxygen forest-fire regulator is necessary.
Recent work on the findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, in geologic periods when O2 did exceed 25%, has supported Lovelock's contention. Details: Processing of CO2 Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks. Carbon precipitation, solution and fixation are influenced by the bacteria and plant roots in soils, where they improve gaseous circulation, or in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture carbonaceous tests and shells. Once dead, the living organisms' shells fall. Some arrive at the bottom of shallow seas where the heat and pressure of burial, and/or the forces of plate tectonics, eventually convert them to deposits of chalk and limestone. Much of the falling dead shells, however, redissolve into the ocean below the carbon compensation depth. Details: One of these organisms is Emiliania huxleyi, an abundant coccolithophore algae which may have a role in the formation of clouds. CO2 excess is compensated by an increase of coccolithophorid life, increasing the amount of CO2 locked in the ocean floor. Coccolithophorids, if the CLAW Hypothesis turns out to be supported (see "Regulation of Global Surface Temperature" above), could help increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor precipitation necessary for terrestrial plants. Lately the atmospheric CO2 concentration has increased and there is some evidence that concentrations of ocean algal blooms are also increasing.Lichen and other organisms accelerate the weathering of rocks in the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living organisms. When CO2 levels rise in the atmosphere the temperature increases and plants grow. This growth brings higher consumption of CO2 by the plants, who process it into the soil, removing it from the atmosphere. History: Precedents The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia = PIE grandmother), or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as prefix in geology, geophysics and geochemistry. Golding later made reference to Gaia in his Nobel prize acceptance speech. History: In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked. Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust. In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. 
Vernadsky was a Ukrainian geochemist and was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. Vernadsky was a pioneer of the scientific foundations of the environmental sciences. His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community. History: In the early 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested a living Earth in his biocentric or holistic ethics regarding land. History: It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere etc.—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption and replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow. History: Another influence for the Gaia hypothesis and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photograph Earthrise, taken by astronaut William Anders in 1968 during the Apollo 8 mission, became, through the overview effect, an early symbol for the global ecology movement. History: Formulation of the hypothesis Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars. The first paper to mention it was Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin. A main concept was that life could be detected on a planetary scale by the chemical composition of the atmosphere. According to the data gathered by the Pic du Midi observatory, planets like Mars or Venus had atmospheres in chemical equilibrium. This difference from the Earth's atmosphere was considered to be evidence that there was no life on these planets. History: Lovelock formulated the Gaia Hypothesis in journal articles in 1972 and 1974, followed by a popularizing 1979 book, Gaia: A new look at life on Earth. An article in the New Scientist of February 6, 1975, "The Quest for Gaia", and the popular book-length version of the hypothesis published in 1979 began to attract scientific and critical attention. History: Lovelock at first called it the Earth feedback hypothesis, and it was a way to explain the fact that combinations of chemicals including oxygen and methane persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.
History: Later, other relationships such as sea creatures producing sulfur and iodine in approximately the same quantities as required by land creatures emerged and helped bolster the hypothesis.In 1971 microbiologist Dr. Lynn Margulis joined Lovelock in the effort of fleshing out the initial hypothesis into scientifically proven concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers in the surface of the planet. The American biologist had also awakened criticism from the scientific community with her advocacy of the theory on the origin of eukaryotic organelles and her contributions to the endosymbiotic theory, nowadays accepted. Margulis dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis'. History: James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments and provided a number of useful predictions. History: First Gaia conference In 1985, the first public symposium on the Gaia hypothesis, Is The Earth a Living Organism? was held at University of Massachusetts Amherst, August 1–6. The principal sponsor was the National Audubon Society. Speakers included James Lovelock, George Wald, Mary Catherine Bateson, Lewis Thomas, John Todd, Donald Michael, Christopher Bird, Thomas Berry, David Abram, Michael Cohen, and William Fields. Some 500 people attended. History: Second Gaia conference In 1988, climatologist Stephen Schneider organised a conference of the American Geophysical Union. The first Chapman Conference on Gaia, was held in San Diego, California on March 7, 1988. History: During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four: CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new. History: Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist. Geophysical Gaia: that the Gaia hypothesis generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics. History: Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific.Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia" according to Kirchner, asserted that life tends to make the environment stable, to enable the flourishing of all life. 
Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld Model (and its modifications, above) as evidence against most of these criticisms. Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".Lovelock was careful to present a version of the Gaia hypothesis that had no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations of teleologism ceased, following this conference. History: Third Gaia conference By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000, the situation had changed significantly. Rather than a discussion of the Gaian teleological views, or "types" of Gaia hypotheses, the focus was upon the specific mechanisms by which basic short term homeostasis was maintained within a framework of significant evolutionary long term structural change. History: The major questions were: "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?" "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?" "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be collaborated with using process models or global models of the climate system that include the biota and allow for chemical cycling?"In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production, and Kleidon (2004) agreed stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a symbiotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". 
Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions". History: Fourth Gaia conference A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, VA campus of George Mason University.Martin Ogle, Chief Naturalist, for NVRPA, and long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts-Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. Among many other speakers: Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Dr. Donald Aitken, Principal of Donald Aitken Associates; Dr. Thomas Lovejoy, President of the Heinz Center for Science, Economics and the Environment; Robert Correll, Senior Fellow, Atmospheric Policy Program, American Meteorological Society and noted environmental ethicist, J. Baird Callicott. Criticism: After initially receiving little attention from scientists (from 1969 until 1977), thereafter for a period the initial Gaia hypothesis was criticized by a number of scientists, including Ford Doolittle, Richard Dawkins and Stephen Jay Gould. Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists, the Gaia hypothesis was interpreted as a neo-Pagan religion. Many scientists in particular also criticized the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological—a belief that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota". Criticism: Stephen Jay Gould criticized Gaia as being "a metaphor, not a mechanism." He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor—albeit an exceedingly common and often unrecognized metaphor—one which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agentic quality of living entities, while the organismic metaphors of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole. 
With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons.Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable. Criticism: Natural selection and evolution Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s, W. Ford Doolittle and Richard Dawkins separately argued against this aspect of Gaia. Doolittle argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific. Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution. Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system. Criticism: Lynn Margulis, a microbiologist who collaborated with Lovelock in supporting the Gaia hypothesis, argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.Evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection. More recently Ford Doolittle building on his and Inkpen's ITSNTS (It's The Song Not The Singer) proposal proposed that differential persistence can play a similar role to differential reproduction in evolution by natural selections, thereby providing a possible reconciliation between the theory of natural selection and the Gaia hypothesis. Criticism: Criticism in the 21st century The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A significant argument raised against it are the many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it. Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... 
the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties" to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background" to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record". The CLAW hypothesis, initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding of cloud condensation nuclei has improved. In 2009 the Medea hypothesis was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis.In a 2013 book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines, Toby Tyrrell concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognize that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it". Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works". This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia—that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4) or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it. However, he finds that the two weaker forms of Gaia—Coeveolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment—are both credible, but that it is not useful to use the term "Gaia" in this sense and that those two forms were already accepted and explained by the processes of natural selection and adaptation.
**CDP-glycerol glycerophosphotransferase** CDP-glycerol glycerophosphotransferase: In enzymology, a CDP-glycerol glycerophosphotransferase (EC 2.7.8.12) is an enzyme that catalyzes the chemical reaction CDP-glycerol + (glycerophosphate)n ⇌ CMP + (glycerophosphate)n+1. Thus, the two substrates of this enzyme are CDP-glycerol and (glycerophosphate)n, whereas its two products are CMP and (glycerophosphate)n+1. This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is CDP-glycerol:poly(glycerophosphate) glycerophosphotransferase. Other names in common use include teichoic-acid synthase, cytidine diphosphoglycerol glycerophosphotransferase, poly(glycerol phosphate) polymerase, teichoic acid glycerol transferase, glycerophosphate synthetase, and CGPTase.
**Barfoed's test** Barfoed's test: Barfoed's test is a chemical test used for detecting the presence of monosaccharides. It is based on the reduction of copper(II) acetate to copper(I) oxide (Cu2O), which forms a brick-red precipitate. RCHO + 2Cu2+ + 2H2O → RCOOH + Cu2O↓ + 4H+ (Disaccharides may also react, but the reaction is much slower.) The aldehyde group of the monosaccharide, which normally forms a cyclic hemiacetal, is oxidized to the carboxylate. A number of other substances, including sodium chloride, may interfere. The test is named after the Danish chemist Christen Thomsen Barfoed and is primarily used in botany. The test is similar to the reaction of Fehling's solution with aldehydes. Composition: Barfoed's reagent consists of a 0.33 molar solution of copper(II) acetate in 1% acetic acid solution. The reagent does not keep well and it is therefore advisable to make it up when it is actually required. Procedure: One drop of Barfoed's reagent is added to 2 mL of the given sample in a test tube, boiled for 3 minutes and then allowed to cool. If a red precipitate forms, a monosaccharide is present.
**Tristar 64** Tristar 64: The Tristar 64 is an unlicensed add-on for the Nintendo 64 (N64) video game console. Produced in Hong Kong by Future Laboratory, the Tristar 64 features two additional cartridge ports which are designed to accept cartridges created for the Nintendo Entertainment System (NES, a.k.a. Famicom) and Super Nintendo Entertainment System (SNES, a.k.a. Super Famicom). The device then emulates the NES (via an NES-on-a-chip) or SNES hardware, and allows the cartridge to be run. The device also features built-in cheat cartridge functionality through a program called the X-Terminator, as well as the Memory Editor, which allows SRAM and EEPROM saved game data to be archived and edited. Tristar 64: The Tristar 64 requires a separate power supply, and connects to a television set by way of RCA composite output cables, with the N64's own video output being first routed through the Tristar device. The Tristar 64 is similar to the Super 8, a device which allowed NES cartridges to be played on a SNES console.
**Royal Blood-Fresh** Royal Blood-Fresh: Royal Blood-Fresh (혈궁불로정) is a traditional Korean medicine (Koryo medicine) health supplement derived from soybeans. It is manufactured in North Korea and is the most famous product sold by the North Korean company, Pugang Pharmaceutic.It is marketed as a "blood purifier" and a preventative against deep-vein thrombosis. It is marketed to foreigners during Air Koryo flights and has been sold at Pyongyang Gwan, a North Korean restaurant in Hanoi, Vietnam. It has been dismissed outside of North Korea as a non-scientific "miracle cure".In 2017, three Russian nationals were arrested in South Korea for selling North Korean drugs, which included Royal Blood-Fresh, Kumdang-2, and Neo-Viagra-Y.R.
**VTES 3rd Edition** VTES 3rd Edition: VTES 3rd Edition (Third) is a complete base set for White Wolf's trading card game Vampire: The Eternal Struggle released on 4 September 2006. White Wolf's page dedicated to the set indicates the reasoning for calling it the third edition: "White Wolf's eleventh expansion for Vampire: The Eternal Struggle is a stand-alone base set. It is called Third Edition (after the Camarilla Edition, which is reckoned as the second edition)." The expansion also happens to be the third set based on the Sabbat sect. It contained a whole new set of vampires, but mainly reprints of library cards. Due to insufficient quality management, the distribution of the cards in the boosters and the overall printing quality were significantly worse than in previous expansions. In addition the card backs are printed upside-down (in comparison to all other expansions). De facto it is now required to use card sleeves in tournaments when a player uses cards from the 3rd Edition set mixed with cards from other expansions. These flaws caused some resentment towards the 3rd Edition set in the player community. Nonetheless, the expansion won the InQuest Gamer 2007 Fan Awards for best trading card expansion. Contents: Part of the expansion are 4 different pre-constructed decks with 89 cards each as well as boosters with 11 cards each (5 common, 3 vampire, 2 uncommon and 1 rare). There is a total of 390 cards in this set, of which 136 cards are new, i.e. 10 new common, 10 uncommon, 10 rare cards, 6 fixed cards (from the starters) and 100 vampire cards; the rest of the 254 library cards are actually reprints from older sets. The pre-constructed decks are: Brujah antitribu, Malkavian antitribu, Tremere antitribu, and Tzimisce. The vampires White Lily and Duality (also included in this set) were given as promo cards in a number of magazines. Simultaneously White Wolf released a players kit which contained half of each of the four pre-constructed starters as well as counters and an extensive example of play. Mechanics: Draft Effects—a number of library cards contain a draft effect, meaning that the card can be played during a draft format game, usually with a lesser effect and other requirements than the base version of the card. Trifles—It is now allowed to play a second trifle master card if the first master card played was also a trifle. Before this, the second master card could not be a trifle master card. Cards: Heart of Nizchetus—an equipment card which provides a very good card drawing/replacement ability that does not waste the currently unwanted cards. Helicopter—an equipment card which allows a minion to untap once each turn after a successful action, similar to the Freak Drive card. Mirror Walk—an action modifier requiring the Thaumaturgy discipline which gives stealth to a minion's action, while denying any combat if the action is blocked. Wash—a master out-of-turn card which allows a player to counter a master card played by another player. Due to the trifle property of Wash, the player is still allowed to play a master card of their own in their own master phase next turn. Yawp Court—a master card which allows a Sabbat vampire to block a political action without regard to the stealth/intercept ratio.
**Overstrike** Overstrike: In typography, overstrike is a method of printing characters that are missing from the printer's character set. The character is created by placing one character on another one – for example, overstriking ⟨L⟩ with ⟨-⟩ results in printing a ⟨Ł⟩ (L with stroke) character. The ASCII code supports six different diacritics. These are: grave accent, tilde, acute accent (approximated by the apostrophe), diaeresis (double quote), cedilla (comma), and circumflex accent. Each is typed by typing the preceding character, then backspace, and then the 'related character', which is ⟨`⟩, ⟨~⟩, ⟨'⟩, ⟨"⟩, ⟨,⟩, or ⟨^⟩, respectively, for the above-mentioned accents. With the wide adoption of Unicode (especially UTF-8, which supports a much larger number of characters in different writing systems), this technique is of little use today. However, combining characters such as diacritics are still used to depict characters which cannot be shown otherwise. Overstrike: Many font renderers in computer programs invent missing bold characters by overstriking the normal character with itself, slightly horizontally offset. The horizontal offset is essential since, unlike a typewriter where repeating a letter in exactly the same space will make it darker, most modern printers will not darken repeated "strikes" to the same space. Actual bold fonts are designed with some features thicker and others the same size as a regular font, so the use of this "fake bold" is considered undesirable from a typographic point of view. Overstrike: The character set for the APL programming language includes several characters that were printed by overstriking other characters on printing terminals such as the IBM 2741; for example, the functions ⌽ and ⊖ may be used to reverse the elements of an array. The WordPerfect word processing program included an overstrike functionality; Microsoft Word and LibreOffice/OpenOffice do not. No known keyboard arrangement includes a function key that allows any two characters to be superimposed.
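To make the two composition schemes concrete, here is a small Python sketch. The pairing of base letters with combining marks (U+0301 COMBINING ACUTE ACCENT, U+0335 COMBINING SHORT STROKE OVERLAY) is an illustrative assumption, and whether a backspace overstrike visually merges the two glyphs depends entirely on the output device, as noted above.

```python
def overstrike(base: str, mark: str) -> str:
    """Old terminal/typewriter style: base character, backspace, then the mark."""
    return base + "\b" + mark

def combine(base: str, combining_mark: str) -> str:
    """Modern Unicode style: base character followed by a combining character."""
    return base + combining_mark

# ASCII-era approximation of an acute accent: apostrophe overstruck on the letter.
print(repr(overstrike("e", "'")))        # "e\x08'"

# Unicode combining characters achieve a similar visual effect portably.
print(combine("e", "\u0301"))            # e + COMBINING ACUTE ACCENT -> é
print(combine("L", "\u0335"))            # L + COMBINING SHORT STROKE OVERLAY, approximating Ł
```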
**B3GNT7** B3GNT7: UDP-GlcNAc:betaGal beta-1,3-N-acetylglucosaminyltransferase 7 is a protein in humans that is encoded by the B3GNT7 gene.
**Through colour render** Through colour render: Through colour render is a sand/cement/lime based render that is made from White Portland cement (WOPC) and added pigment to produce a coloured effect that is present throughout the body of the material. The pigment is preblended into the product as part of the manufacturing process to produce a prebagged product. Another name for this type of material is monocouche render. Through colour render: Various mix designs can be produced in renders according to the published Code of Practice for External Renderings, BS5262 1991. The through colour render market within Europe produces a class 3 mix design as per the Code of Practice by the manufacturing companies listed below.
**Bimetric gravity** Bimetric gravity: Bimetric gravity or bigravity refers to two different classes of theories. The first class of theories relies on modified mathematical theories of gravity (or gravitation) in which two metric tensors are used instead of one. The second metric may be introduced at high energies, with the implication that the speed of light could be energy-dependent, enabling models with a variable speed of light. Bimetric gravity: If the two metrics are dynamical and interact, a first possibility implies two graviton modes, one massive and one massless; such bimetric theories are then closely related to massive gravity. Several bimetric theories with massive gravitons exist, such as those attributed to Nathan Rosen (1909–1995) or Mordehai Milgrom with relativistic extensions of Modified Newtonian Dynamics (MOND). More recently, developments in massive gravity have also led to new consistent theories of bimetric gravity. Though none has been shown to account for physical observations more accurately or more consistently than the theory of general relativity, Rosen's theory has been shown to be inconsistent with observations of the Hulse–Taylor binary pulsar. Some of these theories lead to cosmic acceleration at late times and are therefore alternatives to dark energy. Bimetric gravity is also at odds with measurements of gravitational waves emitted by the neutron-star merger GW170817. On the contrary, the second class of bimetric gravity theories does not rely on massive gravitons and does not modify Newton's law, but instead describes the universe as a manifold having two coupled Riemannian metrics, where matter populating the two sectors interacts through gravitation (and antigravitation, if the topology and the Newtonian approximation considered introduce negative mass and negative energy states in cosmology as an alternative to dark matter and dark energy). Some of these cosmological models also use a variable speed of light in the high energy density state of the radiation-dominated era of the universe, challenging the inflation hypothesis. Rosen's bigravity (1940 to 1989): In general relativity (GR), it is assumed that the distance between two points in spacetime is given by the metric tensor. Einstein's field equation is then used to calculate the form of the metric based on the distribution of energy and momentum. Rosen's bigravity (1940 to 1989): In 1940, Rosen proposed that at each point of space-time there is a Euclidean metric tensor $\gamma_{ij}$ in addition to the Riemannian metric tensor $g_{ij}$. Thus at each point of space-time there are two metrics: $ds^2 = g_{ij}\,dx^i\,dx^j$ and $d\sigma^2 = \gamma_{ij}\,dx^i\,dx^j$. The first metric tensor, $g_{ij}$, describes the geometry of space-time and thus the gravitational field. The second metric tensor, $\gamma_{ij}$, refers to the flat space-time and describes the inertial forces. The Christoffel symbols formed from $g_{ij}$ and $\gamma_{ij}$ are denoted by $\{^{\,i}_{jk}\}$ and $\Gamma^i_{jk}$ respectively. Rosen's bigravity (1940 to 1989): Since the difference of two connections is a tensor, one can define the tensor field $\Delta^i_{jk}$ given by $\Delta^i_{jk} = \{^{\,i}_{jk}\} - \Gamma^i_{jk}$. Two kinds of covariant differentiation then arise: $g$-differentiation based on $g_{ij}$ (denoted by a semicolon, e.g. $X_{;a}$), and covariant differentiation based on $\gamma_{ij}$ (denoted by a slash, e.g. $X_{/a}$). Ordinary partial derivatives are represented by a comma (e.g. $X_{,a}$). Let $R^h{}_{ijk}$ and $P^h{}_{ijk}$ be the Riemann curvature tensors calculated from $g_{ij}$ and $\gamma_{ij}$, respectively. In the above approach the curvature tensor $P^h{}_{ijk}$ is zero, since $\gamma_{ij}$ is the flat space-time metric.
Rosen's bigravity (1940 to 1989): A straightforward calculation yields the Riemann curvature tensor $R^h{}_{ijk} = P^h{}_{ijk} - \Delta^h_{ij/k} + \Delta^h_{ik/j} + \Delta^h_{mj}\Delta^m_{ik} - \Delta^h_{mk}\Delta^m_{ij} = -\Delta^h_{ij/k} + \Delta^h_{ik/j} + \Delta^h_{mj}\Delta^m_{ik} - \Delta^h_{mk}\Delta^m_{ij}$. Each term on the right-hand side is a tensor. It is seen that from GR one can go to the new formulation just by replacing $\{^{\,i}_{jk}\}$ by $\Delta^i_{jk}$, ordinary differentiation by covariant $\gamma$-differentiation, $\sqrt{-g}$ by $\sqrt{g/\gamma}$, and the integration measure $d^4x$ by $\sqrt{-\gamma}\,d^4x$, where $g = \det(g_{ij})$, $\gamma = \det(\gamma_{ij})$ and $d^4x = dx^1\,dx^2\,dx^3\,dx^4$. Having once introduced $\gamma_{ij}$ into the theory, one has a great number of new tensors and scalars at one's disposal. One can set up field equations other than Einstein's. It is possible that some of these will be more satisfactory for the description of nature. Rosen's bigravity (1940 to 1989): The geodesic equation in bimetric relativity (BR) takes the form $\frac{d^2x^i}{ds^2} + \{^{\,i}_{jk}\}\frac{dx^j}{ds}\frac{dx^k}{ds} = 0$ (1), which, on splitting the connection, becomes $\frac{d^2x^i}{ds^2} + \Gamma^i_{jk}\frac{dx^j}{ds}\frac{dx^k}{ds} + \Delta^i_{jk}\frac{dx^j}{ds}\frac{dx^k}{ds} = 0$ (2). It is seen from equations (1) and (2) that $\Gamma$ can be regarded as describing the inertial field, because it vanishes by a suitable coordinate transformation. Since the quantity $\Delta$ is a tensor, it is independent of any coordinate system and hence may be regarded as describing the permanent gravitational field. Rosen's bigravity (1940 to 1989): Rosen (1973) found BR to satisfy the covariance and equivalence principles. In 1966, Rosen showed that the introduction of the space metric into the framework of general relativity not only enables one to get the energy-momentum density tensor of the gravitational field, but also enables one to obtain this tensor from a variational principle. The field equations of BR derived from the variational principle are $K^i_j = N^i_j - \tfrac{1}{2}\,\delta^i_j N = -8\pi\kappa\,T^i_j$ (3), where $N^i_j = \tfrac{1}{2}\,\gamma^{\alpha\beta}\left(g^{hi}g_{hj/\alpha}\right)_{/\beta}$ (an equivalent expanded expression in terms of ordinary derivatives and $\Gamma$ follows by writing out the $\gamma$-covariant derivatives), with $N = g^{ij}N_{ij}$, $\kappa = \sqrt{g/\gamma}$, and $T^i_j$ the energy-momentum tensor. Rosen's bigravity (1940 to 1989): The variational principle also leads to the relation $T^i_{j;i} = 0$. Hence from (3) $K^i_{j;i} = 0$, which implies that in BR a test particle in a gravitational field moves on a geodesic with respect to $g_{ij}$. Rosen's bigravity (1940 to 1989): Rosen continued improving his bimetric gravity theory with additional publications in 1978 and 1980, in which he made an attempt "to remove singularities arising in general relativity by modifying it so as to take into account the existence of a fundamental rest frame in the universe." In 1985 Rosen tried again to remove singularities and pseudo-tensors from general relativity. Twice in 1989, with publications in March and November, Rosen further developed his concept of elementary particles in a bimetric field of general relativity. Rosen's bigravity (1940 to 1989): It is found that the BR and GR theories differ in the following cases: propagation of electromagnetic waves; the external field of a high-density star; and the behaviour of intense gravitational waves propagating through a strong static gravitational field. The predictions of gravitational radiation in Rosen's theory have been shown since 1992 to be in conflict with observations of the Hulse–Taylor binary pulsar. Massive bigravity: Since 2010 there has been renewed interest in bigravity after the development by Claudia de Rham, Gregory Gabadadze, and Andrew Tolley (dRGT) of a healthy theory of massive gravity. Massive gravity is a bimetric theory in the sense that nontrivial interaction terms for the metric $g_{\mu\nu}$ can only be written down with the help of a second metric, as the only nonderivative term that can be written using one metric is a cosmological constant.
In the dRGT theory, a nondynamical "reference metric" $f_{\mu\nu}$ is introduced, and the interaction terms are built out of the matrix square root of $g^{-1}f$. In dRGT massive gravity, the reference metric must be specified by hand. One can give the reference metric an Einstein–Hilbert term, in which case $f_{\mu\nu}$ is not chosen but instead evolves dynamically in response to $g_{\mu\nu}$ and possibly matter. This massive bigravity was introduced by Fawad Hassan and Rachel Rosen as an extension of dRGT massive gravity. The dRGT theory is crucial to developing a theory with two dynamical metrics because general bimetric theories are plagued by the Boulware–Deser ghost, a possible sixth polarization for a massive graviton. The dRGT potential is constructed specifically to render this ghost nondynamical, and as long as the kinetic term for the second metric is of the Einstein–Hilbert form, the resulting theory remains ghost-free. The action for the ghost-free massive bigravity is given by $S = -\frac{M_g^2}{2}\int d^4x\,\sqrt{-g}\,R(g) - \frac{M_f^2}{2}\int d^4x\,\sqrt{-f}\,R(f) + m^2 M_g^2\int d^4x\,\sqrt{-g}\,\sum_{n=0}^{4}\beta_n\,e_n(X) + \int d^4x\,\sqrt{-g}\,\mathcal{L}_m(g,\Phi_i)$. Massive bigravity: As in standard general relativity, the metric $g_{\mu\nu}$ has an Einstein–Hilbert kinetic term proportional to the Ricci scalar $R(g)$ and a minimal coupling to the matter Lagrangian $\mathcal{L}_m$, with $\Phi_i$ representing all of the matter fields, such as those of the Standard Model. An Einstein–Hilbert term is also given for $f_{\mu\nu}$. Each metric has its own Planck mass, denoted $M_g$ and $M_f$ respectively. The interaction potential is the same as in dRGT massive gravity. The $\beta_i$ are dimensionless coupling constants and $m$ (or specifically $\beta_i^{1/2}m$) is related to the mass of the massive graviton. This theory propagates seven degrees of freedom, corresponding to a massless graviton and a massive graviton (although the massive and massless states do not align with either of the metrics). Massive bigravity: The interaction potential is built out of the elementary symmetric polynomials $e_n$ of the eigenvalues of the matrices $K = I - \sqrt{g^{-1}f}$ or $X = \sqrt{g^{-1}f}$, parametrized by dimensionless coupling constants $\alpha_i$ or $\beta_i$, respectively. Here $\sqrt{g^{-1}f}$ is the matrix square root of the matrix $g^{-1}f$. Written in index notation, $X$ is defined by the relation $X^\mu{}_\alpha X^\alpha{}_\nu = g^{\mu\alpha}f_{\alpha\nu}$. The $e_n$ can be written directly in terms of $X$ as $e_0(X) = 1$, $e_1(X) = [X]$, $e_2(X) = \tfrac{1}{2}\left([X]^2 - [X^2]\right)$, $e_3(X) = \tfrac{1}{6}\left([X]^3 - 3[X][X^2] + 2[X^3]\right)$, and $e_4(X) = \det X$, where brackets indicate a trace, $[X] \equiv X^\mu{}_\mu$. It is the particular antisymmetric combination of terms in each of the $e_n$ which is responsible for rendering the Boulware–Deser ghost nondynamical.
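The algebra above is easy to verify numerically. The Python sketch below (assuming NumPy and SciPy are available, and using random positive-definite 4×4 matrices as stand-ins for the two metrics, since true Lorentzian metrics would only complicate the matrix square root) builds $X = \sqrt{g^{-1}f}$ and reads the elementary symmetric polynomials $e_0,\dots,e_4$ off its eigenvalues; $e_4$ should match $\det X$.

```python
import numpy as np
from scipy.linalg import sqrtm

def elementary_symmetric(eigenvalues):
    # np.poly returns the coefficients of prod(x - lambda_i); the coefficient of
    # x^(n-k) equals (-1)^k * e_k, so e_k = (-1)^k * coeffs[k].
    coeffs = np.poly(eigenvalues)
    return [((-1) ** k) * coeffs[k] for k in range(len(coeffs))]

rng = np.random.default_rng(0)

def random_spd(n=4):
    # Random symmetric positive-definite matrix, a toy stand-in for a metric.
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

g = random_spd()
f = random_spd()

X = np.real_if_close(sqrtm(np.linalg.inv(g) @ f))   # X = sqrt(g^{-1} f)

e = elementary_symmetric(np.linalg.eigvals(X))
print("e_0..e_4:", np.round(np.real(e), 6))
print("det X   :", round(float(np.real(np.linalg.det(X))), 6))   # should equal e_4
```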
**Fodder** Fodder: Fodder (), also called provender (), is any agricultural foodstuff used specifically to feed domesticated livestock, such as cattle, rabbits, sheep, horses, chickens and pigs. "Fodder" refers particularly to food given to the animals (including plants cut and carried to them), rather than that which they forage for themselves (called forage). Fodder includes hay, straw, silage, compressed and pelleted feeds, oils and mixed rations, and sprouted grains and legumes (such as bean sprouts, fresh malt, or spent malt). Most animal feed is from plants, but some manufacturers add ingredients to processed feeds that are of animal origin. Fodder: The worldwide animal feed trade produced 873 million tons of feed (compound feed equivalent) in 2011, fast approaching 1 billion tonnes according to the International Feed Industry Federation, with an annual growth rate of about 2%. The use of agricultural land to grow feed rather than human food can be controversial (see food vs. feed); some types of feed, such as corn (maize), can also serve as human food; those that cannot, such as grassland grass, may be grown on land that can be used for crops consumed by humans. In many cases the production of grass for cattle fodder is a valuable intercrop between crops for human consumption, because it builds the organic matter in the soil. When evaluating if this soil organic matter increase mitigates climate change, both permanency of the added organic matter as well as emissions produced during use of the fodder product have to be taken into account. Some agricultural byproducts fed to animals may be considered unsavory by humans. Common plants specifically grown for fodder: Alfalfa (lucerne) Barley Common duckweed Birdsfoot trefoil Brassica spp. Kale Rapeseed (canola) Rutabaga (swede) Turnip Clover Alsike clover Red clover Subterranean clover White clover Grass Bermuda grass Brome False oat grass Fescue Heath grass Meadow grasses (from naturally mixed grassland swards) Orchard grass Ryegrass Timothy-grass Corn (maize) Millet Oats Sorghum Soybeans Trees (pollard tree shoots for "tree-hay") Wheat Types: Biochar for cattle Bran Conserved forage plants: hay and silage Compound feed and premixes, often called pellets, nuts or (cattle) cake Crop residues: stover, copra, straw, chaff, sugar beet waste Fish meal Freshly cut grass and other forage plants Grass or lawn clipping waste Green maize Green sorghum Horse gram Leaves from certain species of trees Meat and bone meal (now illegal in cattle and sheep feeds in many areas due to risk of BSE) Molasses Native green grass Oilseed press cake (cottonseed, safflower, sunflower, soybean, peanut or groundnut) Oligosaccharides Processed insects (i.e. processed maggots) Seaweed (including Asparagopsis taxiformis which is used mainly as a supplement to reduce methane emissions by up to 90%) Seeds and grains, either whole or prepared by crushing, milling, etc. Types: Single cell protein(can also be made from atmospheric CO2) Sprouted grains and legumes Yeast extract (brewer's yeast residue) Health concerns: In the past, bovine spongiform encephalopathy (BSE, or "mad cow disease") spread through the inclusion of ruminant meat and bone meal in cattle feed due to prion contamination. This practice is now banned in most countries where it has occurred. 
Some animals have a lower tolerance for spoiled or moldy fodder than others, and certain types of molds, toxins, or poisonous weeds inadvertently mixed into a feed source may cause economic losses due to sickness or death of the animals. The US Department of Health and Human Services regulates drugs of the Veterinary Feed Directive type that can be present within commercial livestock feed. Droughts: Increasing intensities and frequencies of drought events put rangeland agriculture under pressure in semi-arid and arid geographic areas. Innovative emergency fodder production concepts have been reported, such as bush-based animal fodder production in Namibia. During extended dry periods, some farmers have used woody biomass fibre from encroacher bush as their primary source of cattle feed, adding locally-available supplements for nutrients as well as to improve palatability. Production of sprouted grains as fodder: Fodder in the form of sprouted cereal grains such as barley, and legumes can be grown in small and large quantities. Production of sprouted grains as fodder: Systems have been developed recently that allow for many tons of sprouts to be produced each day, year round. Sprouted grains can significantly increase the nutritional value of the grain compared with feeding the ungerminated grain to stock. In addition, they use less water than traditional forage, making them ideal for drought conditions. Sprouted barley and other cereal grains can be grown hydroponically in a carefully-controlled environment. Hydroponically-grown sprouted fodder at 150 mm tall with a 50 mm root mat is at its peak for animal feed. Although products such as barley are grain, when sprouted they are approved by the American Grassfed Association to be used as livestock feed.
**Quasiperiodic tiling** Quasiperiodic tiling: A quasiperiodic tiling is a tiling of the plane that exhibits local periodicity under some transformations: every finite subset of its tiles reappears infinitely often throughout the tiling, but there is no nontrivial way of superimposing the whole tiling onto itself so that all tiles overlap perfectly. See Aperiodic tiling and Penrose tiling for a mathematical viewpoint, and Quasicrystal for a physics viewpoint.
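A full planar example such as a Penrose tiling needs more machinery, but the defining tension — every finite patch recurs, yet no translation maps the whole onto itself — already appears in one dimension. The Python sketch below uses the Fibonacci substitution word as a 1D analogue; this is an illustrative aside, not a construction taken from the article.

```python
def fibonacci_word(iterations=12):
    # Build the Fibonacci word by the recurrence S_n = S_{n-1} + S_{n-2}.
    prev, curr = "A", "AB"
    for _ in range(iterations):
        prev, curr = curr, curr + prev
    return curr

word = fibonacci_word()

# Every finite block recurs throughout the word (uniform recurrence) ...
block = word[:8]
occurrences = [i for i in range(len(word) - len(block)) if word.startswith(block, i)]
print(f"block {block!r} occurs {len(occurrences)} times in {len(word)} symbols")

# ... yet no shift maps the word onto itself: every small candidate period fails.
def is_period(p):
    return all(word[i] == word[i + p] for i in range(len(word) - p))

print("candidate periods up to 20 that survive:", [p for p in range(1, 21) if is_period(p)])
```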
**Spatial model of voting** Spatial model of voting: In social choice theory, the spatial model of voting is used to simulate the behavior of voters in an election, either to explain voter behavior or to estimate the likelihood of desirable or undesirable outcomes under different voting systems. This model positions voters and candidates in a one- or multi-dimensional space, where each dimension represents an attribute of the candidate that voters care about. Voters are then modeled as having an ideal point in this space, and voting for the nearest candidates to that point. (As this is a mathematical model that can apply to any form of election, including non-governmental elections, each dimension can represent any attribute of the candidates, such as a single political issue, a sub-component of an issue, or non-political properties of the candidates, such as perceived corruption, health, etc.) A political spectrum or compass can therefore be thought of as either an attribute space itself, or as a projection of a higher-dimensional space onto a smaller number of dimensions for simplicity. For example, a study of German voters found that at least four dimensions were required to adequately represent all political parties. Accuracy: A study of three-candidate elections analyzed 12 different models of voter behavior, including several variations of the impartial culture model, and found the spatial model to be the most accurate to real-world ranked-ballot election data. (Their real-world data was 883 three-candidate elections of 350 to 1,957 voters, extracted from 84 ranked-ballot elections of the Electoral Reform Society, and 913 elections derived from the 1970–2004 American National Election Studies thermometer scale surveys, with 759 to 2,521 "voters".) A previous study by the same authors had found similar results, comparing 6 different models to the ANES data. A study of evaluative voting methods developed several models for generating rated ballots, and recommends the spatial model as the most realistic. (Their empirical evaluation was based on two elections, the 2009 European Election Survey of 8 candidates by 972 voters, and the Voter Autrement poll of the 2017 French presidential election, including 26,633 voters and 5 candidates.) History: The earliest roots of the model are the one-dimensional Hotelling's law of 1929 and Black's Median voter theorem of 1948.
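A minimal simulation of the model takes only a few lines. The Python sketch below assumes Gaussian-distributed ideal points and Euclidean distance — neither choice is prescribed by the model itself — and derives plurality votes and full ranked ballots from proximity in a two-dimensional attribute space.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voters, n_candidates, dimensions = 1000, 3, 2

# Voters' ideal points and candidate positions in the attribute space.
voters = rng.normal(size=(n_voters, dimensions))
candidates = rng.normal(size=(n_candidates, dimensions))

# Distance from every voter to every candidate.
distances = np.linalg.norm(voters[:, None, :] - candidates[None, :, :], axis=2)

first_choices = distances.argmin(axis=1)      # each voter's nearest candidate
rankings = distances.argsort(axis=1)          # full ranked ballot per voter

totals = np.bincount(first_choices, minlength=n_candidates)
print("plurality totals:", totals, "-> winner:", int(totals.argmax()))
print("sample ranked ballot (voter 0):", rankings[0])
```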
**CUDA** CUDA: CUDA (or Compute Unified Device Architecture) is a proprietary and closed source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.CUDA is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL; and HIP by compiling such code to CUDA. CUDA: CUDA was created by Nvidia. When it was first introduced, the name was an acronym for Compute Unified Device Architecture, but Nvidia later dropped the common use of the acronym. Background: The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time high-resolution 3D graphics compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than general-purpose central processing unit (CPUs) for algorithms in situations where processing large blocks of data is done in parallel, such as: cryptographic hash functions machine learning molecular dynamics simulations physics engines sort algorithms Ontology: The following table offers a non-exact description for the ontology of CUDA framework. Programming abilities: The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives such as OpenACC, and extensions to industry-standard programming languages including C, C++ and Fortran. C/C++ programmers can use 'CUDA C/C++', compiled to PTX with nvcc, Nvidia's LLVM-based C/C++ compiler, or by clang itself. Fortran programmers can use 'CUDA Fortran', compiled with the PGI CUDA Fortran compiler from The Portland Group.In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including the Khronos Group's OpenCL, Microsoft's DirectCompute, OpenGL Compute Shader and C++ AMP. Third party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Common Lisp, Haskell, R, MATLAB, IDL, Julia, and native support in Mathematica. Programming abilities: In the computer game industry, GPUs are used for graphics rendering, and for game physics calculations (physical effects such as debris, smoke, fire, fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.CUDA provides both a low level API (CUDA Driver API, non single-source) and a higher level API (CUDA Runtime API, single-source). The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0, which supersedes the beta released February 14, 2008. CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce, Quadro and the Tesla line. CUDA is compatible with most standard operating systems. 
Programming abilities: CUDA 8.0 comes with the following libraries (for compilation and runtime, in alphabetical order): cuBLAS – CUDA Basic Linear Algebra Subroutines library; CUDART – CUDA Runtime library; cuFFT – CUDA Fast Fourier Transform library; cuRAND – CUDA Random Number Generation library; cuSOLVER – CUDA-based collection of dense and sparse direct solvers; cuSPARSE – CUDA Sparse Matrix library; NPP – NVIDIA Performance Primitives library; nvGRAPH – NVIDIA Graph Analytics library; NVML – NVIDIA Management Library; NVRTC – NVIDIA Runtime Compilation library for CUDA C++. CUDA 8.0 also comes with these other software components: nView – NVIDIA nView Desktop Management Software; NVWMI – NVIDIA Enterprise Management Toolkit; GameWorks PhysX – a multi-platform game physics engine. CUDA 9.0–9.2 comes with these other components: CUTLASS 1.0 – custom linear algebra algorithms; NVCUVID – NVIDIA Video Decoder, deprecated in CUDA 9.2 and now available in the NVIDIA Video Codec SDK. CUDA 10 comes with these other components: nvJPEG – hybrid (CPU and GPU) JPEG processing. CUDA 11.0–11.8 comes with these other components: CUB, one of the newly supported C++ libraries; MIG (Multi-Instance GPU) support; nvJPEG2000 – JPEG 2000 encoder and decoder. Advantages: CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs: Scattered reads – code can read from arbitrary addresses in memory. Unified virtual memory (CUDA 4.0 and above). Unified memory (CUDA 6.0 and above). Shared memory – CUDA exposes a fast shared memory region that can be shared among threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups. Faster downloads and readbacks to and from the GPU. Full support for integer and bitwise operations, including integer texture lookups. Limitations: Whether for the host computer or the GPU device, all CUDA source code is now processed according to C++ syntax rules. This was not always the case. Earlier versions of CUDA were based on C syntax rules. As with the more general case of compiling C code with a C++ compiler, it is therefore possible that old C-style CUDA source code will either fail to compile or will not behave as originally intended. Limitations: Interoperability with rendering languages such as OpenGL is one-way, with OpenGL having access to registered CUDA memory but CUDA not having access to OpenGL memory. Copying between host and device memory may incur a performance hit due to system bus bandwidth and latency (this can be partly alleviated with asynchronous memory transfers, handled by the GPU's DMA engine). Limitations: Threads should be running in groups of at least 32 for best performance, with the total number of threads numbering in the thousands. Branches in the program code do not affect performance significantly, provided that each group of 32 threads takes the same execution path; the SIMD execution model becomes a significant limitation for any inherently divergent task (e.g. traversing a space partitioning data structure during ray tracing). Limitations: No emulator or fallback functionality is available for modern revisions. Valid C++ may sometimes be flagged and prevent compilation due to the way the compiler approaches optimization for target GPU device limitations. C++ run-time type information (RTTI) and C++-style exception handling are only supported in host code, not in device code.
Limitations: In single precision on first-generation CUDA compute capability 1.x devices, denormal numbers are unsupported and are instead flushed to zero, and the precision of both the division and square root operations is slightly lower than IEEE 754-compliant single-precision math. Devices that support compute capability 2.0 and above support denormal numbers, and the division and square root operations are IEEE 754 compliant by default. However, users can obtain the prior, faster gaming-grade math of compute capability 1.x devices if desired, by setting compiler flags to disable accurate divisions and accurate square roots and to enable flushing denormal numbers to zero. Limitations: Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia. Attempts to implement CUDA on other GPUs include: Project Coriander – converts CUDA C++11 source to OpenCL 1.2 C; a fork of CUDA-on-CL intended to run TensorFlow. CU2CL – converts CUDA 3.2 C++ to OpenCL C. GPUOpen HIP – a thin abstraction layer on top of CUDA and ROCm intended for AMD and Nvidia GPUs; it has a conversion tool for importing CUDA C++ source and supports CUDA 4.0 plus C++11 and float16. ZLUDA – a drop-in replacement for CUDA on Intel GPUs, allowing unmodified CUDA applications to run on Intel GPUs with near-native performance. chipStar – can compile and run CUDA/HIP programs on advanced OpenCL 3.0 or Level Zero platforms. Example: One example, in C++, loads a texture from an image into an array on the GPU; another, in Python, computes the product of two arrays on the GPU (an illustrative Python sketch along these lines appears at the end of this article). The unofficial Python language bindings can be obtained from PyCUDA. Additional Python bindings to simplify matrix multiplication operations can be found in the program pycublas, while CuPy directly replaces NumPy. GPUs supported: Supported CUDA compute capability versions are tabulated by CUDA SDK version and microarchitecture (by code name). Note: CUDA SDK 10.2 is the last official release for macOS, as support will not be available for macOS in newer releases. CUDA compute capability by version, with associated GPU semiconductors and GPU card models (separated by their various application areas), is also tabulated; '*' marks OEM-only products. Version features and specifications: Tables in this section cover data types, tensor cores, technical specifications and multiprocessor architecture by compute capability; any missing lines or empty entries reflect a lack of information on that exact item. For more information, read the Nvidia CUDA programming guide. Current and future usages of CUDA architecture: accelerated rendering of 3D graphics; accelerated interconversion of video file formats; accelerated encryption, decryption and compression; bioinformatics, e.g. NGS DNA sequencing (BarraCUDA); distributed calculations, such as predicting the native conformation of proteins; medical analysis simulations, for example virtual reality based on CT and MRI scan images; physical simulations, in particular in fluid dynamics; neural network training in machine learning problems; face recognition; volunteer computing projects, such as SETI@home and other projects using BOINC software; molecular dynamics; mining cryptocurrencies; structure from motion (SfM) software.
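As a stand-in for the Python listing referenced in the Example section above (the original listing is not reproduced here), the following minimal sketch uses CuPy, whose NumPy-like interface the article mentions; the array contents and sizes are arbitrary.

```python
# Minimal illustrative sketch: element-wise product of two arrays computed on
# the GPU with CuPy, which mirrors the NumPy API but allocates on the device.
import cupy as cp

a = cp.arange(1_000_000, dtype=cp.float32)      # lives in GPU memory
b = cp.full(1_000_000, 2.0, dtype=cp.float32)   # lives in GPU memory

c = a * b                     # the multiply kernel runs on the GPU
result = cp.asnumpy(c)        # copy the result back into host (NumPy) memory
print(result[:5])             # [0. 2. 4. 6. 8.]
```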
**Ivory tower** Ivory tower: An ivory tower is a metaphorical place—or an atmosphere—where people are happily cut off from the rest of the world in favor of their own pursuits, usually mental and esoteric ones. From the 19th century, it has been used to designate an environment of intellectual pursuit disconnected from the practical concerns of everyday life. Most contemporary uses of the term refer to academia or the college and university systems in many countries. The term originated from the Biblical Song of Songs (7:4) with a different meaning and was later used as an epithet for Mary. Biblical usage: In the Christian tradition, the term ivory tower is used as a symbol for noble purity. It originates with the Song of Songs 7:4 ("Your neck is like an ivory tower"; in the Hebrew Masoretic text, it is found in 7:5) and was included in the epithets for Mary in the sixteenth-century Litany of the Blessed Virgin Mary ("tower of ivory", turris eburnea in Latin), though the title and image were in use long before that, since the 12th-century Marian revival at least. It occasionally appears in art, especially in depictions of Mary in the hortus conclusus. Although the term is rarely used in the religious sense in modern times, it is credited with inspiring the modern meaning. Modern usage: The first modern usage of "ivory tower" in the familiar sense of an unworldly dreamer can be found in a poem of 1837, "Pensées d'Août, à M. Villemain", by Charles Augustin Sainte-Beuve, a French literary critic and author, who used the term "tour d'ivoire" for the poetical attitude of Alfred de Vigny as contrasted with the more socially engaged Victor Hugo: "Et Vigny, plus secret, Comme en sa tour d'ivoire, avant midi rentrait". [And Vigny, more secretive, as if into his ivory tower, retired before noon.] This poetic use of "tour d'ivoire" may have been an allusion to the rook (or castle) in chess, which is another meaning of the French word tour. Chess pieces were often made of ivory. The name Rook is derived from the Persian rukh ("chariot"), perhaps influenced by the Italian rocca ("fortress"). In early versions of chess, this piece was imagined as conveying and shielding a powerful warrior. Henry James's last novel, The Ivory Tower, was begun in 1914 and left unfinished at his death two years later. Paralleling James' own dismaying experience of the United States after twenty years away, it chronicles the effect on a high-minded returning upper-class American of the vulgar emptiness of the Gilded Age. "You seem all here so hideously rich", says his hero. Thus, there are two meanings mixed together: mockery of an absent-minded savant and admiration of someone who is able to devote his or her entire efforts to a noble cause (hence "ivory", a noble but impractical building material). The term has a rather negative flavor today, the implication being that specialists who are so deeply drawn into their fields of study often cannot find a lingua franca with laymen outside their "ivory towers". Modern usage: In Andrew Hodges' biography of the University of Cambridge scientist Alan Turing, he discusses Turing's 1936–38 stay at Princeton University and writes that "[t]he tower of the Graduate College was an exact replica of Magdalen College, and it was popularly called the Ivory Tower, because of that benefactor of Princeton, the Procter who manufactured Ivory soap."
William Cooper Procter (Princeton class of 1883) was a significant supporter of the construction of the Graduate College, and the main dining hall bears the Procter name. The skylines of Oxford and Cambridge universities, along with those of many Ivy League universities, are dotted with turrets and spires which are often described as 'Ivory Towers'. Modern usage: In Randall Jarrell's essay "The End of the Line" (1942), Jarrell asserts that if modern poetry is to survive then poets must come down from the "Ivory Tower" of elitist composition. Jarrell's main thrust is that the rich poetry of the modernist period was over-dependent upon reference to other literary works. For Jarrell, the Ivory Tower led modern poetry into obscurity. Modern usage: Writers for Philadelphia's other newspapers sarcastically referred to the former headquarters of the establishment Philadelphia Inquirer, a white art deco tower called the Elverson Building, as the "Ivory Tower of Truth." Academic usage: The ivory tower is most often connected with the careers and lifestyles of academics in university and college systems. Such universities have often garnered reputations as elite institutions by joining or creating associations with other universities. In many countries, these institutions have aligned themselves around a specific mission or through athletic ties. Some have criticized the elitism associated with these groups. In certain instances, these ivory-tower universities have received a disproportionate amount of regional and national funding. They also produce a higher proportion of a country's publications and citations. They tend to be overrepresented in top university rankings, such as the Academic Ranking of World Universities, the QS World University Rankings, the Times Higher Education World University Rankings, and the U.S. News & World Report Best Global University Ranking.
**Modedit** Modedit: Modedit was a MOD file editor (a form of tracker) for MS-DOS written by Norman Lin and distributed as shareware in 1991 and 1992. It was one of the first pieces of MOD software available for the PC. Its ability to play MODs through the PC speaker, without requiring additional sound hardware, was achieved by using code written by Mark J. Cox. Releases: The most popular version was the initial release, v2.00, in 1991 (v1.0 was a private release). Its screen was divided into three parts: the pattern editor, the pattern sequence table, and the sample list. Navigation around these parts was by keyboard. Version 3.01 (released late 1992) added a piano roll display and mouse support, but it was less popular. It shipped with a MOD file of The House of the Rising Sun as a demonstration. Editor: The pattern editor showed details of the current pattern in the sequence table. It had 4 columns corresponding to the 4 channels of the MOD file, and each column was divided into 3 smaller columns for pitch (note name and octave number), sample number (corresponding to an entry in the sample list), and special effects code (if any). These values could be edited directly by typing, in a manner reminiscent of a hex editor.
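To illustrate the cell layout just described, here is a small, purely illustrative Python sketch of one pattern row as the editor presents it; the class name, field names and example values are invented for this sketch and are not taken from Modedit itself.

```python
# Purely illustrative sketch: one row of a 4-channel MOD pattern as shown in
# Modedit's pattern editor. Names and example values are invented here.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    pitch: Optional[str] = None    # note name and octave number, e.g. "C-2"
    sample: Optional[int] = None   # number of an entry in the sample list
    effect: Optional[str] = None   # special effects code, if any

# One row holds one cell per channel; a MOD pattern has 4 channels.
row = [Cell("C-2", 1), Cell(), Cell("A-2", 3, "C20"), Cell()]
for channel, cell in enumerate(row, start=1):
    print(f"channel {channel}: {cell}")
```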
**3-(Trifluoromethyl)aniline** 3-(Trifluoromethyl)aniline: 3-(Trifluoromethyl)aniline is an organic compound with the formula CF3C6H4NH2. It is one of three isomers of trifluoromethylaniline. Classified as aromatic amines, these isomers are colorless liquids. The corresponding dimethylamino derivative is also known.
**V420 Aurigae** V420 Aurigae: V420 Aurigae is a high-mass star with an inferred compact companion. Closely orbiting each other every 0.8 days, the two are a source of X-ray emission. The position was identified as an X-ray source using the Uhuru satellite in 1978, and then associated with the star V420 Aurigae by V. F. Polcaro and associates in 1984. The spectrum of the star shows rapid variation in the lines of singly-ionized iron and Balmer line emission, with these varying on a time scale of less than 300 seconds. This lends support to the presence of a compact companion. The system displays an infrared excess, suggesting it has an orbiting circumstellar envelope of gas and possibly dust. The system appears to be positioned at the center of an irregular, wispy nebula that was detected in the infrared band. One of the two filaments in this nebula appears to be connected with the system.
**Pochi and Nyaa** Pochi and Nyaa: Pochi and Nyaa (ポチッとにゃ〜, Pochittonya〜) is a puzzle video game developed by Aiky (continuing on from Compile) and released by Taito towards the end of 2003 for the Neo Geo. It was one of the last games developed for the platform, as well as the final Neo Geo MVS title developed by a third-party company. In 2004, it was ported to the PlayStation 2 by Bandai, featuring several improvements and new characters. Gameplay: The game is played on a pair of 7x12 grids, where the pieces, called Pochis and Nyaas, fall in pairs, similar to Puyo Puyo. Pieces can be moved and rotated as they fall. Unlike many matching puzzle games, the player can accumulate as many pieces of the same color as they want. A falling piece can be changed into a "trigger", which will clear adjacent pieces of the same color and form a chain reaction. A detonation will send nuisance pieces over to the opponent's grid, with longer chains and branching paths increasing the power of the attack. Like in Puyo Puyo, incoming nuisance pieces can be offset by the defending player triggering a chain in response. A player loses when a piece lands in the top-center square of their grid, represented by a skull. The game has a single-player story mode, a two-player versus mode, and a single-player infinite mode. Development: Pochi and Nyaa was announced under a commercial alliance between Compile and Taito, and was initially scheduled for release in mid-September 2002, running on the NAOMI arcade board. However, the game was repeatedly delayed due to Compile's bankruptcy until finally being released in December 2003 for the Neo Geo, with the then-reformed SNK Playmore helping to publish the game. Many characters originally announced for the game did not make it into the final arcade version, but were added as additional characters for the PS2 version. Development: In November 2005, Aiky's intellectual property, including this game, was transferred to D4 Enterprise. Plot: In the sky where several gods live, the dog god Pochi and the cat god Nyaa watch over a festival which occurs every thousand years. During this festival, prospective idols compete against one another, and Pochi and Nyaa bet on the best idols to determine which god will be spoiled for the next 1,000 years. Prim receives an invitation from God promising pumpkin pudding as a reward, so she joins the Pochi and Nyaa festival and competes against various opponents. At the end of the story mode, a giant pudding appears and covers the city. Characters: Pochi and Nyaa (ポチッ&にゃ〜) The two rival worshiped gods. Nyaa supports the first player, while Pochi supports the second. Playable characters Primrose "Prim" Amor (プリムローズ・アモル) The player character in story mode. A novice wizard who lives with her parents and sister in the port city of Solciel. Liv (リヴ) A young plant-race girl. She is a spirit of the world tree who has just been born. Misty (ミスティ) A user of Ancient Arts, a battle technique which guides magic through dance. Jurad Tethtith (ジュラード・テスティス) A strange demon-mage. Kumada Udon (くまだうどん) A white bear that is worshiped as a sacred beast and a mountain god in his hometown. Graber (グラベル) The spirit of a hero who was active 300 years ago. A character of the same name appears in Madō Monogatari for i-mode mobile phones. Characters: Paradisus (パラディスス) The older brother of Jurad, a judge who determines the fate of the deceased in the Court of Justice. He has a "book of the past" in which he writes out the past and future of the desired opponent in the trial.
The website and manga suggest that Paradisus has an alternate identity as a masked man named "Red Moonlight", which he denies. Characters: Señor Shellhead (ミスターシェルヘッド) The final boss of the story mode. A mysterious person with a mask. Characters introduced in the PS2 version Eudex (ユーデクス) The father of Paradisus and Jurad. Former governor of the fifth chamber of the underworld. He passed his role down to his eldest son and lives in retirement. Punpkie (パンプキー) A demon who resembles a jack-o'-lantern. Aggressive, impatient and mischievous. Song-Hua (善花) A dragon-type girl with an older sister. Bomby~na (ボンビ〜ナ) Hidden character. A blue-skinned dragon. He is the fiancé of Song-Hua. Kyle (カイル) Hidden character. Graber's great-grandson. A handsome hero who uses a gun with a yellow silencer.
**PC Paintbrush** PC Paintbrush: PC Paintbrush was a graphics editing program created by the ZSoft Corporation in 1984 for computers running the MS-DOS operating system. History: It was originally developed as a response to the first paintbrush program for the IBM PC, PCPaint, which had been released the prior year by Mouse Systems, the company responsible for bringing the mouse to the IBM PC for the first time. In 1984, Mouse Systems had released PCPaint to compete with Apple Paint on the Apple II computer and was already positioned to compete with MacPaint on Apple Computer's new Macintosh platform. Unlike MacPaint, PCPaint enabled users to work in color. History: When Paintbrush was released the following year, PCPaint had already added 16-color support for the PC's 64-color Enhanced Graphics Adapter (EGA), and Paintbrush followed with EGA support as well. (The EGA supported 64 colors, of which any 16 could be on the screen at a time in normal use.) Also following the lead of Mouse Systems and PCPaint, one of the first pieces of software on the PC to use a computer mouse pointing device, the earliest versions of Paintbrush were distributed by Microsoft, with a mouse included. Both Microsoft and their competitor, Mouse Systems, bundled their mice with Mouse Systems' PCPaint in 1984. At Christmas 1984, amidst record sales volumes in the home computer market, Microsoft had created a "sidecar" bundle for the PCjr, complete with their mouse, but with their competitor's product, PCPaint. With the release of Paintbrush the following year, Microsoft no longer needed to sell the software of their competitor in the PC mouse hardware market in order to have the same market advantage. History: Microsoft's mechanical mice outsold Mouse Systems' optical mice after a few years, but PCPaint outsold Paintbrush until the late 1980s. Unlike most other applications before and since, Paintbrush version numbers were recorded with Roman numerals (e.g. PC Paintbrush II, PC Paintbrush IV). Along with the release of Paintbrush, ZSoft, following in the footsteps of PCPaint's Pictor PIC format, the first popular image format for the PC, created the PCX image format. Version history: The first version of PC Paintbrush, released in 1984, only allowed the use of a limited EGA 16-color palette. PC Paintbrush II was released in 1985. PC Paintbrush 3.10 was released in 1986. PC Paintbrush Plus 1.20 was released in 1987. In 1987, a Microsoft-licensed version was released as Microsoft Paintbrush 2.0. It supported saving images in the PCX or GX1 file formats. It featured adjustable palettes, different aspect ratios, fifteen fonts and supported printers, amongst other options. A Windows 1 and 2 version, named PC Paintbrush 1.05 for Microsoft Windows, was released the same year. A version called Publisher's Paintbrush allowed import of images via TWAIN-based capture devices like hand-held and flatbed scanners. PC Paintbrush III was released in 1988, allowing 256 colors; extended SVGA resolutions were supported through the use of hundreds of custom-tailored graphics drivers. The PCX format grew in capability accordingly. By its final version, Paintbrush was able to open and save PCX, TIFF, and GIF files. Version history: PC Paintbrush IV was released in 1989. PC Paintbrush IV Plus, an updated version supporting scanners, was released the same year.
Also in 1989, PC Paintbrush Plus 1.12 for Windows was released, eventually becoming the Windows Paint program. PC Paintbrush Plus for Windows v1.5 was released in 1990. PC Paintbrush V+ came in 1992. PC Paintbrush for Windows 1.0 was adapted to the Windows 3.0 graphical environment in 1993. Support for 24-bit color and simple photo retouching tools were also added, as well as the ability to open more than one image at a time. The program also added many simulations of real-world media, such as oil paints, watercolors, and colored pencils, and it had a number of new smudge tools that took advantage of the increased color depth. Both PC Paintbrush and Publisher's Paintbrush were supplemented and later replaced with the more budget-oriented PhotoFinish, first released in 1991, with version 4 released in 1994. After ZSoft was sold, resold, and then finally absorbed by The Learning Company, an extremely low-priced and simple graphics application was released in 1994 under the title PC Paintbrush Designer.
**Zhan zhuang** Zhan zhuang: Zhan zhuang (simplified Chinese: 站桩; traditional Chinese: 站樁; pinyin: zhàn zhuāng; lit. 'standing [like a] post') is a training method often practiced by students of neijia (internal kung fu), such as Xing Yi Quan, Bagua Zhang and tai chi. Zhan zhuang is sometimes translated Standing-on-stake, Standing Qigong, Standing Like a Tree, Post-standing, Pile-standing, or Pylon Standing. History: The original Zhan zhuang were health methods used by Taoists; in recent centuries, martial artists who already had static standing methods combined these with the internal mechanics of Zhan zhuang to create a superior exercise. The goal of Zhan zhuang in martial arts has always been to develop a martially capable body structure, but nowadays most practitioners have again returned to a health-preservation orientation in their training, and few teach Zhan zhuang as a martial method. History: Zhan zhuang is the modern term; it was coined by Wang Xiangzhai. Wang, a student of Xing Yi Quan, created a method of Kung Fu based entirely upon Zhan zhuang, known as Yiquan, "Intent Fist." Yiquan's method of study is Zhan zhuang plus movements that continue the feeling of the Standing Post in action. History: The most common Zhan zhuang method is known as Hun Yuan (浑圆; Hún Yuán, "Completely Round," "Round Smoothness") or Cheng Bao (撑抱; Chēng Bào, "Tree Hugging" stance). This posture is entirely Taoist in its origins, has many variations, and is the main training posture in all branches of Yiquan. The practice has recently also become common in tai chi and Qigong schools. In Xing Yi Quan, San Ti Shi (simplified Chinese: 三体势; traditional Chinese: 三體勢; pinyin: sān tǐ shì; lit. 'Heaven, Earth, Man') has been a root practice for centuries. Detail: Those unfamiliar with Zhan zhuang can experience severe muscle fatigue and subsequent trembling at first. Later, once sufficient stamina and strength have been developed, the practitioner can use Zhan zhuang to work on developing the sensation of "opposing forces," as well as one's central equilibrium and sensitivity to specific areas of tension in the body. Zhan zhuang has a strong connection with Traditional Chinese Medicine. Some schools use the practice as a way of removing blockages in Qi flow, believing Zhan zhuang, when correctly practiced, has a normalizing effect on the body; they claim any habitual tension or tissue shortening (or lengthening) is normalized by the practice, and the body regains its natural ability to function optimally. It is claimed that a normalized body will be less prone to musculoskeletal medical conditions, and it is also believed that Zhan zhuang, when practiced for developing relaxed postures, will lead to a beneficial calming effect. The Dantian is also involved in the practice of Zhan zhuang. The amount of time spent practicing Zhan zhuang varies between styles and schools; one may spend anywhere from two minutes to two hours standing in one posture. Many styles, especially the internal styles, combine Post Standing with Qigong training and other coordinated-body methods to develop whole-body coordination for martial purposes. The martial practice is thought to strengthen the body's central nervous system and develop the coordination required for effective martial performance. In Yiquan, a clear distinction is made between health postures and martially oriented postures.
In Bagua Zhang's circle-walking practice, the upper body is held as a Zhan zhuang posture, while the lower body is more dynamic. Books: J.P.C. Moffett and Wang Xuanjie, Traditional Chinese Therapeutic Exercises: Standing Pole, 1994. Lam Kam Chuen, Chi Kung: The Way of Energy, Gaia Books Ltd, 1991 (2005), ISBN 1-85675-215-1. Lam Kam Chuen, The Way of Power: Reaching Full Strength in Body and Mind, Gaia Books Ltd, 2003, ISBN 1-85675-198-8. Peter den Dekker, The Dynamics of Standing Still, Back2Base Publishing BV, 2010, ISBN 978-94-90580-01-8. Yu Yong Nian, El arte de nutrir la vida. Zhang zhuang el poder de la quietud [The Art of Nourishing Life: Zhang Zhuang, the Power of Stillness], Amazon, 2012. Jonathan Bluestein, Research of Martial Arts, Amazon CreateSpace, 2014, ISBN 978-1499122510. Mark Cohen, Inside Zhan Zhuang, MSC Creative Enterprises, 2013, ISBN 978-0988317888.
**Annals of Improbable Research** Annals of Improbable Research: The Annals of Improbable Research (AIR) is a bimonthly magazine devoted to scientific humor, in the form of a satirical take on the standard academic journal. AIR, published six times a year since 1995, usually showcases at least one piece of scientific research being done on a strange or unexpected topic, but most of their articles concern real or fictional absurd experiments, such as a comparison of apples and oranges using infrared spectroscopy. Other features include such things as ratings of the cafeterias at scientific institutes, fake classifieds and advertisements for a medical plan called HMO-NO, and a very odd letters page. The magazine is headquartered in Cambridge, Massachusetts. Annals of Improbable Research: AIR awards the annual science Ig Nobel Prizes, for ten achievements that "first make people laugh, and then make them think". AIR also runs the Luxuriant Flowing Hair Club for Scientists. History: AIR is not the first science parody magazine. The Journal of Irreproducible Results (JIR) was founded by Alex Kohn and Harry Lipkin in 1955, but its editorial staff, including editor Marc Abrahams, left after the magazine was bought by publisher George Scherr in 1994. Scherr filed a number of court actions against AIR, alleging that it was deceptively similar to the Journal and that it had stolen the name "Ig Nobel Prize", but these actions were unsuccessful. Profile: Occasional AIR articles are factual and illuminating, if a bit offbeat. For example, in 2003 researcher-documentary producer Nick T. Spark wrote about the background and history of Murphy's Law in a four-part article, "Why Everything You know About Murphy's Law is Wrong". It was revised, expanded and later published in June 2006 as the book A History of Murphy's Law. Another example: it was scientifically proved and waggishly reported that instruments can "distinguish shit from Shinola."
**Zombie Apocalypse (video game)** Zombie Apocalypse (video game): Zombie Apocalypse is a downloadable action shoot 'em up video game developed by Nihilistic Software and published by Konami. In 2011, a sequel was released, Zombie Apocalypse: Never Die Alone. Gameplay: Zombie Apocalypse is a multidirectional shoot 'em up. The player controls one of four characters through 55 levels set in seven different areas. The player must rescue survivors, and kill waves of zombies using a range of weapons and the environment. Use of environmental kills rewards the player with more points for their score. Every five kills awards the player with another score multiplier, which resets to one upon death. Each of the game's modes can be played in single or multiplayer. There are 12 trophies/achievements available. Playing through the game unlocks new modes. Development: Nihilistic sought to make a pure arcade shooter, akin to Robotron 2084 and Smash TV. Inspiration for the environments and characters was taken from zombie films, including Night of the Living Dead and Return of the Living Dead. Reception: The game received "mixed or average reviews" on both platforms according to the review aggregation website Metacritic. IGN said, "[Zombie Apocalypse]...is inconsequential." GameSpot called it "Robotron: 2084 with zombies...what it lacks in innovation it more than makes up for with good, mindless fun." Destructoid praised the variety of zombies in the PlayStation 3 version review, but added "by the time you hit your 25th night, you've pretty much seen them all." However, 1Up.com criticized the repetition, as well as the difficulty level and enemy variety, saying, "Worse than simply being tedious, though, is how jaw-grindingly frustrating Zombie Apocalypse becomes. As the game wears on, absurd variants...replace the run-of-the-mill brain-eaters."
**Postessive case** Postessive case: In linguistics, the postessive case (abbreviated POSTE) is a noun case that indicates position behind something. This case is found in Northeast Caucasian languages like Lezgian and Agul. In Lezgian, the suffix -хъ (-qh), when added to the ergative-case noun, marks the postessive case. This case is now rarely used for its original meaning of "behind" and is often used to mean "with" or "in exchange for".
**Java Device Test Suite** Java Device Test Suite: Sun's Java Device Test Suite (JDTS) is the de facto industry-standard tool for assessing the quality of Java Platform, Micro Edition (Java ME platform) implementations. This tool performs quality testing for devices using the Java ME platform. A feature that distinguishes the Java Device Test Suite from Technology Compatibility Kits (TCKs) is its focus on an implementation's quality instead of an implementation's specification compliance. Java Device Test Suite: The Java Device Test Suite is an extensible set of test packs, a shared management facility, and a distributed test execution harness that can be used to assess the quality of any device that implements a compatible combination of the Java ME technologies (see the descriptions of these technologies on jcp.org). Categories: The Java Device Test Suite's tests can be divided into three main categories: Benchmark tests compare the performance of a device with a reference standard. Readiness tests assess a device's ability to run tests and discover the application programming interfaces (APIs) that a device supports. General tests (divided by test packs). Tests in test packs can be divided into several groups by tested subsystem: Over-the-air (OTA) tests verify that a device can implement application life cycle operations and can communicate with a provisioning server. Security tests verify the correct implementation of the certificate, permission, and policy model. Network tests verify the implementation of different protocols: HTTP, HTTPS, Socket, UDP, SMS, Bluetooth and so on. Several test sets verify the channel between two implementations (tests with a partner). Categories: GUI tests verify the implementation of the graphical system for different objects. Virtual machine tests (including JASM tests) verify the implementation of the VM core. The Java Device Test Suite has approximately 11,000 tests that can be extended with new tests written by Sun engineers or by others, including users of the test suite. Users can choose to run any combination of tests, according to the features supported by a device and available resources, and make use of framework features: Local application servers – Testers can install dedicated local application servers (relays) on the computers that host their harnesses. This configuration can be used to test devices that connect to the relay host by a serial cable (local link). A tester can switch a harness between a local relay and the standard shared relay. Categories: Feature-based test selection and reporting – Tests are alternatively grouped by their correspondence to important device features, such as multimedia MP3 playback, for example. A user can easily select all tests that exercise this feature, and, after a test run, a user can easily see how many tests related to MP3 playback failed. Relevance (configuration-based) filtering – Tests that are not applicable for execution under the current device configuration (device template) are automatically filtered out of the test run. Test selection and reporting by failure severity – When testing time is limited, subsets of tests can be selected based on their importance. Test failures can be similarly analyzed by test importance. Multiple configurable emulators – Users can add device emulators and switch them between normal and debug modes. Categories: Results Database Services – Storage for test results with history data, and a web-based UI for querying and reporting results. Device-specific Template Generation –
Readiness tests automatically discover device capabilities, and the user can generate configuration templates based on the readiness results. Bluetooth Data Transfer Channel – The device can send test logs and results over Bluetooth. Test Run Automator – A test automation tool that allows interactive tests to run without user intervention; this standalone tool stores user actions and device responses, then repeats them automatically and compares results. Categories: Custom Test Libraries – Allows developers to inject a private Java library into the test bundle. Template Manager – A tool that helps to organize templates in hierarchies and to synchronize updated values from parent to children. Portable Templates – Templates can be easily exchanged between different JDTS systems.
**PinpointBPS** PinpointBPS: PinpointBPS is a methodology for process improvement in laboratories. It is underpinned by eight principles that form the basis for decision-making in a laboratory. While its application is mainly in healthcare — particularly medical laboratories — it has also been applied in other industries. The methodology has been heralded as "groundbreaking" in the field of laboratory performance improvement. Overview: Medical laboratories operate in highly regulated environments that demand consistent quality of patient outputs. The external environment’s impact on the broader global healthcare industry is also dictating the need for ongoing improvements in quality and delivery while using fewer resources, which leads to cost savings. Among other regulations, the Carter Report in the United Kingdom dictates that £200 million in cost savings must be achieved by 2020, while the Affordable Care Act in the United States, also known as Obamacare, imposed further taxation requirements on medical laboratories, reducing their ability to save costs. Dr. Jonathan Berg stated at The Royal College of Pathologists' annual meeting in 2012 that "We need to be creative and innovative in the services we offer, and we need to get our finances under control", indicating the critical need for innovation within pathology and a stronger focus on financial controls and performance. Within this context, PinpointBPS was developed as a methodology focusing on innovation, risk reduction and financial impact in laboratory performance improvement. Methodology: The PinpointBPS methodology is centered on the following process of discovery, which highlights current performance, expected future performance and the means to achieve it. The methodology has also been aligned with ISO 15189 requirements. The steps below provide a high-level overview of the methodology in practice. Methodology: 1. Focus on quantified value While all effort within the laboratory is put into value creation, from turnaround time improvements to quality output and cost savings or profit increases, it is important that the performance of each of these can be quantified and their relationship with the financial performance of the laboratory understood. This enables the laboratory to understand current performance needs while at the same time establishing the key performance indicators that are important and should be measured. Methodology: 2. Determine current performance Understanding the current performance of the laboratory provides the necessary context to highlight performance improvement requirements as well as areas of excellence. This should be conducted by looking at sample turnaround time and resource utilization, and should be considered on a holistic basis, creating a virtual model of the whole laboratory by mapping out all processes and activities in detail. This provides an end-to-end perspective on current performance and forms the basis for future performance improvement initiatives. Methodology: 3. Understand future performance Expectations for future performance should be set through business process modeling, with current performance as the foundation. Using LIS data as well as data from workforce scheduling, and combining this data with the process model of the laboratory, creates an environment where changes to the laboratory can be simulated and the impact on performance understood in quantifiable terms.
It is important to validate both the integrity of the data and the accuracy of the model. Once a desirable outcome for future performance has been found, the difference (or delta) in performance is evaluated to understand future performance requirements. Methodology: 4. Establish performance delta and requirements The difference between current (baseline) and future (expected) performance highlights the necessary process changes and related initiatives that are required to achieve the future performance. This impact should be measured according to finance (cost or profit), quality and turnaround time. For these changes to take effect, each change initiative is prioritized based on a quantified understanding of its impact on the laboratory's bottom line, while also improving turnaround time, quality or both. Methodology: Comparison with Lean While the PinpointBPS methodology can support Lean (and other continuous improvement methodologies like Six Sigma) in that it ultimately aims to provide patients and other stakeholders with quality outputs, its approach differs from other methodologies like the Lean Laboratory in various ways: While Lean focuses on waste and its elimination, PinpointBPS focuses on understanding processes and the impact of process changes on value while reducing risk. Methodology: The two approaches also define success with different degrees of ambiguity. The focus of Lean is to achieve "the happy situation of perfect value provided with zero waste", while PinpointBPS aims to establish a quantified definition of future value ("facts and figures, no fluff"). Lean evolved from the manufacturing approach first designed by Toyota in the 1950s, while PinpointBPS has been specifically developed for use in pathology performance improvement. Lean allows for the focus and treatment of individual or isolated performance issues, while PinpointBPS advocates a system-wide approach, first creating visibility and then testing. Prioritization in Lean is done according to customer priority or on a FIFO (first in, first out) basis, while PinpointBPS encourages prioritization according to expected, quantified output linked to cost reduction or profit generation. PinpointBPS in Practice: PinpointBPS is based on eight practical principles that allow laboratories to take ownership of the methodology and performance improvement initiatives. The 8 Principles 1. Boost the bottom-line, or bust As with any organization, value creation is the ultimate goal; however, focusing purely on quality output may neglect financial value. Understanding the ultimate financial impact of any value-creating activity is critical for ongoing sustainability. 2. Led by a common language Ensuring that everyone in the organization (and value chain) has the same understanding of value creation and how it links to financial performance. Continuous innovation requires us to draw from the same knowledge and speak the same language. We establish a universal truth that enables us to work together seamlessly. 3. Trust the transparency We value a clear line of sight into our strengths, our weaknesses and the connections between every process, person, and piece of technology. 4. Facts and figures, no fluff We believe in making decisions based on evidence. In our quest for bottom-line impact, every business action must be supported by real, tangible proof. 5. Driven by decisiveness We are empowered through understanding our performance and what shapes it.
This allows everyone in our organization to make decisions swiftly, with understanding and certainty. 6. Inaccuracy is inexcusable Close is not close enough. Every decision we take is based on fact, and our execution must be accomplished with the same precision. 7. Big impact beats small We believe in following the big opportunities that significantly impact the bottom line, rather than wasting time and resources on those that do not. 8. Knowledge knows no bounds We are only as good as the collective we represent. We fundamentally believe in continually imparting knowledge and learning from each other to move the industry forward. Tools: Various tools have been developed to enable the methodology in laboratories, including performance overviews, standardized process mapping using BPMN (Business Process Model and Notation), and simulation and scenario modeling. Professional certification: PinpointBPS professional certification can be granted both to laboratories that practice the methodology and to people who have successfully completed further study of the methodology. Two levels of PinpointBPS professional certification exist. These certifications have been accredited by PACE, CPD, and the Royal College of Pathologists. Champion A PinpointBPS Champion is a professional who is able to interpret laboratory performance improvement metrics, map out all processes within a laboratory and make improvement recommendations. Master A PinpointBPS Master builds on the PinpointBPS Champion certification and is a professional who is able to simulate changes to a laboratory, interpret performance outputs, make performance improvement recommendations and manage initiatives from start to finish.
**Effluent sewer** Effluent sewer: Effluent sewer systems, also called septic tank effluent gravity (STEG), solids-free sewer (SFS), or septic tank effluent drainage (STED) systems, have septic tanks that collect sewage from residences and businesses, and the liquid fraction of sewage that comes out of the tank is conveyed to a downstream receiving body such as either a centralized sewage treatment plant or a distributed treatment system for further treatment or disposal away from the community generating the sewage. Most of the solids are removed by the interceptor tanks, so the treatment plant can be much smaller than a typical plant and any pumping for the supernatant can be simpler without grinders (sometimes water pumps are sufficient). An alternative effluent sewer which is similar to the STEG system is the STEP system. Because of the vast reduction of solid wastes and the capture of fats, oils and grease (FOG) within the interceptor tank, a pumping system can be used to move the wastewater under pressure rather than a gravity-driven conveyance system. Design considerations: Effluent pumping sewers have small diameter pipes that follow the contour of the land and are only buried a metre or two underground. While an effluent sewer can use gravity to move waste, the ability to move waste with a pressure system can be a big advantage in places where a gravity system is impractical. Compared to conventional sewer systems, effluent sewer systems can be installed at a shallow depth and do not require a minimum wastewater flow or slope to function. Effluent sewer systems, as well as all sewer systems, can use two methods to transport wastewater to a treatment facility. These methods are gravity and pumping, also called pressure systems. Gravity systems use pipes that are laid on a slight downhill slope to transport wastewater. Effluent pumping systems have pipes that are buried at a constant depth, such as a metre and a half, and rely on pumping stations that create pressure to move the waste to a treatment facility. An effluent sewer that uses gravity may be called a septic tank effluent gravity (STEG) system, while a pumping system may be called a septic tank effluent pumping (STEP) system. It is also possible to have a hybrid system that uses gravity and pumping. Gravity and pumping effluent sewer systems both have advantages and disadvantages. The best type of system to use depends on the area it will be serving. Factors such as population size, topography, groundwater level, as well as locations for pumping stations and the treatment plant, must be taken into account. STEG systems should not be confused with traditional sewer systems that use gravity to transport untreated sewage to a wastewater treatment plant, which are typically referred to as gravity sewer systems.
Combined sewers have higher operating costs due to the larger volume of wastewater that has to be treated, and they may require larger treatment plants, as well. In addition, when it rains very hard, the treatment plant will not be able to keep up, which can result in untreated wastewater being dumped into the plant's outfall, which may be a river, lake or ocean. When this occurs, the operator of the sewer is usually fined by one or more of the government bodies that oversee the body of water that the wastewater was dumped into. To prevent this, some cities have tanks, pits or ponds to store the excess wastewater until it can be properly treated. To prevent groundwater contamination, the pits and ponds should have liners if sewage has already been combined with the storm runoff. Comparison with other systems: Septic tanks Effluent sewers also currently serve fewer people than septic systems, which also use septic tanks, but simply dispose of the effluent by draining it into a leach field. About one quarter of United States homes dispose of their wastewater with septic tanks. However, effluent sewers are being looked at as a sewage treatment solution in areas where gravity sewer systems are not well-suited or when the high capital cost to build a gravity system is prohibitive. Areas that are less than ideal for gravity systems include areas that are large, but extremely flat and areas that require long-distance pumping, such as where homes are widely spread out or when several small villages or towns connect their sewage systems so that a centralized plant can be built. Comparison with other systems: Another problem area is a place where there are many homes or businesses at or near the lowest elevation in the area, such as sea level for a coastal city. Typically, waste is pumped uphill under low pressure to the main sewer line in such situations, either after it has been through a septic tank or after it has been ground up into a slurry by a grinder. Grinding can be done when the waste of many homes or businesses is combined or smaller grinders can be installed at each home or business. A disadvantage of using grinders is that they require electricity, and a disadvantage of using septic tanks is that they require solid waste buildup to be removed every one to three years, depending on the size of the tank and the number of people using the system. Comparison with other systems: Septic tanks also have a higher capital cost if they are being installed for new homes or if the existing septic tanks must be replaced. If there is a suitable septic tank in place, pumping the effluent from the tank is the lowest cost option for initial costs. Whether the septic tank is the lowest cost option over time depends on the cost of electricity in the area, how often the tank must be emptied and how much it costs to have the solids pumped out of the tank.
**Natural Color System** Natural Color System: The Natural Color System (NCS) is a proprietary perceptual color model. It is based on the color opponency hypothesis of color vision, first proposed by German physiologist Ewald Hering. The current version of the NCS was developed by the Swedish Colour Centre Foundation, from 1964 onwards. The research team consisted of Anders Hård, Lars Sivik and Gunnar Tonnquist, who in 1997 received the AIC Judd award for their work. The system is based entirely on the phenomenology of human perception and not on color mixing. It is illustrated by a color atlas, marketed by NCS Colour AB in Stockholm. Definition: The NCS states that there are six elementary color percepts of human vision—which might coincide with the psychological primaries—as proposed by the hypothesis of color opponency: white, black, red, yellow, green, and blue. The last four are also called unique hues. In the NCS all six are defined as elementary colors, irreducible qualia, each of which would be impossible to define in terms of the other elementary colors. All other experienced colors are considered composite perceptions, i.e. experiences that can be defined in terms of similarity to the six elementary colors. E.g. a saturated pink would be fully defined by its visual similarity to red, blue, black and white. Colors in the NCS are defined by three values, expressed in percentages, specifying the degree of blackness (s = relative visual similarity to the black elementary color), chromaticness (c = relative visual similarity to the "strongest", most saturated, color in that hue triangle), and hue (Φ = relative similarity to one or two of the chromatic elementary colors red, yellow, green and blue, expressed in at most two percentages). This means that a hue can be expressed as either Y (yellow), YR (yellow with a red component), R (red), RB (red with a blue component), B (blue), etc. No hue is considered to have visual similarity to both hues of an opponent pair; i.e. there is no "redgreen" or "yellowblue". The blackness and the chromaticness together add up to less than or equal to 100%. The remainder from 100%, if any, gives the amount of whiteness (w). Achromatic colors, i.e. colors that lack chromatic content (ranging from black, to grey and finally white), have their hue component replaced with a capital "N", for example "NCS S 9000-N" (a more or less complete black). NCS color notations are sometimes prepended by a capital "S", which denotes that the current version of the NCS color standard was used to specify the color. Definition: In summary, the NCS color notation S 2030-Y90R (a light, pinkish red) is read as follows: the leading "S" indicates the current NCS 1950 standard; "20" is the blackness and "30" the chromaticness, which together form the nuance (2030); and "Y90R" is the hue, 10% yellow + 90% red. Saturation and lightness: In addition to the above values s (blackness), w (whiteness), c (chromaticness) and Φ (hue), the NCS system can also describe the two perceptual quantities saturation and lightness. NCS saturation (m) describes the relation between a color's chromaticness and whiteness (regardless of hue), and is defined as the ratio between the chromaticness and the sum of the whiteness and chromaticness: m = c/(w + c). NCS saturation ranges between 0 and 1. Definition: NCS lightness (v) is a color's perceptual tendency to contain more of the achromatic elementary colors black or white than another color. NCS lightness values vary from 0 for the elementary color black (S) to 1 for the elementary color white (W).
For achromatic colors, that is any black, gray or white with no chromatic component (c = 0), lightness is defined as v = (100 − s)/100, i.e. the whiteness expressed as a fraction. Definition: For chromatic colors, the NCS lightness is determined by comparing the chromatic color to a reference scale of achromatic colors (c = 0); the color is assigned the same lightness value v as the sample on the reference scale to which it has the least noticeable edge-to-edge difference.
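To make the bookkeeping above concrete, here is a small, purely illustrative Python sketch (not an official NCS tool) that unpacks a nuance/hue notation such as 2030-Y90R and derives whiteness and NCS saturation from the definitions given in this article.

```python
# Illustrative sketch only: unpack a simple NCS nuance/hue notation and derive
# whiteness (w) and NCS saturation (m) from the definitions in the article.
# This is a simplified reading of the notation, not an official NCS parser.

def parse_ncs(notation: str) -> dict:
    nuance, hue = notation.split("-")          # e.g. "2030", "Y90R"
    s = int(nuance[:2])                        # blackness, per cent
    c = int(nuance[2:])                        # chromaticness, per cent
    w = 100 - s - c                            # whiteness is the remainder
    m = c / (w + c) if (w + c) else 0.0        # NCS saturation, between 0 and 1
    return {"blackness": s, "chromaticness": c, "whiteness": w,
            "hue": hue, "saturation": round(m, 2)}

print(parse_ncs("2030-Y90R"))   # the notation discussed above
print(parse_ncs("0580-Y10R"))   # the yellow of the Swedish flag (example below)
```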
The NCS coincides with the CMYK as regards the green-yellow-red segment of the color circle, but differs from it in seeing the saturated subtractive primary colors magenta and cyan as complex sensations of a "redblue" and a "greenblue" respectively, and in seeing green not as a secondary color mixed from yellow and cyan but as a unique hue. The NCS explains this by assuming that the behavior of paint is partly counterintuitive to human phenomenology: observing that a mix of yellow and cyan paint results in a green color would thus be at odds with the intuition of pure human perception, which has no percept of such a "yellowblue". Comparisons to other color systems: Hering argued that yellow is not a "redgreen" but a unique hue. Colorimetrist Jan Koenderink, in a critique of Hering's system, considered it inconsistent not to apply the same argument to the other two subtractive primaries, cyan and magenta, and see them as unique hues as well, rather than a "greenblue" or a "redblue". He also pointed out the difficulty within a four-color theory that the primaries would not be equally spaced in the color circle, and the problem that Hering does not account for the fact that cyan and magenta are brighter than green, blue and red, whereas this is, in his view, elegantly explained within the CMYK model. He concluded that Hering's scheme fitted common language better than color experience. Overview of the six base colors in the Natural Color System with their equivalents in hex triplet, RGB and HSV coordinate systems: note that these codes are only approximate, as the definition of NCS elementaries is based on perception and not production of color.
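Because the nuance digits of an NCS notation directly encode blackness and chromaticness, the whiteness and the saturation described above follow by simple arithmetic (w = 100 − s − c and m = c/(w + c)). The following minimal Python sketch illustrates this; the parser is deliberately simplified, assumes a well-formed notation of the kind quoted in the article, and is not an official NCS tool.

```python
def parse_ncs(notation: str):
    """Parse a simplified NCS notation such as 'S 2030-Y90R' or 'S 9000-N'.

    Returns blackness s, chromaticness c, whiteness w, saturation m and the
    hue string. This is a sketch of the notation described above, not an
    official NCS parser.
    """
    notation = notation.removeprefix("NCS").strip().removeprefix("S").strip()
    nuance, hue = notation.split("-")
    s, c = int(nuance[:2]), int(nuance[2:])
    w = 100 - s - c                      # whiteness is the remainder from 100%
    m = c / (w + c) if (w + c) else 0.0  # NCS saturation m = c / (w + c)
    return s, c, w, m, hue

# The example from the text: S 2030-Y90R (light, pinkish red).
print(parse_ncs("S 2030-Y90R"))  # (20, 30, 50, 0.375, 'Y90R')
```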
**Hot cell** Hot cell: Shielded nuclear radiation containment chambers are commonly referred to as hot cells. The word "hot" refers to radioactivity. Hot cells are used in both the nuclear-energy and the nuclear-medicine industries. They are required to protect individuals from radioactive isotopes by providing a safe containment box in which operators can control and manipulate the equipment required. Nuclear industry: Hot cells are used to inspect spent nuclear fuel rods and to work with other items which are high-energy gamma ray emitters. For instance, the processing of medical isotopes, having been irradiated in a nuclear reactor or particle accelerator, would be carried out in a hot cell. Hot cells are of nuclear proliferation concern, as they can be used to carry out the chemical steps used to extract plutonium (whether weapons grade or not) from reactor fuel. The cutting of the used fuel, the dissolving of the fuel in hot nitric acid and the first extraction cycle of a nuclear reprocessing PUREX process (highly active cycle) would need to be done in a hot cell. The second cycle of the PUREX process (medium active cycle) can be done in gloveboxes. Nuclear medicine industry: Hot cells are commonly used in the nuclear medicine industry: for the production of radiopharmaceuticals according to GMP guidelines (industry), and for the manipulation and dispensing of radiopharmaceuticals (hospitals). The user must never be subject to shine paths emitted from the radioactive isotopes, and therefore there is generally heavy shielding around the containment boxes, which can be made out of 316 stainless steel or other materials such as PVC or Corian. This shielding can be ensured by the use of lead (common) or materials such as concrete (very large walls are therefore required) or even tungsten. The amount of radioactivity present in the hot cell, the energy of the gamma photons emitted by the radioisotopes, and the number of neutrons that are formed by the material determine how thick the shielding must be. For instance, a 1 kilocurie (37 TBq) source of cobalt-60 will require thicker shielding than a 1 kilocurie (37 TBq) source of iridium-192 to give the same dose rate at the outer surface of the hot cell. Also, if actinide materials such as californium or spent nuclear fuel are used within the hot cell, then a layer of water or polyethylene may be needed to lower the neutron dose rate. Viewing windows: In order to view what is in the hot cell, cameras can be used (but these require regular replacement) or, most commonly, lead glass is used. Viewing windows: There are several densities for lead glass, but the most common is 5.2 g/cm3. A rough calculation for lead equivalence is to multiply the Pb thickness by 2.5 (e.g. 10 mm Pb would require a 25 mm thick lead glass window). Older hot cells used a ZnBr2 solution in a glass tank to shield against high-energy gamma rays. This shielded the radiation without darkening the glass (as happens to leaded glass with exposure). This solution also "self-repairs" any damage caused by radiation interaction, but leads to optical distortion due to the difference in optical indices of the solution and glass. Manipulators: Telemanipulators or tongs are used for the remote handling of equipment inside hot cells, thereby avoiding heavy finger/hand doses.
Gloves: Lead loaded gloves are often used in conjunction with tongs as they offer better dexterity and can be used in low radiation environments (such as hot cells used in hospital nuclear medicine labs). Some companies have developed tungsten loaded gloves which offer greater dexterity than lead loaded gloves, with better shielding than their counterparts. Gloves must be regularly replaced, as the chemicals used for the cleaning/sterilisation process of the containments cause considerable wear and tear. Clean rooms: Hot cells can be placed in clean rooms with an air classification ranging from D to B (C is the most common). Types: Research and development cells These cells are often used to test new chemistry units or processes. They are generally fairly large, as they require flexibility for the use of varying chemistry units which can greatly vary in size (e.g. Synthera and TRACERlab). Some cells require remote manipulation. Types: Stack mini-cells This type of hot cell is used purely for production of radiopharmaceuticals. A chemistry unit is placed in each cell, the production process is initiated (receiving the radioactive 18F from the cyclotron) and, once finished, the cells are left closed for a minimum of 6 hours, allowing the radiation to decrease to a safe level. No manipulation is necessary here. Types: Production and dispense cells Cells used to dispense products. For example, once fludeoxyglucose (18F) (FDG) has been produced by labelling a glucose analogue with 18F, a bulk vial is placed in a dispensing cell and its contents carefully dispensed into a number of syringes or vials. Remote manipulation is crucial at this stage.
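The rule of thumb quoted above for lead-glass viewing windows (multiply the required Pb thickness by about 2.5 for ~5.2 g/cm3 lead glass) can be written as a one-line helper. This is a minimal sketch of that approximation only; it is not a substitute for a proper shielding calculation, which depends on the source, geometry and glass specification.

```python
def lead_glass_thickness_mm(pb_equivalent_mm: float, factor: float = 2.5) -> float:
    """Approximate lead-glass thickness for a required lead equivalence.

    Uses the ~2.5x rule of thumb mentioned in the text for 5.2 g/cm3 lead glass.
    """
    return pb_equivalent_mm * factor

# Example from the text: 10 mm Pb equivalence -> roughly a 25 mm lead-glass window.
print(lead_glass_thickness_mm(10.0))  # 25.0
```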
**Coa de jima** Coa de jima: A coa de jima or coa ("hoe for harvesting", "hoe") is a specialized tool for harvesting agaves. It is a long, machete-like round-ended knife on a long wooden handle used by a jimador to cut the leaves off an agave being harvested and to cut the agave from its roots. The core (or "heart") left, called piña ("pineapple"), is used for the production of mezcal, sotol or tequila. The shape of the coa is adapted for the efficiency of carrying out these operations.
**Lone Scouts** Lone Scouts: Lone Scouts are members of the Scout movement who are in isolated areas or otherwise do not participate in a regular Scouting unit or organization. A Lone Scout must meet the membership requirements of the Scouting organization to which they belong and have an adult Scout leader or counselor who may be a parent, guardian, minister, teacher, or another adult. The leader or counselor instructs the boy and reviews all steps of Scouting advancement. Lone Scouts can be in the Scout Section or sections for older young people, and in some countries in the Cub section or sections for younger boys. They follow the same program as other Scouts and may advance in the same way as all other Scouts. Lone Scouts: Lone Scouts exist in many countries in the world, including Australia, the United Kingdom, Canada and the United States. History: John Hargrave was the inspirator of the Lone Scouts. Hargrave wrote a series of articles for "Lone Scouts", held Lonecraft Camps and wrote Lonecraft, the handbook for Lone Scouts, published in 1913. Hargrave's book referred to individual Lone Scouts and Lone Patrols. Hargrave dedicated his book to naturalist Ernest Thompson Seton, founder of the Woodcraft League. Hargrave was an early Boy Scout and, in 1917, became Commissioner for Woodcraft and Camping in the Baden-Powell Boy Scouts but Baden-Powell and his organization refused to recognize Hargrave's Lone Scouts and Woodcraft Scouting. Hargrave, a Quaker pacifist and medical corps war veteran of the disastrous 1915 Gallipoli Campaign, became increasingly disenchanted with the military dominated leadership and militarism of the Baden-Powell Boy Scouts and in February, 1919, he held a meeting of like-minded Scout leaders. In 1920 Hargrave formed the Kindred of the Kibbo Kift and in January 1921 he was expelled from Baden-Powell's organization. Many Lone Scouts disassociated from the Baden-Powell organization, some joined Hargrave's Kibbo Kift while others joined the British Boy Scouts, other National Peace Scouts or remained independent Scouts and patrols. History: The term "Lone Scout" was later officially adopted by Baden-Powell's Boy Scouts Association. The Lone Scouts of America were formed in 1915 by William D. Boyce, a Chicago newspaper entrepreneur. This organization merged with the Boy Scouts of America in 1924; its mission has been carried on through the BSA Lone Cub Scout and Lone Boy Scout programs. US Criteria: Boys/Girls (in the USA) who are eligible to become Lone Scouts include: Children of American citizens who live abroad Exchange students away from the United States for a year or more Boys/girls with disabilities that might prevent them from attending regular meetings of packs or troops Boys/Girls in rural communities who live far from a Scouting unit Sons/Daughters of migrant farmworkers Boys/Girls who attend night schools or boarding schools Boys/Girls who have jobs that conflict with troop meetings Boys/Girls whose families travel frequently, such as circus families, families who live on boats, missionaries, etc. US Criteria: Boys/Girls who alternate living arrangements with parents who live in different communities Boys/Girls who are unable to attend unit meetings because of life-threatening communicable diseases Boys/Girls whose parents believe their child might be endangered by getting to Scout unit meetings Boys/Girls who are being home schooled and whose parents do not want them spending time with other children in a youth group.
**The Explicator** The Explicator: The Explicator is a quarterly journal of literary criticism. Its current owner, Routledge, acquired the journal from Heldref Publications in 2009. It mainly publishes short papers on poetry and prose. It is indexed in the Arts & Humanities Citation Index (A&HCI). It began publication in October 1942 and remains in print.
**Flare fitting** Flare fitting: Flare fittings are a type of compression fitting used with metal tubing, usually soft steel, ductile (soft) copper and aluminum, though other materials are also used. In a flare fitting the tube itself is "flared", i.e. expanded and deformed at the end. The flare is then pressed against the fitting it connects to and is secured by a close-fitting nut that ensures that no leakage happens. Tube flaring is a type of forging operation, and is usually a cold working procedure. During assembly, a flare nut is used to secure the flared tubing's tapered end to the also tapered fitting, producing a pressure-resistant, leak-tight seal. Flared connections offer a high degree of long-term reliability and for this reason are often used in mission-critical and inaccessible locations. The tool used to flare tubing consists of a die that grips the tube, and either a mandrel or rolling cone is forced into the end of the tube to form the flare by cold working. The most common flare fitting standards in use today are the 45° SAE flare, the 37° JIC flare, and the 37° AN flare. For high pressure, flare joints are made by doubling the tube wall material over itself before the bell end is formed. The double flare avoids stretching the cut end, where a single flare may crack. Before the flaring step, the end of the tube is compressed axially, causing the tube wall to yield radially outward, forming a bubble. This bubble is then driven axially by a conical tool, forming a double-thickness flare just as for the single flare. SAE 45° flare connections are commonly used in automotive applications, as well as for plumbing, refrigeration and air conditioning. SAE fittings for plumbing and refrigeration are typically made from brass. SAE and AN/JIC connections are incompatible due to the different flare angle. Flare fitting: JIC 37° flare connections are used in higher pressure hydraulic applications. JIC fittings are typically steel or stainless steel. JIC fittings are not permissible where AN connections are specified, due to differing quality standards. Flare fitting: AN 37° flare connections are typically specified for military and aerospace applications. Fittings can be made from a large variety of materials. The "AN" standard (for Army/Navy) has been replaced by other military and aerospace standards, though in practice these fittings are still referred to as AN. Flared fittings are an alternative to solder-type joints; they are mechanically separable and do not require an open flame. Copper tube used for propane, liquefied petroleum gas, or natural gas may use flared brass fittings of the single 45°-flare type, according to the NFPA 54/ANSI Z223.1 National Fuel Gas Code. Many plumbing codes, towns, and water companies require copper tube used for water service to be type-L or type-K. All National Model Codes permit the use of flare fitting joints; however, the authority having jurisdiction (AHJ) should be consulted to determine acceptance for a specific application.
**C1orf167** C1orf167: Chromosome 1 open reading frame 167 (C1orf167) is a protein which in humans is encoded by the C1orf167 gene. The NCBI accession number is NP_001010881. The protein is 1468 amino acids in length with a molecular weight of 162.42 kDa. The mRNA sequence was found to be 4689 base pairs in length. Gene: Locus The gene is located on chromosome 1 at position 1p36.22 on the plus strand and spans positions 11,824,457 to 11,849,503. Aliases C1orf167 has one known alias, Chromosome 1 Open Reading Frame 167. Number of Exons There are 26 exons associated with the gene. mRNA: Alternative Splicing A splice region that is conserved in primate orthologs of the C1orf167 mRNA was located between exon 1 and exon 2. Known mRNA Isoforms The mRNA sequence has 8 known splice isoforms as determined by the conserved domains. The isoforms span the regions 426-863, 981-1418, 954-1391, 999-1329, 999-1400, 999-1436, 999-1404, and 999-1463 of the mRNA sequence. Protein: Known Protein Isoforms Alternative splicing produces two known isoforms of the human protein: XP_006711141.1, which is 1489 aa in length, and XP_003307860.2, which is 713 aa in length. Protein: Composition The protein has an isoelectric point (pI) of 11. The predicted molecular weight (mW) is 160 kDa for the human protein, but ranges from 140-180 kDa for more distant orthologs. Compositional analysis revealed the most abundant amino acid to be alanine (A) at 12.4% of the total protein. The analysis also revealed the C1orf167 protein to be rich in tryptophan (W) and deficient in tyrosine (Y) and isoleucine (I). Protein: Subcellular Localization C1orf167 is predicted to be localized to the cell nucleus. Post-Translational Modifications C1orf167 is predicted to undergo phosphorylation, O-glycosylation, SUMOylation, glycation, and cleavage by staphylococcal peptidase I (Q105, Q321) and glutamyl endopeptidase (Q1101). Table 1. Post-Translational Modifications determined for C1orf167. *GPS and NetPhos results indicated hyper-phosphorylation of C1orf167 in H. sapiens and five of its orthologs. Domain and Motifs by Homology One domain of unknown function, located from 954 aa to 1418 aa, is 465 amino acids in length. Protein: Secondary Structure C1orf167 was determined to be rich in alpha helices. No notable regions of beta pleated sheets or coils were predicted. In particular, high confidence was indicated for 42 alpha helices, with the longest alpha helix region spanning residues 450 aa to 1182 aa. This long alpha helix region includes a significant portion of the conserved DUF, which spans 954 aa to 1418 aa. Protein: Tertiary Structure The best-aligned structural analog of C1orf167, generated by I-TASSER, had a confidence score (c-score) of -0.68, given a range of [-5, 2] with higher values indicating higher confidence. Per Swiss-Model, two monomers are predicted to form an alpha helix. Both of the helices are aligned facing outwards, with amino acids such as glutamic acid (E) on the interior and arginine (R), serine (S) and lysine (K) on the exterior. Asparagine residues may serve as an important oligosaccharide binding site. Expression: C1orf167 has high expression in the larynx, blood, placenta, testis and prostate, with the highest expression found in the testis. The promoter GXP_5109290 spans 1507 base pairs on chromosome 1. GXP_5109290 was found to be conserved in the bonobo (Pan paniscus), gorilla (Gorilla gorilla gorilla), mouse (Mus musculus), chimpanzee (Pan troglodytes), and rhesus monkey (Macaca mulatta).
Protein Interactions: There were 10 interactions identified by STRING. Homology: Paralogs No known paralogs or paralogous domains were identified for C1orf167. Homology: Orthologs Orthologs of C1orf167 were determined using NCBI BLAST. No orthologs could be found in single-celled organisms or in fungi whose genomes have been sequenced. Among multicellular organisms, orthologs were found in mammals, birds, reptiles, and cartilaginous fishes. The table below shows a representative sample of 20 of the orthologs for C1orf167. The table is organized by time of divergence from humans in millions of years (MYA) and then by sequence similarity. Homology: Table 2. This table shows the divergence timeline of the C1orf167 orthologs. It is sorted by date of divergence, colored according to taxonomic group or class, and then by sequence similarity. Function: At this time the function of C1orf167 is uncharacterized. Clinical Significance: Pathology According to the EST profile broken down by health state, the expression levels of C1orf167 were higher than in healthy cells for leukemia and for head, neck and lung cancers. Based on results from NCBI GEO Profiles, C1orf167 was found to have increased expression in dendritic cells of patients with Chlamydia pneumoniae infections. Increased expression of C1orf167 was also indicated for human pulmonary tuberculosis tissue containing caseous tuberculosis granulomas in the lungs, when compared to normal lung tissue.
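The compositional properties reported above (isoelectric point, molecular weight, amino-acid percentages) are the kind of values that can be reproduced from a protein sequence with standard tools. The sketch below uses Biopython's ProteinAnalysis; the sequence shown is a short hypothetical placeholder, not the real 1468-residue C1orf167 sequence, which would be retrieved from NCBI under accession NP_001010881.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical placeholder sequence; substitute the real NP_001010881 sequence.
sequence = "MAAARLWERA" * 20

analysis = ProteinAnalysis(sequence)

print("Molecular weight (Da):", round(analysis.molecular_weight(), 1))
print("Isoelectric point:", round(analysis.isoelectric_point(), 2))

# Amino-acid composition as fractions, e.g. to check alanine (A) abundance.
percentages = analysis.get_amino_acids_percent()
print("Alanine fraction:", round(percentages["A"] * 100, 1), "%")
```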
**Authenticated Identity Body** Authenticated Identity Body: Authenticated Identity Body or AIB is a method allowing parties in a network to share authenticated identity thereby increasing the integrity of their SIP communications. AIBs extend other authentication methods like S/MIME to provide a more specific mechanism to introduce integrity to SIP transmissions. Parties transmitting AIBs cryptographically sign a subset of SIP message headers, and such signatures assert the message originator's identity. To meet requirements of reference integrity (for example in defending against replay attacks) additional SIP message headers such as 'Date' and 'Contact' may be optionally included in the AIB. Authenticated Identity Body: AIB is described and discussed in RFC 3893: "For reasons of end-to-end privacy, it may also be desirable to encrypt AIBs [...]. While encryption of AIBs entails that only the holder of a specific key can decrypt the body, that single key could be distributed throughout a network of hosts that exist under common policies. The security of the AIB is therefore predicated on the secure distribution of the key. However, for some networks (in which there are federations of trusted hosts under a common policy), the widespread distribution of a decryption key could be appropriate. Some telephone networks, for example, might require this model. When an AIB is encrypted, the AIB should be encrypted before it is signed."
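As a rough illustration of the underlying idea (signing a canonical subset of message headers so a recipient can verify who asserted the identity), here is a generic sketch using Python's cryptography package. It is not the RFC 3893 AIB format itself, which carries the signed headers as a MIME body inside the SIP message; the header selection, canonicalization and helper function here are illustrative assumptions only.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def canonical_header_subset(headers: dict) -> bytes:
    # Illustrative canonicalization: pick identity-relevant headers
    # (From, To, Date, Contact, Call-ID) and join them in a fixed order.
    names = ["From", "To", "Date", "Contact", "Call-ID"]
    return "\r\n".join(f"{n}: {headers[n]}" for n in names).encode()

headers = {
    "From": "sip:alice@example.com",
    "To": "sip:bob@example.net",
    "Date": "Thu, 21 Feb 2002 13:02:03 GMT",
    "Contact": "sip:alice@pc33.example.com",
    "Call-ID": "a84b4c76e66710",
}

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
body = canonical_header_subset(headers)
signature = key.sign(body, padding.PKCS1v15(), hashes.SHA256())

# The recipient verifies the signature over the same header subset;
# verify() raises InvalidSignature if the headers were altered in transit.
key.public_key().verify(signature, body, padding.PKCS1v15(), hashes.SHA256())
print("identity assertion verified")
```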
**Air interdiction** Air interdiction: Air interdiction (AI), also known as deep air support (DAS), is the use of preventive tactical bombing and strafing by combat aircraft against enemy targets that are not an immediate threat, to delay, disrupt or hinder later enemy engagement of friendly forces. It is a core capability of virtually all military air forces, and has been conducted in conflicts since World War I. Air interdiction: A distinction is often made between tactical and strategic air interdiction, depending on the objectives of the operation. Typical objectives in tactical interdiction are meant to affect events rapidly and locally, for example through direct destruction of forces or supplies en route to the active battle area. By contrast, strategic objectives are often broader and more long-term, with fewer direct attacks on enemy fighting capabilities, instead focusing on infrastructure, logistics and other supportive assets. Air interdiction: The term deep air support relates to close air support and denotes the difference between their respective objectives. Close air support, as the name suggests, is directed towards targets close to friendly ground units, as closely coordinated air-strikes, in direct support of active engagement with the enemy. Deep air support or air interdiction is carried out further from the active fighting, based more on strategic planning and less directly coordinated with ground units. Despite being more strategic than close air support, air interdiction should not be confused with strategic bombing, which is unrelated to ground operations. Background: In an examination of past air interdiction campaigns, Dr. Eduard Mark of the Center for Air Force History identified three methods by which an air interdiction campaign is carried out. The first is by the physical destruction or attrition of soldiers and matériel before they can reach the battlefield. The second is by severing the enemy's lines of communication, or creating blockage, to prevent soldiers and matériel from reaching the battlefield. The third is to create systemic inefficiencies in the enemy's logistic system so that soldiers and matériel arrive to the battlefield more slowly or in an uneconomical manner.While all three methods can be utilized for a tactical or strategic interdiction campaign, Dr. Mark argues their success will depend on the specific conditions at hand. Interdiction by attrition, for example, is best utilized on the tactical scale, as it is rarely possible to destroy more than a small portion of an enemy's soldiers and matériel at any one time. Destroying 100 supply trucks when an enemy force may have thousands - and the trucks can be easily replaced - will represent only a minor loss on a strategic scale, but if those trucks were the only ones available to respond to a specific battle their loss could be pivotal to its outcome. Likewise, successful interdiction by blockage is even more constrained by the existence of terrain features (i.e. bridges, tunnels, etc.) upon which an enemy is dependent, and the fact that these features are often repairable or can be avoided. Destroying a bridge at the right time, if it prevents enemy reinforcements from arriving for even just a little while, might have an effect on the outcome of a battle, but on the strategic scale the enemy will be able to replace it or find an alternate route fairly shortly. 
In this regard, creating systemic inefficiencies in the enemy's logistical system (by using a combination of attrition and blockage on the tactical scale) has been the most successful form of strategic interdiction. An enemy that is forced to take a more circuitous route or rely on a less efficient means of transportation can, over time, find themselves falling short of their needs and so be defeated for lack of adequate supplies.Dr. Mark also identified eight conditions that affect the outcome of interdiction campaigns, the first three of which he regards as being absolutely necessary for a successful interdiction. The other five are contributory in that, while not defining all successful campaigns, at least one of the conditions was present. Background: Air Superiority: Defined as largely unimpeded access to an enemy's airspace, air superiority is vital to a successful interdiction campaign. No interdiction campaign has succeeded where the interdictor had to fight for air superiority at the same time. Background: Intelligence: Having sufficient intelligence of an enemy's disposition is important not only for knowing what to attack, but when and in what manner to attack them. Knowing, for example, not only an enemy's supply routes but their transportation schedule can be the difference between a successful or failed campaign. This intelligence must be constantly updated as the enemy will invariably change their methods to evade interdiction. Background: Identifiability: If a target cannot be identified, it cannot be interdicted. This condition refers not only to the inherent nature of the target, but the surrounding environment and the level of technology involved. A convoy of ships at sea is easier to identify than a convoy of trucks traveling under a jungle canopy, but both would be harder to identify and attack at night if the interdictor does not possess night vision. Background: Sustained Pressure: Because the targets of an interdiction campaign are often replaceable or repairable, an interdictor must apply consistent pressure on their enemy to prevent them from doing so. While more of a contributory condition during tactical interdiction operations, sustained pressure has been necessary for strategic campaigns to succeed. Concentration: The fewer conveyances, routes or depots an enemy possesses, the easier it is to interdict them. Comparing a small convoy of large ships to a fleet of thousands of small trucks, the destruction of any one of those ships would represent a greater loss to the enemy than the destruction of any one of the trucks. Channelization: The fewer supply routes an enemy possesses, the greater will be their loss if any of them are severed. This includes the existence of any choke points, whether natural or artificial, that an interdictor can take advantage of. Certain means of conveyance are more subject to channelization than others (i.e. railroads). High Rate of Consumption: Whether due to heavy combat or extensive movement, an enemy that is forced to consume supplies at a much higher rate is more susceptible to interdiction than if their consumption rate is low. This prevents them from building up a stockpile of supplies, and they have less flexibility in using an inefficient method of resupply. Logistical Constriction: If an enemy's logistical network has less inherent capacity relative to its demands, it can be harder to compensate for damage inflicted upon it. 
In this way, even if the enemy has a low rate of supply consumption, constricting their logistic network sufficiently can still create supply shortfalls. History: Air interdiction campaigns have been conducted in conflicts including World War II, the Korean War, the Vietnam War and the Iran–Iraq War. Iran–Iraq War: Both the Iranian Air Force (IIAF) and the Iraqi Air Force (IQAF) made concerted efforts during the early days of the Iran–Iraq War to interdict the other side. For both sides this largely amounted to engaging in armed reconnaissance and attacking targets of opportunity, with few attacks on pre-planned targets. The IIAF did have the advantage of having superior munitions and tactical reconnaissance - possessing a squadron of RF-4E Phantoms and pre-revolution targeting intelligence - but their efforts largely mirrored those of the IQAF. The IQAF's interdiction efforts peaked during the first 45 days of the war, but later declined to more sporadic missions, increasing in conjunction with major offensives. Interdiction by the IIAF was more sustained through late 1980 but after mid-January 1981 also declined. While both sides inflicted considerable damage on the other, with the Iranians arguably achieving more, neither interdiction effort was particularly effective, nor did they play a factor in the outcome of the war. Both sides pulled back their air forces to avoid mounting losses and with the reasoning that, while they might not play a role in winning the war, they could still be used to avoid defeat.
**Lossless join decomposition** Lossless join decomposition: In database design, a lossless join decomposition is a decomposition of a relation R into relations R1 and R2 such that a natural join of the two smaller relations yields back the original relation. This is central in removing redundancy safely from databases while preserving the original data. Criteria: A lossless join can also be called nonadditive. If R is split into R1 and R2, then for this decomposition to be lossless (i.e., R1 ⋈ R2 = R) at least one of the two following criteria should be met. Check 1: Verify join explicitly Projecting R onto R1 and R2, and joining the projections back, results in the relation you started with. Check 2: Via functional dependencies Let R be a relation schema. Criteria: Let F be a set of functional dependencies on R. Let R1 and R2 form a decomposition of R. The decomposition is lossless if the attribute set of one of the sub-relations (i.e. R1 or R2) is a subset of the closure of their intersection. In other words, the decomposition is a lossless-join decomposition of R if at least one of the following functional dependencies is in F+ (where F+ stands for the closure of F, i.e. the set of all functional dependencies implied by F): R1 ∩ R2 → R1 or R1 ∩ R2 → R2. Examples: Let R = (A, B, C, D) be the relation schema, with attributes A, B, C and D. Let F = {A → BC} be the set of functional dependencies. The decomposition into R1 = (A, B, C) and R2 = (A, D) is lossless under F because R1 ∩ R2 = {A}, and A is a superkey of R1 owing to the functional dependency A → BC. In other words, we have proven that (R1 ∩ R2 → R1) ∈ F+.
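The two checks lend themselves to a short, concrete sketch. The Python code below (using pandas, with a small hypothetical instance of R) verifies Check 1 by projecting R onto R1 and R2 and natural-joining them back, and verifies Check 2 by computing the attribute closure of R1 ∩ R2 under F = {A → BC}.

```python
import pandas as pd

# Hypothetical instance of R(A, B, C, D) that satisfies A -> BC.
r = pd.DataFrame({
    "A": [1, 1, 2],
    "B": ["x", "x", "y"],
    "C": [10, 10, 20],
    "D": ["p", "q", "p"],
})

# Check 1: project onto R1(A, B, C) and R2(A, D), then natural-join back.
r1 = r[["A", "B", "C"]].drop_duplicates()
r2 = r[["A", "D"]].drop_duplicates()
rejoined = r1.merge(r2, on="A")[["A", "B", "C", "D"]]
same = set(map(tuple, rejoined.itertuples(index=False))) == set(map(tuple, r.itertuples(index=False)))
print("join reproduces R:", same)  # True

# Check 2: attribute closure of R1 ∩ R2 = {A} under F = {A -> BC}.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

F = [({"A"}, {"B", "C"})]
print("closure of {A}:", closure({"A"}, F))  # {'A', 'B', 'C'} contains R1, so R1 ∩ R2 -> R1
```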
**Bead (woodworking)** Bead (woodworking): A bead is a woodworking decorative treatment applied to various elements of wooden furniture, boxes and other items. A bead is typically a rounded shape cut into a square edge to soften the edge and provide some protection against splitting. Beads can be simple round shapes, or more complex patterns. A bead may be created with an electric router, a special moulding handplane or a scratch stock. Beads are usually cut directly into the edge of the item to which the bead is being applied. However, beads applied across the grain are usually cut into a separate piece, which is then fixed in position. A bead is also an important design element in wood turning, a ring-shape or convex curve incised into a piece by the use of a chisel or skew. Types of beads: Angle bead, a projecting wood moulding at the corner of a plastered wall Corner bead is similar, but is usually fully embedded in plaster or drywall, and usually plastic or metal Nosing bead, the rounded projection of a stair tread over the riser below Parting bead, or parting strip, the feature that separates two sashes in a sash window
**Child abduction alert system** Child abduction alert system: A child abduction alert system (also Child Alert, Amber alert or Child Rescue Alert) is a tool used to alert the public in cases of worrying or life-threatening disappearances of children. Europe: Currently, there are AMBER Alert systems in 20 European countries: Belgium, Bulgaria, Cyprus, the Czech Republic, France, Germany, Greece, Ireland, Italy, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Slovakia, Spain, Switzerland and the United Kingdom. AMBER Alerts systems in Poland (2013), Slovakia (2015), Luxembourg (2016) and Malta (2017). These systems aim at quickly disseminating relevant information about a very worrying child disappearance to the general public at large, through a variety of channels, thus increasing the chances of finding the child. Europe: AMBER Alert Europe AMBER Alert Europe is a foundation that strives to improve the protection of missing children by empowering children and raising awareness on the issue of missing children and its root causes. AMBER Alert Europe advocates that one missing child is one too many and aims for zero missing children in Europe. AMBER Alert Europe is a neutral platform for the exchange of knowledge, expertise, and best practices on the issue of missing children and its roots causes. To contribute to a safer environment for children, it connects experts from 44 non-governmental and governmental organisations, as well as business entities from 28 countries across Europe. Europe: Its activities cover prevention and awareness-raising, training, research, and child alerting, as well as launching initiatives aimed at impacting policies and legislation in the area of children’s rights. All activities are implemented in line with the EU Charter of Fundamental Rights and the EU Strategy on the Rights of the Child and with respect for the privacy of children and data protection laws. Europe: Its beginnings As a response to missing children cases exceeding geographical borders, the AMBER Alert Europe Foundation was founded in 2013 to contribute to better cross-border coordination and cooperation in the search for missing children. Europe: Since then, its network has expanded and now encompasses different experts from a variety of backgrounds who make their know-how and experiences available to improve existing practices and procedures for a fast and safe recovery of children gone missing. Its joint efforts with police experts in the field of missing children even paved the way for a European police expert network in this area. Europe: With the objective to prevent children from going missing, it also develop activities in the area of prevention, awareness-raising, research, and training together with its network of experts. AMBER Alerts If, after a proper risk assessment, it is believed that the life or health of a missing child is in imminent danger, police can decide to issue a national AMBER Alert. This allows them to instantly alert the public and make sure everyone is on the lookout for the child. Europe: Life or death cases Extensive US research, backed by UK findings, show that when a child is abducted and killed, in 76% of the cases the child was killed within three hours after the abduction. The AMBER Alert system was developed for these special ‘life or death’ cases.Law enforcement agencies are responsible for issuing an AMBER Alert and use strict criteria. Below you can find the current criteria as recommended by the European Commission. 
Europe: The victim is a minor (i.e. under 18 years of age); It is a proven abduction, there are clear elements indicating that it could be a case of abduction; The health or the life of the victim is at high risk; Information is available which, once disseminated, will allow the victim to be located; Publication of this information is not expected to add to the risk facing the victim. Europe: Missing Child Alerts The Police issue a Missing Child Alert when there is an immediate and significant risk of harm but the case does not reach the criteria for an AMBER Alert. Police can decide to publicize information and ask the help of citizens to recover the child.However it is important to understand that Missing Child Alerts, for which a child alert system can be of use, constitute an average 1 to 2% of the total cases of missing children in Europe. While child alert systems can be of use in those 1 to 2%, the overall problem of missing children - of which an average of 60% concern children running away from situations of conflict, abuse, violence and neglect - requires a much more comprehensive approach, including measures aimed at prevention and empowerment. Europe: Child alert tools have proven their value in a number of EU Member States. They however need to be integrated in a wider set of complementary tools including hotlines for missing children, trained law enforcement services, mediation services, social services and child protection services. Child alert systems can furthermore only function efficiently and legitimately where national agencies mandated to deal with missing children work on the basis of clear operational procedures including the necessary assessment of the child's best interest. Europe: Where images of missing children are disseminated, it should be done with the consent of the parents or legal representative, and taking into account the need to balance the risks faced by the child with his or her right to privacy. In case of cross border alerts, clear procedures should be in place that allow to manage and control both the information shared with the public, as well the testimonies on sightings regarding the missing child received from the public. While using an efficient technology to disseminate information with the general public on missing children is valuable, the use of a powerful technology can be harmful if preconditions for 1) an effective best interest determination in each individual case and 2) the efficient management of the information, are not met. Therefore, the impact of this must be assessed by law enforcement agencies (e.g. police or public prosecutor) taking into consideration article 8 of the European Convention on Human Rights (ECHR) and national legislation. In the best interest of the child, information should be removed from public sources as soon as the child is being found.Missing Children Europe, a European federation for missing and sexually exploited children also works on supporting the development of national child alert systems as well as effective cross-border cooperation for child alert systems. It is also the main partner working on developing the Google Child Alert System in Europe. Europe: Belgium Child alert is the operational system that in the case of a disappearance putting a child's life in immediate dangers, can warn citizens of Belgium and appeal to evidence that can contribute to the search. Any citizen or organization has the opportunity to register to participate. 
Child Alert is managed by Child Focus, in collaboration with the Federal police and the Belgian justice. Europe: France The child abduction alert system that is used in France is called L'Alerte Enlèvement. The system was introduced in February 2006 and is based on the US AMBER Alert system. The warning message will be issued for three hours by different vectors: TV channels, radio stations, news agencies, variable message signs on highways, public places, sound in stations and metro stations, websites, social media, and smartphone apps. Europe: Since the start of L'Alerte Enlèvement in 2006, it was issued 25 times [as of 14 April 2021]. It has a success rate of 92%, with 23 children found alive, one found dead and the 25th alert issued on 14 April being lifted without the child having been found. To issue an alert, five conditions must be met: Removal proved and not simply disappear; Physical integrity or the victim's life is in danger; Pieces of information used to locate the child or suspect; The victim is a minor; The parents of the victim have agreed to trigger the alert. Europe: The Netherlands AMBER Alert Netherlands is the nationwide alert system for endangered missing children and child abduction cases. AMBER Alert Netherlands exclusively distributes AMBER Alerts and information about endangered missing children. The police only issues an AMBER Alert when a child's life is in imminent danger (AMBER Alert) or when the child is at immediate and significant risk of harm (endangered missing child). When an AMBER Alert is issued (about 1-2 times a year), the entire system is deployed. The whole country then turns into one big missing children's poster. The system enables the police to immediately alert press and public nationwide, using any medium available – from electronic highway signs, to TV, radio, social media such as Facebook and Twitter, pop-up and screensavers on PC's, large advertising screens (digital signage), e-mail, SMS text messages, smartphone apps, printable posters, RSS newsfeeds and website banners and pop-ups.There are four key criteria in The Netherlands to be met before an AMBER Alert is issued, assessed by the Dutch National Police: The missing child has to be younger than 18 years; The life of the child is in imminent danger, or there is fear they will be seriously injured; There is enough information about the victim to increase the chance of the child being found by means of an AMBER Alert, such as a photo, information about the abductor or a vehicle used; The AMBER Alert must be used as soon as possible after the abduction or the child going missing has been reported.Parts of the Dutch AMBER Alert system are being used for endangered missing children. A missing child is considered endangered when there is an immediate and significant risk of harm but the case does not reach the criteria for an AMBER Alert. The Dutch police can decide to publicize information and ask the help of citizens to recover the child. Europe: United Kingdom The UK has developed the Child Rescue Alert, similar to the American AMBER Alert. The system works in a way, where in the local area of the suspected abduction, radio and television broadcasts are immediately interrupted (even in some cases during mid-speech) and listeners/viewers are provided details of anything to look out for. Some counties include Variable message signs which alerts drivers on major roads to be on the lookout for that missing person or a car on the road. 
Europe: In England, the counties of Hampshire, Leicestershire, Surrey, Sussex, Gloucestershire, Cambridgeshire, Bedfordshire, Norfolk, Derbyshire, Suffolk, Thames Valley, Wiltshire, and Somerset, and the London Metropolitan Police Service, have adopted a similar program called the Child Rescue Alert system. Sussex was the first to launch the system, on November 14, 2002. It is based on and has alert requirements similar to the American system.There are four key criteria in the UK's system to be met before a Child Rescue Alert is issued: The child is apparently under 18 years old. Europe: There is a reasonable belief that the child has been kidnapped or abducted. Europe: There is reasonable belief that the child is in imminent danger of serious harm or death, and There is sufficient information available to enable the public to assist police in locating the child.Members of the public will be encouraged to keep their eyes and ears open for anything that may help the police in finding the abducted child. If they see anything they should call the police on 999.On 3 October 2012, the first child rescue alert since the system was introduced, was issued in the search for April Jones, who was abducted near her home in the market town of Machynlleth in Mid-Wales. News flashes are being used to interrupt local radio and programmes. Information is also being carried on motorway gantry displays and texted to the mobile phones of individuals who have signed up to the project.In May 2014 a Child Rescue Alert distribution system will be launched which aims to distribute alert messages to members of the public and the media through SMS, email, Mobile app, website pop-ups, Twitter and Facebook as well as digital billboards operated by the members of the Outdoor Media Centre. The system is available so that, if the above criteria are met, a police force can rapidly alert the public and ask them to report anything useful on a dedicated police telephone number. SMS and email messages can be sent to people who have registered to receive them through the website and who live or work in the vicinity of the disappearance. The system is an initiative of CEOP, the Child Exploitation and On-Line Protection Centre, a command of the National Crime Agency, and is facilitated by the charity, Missing People, which promotes and operates the system. The technology is provided by Groupcall. The development, promotion and operation of the system is funded initially by the players of the People's Postcode Lottery via the Dreamfund, the European Union and through the help of other supporters. North America: The AMBER Alert system is a notification to the general public, by media outlets in Canada and in the United States, issued when police confirm that a child has been abducted. AMBER is a backronym for America's Missing: Broadcasting Emergency Response, and was named after a 9-year-old girl named Amber Hagerman who was abducted and murdered in Arlington, Texas in 1996.
**Skein relation** Skein relation: Skein relations are a mathematical tool used to study knots. A central question in the mathematical theory of knots is whether two knot diagrams represent the same knot. One way to answer the question is using knot polynomials, which are invariants of the knot. If two diagrams have different polynomials, they represent different knots. In general, the converse does not hold. Skein relation: Skein relations are often used to give a simple definition of knot polynomials. A skein relation gives a linear relation between the values of a knot polynomial on a collection of three links which differ from each other only in a small region. For some knot polynomials, such as the Conway, Alexander, and Jones polynomials, the relevant skein relations are sufficient to calculate the polynomial recursively. Definition: A skein relationship requires three link diagrams that are identical except at one crossing. The three diagrams must exhibit the three possibilities that could occur for the two line segments at that crossing: one line could pass under the other, that same line could pass over, or the two lines might not cross at all. Link diagrams must be considered because a single skein change can alter a diagram from representing a knot to one representing a link, and vice versa. Depending on the knot polynomial in question, the links (or tangles) appearing in a skein relation may be oriented or unoriented. Definition: The three diagrams are labelled as follows. Turn the three link diagrams so the directions at the crossing in question are both roughly northward. One diagram will have northwest over northeast; it is labelled L−. Another will have northeast over northwest; it is labelled L+. The remaining diagram lacks that crossing and is labelled L0. Definition: (The labelling is independent of direction insofar as it remains the same if all directions are reversed. Thus polynomials on undirected knots are unambiguously defined by this method. However, the directions on links are a vital detail to retain as one recurses through a polynomial calculation.) It is also sensible to think in a generative sense, by taking an existing link diagram and "patching" it to make the other two—just so long as the patches are applied with compatible directions. Definition: To recursively define a knot (link) polynomial, a function F is fixed and, for any triple of diagrams and their polynomials labelled as above, F(L−, L0, L+) = 0, or more pedantically F(L−(x), L0(x), L+(x), x) = 0 for all x. (Finding an F which produces polynomials independent of the sequences of crossings used in a recursion is no trivial exercise.) More formally, a skein relation can be thought of as defining the kernel of a quotient map from the planar algebra of tangles. Such a map corresponds to a knot polynomial if all closed diagrams are taken to some (polynomial) multiple of the image of the empty diagram. Example: Sometime in the early 1960s, Conway showed how to compute the Alexander polynomial using skein relations. As it is recursive, it is not quite so direct as Alexander's original matrix method; on the other hand, parts of the work done for one knot will apply to others. In particular, the network of diagrams is the same for all skein-related polynomials. Example: Let P be a function from link diagrams to Laurent series in x such that P(unknot) = 1 and every triple of skein-relation diagrams (L−, L0, L+) satisfies the equation P(L−) = (x^(−1/2) − x^(1/2)) P(L0) + P(L+). Then P maps a knot to one of its Alexander polynomials.
Example: In this example, we calculate the Alexander polynomial of the cinquefoil knot, the alternating knot with five crossings in its minimal diagram. At each stage we exhibit a relationship involving a more complex link and two simpler diagrams; the more complex link appears on the right in each step below except the last. For convenience, let A = x^(−1/2) − x^(1/2). Example: To begin, we create two new diagrams by patching one of the cinquefoil's crossings, so that P(trefoil) = A × P(four-crossing link of two unknots) + P(cinquefoil). The first new diagram is a trefoil; the second is two unknots with four crossings. Patching a crossing of the latter, P(Hopf link) = A × P(trefoil) + P(four-crossing link of two unknots), gives, again, a trefoil, and two unknots with two crossings (the Hopf link). Patching the trefoil, P(unknot) = A × P(Hopf link) + P(trefoil), gives the unknot and, again, the Hopf link. Patching the Hopf link, P(unlink) = A × P(unknot) + P(Hopf link), gives a link with 0 crossings (the two-component unlink) and an unknot. The unlink takes a bit of sneakiness: patching a crossing in a kinked diagram of the unknot gives P(unknot) = A × P(unlink) + P(unknot). Computations We now have enough relations to compute the polynomials of all the links we've encountered, and can use the above equations in reverse order to work up to the cinquefoil knot itself. Solving each relation in turn for its unknown quantity gives P(unknot) = 1, P(unlink) = 0, P(Hopf link) = x^(1/2) − x^(−1/2), P(trefoil) = x^(−1) − 1 + x, and P(four-crossing link of two unknots) = x^(3/2) − x^(1/2) + x^(−1/2) − x^(−3/2). Thus the Alexander polynomial for the cinquefoil is P(x) = x^(−2) − x^(−1) + 1 − x + x^(2). Sources: American Mathematical Society, Knots and Their Polynomials, Feature Column. Weisstein, Eric W. "Skein Relationship". MathWorld. Morton, Hugh R.; Lukac, Sascha G. (2003), "HOMFLY polynomial of decorated Hopf link", Journal of Knot Theory and Its Ramifications, 12: 395–416, arXiv:math.GT/0108011, doi:10.1142/s0218216503002536.
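The same chain of relations can be checked mechanically. The sketch below (Python with sympy; the dictionary keys naming the links are chosen here for readability) walks up the relations in the reverse order described above, solving each one for its most complex link.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = x**sp.Rational(-1, 2) - x**sp.Rational(1, 2)

# Work up the chain of skein relations P(L-) = A*P(L0) + P(L+),
# solving each one for the most complex (rightmost) link.
P = {"unknot": sp.Integer(1)}
P["unlink"]     = sp.expand((P["unknot"] - P["unknot"]) / A)    # P(unknot)  = A*P(unlink)   + P(unknot)
P["hopf"]       = sp.expand(P["unlink"] - A * P["unknot"])      # P(unlink)  = A*P(unknot)   + P(hopf)
P["trefoil"]    = sp.expand(P["unknot"] - A * P["hopf"])        # P(unknot)  = A*P(hopf)     + P(trefoil)
P["link_2_4"]   = sp.expand(P["hopf"] - A * P["trefoil"])       # P(hopf)    = A*P(trefoil)  + P(link_2_4)
P["cinquefoil"] = sp.expand(P["trefoil"] - A * P["link_2_4"])   # P(trefoil) = A*P(link_2_4) + P(cinquefoil)

# Prints an expression equivalent to x**(-2) - x**(-1) + 1 - x + x**2.
print(sp.simplify(P["cinquefoil"]))
```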
**Transforming growth interacting factor** Transforming growth interacting factor: Transforming growth interacting factor (TGIF) is a potential repressor of TGF-β pathways in myometrial cells. Expression of TGIF is increased in uterine leiomyoma compared with myometrium.
**Spinothalamic tract** Spinothalamic tract: The spinothalamic tract is a part of the anterolateral system or the ventrolateral system, a sensory pathway to the thalamus. From the ventral posterolateral nucleus in the thalamus, sensory information is relayed upward to the somatosensory cortex of the postcentral gyrus. The spinothalamic tract consists of two adjacent pathways: anterior and lateral. The anterior spinothalamic tract carries information about crude touch. The lateral spinothalamic tract conveys pain and temperature. In the spinal cord, the spinothalamic tract has somatotopic organization. This is the segmental organization of its cervical, thoracic, lumbar, and sacral components, which is arranged from most medial to most lateral respectively. The pathway crosses over (decussates) at the level of the spinal cord, rather than in the brainstem like the dorsal column-medial lemniscus pathway and lateral corticospinal tract. It is one of the three tracts which make up the anterolateral system. Structure: There are two main parts of the spinothalamic tract: The lateral spinothalamic tract transmits pain and temperature. The anterior spinothalamic tract (or ventral spinothalamic tract) transmits crude touch and firm pressure.The spinothalamic tract, like the dorsal column-medial lemniscus pathway, uses three neurons to convey sensory information from the periphery to conscious level at the cerebral cortex. Pseudounipolar neurons in the dorsal root ganglion have axons that lead from the skin into the dorsal spinal cord where they ascend or descend one or two vertebral levels via Lissauer's tract and then synapse with secondary neurons in either the substantia gelatinosa of Rolando or the nucleus proprius. These secondary neurons are called tract cells. Structure: The axons of the tract cells cross over (decussate) to the other side of the spinal cord via the anterior white commissure, and to the anterolateral corner of the spinal cord (hence the spinothalamic tract being part of the anterolateral system). Decussation usually occurs 1-2 spinal nerve segments above the point of entry. The axons travel up the length of the spinal cord into the brainstem, specifically the rostral ventromedial medulla. Structure: Traveling up the brainstem, the tract moves dorsally. The neurons ultimately synapse with third-order neurons in several nuclei of the thalamus—including the medial dorsal, ventral posterior lateral, and ventral posterior medial nuclei. From there, signals go to the cingulate cortex, the primary somatosensory cortex, and insular cortex respectively. Structure: Anterior spinothalamic tract The anterior spinothalamic tract, (Latin: tractus spinothalamicus anterior) or ventral spinothalamic fasciculus situated in the marginal part of the anterior funiculus and intermingled more or less with the vestibulospinal tract, is derived from cells in the posterior column or intermediate gray matter of the opposite side. Aβ fibres carry sensory information pertaining to crude touch from the skin. After entering the spinal cord the first order neurons synapse (in the nucleus proprius), and the second order neurons decussate via the anterior white commissure. These second order neurons ascend synapsing in the VPL of the thalamus. Incoming first order neurons can ascend or descend via the Lissauer tract. This is a somewhat doubtful fasciculus and its fibers are supposed to end in the thalamus and to conduct certain of the touch impulses. 
More specifically, its fibers convey crude touch information to the VPL (ventral posterolateral nucleus) part of the thalamus. Structure: The fibers of the anterior spinothalamic tract conduct information about pressure and crude touch (protopathic). The fine touch (epicritic) is conducted by fibers of the medial lemniscus. The medial lemniscus is formed by the axons of the neurons of the gracilis and cuneatus nuclei of the medulla oblongata which receive information about light touch, vibration and conscient proprioception from the gracilis and cuneatus fasciculus of the spinal cord. This fasciculus receive the axons of the first order neuron which is located in the dorsal root ganglion that receives afferent fibers from receptors in the skin, muscles and joints. Structure: Lateral spinothalamic tract The lateral spinothalamic tract (or lateral spinothalamic fasciculus), is a bundle of afferent nerve fibers ascending through the white matter of the spinal cord, in the spinothalamic tract, carrying sensory information to the brain. It carries pain, and temperature sensory information (protopathic sensation) to the thalamus. It is composed primarily of fast-conducting, sparsely myelinated A delta fibers and slow-conducting, unmyelinated C fibers. These are secondary sensory neurons which have already synapsed with the primary sensory neurons of the peripheral nervous system in the posterior horn of the spinal cord (one of the three grey columns). Structure: Together with the anterior spinothalamic tract, the lateral spinothalamic tract is sometimes termed the secondary sensory fasciculus or spinal lemniscus. Structure: Anatomy The neurons of the lateral spinothalamic tract originate in the spinal dorsal root ganglia. They project peripheral processes to the tissues in the form of free nerve endings which are sensitive to molecules indicative of cell damage. The central processes enter the spinal cord in an area at the back of the posterior horn known as the posterolateral tract. Here, the processes ascend approximately two levels before synapsing on second-order neurons. These secondary neurons are situated in the posterior horn, specifically in the Rexed laminae regions I, IV, V and VI. Region II is primarily composed of Golgi II interneurons, which are primarily for the modulation of pain, and largely project to secondary neurons in regions I and V. Secondary neurons from regions I and V decussate across the anterior white commissure and ascend in the (now contralateral) lateral spinothalamic tract. These fibers will ascend through the brainstem, including the medulla oblongata, pons and midbrain, as the spinal lemniscus until synapsing in the ventroposteriorlateral (VPL) nucleus of the thalamus. The third order neurons in the thalamus will then project through the internal capsule and corona radiata to various regions of the cortex, primarily the main somatosensory cortex, Brodmann areas 3, 1, and 2. Function: The types of sensory information means that the sensation is accompanied by a compulsion to act. For instance, an itch is accompanied by a need to scratch, and a painful stimulus makes us want to withdraw from the pain.There are two sub-systems identified: Direct (for direct conscious appreciation of pain) Indirect (for affective and arousal impact of pain). Indirect projections include Spino-Reticulo-Thalamo-Cortical (part of the ascending reticular arousal system, also known as ARAS) Spino-Mesencephalic-Limbic (for affective impact of pain). 
Function: Anterolateral system In the nervous system, the anterolateral system is an ascending pathway that conveys pain, temperature (protopathic sensation), and crude touch from the periphery to the brain. It comprises three main pathways: the spinothalamic, spinoreticular, and spinomesencephalic tracts. Clinical significance: In contrast to the axons of second-order neurons in the dorsal column-medial lemniscus pathway, the axons of second-order neurons in the spinothalamic tracts cross at every segmental level in the spinal cord. This fact aids in determining whether a lesion is in the brain or the spinal cord. With lesions in the brainstem or higher, deficits of pain perception, touch sensation, and proprioception are all contralateral to the lesion. With spinal cord lesions, however, the deficit in pain perception is contralateral to the lesion, whereas the other deficits are ipsilateral. See Brown-Séquard syndrome. Clinical significance: Unilateral lesions usually cause contralateral anaesthesia (loss of pain and temperature). Anaesthesia will normally begin 1-2 segments below the level of the lesion, because incoming sensory fibers are carried up several segments in the dorsolateral tract of Lissauer before crossing, and it will affect all body areas caudal to the lesion. This is tested clinically using pin pricks.
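The lesion-localization rule just described can be captured in a short decision sketch. The Python snippet below is purely illustrative: the function and parameter names are invented for this example, and it encodes only the simplified textbook heuristic above (pain and temperature fibers cross near their entry level, dorsal column fibers cross in the brainstem), not a clinical tool.

```python
# Illustrative sketch only: encodes the simplified localization rule described above.
# Function and parameter names are invented for this example; real lesion
# localization is far more involved than this heuristic.

def localize_lesion(pain_deficit_side: str, fine_touch_deficit_side: str) -> str:
    """Return a rough lesion level suggested by which side each deficit is on."""
    if pain_deficit_side == fine_touch_deficit_side:
        # All long-tract deficits on the same side: both pathways have already
        # crossed, so the lesion is at or above the brainstem.
        return "brainstem or higher (all deficits contralateral to the lesion)"
    # Pain/temperature loss on one side, fine touch/proprioception loss on the
    # other: only the spinothalamic tract has crossed, pointing to the cord
    # (the classic Brown-Séquard pattern).
    return "spinal cord (pain contralateral, fine touch ipsilateral to the lesion)"

print(localize_lesion(pain_deficit_side="left", fine_touch_deficit_side="left"))
print(localize_lesion(pain_deficit_side="left", fine_touch_deficit_side="right"))
```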
**Desensitization (medicine)** Desensitization (medicine): In medicine, desensitization is a method to reduce or eliminate an organism's negative reaction to a substance or stimulus. Desensitization (medicine): In pharmacology, drug desensitization refers to two related concepts. First, desensitization may be equivalent to drug tolerance and refers to subjects' reactions (positive or negative) to a drug diminishing following its repeated use. This is a macroscopic, organism-level effect and differs from the second meaning of desensitization, which refers to a biochemical effect whereby individual receptors become less responsive after repeated application of an agonist. This may be mediated by phosphorylation, for instance by beta adrenoceptor kinase at the beta adrenoceptor. Application to allergies: For example, if a person with diabetes mellitus has a bad allergic reaction to taking a full dose of beef insulin, the person is given a very small amount of the insulin at first, so small that the person has no adverse reaction or only very limited symptoms as a result. Over a period of time, larger doses are given until the person is taking the full dose. This is one way to help the body get used to the full dose, and to avoid an allergic reaction to beef-origin insulin. Application to allergies: A temporary desensitization method involves the administration of small doses of an allergen to produce an IgE-mediated response in a setting where an individual can be resuscitated in the event of anaphylaxis; this approach, through uncharacterized mechanisms, eventually overrides the hypersensitive IgE response. Desensitization approaches for food allergies are generally at the research stage. They include: oral immunotherapy, which involves building up tolerance by eating a small amount of (usually baked) food; sublingual immunotherapy, which involves placing a small drop of milk or egg white under the tongue; epicutaneous immunotherapy, which delivers the allergen through a patch applied to the skin; monoclonal anti-IgE antibodies, which non-specifically reduce the body's capacity to produce an allergic reaction; a Chinese herbal formulation, FAHF-2, another non-specific approach currently being studied in peanut allergy; use of probiotics; helminthic therapy; a drug to suppress toll-like receptor 9 (TLR9); and mepolizumab to treat eosinophilic esophagitis.
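The gradual build-up of dose described above is, at its core, simple arithmetic: start from a tiny fraction of the full dose and step up repeatedly. The sketch below is purely illustrative; the starting fraction, multiplication factor, and dose values are invented for the example and do not represent any medical protocol.

```python
# Purely illustrative arithmetic for the idea of gradual dose escalation described
# above; the starting fraction, doubling step, and dose values are invented for the
# example and are not a medical protocol.

def escalation_schedule(full_dose: float, start_fraction: float = 0.001, factor: float = 2.0):
    """Yield increasing doses from a tiny starting fraction up to the full dose."""
    dose = full_dose * start_fraction
    while dose < full_dose:
        yield round(dose, 6)
        dose *= factor
    yield full_dose  # finish at the full therapeutic dose

# Example: building up to a hypothetical 10-unit dose from 0.1% of it.
print(list(escalation_schedule(10.0)))
```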
**Rorschach test** Rorschach test: The Rorschach test is a projective psychological test in which subjects' perceptions of inkblots are recorded and then analyzed using psychological interpretation, complex algorithms, or both. Some psychologists use this test to examine a person's personality characteristics and emotional functioning. It has been employed to detect underlying thought disorder, especially in cases where patients are reluctant to describe their thinking processes openly. The test is named after its creator, Swiss psychologist Hermann Rorschach. The Rorschach can be thought of as a psychometric examination of pareidolia, the active pattern of perceiving objects, shapes, or scenery as meaningful things to the observer's experience, the most common being faces or other pattern of forms that are not present at the time of the observation. In the 1960s, the Rorschach was the most widely used projective test.Although the Exner Scoring System (developed since the 1960s) claims to have addressed and often refuted many criticisms of the original testing system with an extensive body of research, some researchers continue to raise questions. The areas of dispute include the objectivity of testers, inter-rater reliability, the verifiability and general validity of the test, bias of the test's pathology scales towards greater numbers of responses, the limited number of psychological conditions which it accurately diagnoses, the inability to replicate the test's norms, its use in court-ordered evaluations, and the proliferation of the ten inkblot images, potentially invalidating the test for those who have been exposed to them. History: The use of interpreting "ambiguous designs" to assess an individual's personality is an idea that goes back to Leonardo da Vinci and Botticelli. Interpretation of inkblots was central to a game, Gobolinks, from the late 19th century. The Rorschach test, however, was the first systematic approach of this kind.After studying 300 mental patients and 100 control subjects, in 1921 Hermann Rorschach wrote his book Psychodiagnostik, which was to form the basis of the inkblot test. After experimenting with several hundred inkblots which he drew himself, he selected a set of ten for their diagnostic value. Although he had served as Vice President of the Swiss Psychoanalytic Society, Rorschach had difficulty in publishing the book and it attracted little attention when it first appeared. Rorschach died the following year. History: It has been suggested that Rorschach's use of inkblots may have been influenced by German doctor Justinus Kerner who, in 1857, had published a popular book of poems, each of which was inspired by an accidental inkblot. French psychologist Alfred Binet had also experimented with inkblots as a creativity test, and, after the turn of the century, psychological experiments where inkblots were utilized multiplied, with aims such as studying imagination and consciousness.In 1927, the newly founded Hans Huber publishing house purchased Rorschach's book Psychodiagnostik from the inventory of Ernst Bircher. Huber remains the publisher of the test and related book, with Rorschach a registered trademark of Swiss publisher Verlag Hans Huber, Hogrefe AG. The work has been described as "a densely written piece couched in dry, scientific terminology".After Rorschach's death, the original test scoring system was improved by Samuel Beck, Bruno Klopfer and others. John E. 
Exner summarized some of these later developments in the comprehensive system, at the same time trying to make the scoring more statistically rigorous. Some systems are based on the psychoanalytic concept of object relations. The Exner system remains very popular in the United States, while in Europe other methods sometimes dominate, such as that described in the textbook by Ewald Bohm, which is closer to the original Rorschach system and rooted more deeply in the original psychoanalysis principles.Rorschach never intended the inkblots to be used as a general personality test, but developed them as a tool for the diagnosis of schizophrenia. It was not until 1939 that the test was used as a projective test of personality, a use of which Rorschach had always been skeptical. Interviewed in 2012 for a BBC Radio 4 documentary, Rita Signer, curator of the Rorschach Archives in Bern, Switzerland, suggested that far from being random or chance designs, each of the blots selected by Rorschach for his test had been meticulously designed to be as ambiguous and "conflicted" as possible. Method: The Rorschach test is appropriate for subjects from the age of five to adulthood. The administrator and subject typically sit next to each other at a table, with the administrator slightly behind the subject. Side-by-side seating of the examiner and the subject is used to reduce any effects of inadvertent cues from the examiner to the subject. In other words, side-by-side seating mitigates the possibility that the examiner will accidentally influence the subject's responses. This is to facilitate a "relaxed but controlled atmosphere". There are ten official inkblots, each printed on a separate white card, approximately 18 by 24 cm in size. Each of the blots has near perfect bilateral symmetry. Five inkblots are of black ink, two are of black and red ink and three are multicolored, on a white background.After the test subject has seen and responded to all of the inkblots (free association phase), the tester then presents them again one at a time in a set sequence for the subject to study: the subject is asked to note where they see what they originally saw and what makes it look like that (inquiry phase). The subject is usually asked to hold the cards and may rotate them. Whether the cards are rotated, and other related factors such as whether permission to rotate them is asked, may expose personality traits and normally contributes to the assessment.As the subject is examining the inkblots, the psychologist writes down everything the subject says or does, no matter how trivial. Analysis of responses is recorded by the test administrator using a tabulation and scoring sheet and, if required, a separate location chart.The general goal of the test is to provide data about cognition and personality variables such as motivations, response tendencies, cognitive operations, affectivity, and personal/interpersonal perceptions. The underlying assumption is that an individual will class external stimuli based on person-specific perceptual sets, and including needs, base motives, conflicts, and that this clustering process is representative of the process used in real-life situations.Methods of interpretation differ. Rorschach scoring systems have been described as a system of pegs on which to hang one's knowledge of personality. The most widely used method in the United States is based on the work of Exner. 
Administration of the test to a group of subjects, by means of projected images, has also occasionally been performed, but mainly for research rather than diagnostic purposes. Test administration is not to be confused with test interpretation: The interpretation of a Rorschach record is a complex process. It requires a wealth of knowledge concerning personality dynamics generally as well as considerable experience with the Rorschach method specifically. Proficiency as a Rorschach administrator can be gained within a few months. However, even those who are able and qualified to become Rorschach interpreters usually remain in a "learning stage" for a number of years. Method: Features or categories The interpretation of the Rorschach test is not based primarily on the contents of the response, i.e., what the individual sees in the inkblot (the content). In fact, the contents of the response are only a comparatively small portion of a broader cluster of variables that are used to interpret the Rorschach data: for instance, the time taken before providing a response to a card can be significant (taking a long time can indicate "shock" on the card), as can any comments the subject may make in addition to the direct response. In particular, information about determinants (the aspects of the inkblots that triggered the response, such as form and color) and location (which details of the inkblots triggered the response) is often considered more important than content, although there is contrasting evidence. Method: "Popularity" and "originality" of responses can also be considered as basic dimensions in the analysis. Method: Content The goal in coding content of the Rorschach is to categorize the objects that the subject describes in response to the inkblot. There are 27 established codes for identifying the name of the descriptive object. The codes are classified and include terms such as "human", "nature", "animal", "abstract", "clothing", "fire", and "x-ray", to name a few. Content described that does not have a code already established should be coded using the code "idiographic contents" with the shorthand code being "Idio." Items are also coded for statistical popularity (or, conversely, originality). More than any other feature in the test, content response can be controlled consciously by the subject, and may be elicited by very disparate factors, which makes it difficult to use content alone to draw any conclusions about the subject's personality; with certain individuals, content responses may potentially be interpreted directly, and some information can at times be obtained by analyzing thematic trends in the whole set of content responses (which is only feasible when several responses are available), but in general content cannot be analyzed outside of the context of the entire test record. Method: Location Identifying the location of the subject's response is another element scored in the Rorschach system. Location refers to how much of the inkblot was used to answer the question. Administrators score the response "W" if the whole inkblot was used to answer the question, "D" if a commonly described part of the blot was used, "Dd" if an uncommonly described or unusual detail was used, or "S" if the white space in the background was used. A score of W is typically associated with the subject's motivation to interact with his or her surrounding environment. D is interpreted as one having efficient or adequate functioning.
A high frequency of responses coded Dd indicates some maladjustment within the individual. Responses coded S indicate an oppositional or uncooperative test subject. Method: Determinants Systems for Rorschach scoring generally include a concept of "determinants": These are the factors that contribute to establishing the similarity between the inkblot and the subject's content response about it. They can also represent certain basic experiential-perceptual attitudes, showing aspects of the way a subject perceives the world. Rorschach's original work used only form, color and movement as determinants. Currently, however, another major determinant considered is shading, which was inadvertently introduced by poor printing quality of the inkblots. Rorschach initially disregarded shading, since the inkblots originally featured uniform saturation, but later recognized it as a significant factor. Form is the most common determinant, and is related to intellectual processes. Color responses often provide direct insight into one's emotional life. Movement and shading have been considered more ambiguously, both in definition and interpretation. Rorschach considered movement only as the experiencing of actual motion, while others have widened the scope of this determinant, taking it to mean that the subject sees something "going on". More than one determinant can contribute to the formation of the subject's perception. Fusion of two determinants is taken into account, while also assessing which of the two constituted the primary contributor. For example, "form-color" implies a more refined control of impulse than "color-form". It is, indeed, from the relation and balance among determinants that personality can be most readily inferred. Method: Symmetry of the test items A striking characteristic of the Rorschach inkblots is their symmetry. Many accept this aspect of the images without question, but Rorschach, as well as other researchers, certainly did not. Rorschach experimented with both asymmetric and symmetric images before finally opting for the latter. He gives this explanation for the decision: Asymmetric figures are rejected by many subjects; symmetry supplied part of the necessary artistic composition. It has a disadvantage in that it tends to make answers somewhat stereotyped. On the other hand, symmetry makes conditions the same for right and left handed subjects; furthermore, it facilitates interpretation for certain blocked subjects. Finally, symmetry makes possible the interpretation of whole scenes. Method: The impact of symmetry in the Rorschach inkblots has also been investigated further by other researchers. Exner scoring system The Exner scoring system, also known as the Rorschach Comprehensive System (RCS), is the standard method for interpreting the Rorschach test. It was developed in the 1960s by John E. Exner as a more rigorous system of analysis. It has been extensively validated and shows high inter-rater reliability. In 1969, Exner published The Rorschach Systems, a concise description of what would later be called "the Exner system". He later published a study in multiple volumes called The Rorschach: A Comprehensive System, the most accepted full description of his system.
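A minimal sketch, assuming invented response data, of how the location codes (W, D, Dd, S) and determinants described above might be tallied across a record. The dictionary layout and example responses are hypothetical; a real Exner-style structural summary involves many more variables and normed calculations.

```python
# Tallying the location codes described above (W, D, Dd, S) and a simple
# form-color vs color-form balance across a set of coded responses.
# The response data and layout are invented for illustration only.
from collections import Counter

responses = [
    {"location": "W", "determinants": ["F"]},   # whole blot, form only
    {"location": "D", "determinants": ["FC"]},  # common detail, form-color
    {"location": "Dd", "determinants": ["CF"]}, # unusual detail, color-form
    {"location": "W", "determinants": ["M"]},   # whole blot, movement
    {"location": "S", "determinants": ["F"]},   # white space, form only
]

location_counts = Counter(r["location"] for r in responses)
determinant_counts = Counter(d for r in responses for d in r["determinants"])

print("R (total responses):", len(responses))
print("Location tallies:", dict(location_counts))
# The text notes form-color responses are read as more controlled handling of
# affect than color-form responses, so their balance is of interest.
print("FC:CF =", determinant_counts["FC"], ":", determinant_counts["CF"])
```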
Method: Creation of the new system was prompted by the realization that at least five related, but ultimately different methods were in common use at the time, with a sizeable minority of examiners not employing any recognized method at all, basing instead their judgment on subjective assessment, or arbitrarily mixing characteristics of the various standardized systems. The key components of the Exner system are the clustering of Rorschach variables and a sequential search strategy to determine the order in which to analyze them, framed in the context of standardized administration, objective, reliable coding and a representative normative database. Method: The system places considerable emphasis on a cognitive triad: information processing (how the subject processes input data), cognitive mediation (the way information is transformed and identified), and ideation. In the system, responses are scored with reference to their level of vagueness or synthesis of multiple images in the blot, the location of the response, which of a variety of determinants is used to produce the response (i.e., what makes the inkblot look like what it is said to resemble), the form quality of the response (to what extent a response is faithful to how the actual inkblot looks), the contents of the response (what the respondent actually sees in the blot), the degree of mental organizing activity that is involved in producing the response, and any illogical, incongruous, or incoherent aspects of responses. "Bat" is a popular response to the first card. Using the scores for these categories, the examiner then performs a series of calculations producing a structural summary of the test data. The results of the structural summary are interpreted using existing research data on personality characteristics that have been demonstrated to be associated with different kinds of responses. Method: With the Rorschach plates (the ten inkblots), the area of each blot which is distinguished by the client is noted and coded—typically as "commonly selected" or "uncommonly selected". There were many different methods for coding the areas of the blots. Exner settled upon the area coding system promoted by S. J. Beck (1944 and 1961). This system was in turn based upon Klopfer's (1942) work. Method: As pertains to response form, a concept of "form quality" was present from the earliest of Rorschach's works, as a subjective judgment of how well the form of the subject's response matched the inkblots (Rorschach would give a higher form score to more "original" yet good form responses), and this concept was followed by other methods, especially in Europe; in contrast, the Exner system solely defines "good form" as a matter of word occurrence frequency, reducing it to a measure of the subject's distance to the population average. Method: Performance assessment system The Rorschach performance assessment system (R-PAS) is a scoring method created by several members of the Rorschach Research Council. They believed that the Exner scoring system was in need of an update, but after Exner's death, the Exner family forbade any changes to be made to the Comprehensive System. Therefore, they established a new system: the R-PAS. It is an attempt at creating a current, empirically based, and internationally focused scoring system that is easier to use than Exner's Comprehensive System. The R-PAS manual is intended to be a comprehensive tool for administering, scoring, and interpreting the Rorschach.
The manual consists of two chapters covering the basics of scoring and interpretation, aimed at novice Rorschach users, followed by numerous chapters containing more detailed and technical information. In terms of updated scoring, the authors only selected variables that have been empirically supported in the literature. The authors did not create new variables or indices to be coded, but systematically reviewed variables that had been used in past systems. While all of these codes have been used in the past, many have been renamed to be more face valid and readily understood. Scoring of the indices has been updated (e.g. utilizing percentiles and standard scores) to make the Rorschach more in line with other popular personality measures. Preliminary evidence suggests that the R-PAS exhibits good inter-rater reliability. In addition to providing coding guidelines to score examinee responses, the R-PAS provides a system to code an examinee's behavior during Rorschach administration. These behavioral codes are included as it is believed that the behaviors exhibited during testing are a reflection of someone's task performance and supplement the actual responses given. This allows generalizations to be made between someone's responses to the cards and their actual behavior. Method: The R-PAS also recognized that scoring on many of the Rorschach variables differed across countries. Therefore, starting in 1997, Rorschach protocols from researchers around the world were compiled. After compiling protocols for over a decade, a total of 15 adult samples were used to provide a normative basis for the R-PAS. The protocols represent data gathered in the United States, Europe, Israel, Argentina and Brazil. Method: Cultural differences Comparing North American Exner normative data with data from European and South American subjects showed marked differences in some features, some of which impact important variables, while others (such as the average number of responses) coincide. Method: For instance, texture response is typically zero in European subjects (if interpreted as a need for closeness, in accordance with the system, a European would seem to express it only when it reaches the level of a craving for closeness), and there are fewer "good form" responses, to the point where schizophrenia may be suspected if data were correlated to the North American norms. Method: Form is also often the only determinant expressed by European subjects; while color is less frequent than in American subjects, color-form responses are comparatively frequent in opposition to form-color responses; since the latter tend to be interpreted as indicators of a defensive attitude in processing affect, this difference could stem from a higher value attributed to spontaneous expression of emotions. The differences in form quality are attributable to purely cultural aspects: different cultures will exhibit different "common" objects (French subjects often identify a chameleon in card VIII, which is normally classed as an "unusual" response, as opposed to other animals like cats and dogs; in Scandinavia, "Christmas elves" (nisser) is a popular response for card II, and "musical instrument" on card VI is popular for Japanese people), and different languages will exhibit semantic differences in naming the same object (the figure of card IV is often called a troll by Scandinavians and an ogre by French people).
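As a rough illustration of the standard scores and normative comparisons mentioned above, the sketch below converts a raw score into a z-score and an approximate percentile against a normative sample, and shows how swapping the reference sample changes a subject's apparent standing. The means and standard deviations are invented placeholders, not actual Rorschach or R-PAS norms, and an approximately normal distribution is assumed.

```python
# Illustrative only: expressing a raw score as a standard (z) score and an
# approximate percentile against a normative sample, and why the choice of
# reference sample matters. All numbers below are invented placeholders.
from statistics import NormalDist

def standard_score(raw: float, norm_mean: float, norm_sd: float) -> tuple[float, float]:
    z = (raw - norm_mean) / norm_sd
    percentile = NormalDist().cdf(z) * 100  # assumes an approximately normal distribution
    return round(z, 2), round(percentile, 1)

raw_texture_responses = 0  # e.g., a subject giving no texture responses

# Same raw score, different (hypothetical) normative samples:
print(standard_score(raw_texture_responses, norm_mean=1.0, norm_sd=0.8))  # vs. sample A
print(standard_score(raw_texture_responses, norm_mean=0.2, norm_sd=0.4))  # vs. sample B
```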
Method: Many of Exner's "popular" responses (those given by at least one third of the North American sample used) seem to be universally popular, as shown by samples in Europe, Japan and South America, while specifically card IX's "human" response, the crab or spider in card X and one of either the butterfly or the bat in card I appear to be characteristic of North America.Form quality, popular content responses and locations are the only coded variables in the Exner systems that are based on frequency of occurrence, and thus immediately subject to cultural influences; therefore, cultural-dependent interpretation of test data may not necessarily need to extend beyond these components.The cited language differences can result in misinterpretation if not administered in the subject's native language or a very well mastered second language, and interpreted by a master speaker of that language. For example, a bow tie is a frequent response for the center detail of card III, but since the equivalent term in French translates to "butterfly tie", an examiner not appreciating this language nuance may code the response differently from what is expected. Inkblots: Below are the ten inkblots printed in Rorschach Test – Psychodiagnostic Plates, together with the most frequent responses for either the whole image or the most prominent details according to various authors. Usage: United States The Rorschach test is used almost exclusively by psychologists. Forensic psychologists use the Rorschach 36% of the time. In custody cases, 23% of psychologists use the Rorschach to examine a child. Another survey found that 124 out of 161 (77%) of clinical psychologists engaging in assessment services utilize the Rorschach, and 80% of psychology graduate programs teach its use. Usage: Another study found that its use by clinical psychologists was only 43%, while it was used less than 24% of the time by school psychologists.During World War II, United States Army Medical Corps chief psychiatrist Dr. Douglas Kelley and psychologist Gustave Gilbert administered the Rorschach test to the 22 defendants in the Nazi leadership group prior to the first Nuremberg trials, and the test scores were published some decades later.Because of the large amount of data used to interpret the test, psychologist Zygmunt Piotrowski, began work to computerize ink blot interpretation in the 1950s and 1960s. This work included over 1,000 rules and included no summary nor narrative conclusions. A subsequent computerized interpretation of Rorschach test scores, that included summary and conclusions was developed in the 1970s by psychologists Perline and Cabanski, and marketed internationally. This computerized interpretation of the test was used to interpret the set of scores developed by Dr. Gilbert on Nazi Hermann Goering along with several other Nazis while awaiting trial at Nuremberg Prison.In the 1980s psychologist John Exner developed a computerized interpretation of the Rorschach test, based on his own scoring system, the Exner Comprehensive System. Presently, of the three computerized assessments, only the Exner system is available on the market. Usage: The arguments for or against computerized assessment of the Rorschach is likely to remain unresolved for some time, as there is no absolute correct interpretation against which the different markers (scores) denoting mental health can be compared. 
Although scores for a theoretically typical healthy adult have been proposed and reasonable attempts to standardize the computer interpretation against these scores have been obtained, more work in this area needs to be done. Usage: United Kingdom Many psychologists in the United Kingdom do not trust its efficacy and it is rarely used. Although skeptical about its scientific validity, some psychologists use it in therapy and coaching "as a way of encouraging self-reflection and starting a conversation about the person's internal world." It is still used, however, by some mental health organisations such as the Tavistock Clinic. In a survey done in the year 2000, 20% of psychologists in correctional facilities used the Rorschach while 80% used the MMPI. Usage: Japan Shortly after publication of Rorschach's book, a copy found its way to Japan where it was discovered by one of the country's leading psychiatrists in a second-hand book store. He was so impressed that he started a craze for the test that has never diminished. The Japanese Rorschach Society is by far the largest in the world and the test is "routinely put to a wide range of purposes". In 2012 the test was described, by presenter Jo Fidgen, for BBC Radio 4's programme Dr Inkblot, as "more popular than ever" in Japan. Controversy: Some skeptics consider the Rorschach inkblot test pseudoscience, as several studies suggested that conclusions reached by test administrators since the 1950s were akin to cold reading. In the 1959 edition of Mental Measurement Yearbook, Lee Cronbach (former President of the Psychometric Society and American Psychological Association) is quoted in a review: "The test has repeatedly failed as a prediction of practical criteria. There is nothing in the literature to encourage reliance on Rorschach interpretations." In addition, major reviewer Raymond J. McCall writes (p. 154): "Though tens of thousands of Rorschach tests have been administered by hundreds of trained professionals since that time (of a previous review), and while many relationships to personality dynamics and behavior have been hypothesized, the vast majority of these relationships have never been validated empirically, despite the appearance of more than 2,000 publications about the test." A moratorium on its use was called for in 1999.A 2003 report by Wood and colleagues had more mixed views: "More than 50 years of research have confirmed Lee J. Cronbach's (1970) final verdict: that some Rorschach scores, though falling woefully short of the claims made by proponents, nevertheless possess 'validity greater than chance' (p. 636). [...] Its value as a measure of thought disorder in schizophrenia research is well accepted. It is also used regularly in research on dependency, and, less often, in studies on hostility and anxiety. Furthermore, substantial evidence justifies the use of the Rorschach as a clinical measure of intelligence and thought disorder." Test materials The basic premise of the test is that objective meaning can be extracted from responses to blots of ink which are supposedly meaningless. Supporters of the Rorschach inkblot test believe that the subject's response to an ambiguous and meaningless stimulus can provide insight into their thought processes, but it is not clear how this occurs. Also, recent research shows that the blots are not entirely meaningless, and that a patient typically responds to meaningful as well as ambiguous aspects of the blots. Reber (1985) describes the blots as merely ".. 
the vehicle for the interaction .." between client and therapist, concluding: ".. the usefulness of the Rorschach will depend upon the sensitivity, empathy and insightfulness of the tester totally independently of the Rorschach itself. An intense dialogue about the wallpaper or the rug would do as well provided that both parties believe." Illusory and invisible correlations In the 1960s, research by psychologists Loren and Jean Chapman, at the University of Wisconsin, published in the Journal of Abnormal Psychology, showed that at least some of the apparent validity of the Rorschach was due to an illusion. At that time, the five signs most often interpreted as diagnostic of homosexuality were 1) buttocks and anuses; 2) feminine clothing; 3) male or female sex organs; 4) human figures without male or female features; and 5) human figures with both male and female features. The Chapmans surveyed 32 experienced testers about their use of the Rorschach to diagnose homosexuality. At this time homosexuality was regarded as a psychopathology, and the Rorschach was the most popular projective test. The testers reported that homosexual men had shown the five signs more frequently than heterosexual men. Despite these beliefs, analysis of the results showed that heterosexual men were just as likely to report these signs, which were therefore totally ineffective for determining homosexuality. The five signs did, however, match the guesses students made about which imagery would be associated with homosexuality.The Chapmans investigated the source of the testers' false confidence. In one experiment, students read through a stack of cards, each with a Rorschach blot, a sign and a pair of "conditions" (which might include homosexuality). The information on the cards was fictional, although subjects were told it came from case studies of real patients. The students reported that the five invalid signs were associated with homosexuality, even though the cards had been constructed so there was no association at all. The Chapmans repeated this experiment with another set of cards, in which the association was negative; the five signs were never reported by homosexuals. The students still reported seeing a strong positive correlation. These experiments showed that the testers' prejudices could result in them "seeing" non-existent relationships in the data. The Chapmans called this phenomenon "illusory correlation" and it has since been demonstrated in many other contexts.A related phenomenon called "invisible correlation" applies when people fail to see a strong association between two events because it does not match their expectations. This was also found in clinicians' interpretations of the Rorschach. Homosexual men are more likely to see a monster on Card IV or a part-animal, part-human figure in Card V. Almost all of the experienced clinicians in the Chapmans' survey missed these valid signs. The Chapmans ran an experiment with fake Rorschach responses in which these valid signs were always associated with homosexuality. The subjects missed these perfect associations and instead reported that invalid signs, such as buttocks or feminine clothing, were better indicators.In 1992, the psychologist Stuart Sutherland argued that these artificial experiments are easier than the real-world use of the Rorschach, and hence they probably underestimated the errors that testers were susceptible to. 
He described the continuing popularity of the Rorschach after the Chapmans' research as a "glaring example of irrationality among psychologists". Controversy: Tester projection Some critics argue that the testing psychologist must also project onto the patterns. A possible example sometimes attributed to the psychologist's subjective judgement is that responses are coded (among many other things), for "Form Quality": in essence, whether the subject's response fits with how the blot actually looks. Superficially this might be considered a subjective judgment, depending on how the examiner has internalized the categories involved. But with the Exner system of scoring, much of the subjectivity is eliminated or reduced by use of frequency tables that indicate how often a particular response is given by the population in general. Another example is that the response "bra" was considered a "sex" response by male psychologists, but a "clothing" response by females. Controversy: In Exner's system, however, such a response is always coded as "clothing" unless there is a clear sexual reference in the response.Third parties could be used to avoid this problem, but the Rorschach's inter-rater reliability has been questioned. That is, in some studies the scores obtained by two independent scorers do not match with great consistency. This conclusion was challenged in studies using large samples reported in 2002. Controversy: Validity When interpreted as a projective test, results are poorly verifiable. The Exner system of scoring (also known as the "Comprehensive System") is meant to address this, and has all but displaced many earlier (and less consistent) scoring systems. It makes heavy use of what factor (shading, color, outline, etc.) of the inkblot leads to each of the tested person's comments. Disagreements about test validity remain: while Exner proposed a rigorous scoring system, latitude remained in the actual interpretation, and the clinician's write-up of the test record is still partly subjective. Controversy: Reber (1985) comments ".. there is essentially no evidence whatsoever that the test has even a shred of validity."Nevertheless, there is substantial research indicating the utility of the measure for a few scores. Several scores correlate well with general intelligence. One such scale is R, the total number of responses; this reveals the questionable side-effect that more intelligent people tend to be elevated on many pathology scales, since many scales do not correct for high R: if a subject gives twice as many responses overall, it is more likely that some of these will seem "pathological". Also correlated with intelligence are the scales for Organizational Activity, Complexity, Form Quality, and Human Figure responses. Controversy: The same source reports that validity has also been shown for detecting such conditions as schizophrenia and other psychotic disorders; thought disorders; and personality disorders (including borderline personality disorder). There is some evidence that the Deviant Verbalizations scale relates to bipolar disorder. The authors conclude that "Otherwise, the Comprehensive System doesn't appear to bear a consistent relationship to psychological disorders or symptoms, personality characteristics, potential for violence, or such health problems as cancer". Controversy: (Cancer is mentioned because a small minority of Rorschach enthusiasts have claimed the test can predict cancer.) 
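The earlier point that many pathology scales do not correct for R (the total number of responses) can be made concrete with a small, purely illustrative calculation; the names and numbers below are invented for the example.

```python
# A small illustration of the R-correction point above: a scale that simply counts
# "pathological" responses gives a more verbose subject a higher raw score even
# when the per-response rate is identical. Numbers are invented for illustration.

def pathology_summary(pathological_hits: int, total_responses: int) -> dict:
    return {
        "raw_count": pathological_hits,                              # what an uncorrected scale sees
        "per_response_rate": pathological_hits / total_responses,    # R-corrected view
    }

subject_terse = pathology_summary(pathological_hits=2, total_responses=14)
subject_verbose = pathology_summary(pathological_hits=4, total_responses=28)

print(subject_terse)    # rate = 2/14
print(subject_verbose)  # same rate, double the raw count
```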
Reliability It is also thought that the test's reliability can depend substantially on details of the testing procedure, such as where the tester and subject are seated, any introductory words, verbal and nonverbal responses to subjects' questions or comments, and how responses are recorded. Exner has published detailed instructions, but Wood et al. cite many court cases where these had not been followed. Similarly, the procedures for coding responses are fairly well specified but extremely time-consuming, leaving coding highly dependent on the examiner's style and on the quality of the publisher's instructions (as was noted with one of Bohm's textbooks in the 1950s), and leaving clinic workers (including examiners) liable to being encouraged to cut corners. United States courts have challenged the Rorschach as well. Jones v Apfel (1997) stated (quoting from Attorney's Textbook of Medicine) that Rorschach "results do not meet the requirements of standardization, reliability, or validity of clinical diagnostic tests, and interpretation thus is often controversial". In State ex rel H.H. (1999), under cross-examination, Dr. Bogacki stated under oath that "many psychologists do not believe much in the validity or effectiveness of the Rorschach test", and US v Battle (2001) ruled that the Rorschach "does not have an objective scoring system." Population norms Another controversial aspect of the test is its statistical norms. Exner's system was thought to possess normative scores for various populations. But, beginning in the mid-1990s, others began to try to replicate or update these norms and failed. In particular, discrepancies seemed to focus on indices measuring narcissism, disordered thinking, and discomfort in close relationships. Lilienfeld and colleagues, who are critical of the Rorschach, have stated that this proves that the Rorschach tends to "overpathologise normals". Although Rorschach proponents, such as Hibbard, suggest that high rates of pathology detected by the Rorschach accurately reflect increasing psychopathology in society, the Rorschach also identifies half of all test-takers as possessing "distorted thinking", a false positive rate unexplained by current research. Controversy: The accusation of "over-pathologising" has also been considered by Meyer et al. (2007). They presented an international collaborative study of 4704 Rorschach protocols, obtained in 21 different samples, across 17 countries, with only 2% showing significant elevations on the index of perceptual and thinking disorder, 12% elevated on indices of depression and hyper-vigilance and 13% elevated on persistent stress overload—all in line with expected frequencies among non-patient populations. Controversy: Applications The test is also controversial because of its common use in court-ordered evaluations. This controversy stems, in part, from the limitations of the Rorschach, with no additional data, in making official diagnoses from the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Irving B. Weiner (co-developer with John Exner of the Comprehensive System) has stated that the Rorschach "is a measure of personality functioning, and it provides information concerning aspects of personality structure and dynamics that make people the kind of people they are.
Sometimes such information about personality characteristics is helpful in arriving at a differential diagnosis, if the alternative diagnoses being considered have been well conceptualized with respect to specific or defining personality characteristics". Controversy: In the vast majority of cases, anyway, the Rorschach test was not singled out but used as one of several in a battery of tests, and despite the criticism of usage of the Rorschach in the courts, out of 8,000 cases in which forensic psychologists used Rorschach-based testimony, the appropriateness of the instrument was challenged only six times, and the testimony was ruled inadmissible in only one of those cases. One study has found that use of the test in courts has increased by three times in the decade between 1996 and 2005, compared to the previous fifty years. Others however have found that its usage by forensic psychologists has decreased.Exner and others have claimed that the Rorschach test is capable of detecting suicidality. Controversy: Protection of test items and ethics Psychologists object to the publication of psychological test material out of concerns that a patient's test responses will be influenced ("primed") by previous exposure. The Canadian Psychological Association takes the position that, "Publishing the questions and answers to any psychological test compromises its usefulness" and calls for "keeping psychological tests out of the public domain." The same statement quotes their president as saying, "The CPA's concern is not with the publication of the cards and responses to the Rorschach test per se, for which there is some controversy in the psychological literature and disagreement among experts, but with the larger issue of the publication and dissemination of psychological test content". Controversy: From a legal standpoint, the Rorschach test images have been in the public domain for many years in most countries, particularly those with a copyright term of up to 70 years post mortem auctoris. They have been in the public domain in Hermann Rorschach's native Switzerland since 1992 (70 years after the author's death, or 50 years after the cut-off date of 1942), according to Swiss copyright law. They are also in the public domain under United States copyright law where all works published before 1923 are considered to be in the public domain. This means that the Rorschach images may be used by anyone for any purpose. William Poundstone was, perhaps, first to make them public in his 1983 book Big Secrets, where he also described the method of administering the test.The American Psychological Association (APA) has a code of ethics that supports "freedom of inquiry and expression" and helping "the public in developing informed judgments". Controversy: It claims that its goals include "the welfare and protection of the individuals and groups with whom psychologists work", and it requires that psychologists "make reasonable efforts to maintain the integrity and security of test materials". The APA has also raised concerns that the dissemination of test materials might impose "very concrete harm to the general public". It has not taken a position on publication of the Rorschach plates but noted "there are a limited number of standardized psychological tests considered appropriate for a given purpose". 
A public statement by the British Psychological Society expresses similar concerns about psychological tests (without mentioning any test by name) and considers the "release of [test] materials to unqualified individuals" to be misuse if it is against the wishes of the test publisher. Controversy: In his 1998 book Ethics in Psychology, Gerald Koocher notes that some believe "reprinting copies of the Rorschach plates ... and listing common responses represents a serious unethical act" for psychologists and is indicative of "questionable professional judgment". Controversy: Other professional associations, such as the Italian Association of Strategic Psychotherapy, recommend that even information about the purpose of the test or any detail of its administration should be kept from the public, even though "cheating" the test is held to be practically impossible.On September 9, 2008, Hogrefe attempted to claim copyright over the Rorschach ink blots during filings of a complaint with the World Intellectual Property Organization against the Brazilian psychologist Ney Limonge. These complaints were denied. Further complaints were sent to two other websites that contained information similar to the Rorschach test in May 2009 by legal firm Schluep and Degen of Switzerland.Psychologists have sometimes refused to disclose tests and test data to courts when asked to do so by the parties, citing ethical reasons; it is argued that such refusals may hinder full understanding of the process by the attorneys, and impede cross-examination of the experts. APA ethical standard 1.23(b) states that the psychologist has a responsibility to document processes in detail and of adequate quality to allow reasonable scrutiny by the court.Controversy ensued in the psychological community in 2009 when the original Rorschach plates and research results on interpretations were published in the "Rorschach test" article on Wikipedia. Hogrefe & Huber Publishing, a German company that sells editions of the plates, called the publication "unbelievably reckless and even cynical of Wikipedia" and said it was investigating the possibility of legal action. Due to this controversy an edit filter was temporarily established on Wikipedia to prevent the removal of the plates.James Heilman, an emergency room physician involved in the debate, compared it to the publication of the eye test chart: though people are likewise free to memorize the eye chart before an eye test, its general usefulness as a diagnostic tool for eyesight has not diminished. For those opposed to exposure, publication of the inkblots is described as a "particularly painful development", given the tens of thousands of research papers which have, over many years, "tried to link a patient's responses to certain psychological conditions." Controversy over Wikipedia's publication of the inkblots has resulted in the blots being published in other locations, such as The Guardian and The Globe and Mail. Later that year, in August 2009, two psychologists filed a complaint against Heilman with the Saskatchewan medical licensing board, arguing that his uploading of the images constituted unprofessional behavior. In 2012, two articles were published showing consequences of the publication of the images in Wikipedia. 
The first one studied negative attitudes towards the test generated during the Wikipedia-Rorschach debate, while the second suggested that reading the Wikipedia article could help to fake "good" results in the test.Publication of the Rorschach images is also welcomed by critics who consider the test to be pseudoscience. Benjamin Radford, editor of Skeptical Inquirer magazine, stated that the Rorschach "has remained in use more out of tradition than good evidence" and was hopeful that publication of the test might finally hasten its demise. In art and media: Australian artist Ben Quilty has used the Rorschach technique in his paintings, by loading impasto oil paint onto a canvas and then pressing a second, unpainted, canvas onto the first, and proceeding to create an artwork from the shape created by this method.The mask of the fictional antihero of the same name in the graphic novel limited series Watchmen and its 2009 film adaption displays a constantly morphing inkblot based on the designs used in the tests. In the 1999 Sofia Coppola movie The Virgin Suicides the character of Cecilia is given the test and in David Cronenberg's Spider (2002) the Rorschach inkblots are incorporated into the opening of the film.In 2022, a Malayalam language film titled Rorschach was announced with actor Mammootty in the lead role, inspiring queries and discussion in social media about the test.The trailer for season 1 (2017) of Mindhunter features a bloody inkblot.
**Testing effect** Testing effect: The testing effect (also known as retrieval practice, active recall, practice testing, or test-enhanced learning) suggests long-term memory is increased when part of the learning period is devoted to retrieving information from memory. It is different from the more general practice effect, defined in the APA Dictionary of Psychology as "any change or improvement that results from practice or repetition of task items or activities." Cognitive psychologists are working with educators to look at how to take advantage of tests—not only as assessment tools, but as teaching tools, since testing prior knowledge is more beneficial for learning than reading or passively studying material, even more so when the test is more challenging for memory. History: Before much experimental evidence had been collected, the utility of testing was already evident to some perceptive observers, including Francis Bacon, who discussed it as a learning strategy as early as 1620. "Hence if you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite it from time to time and consulting the text when your memory fails." Towards the end of the 17th Century, John Locke made a similar observation regarding the importance of repeated retrieval for retention in his 1689 book "An Essay Concerning Human Understanding". "But concerning the ideas themselves, it is easy to remark, that those that are oftenest refreshed (amongst which are those that are conveyed into the mind by more ways than one) by a frequent return of the objects or actions that produce them, fix themselves best in the memory, and remain clearest and longest there." Towards the end of the 19th century, Harvard psychologist William James described the testing effect in the following section of his 1890 book "The Principles of Psychology": "A curious peculiarity of our memory is that things are impressed better by active than by passive repetition. I mean that in learning (by heart, for example), when we almost know the piece, it pays better to wait and recollect by an effort from within, than to look at the book again. If we recover the words in the former way, we shall probably know them the next time; if in the latter way, we shall very likely need the book once more." The first documented empirical studies on the testing effect were published in 1909 by Edwina E. Abbott, and were followed up by research into the transfer and retrieval of prior learning. In his 1932 book Psychology of Study, C. A. Mace said: "On the matter of sheer repetitive drill there is another principle of the highest importance: Active repetition is very much more effective than passive repetition. ... there are two ways of introducing further repetitions. We may re-read this list: this is passive repetition. We may recall it to mind without reference to the text before forgetting has begun: this is active repetition. It has been found that when acts of reading and acts of recall alternate, i.e., when every reading is followed by an attempt to recall the items, the efficiency of learning and retention is enormously enhanced." Further studies in retrieval practice were conducted in 1987 by John L. Richards, who published his findings in a newspaper in New York. Much of the confusion around early studies could have been due to constrained approaches not accounting for context.
More recent research, with contributions from Hal Pashler, Henry Roediger and many others, has shown that testing knowledge can produce better learning, transfer, and retrieval results than other forms of study that rely on recognition, such as re-reading or highlighting. Retrieval practice: In recent research, storage strength (how well an item is learned) and retrieval strength (how well an item can be retrieved) have become separate measures for retrieval practice. Retrieval strength (also known as recall accuracy) is typically higher for restudied words when tested immediately after practice, whereas tested words score higher as time goes on. This suggests that testing is more beneficial for long-term memory and retrieval, which some authors believe is due to the limited retrieval success during practice, supporting the idea that tests are learning opportunities. Functional magnetic resonance imaging suggests that retrieval practice strengthens subsequent retention of learning through a "dual action" affecting the anterior and posterior hippocampus regions of the brain. This could support findings that individual differences in personality traits or working memory capacity do not seem to have any negative impact on the testing effect, which has a greater impact for lower-ability individuals. Although some doubt that testing transfers knowledge across a topic, and some studies show contradictory evidence suggesting recognition was better than recall, inferential thinking has been supported, and the transfer of learning is at its strongest with the application of theory to practice, inference questions, medical education, and problems involving medical diagnosis. The transfer can occur across domains and paradigms, and can help retention of material not on a final test. Using retrieval practice also produces less forgetting than studying and restudying, while helping to identify misconceptions and errors, with effects lasting years. Retrieval practice: Repeated testing Repeated testing has shown statistically significant improvements over repeated studying, which could be due to testing creating multiple retrieval routes for memory, allowing individuals to form lasting connections between items, or grouping information together, which can help with memory retention and schema recall. Using spaced repetition increases the testing effect, with a greater impact when testing is delayed, but the delay could lead to forgetting or retrieval-induced forgetting. Retrieval practice: Delaying the test after a study session can have a greater impact, so material studied during the day should be tested after a delay in the evening, whereas material studied in the evening should be tested immediately, due to the effect sleep has on memory. Despite divided attention being thought to decrease the testing effect, if it comes from a different medium it could enhance the effect. The rate of forgetting is not affected by the speed or degree of learning but by the type of practice involved. Retrieval practice: Test difficulty According to the retrieval effort hypothesis, "difficult but successful retrievals are better for memory than easier successful retrievals", which supports the idea of finding a desirable difficulty within retrieval practice, considering our memory biases. Language learning was better when using unfamiliar words compared to familiar words, supporting the idea that higher difficulty results in greater learning.
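A minimal sketch of expanding-interval retrieval practice of the kind discussed above: items that are successfully recalled are scheduled further into the future, while failed items return sooner for another retrieval attempt. The interval values and doubling rule are invented for illustration and do not reproduce any specific published spaced-repetition algorithm.

```python
# Expanding-interval retrieval practice sketch; interval rules are invented
# for illustration and are not a specific published algorithm.
from datetime import date, timedelta

def next_review(last_interval_days: int, recalled: bool) -> int:
    """Return the next interval in days after a retrieval attempt."""
    if recalled:
        return max(1, last_interval_days * 2)  # expand the spacing after a success
    return 1                                   # retest soon after a failed retrieval

schedule = {}
today = date.today()
for item, (interval, recalled) in {
    "card A": (4, True),   # previously reviewed 4 days ago, recalled correctly
    "card B": (2, False),  # previously reviewed 2 days ago, not recalled
}.items():
    schedule[item] = today + timedelta(days=next_review(interval, recalled))

print(schedule)
```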
The difficulty relates to the likelihood of forgetting: the harder an item is to retrieve, the more likely it is to be remembered and retained afterwards, supporting the notion that more effort is required for longer-lasting retention, similar to the depth of processing at encoding. Therefore, a lack of effort from students while studying could be a factor that reduces its efficiency. Increased difficulty shows decreased initial performance but increased performance on harder tests in the future, so retention and transfer suffer less when training is difficult. Even unsuccessful retrieval can enhance learning, as creating the thought helps with retention due to the generation effect. As with processing time, it is the qualitative nature of the information that determines retention. Getting feedback helps with learning, but finding a desirable difficulty for the test combined with feedback is more beneficial than studying or testing without feedback. The Read, Recite, Review method has been proposed as a way to combine retrieval practice with feedback. Retrieval practice: Test format The test format does not seem to impact the results, as it is the process of retrieval that aids learning; however, transfer-appropriate processing suggests that if information is encoded in a format similar to the retrieval format, test results are likely to be higher, with a mismatch causing lower results. However, when short-answer tests or essays are used, greater gains in results are seen compared to multiple-choice tests. Cued recall can make retrieval easier, as it reduces the retrieval strength required from an individual, which can help short-term results but can hinder long-term retrieval over time due to the reduced retrieval demand during practice. Quicker learning can reduce the rate of forgetting for a short period of time, but the effect does not last as long as more effortful retrieval. Cueing can occur when the encoding of new information overlaps with prior knowledge, making retrieval easier, or when a visual or auditory aid is provided. Retrieval practice: Prior knowledge seems to increase the impact of retrieval practice, but should not be seen as a boundary condition, as individuals with higher prior knowledge and individuals with lower prior knowledge both benefit. Pre-testing can be used to get greater results, and post-testing can be used to facilitate learning and memory of newly studied information, known as the forward testing effect. Pre-test or practice-test accuracy does not predict post-test results, as time affects forgetting. Pre-testing effect The pre-testing effect, also known as errorful generation or pre-questioning, is a related but distinct category where testing material before the material has been learned appears to lead to better subsequent learning performance than would have been the case without the pre-test, provided that feedback is given as to the correct answers once the pre-testing phase is completed or further study is undertaken. Pre-testing has been shown to aid learning in both laboratory and classroom settings. In terms of specific examples, pre-testing appears to be a beneficial strategy in language learning, science classrooms generally, and specifically with lower-ability learners in chemistry. Pre-testing also seems to be a good way of introducing a lecture series and reduces mind-wandering during lectures. However, while some studies show that it does not seem to be as effective as post-testing overall, others show that it is at least as effective as post-testing.
The pre-testing effect does appear to be more narrowly focused on the specific material to be learned and should not be seen as correlated with more generalised curiosity. Retrieval practice: Practice methods When compared to concept mapping alone, retrieval practice is more beneficial, despite students not seeing retrieval practice as a useful learning tool. When the two were combined, learner performance increased, suggesting concept mapping is a tool that should be combined with retrieval practice alongside other non-verbal responses. Retrieval helps with mental organization, which can work well with concept mapping. Multimedia testing can be used alongside flashcards as a method of retrieval practice, but removing cards too early can result in lower long-term retention. Individuals may not correctly interpret the outcome of practice cards, contributing to dropped cards that impair future retrieval attempts and therefore lower results through increased forgetting. It is advised that students, people in care units and teaching professionals use distributed retrieval practice with feedback to aid their studies. Interleaved practice, self-explanation, and elaborative interrogation can be useful but need more research. Summarization can be useful for individuals trained in how to use it. Keyword mnemonics and imagery for text have been somewhat helpful, but the effects are often short-lived. However, if each of these methods is integrated with retrieval elements, the testing effect is more likely to occur. Retrieval practice: Test benefits Retrieval practice aids later retention; identifies knowledge gaps; aids future related learning; prevents interference from prior material in future learning; aids transfer of knowledge to new contexts; aids knowledge organization; aids retrieval of untested information; improves metacognitive monitoring; provides feedback to instructors; and, through frequent testing, encourages study intentions. Quizzes A meta-analysis found the following links between frequent low-stakes quizzes in real classes and improved student academic performance: there was an association between the use of quizzes and academic performance; this association was stronger in psychology classes, and stronger in all classes when quiz performance could improve class grades; students doing well on quizzes also tended to do well on final exams; and regular quizzing increased the chances of students passing classes. Transfer of learning Learning using retrieval practice appears to be one of the most effective methods for promoting transfer of learning. In particular, the following three techniques have been identified as especially beneficial for transfer, particularly when combined with feedback: i) implementing broad rather than narrow retrieval exercises, ii) encouraging meaningful explanations of concepts or topics, and iii) using a variety of question complexities and formats, such as retrieval questions that require inference. Considerations: Complex materials Some researchers have applied aspects of cognitive load theory to suggest the testing effect may disappear with increasing task difficulty due to increased element interactivity. This has been addressed in the literature with studies showing that complex learning also benefits from retrieval practice.
Further research has demonstrated that higher-order retrieval does not need to be built on lower-level factual recall, and that, from the beginning of the learning period, both should be combined for best effect. Considerations: Future research It has been suggested that, because most studies on the impact of retrieval practice were conducted in WEIRD countries, this could introduce a bias that should be explored in further studies.
**Instrumentalism** Instrumentalism: In philosophy of science and in epistemology, instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting natural phenomena. Instrumentalism: According to instrumentalists, a successful scientific theory reveals nothing that can be known to be either true or false about nature's unobservable objects, properties or processes. Scientific theory is merely a tool whereby humans predict observations in a particular domain of nature by formulating laws, which state or summarize regularities, while theories themselves do not reveal supposedly hidden aspects of nature that somehow explain these laws. Instrumentalism is a perspective originally introduced by Pierre Duhem in 1906. Rejecting scientific realism's ambitions to uncover metaphysical truth about nature, instrumentalism is usually categorized as an antirealism, although its mere lack of commitment to scientific theory's realism can be termed nonrealism. Instrumentalism merely bypasses debate concerning whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, or is an excitation mode of a region of a field, or is something else altogether. Instrumentalism holds that theoretical terms need only be useful to predict the phenomena, the observed outcomes. There are multiple versions of instrumentalism. History: British empiricism Newton's theory of motion, whereby any object instantly interacts with all other objects across the universe, motivated the founder of British empiricism, John Locke, to speculate that matter is capable of thought. The next leading British empiricist, George Berkeley, argued that an object's putative primary qualities as recognized by scientists, such as shape, extension, and impenetrability, are inconceivable without the putative secondary qualities of color, hardness, warmth, and so on. He also asked how or why an object could properly be conceived to exist independently of any perception of it. Berkeley did not object to everyday talk about the reality of objects, but instead took issue with the talk of philosophers, who spoke as if they knew something beyond sensory impressions that ordinary folk did not. For Berkeley, a scientific theory does not state causes or explanations, but simply identifies perceived types of objects and traces their typical regularities. Berkeley thus anticipated the basis of what Auguste Comte in the 1830s called positivism, although Comtean positivism added other principles concerning the scope, method, and uses of science that Berkeley would have disavowed. Berkeley also noted the usefulness of a scientific theory having terms that merely serve to aid calculations without having to refer to anything in particular, so long as they proved useful in practice. Berkeley thus anticipated the insight that logical positivists—who originated in the late 1920s, but who, by the 1950s, had softened into logical empiricists—would be compelled to accept: theoretical terms in science do not always translate into observational terms. The last great British empiricist, David Hume, posed a number of challenges to Francis Bacon's inductivism, which had been the prevailing, or at least the professed, view concerning the attainment of scientific knowledge.
Regarding himself as having placed his own theory of knowledge on par with Newton's theory of motion, Hume supposed that he had championed inductivism over scientific realism. Upon reading Hume's work, Immanuel Kant was "awakened from dogmatic slumber" and thus sought to neutralise any threat to science posed by Humean empiricism. Kant would develop the first stark philosophy of physics. History: Transcendental idealism To save Newton's law of universal gravitation, Immanuel Kant reasoned that the mind is the precondition of experience and thus serves as the bridge from the noumena, which are how the world's things exist in themselves, to the phenomena, which are humans' recognized experiences. The mind itself contains the structure that determines space, time, and substance: the mind's own categorization of noumena renders space Euclidean, time constant, and objects' motions exhibiting the very determinism predicted by Newtonian physics. Kant apparently presumed that the human mind, rather than being a phenomenon that had itself evolved, had been predetermined and set forth upon the formation of humankind. In any event, the mind also was the veil of appearance that scientific methods could never lift. And yet the mind could ponder itself and discover such truths, although not on a theoretical level, but only by means of ethics. Kant's metaphysics, then, transcendental idealism, secured science from doubt—in that it was a case of "synthetic a priori" knowledge ("universal, necessary and informative")—and yet discarded hope of scientific realism. History: Logical empiricism Since the mind has virtually no power to know anything beyond direct sensory experience, Ernst Mach's early version of logical positivism (empirio-criticism) verged on idealism. It was alleged to be even a surreptitious solipsism, whereby all that exists is one's own mind. Mach's positivism also strongly asserted the ultimate unity of the empirical sciences. Mach's positivism asserted phenomenalism as the new basis of scientific theory, with all scientific terms referring to either actual or potential sensations, thus eliminating hypotheses while permitting such seemingly disparate scientific theories as the physical and the psychological to share terms and forms. Phenomenalism was insuperably difficult to implement, yet it heavily influenced a new generation of philosophers of science, who emerged in the 1920s, termed themselves logical positivists, and pursued a program termed verificationism. Logical positivists aimed not to instruct or restrict scientists, but to enlighten and structure philosophical discourse, to render a scientific philosophy that would verify philosophical statements as well as scientific theories, and to align all human knowledge into a scientific worldview, freeing humankind from so many of its problems due to confused or unclear language. History: The verificationists expected a strict gap between theory and observation, mirrored by a theory's theoretical terms versus observable terms. Believing a theory's posited unobservables always to correspond to observations, the verificationists viewed a scientific theory's theoretical terms, such as electron, as metaphorical or elliptical ways of referring to observations, such as a white streak in a cloud chamber. They believed that scientific terms lacked meanings unto themselves, but acquired meanings from the logical structure that was the entire theory, which in turn matched patterns of experience.
So by translating theoretical terms into observational terms and then decoding the theory's mathematical/logical structure, one could check whether the statement indeed matched patterns of experience, and thereby verify the scientific theory as false or true. Such verification would be possible, as never before in science, since translation of theoretical terms into observational terms would make the scientific theory purely empirical, not metaphysical. Yet the logical positivists ran into insuperable difficulties. Moritz Schlick debated with Otto Neurath over foundationalism—the traditional view traced to Descartes as founder of modern Western philosophy—whereupon only nonfoundationalism was found tenable. Science, then, could not find a secure foundation of indubitable truth. History: And since science aims to reveal not private but public truths, verificationists switched from phenomenalism to physicalism, whereby scientific theory refers to objects observable in space and at least in principle already recognizable by physicists. Finding strict empiricism untenable, verificationism underwent a "liberalization of empiricism". Rudolf Carnap even suggested that empiricism's basis was pragmatic. Recognizing that verification—proving a theory false or true—was unattainable, they discarded that demand and focused on confirmation theory. Carnap sought simply to quantify a universal law's degree of confirmation—its probable truth—but, despite his great mathematical and logical skill, discovered equations that never operated to yield a degree of confirmation above zero. Carl Hempel found the paradox of confirmation. By the 1950s, the verificationists had established philosophy of science as a subdiscipline within academia's philosophy departments. By 1962, verificationists had asked and endeavored to answer seemingly all the great questions about scientific theory. Their discoveries showed that the idealized scientific worldview was naively mistaken. By then the leader of the legendary venture, Hempel raised the white flag that signaled verificationism's demise. Suddenly striking Western society, then, was Kuhn's landmark thesis, introduced by none other than Carnap, verificationism's greatest firebrand. Instrumentalism exhibited by scientists often does not even discern unobservable from observable entities. History: Historical turn From the 1930s until Thomas Kuhn's 1962 The Structure of Scientific Revolutions, there were roughly two prevailing views about the nature of science. The popular view was scientific realism, which usually involved a belief that science was progressively unveiling a truer view, and building a better understanding, of nature. The professional approach was logical empiricism, wherein a scientific theory was held to be a logical structure whose terms all ultimately refer to some form of observation, while an objective process neutrally arbitrates theory choice, compelling scientists to decide which scientific theory was superior. Physicists knew better, but, busy developing the Standard Model, were so steeped in quantum field theory that their talk, largely metaphorical, perhaps even metaphysical, was unintelligible to the public, while the steep mathematics warded off philosophers of physics. By the 1980s, physicists regarded not particles but fields as the more fundamental, and no longer even hoped to discover what entities and processes might be truly fundamental to nature, perhaps not even the field.
Kuhn had not claimed to have developed a novel thesis, but instead hoped to synthesize more usefully recent developments in the philosophy and history of science. History: Scientific realism One scientific realist, Karl Popper, rejected all variants of positivism via its focus on sensations rather than realism, and developed critical rationalism instead. Popper alleged that instrumentalism reduces basic science to what is merely applied science. The British physicist David Deutsch, in his much later 1997 book The Fabric of Reality, followed Popper's critique of instrumentalism and argued that a scientific theory stripped of its explanatory content would be of strictly limited utility. History: Constructive empiricism as a form of instrumentalism Bas van Fraassen's (1980) project of constructive empiricism focuses on belief in the domain of the observable, so for this reason it is described as a form of instrumentalism. In the philosophy of mind: In the philosophy of mind, instrumentalism is the view that propositional attitudes like beliefs are not actually concepts on which we can base scientific investigations of mind and brain, but that acting as if other beings have beliefs is a successful strategy. Relation to pragmatism: Instrumentalism is closely related to pragmatism, the position that practical consequences are an essential basis for determining meaning, truth or value. Sources: Torretti, Roberto, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), Berkeley, pp. 98, 101–4.
**Null instantiation** Null instantiation: In frame semantics, a theory of linguistic meaning, null instantiation is the name of a category used to annotate, or tag, absent semantic constituents or frame elements (Fillmore et al. 2003, Section 3.4). Frame semantics, best exemplified by the FrameNet project, views words as evoking frames of knowledge and frames as typically involving multiple components, called frame elements (e.g. buyer and goods in an acquisition). The term null refers to the fact that the frame element in question is absent. The logical object of the term instantiation refers to the frame element itself. So, null instantiation is an empty instantiation of a frame element. Ruppenhofer and Michaelis postulate an implicational regularity tying the interpretation type of an omitted argument to the frame membership of its predicator: "If a particular frame element role is lexically omissible under a particular interpretation (either anaphoric or existential) for one LU [lexical unit] in a frame, then for any other LUs in the same frame that allow the omission of this same FE [frame element], the interpretation of the missing FE is the same." (Ruppenhofer and Michaelis 2014: 66) Definite null instantiation: Definite null instantiation is the absence of a frame element that is recoverable from the context. It is similar to a zero anaphor. Indefinite null instantiation: Indefinite null instantiation is the absence of the object of a potentially transitive verb such as eat or drink. Constructional null instantiation: Constructional null instantiation is the absence of a frame element due to a syntactic construction, e.g. the optional omission of agents in passive sentences.
**Segmental arterial mediolysis** Segmental arterial mediolysis: Segmental arterial mediolysis (SAM) is a rare disorder of the arteries characterized by the development of aneurysms, blood clots, narrowing of the arteries (stenoses), and blood collections (hematomas) in the affected distribution. SAM most commonly affects the arteries supplying the intestines and abdominal organs. Signs and Symptoms: These vary depending on the location of the affected blood vessels. Gastrointestinal signs and symptoms include acute abdominal pain (the most common), flank pain, nausea, vomiting, diarrhea, bloody stools, and back pain. Nervous system signs and symptoms include headache and stroke. The most severe signs occur if an aneurysm ruptures, potentially resulting in shock, loss of consciousness, bleeding into the abdominal cavity, or bleeding into the brain. Mechanism: The middle layer of an artery, called the media and made of smooth muscle, is damaged. Mediolysis occurs when the smooth muscle cells in the area of damage are destroyed. Small gaps form in the wall of the artery, which then fill with blood. The gaps create weakness in the wall of the artery, allowing increasing pressure from blood to expand them, resulting in an aneurysm. Aneurysms have the potential to rupture. Diagnosis: Segmental arterial mediolysis is often diagnosed after clinical presentation with the symptoms above, followed by CT angiogram or MRI demonstrating aneurysm(s). The gold-standard method for confirming the diagnosis is surgical resection of the affected area of blood vessel followed by histologic investigation under a microscope. Segmental arterial mediolysis must be differentiated from fibromuscular dysplasia, atherosclerosis, and systemic vasculitides including polyarteritis nodosa, Takayasu's arteritis, Behcet's disease, cystic medial necrosis, and cystic adventitial artery disease. Treatment: Patients presenting with bleeding into the abdominal cavity may require blood transfusions and emergent intervention with coil embolization via catheter angiography. Patients without active bleeding but with diagnosed aneurysms should have strict blood pressure control with antihypertensive drugs to decrease the risk of aneurysm rupture. Epidemiology: Since it was first reported in 1976 there have been 101 documented cases of segmental arterial mediolysis. Although typically seen in older patients, with an average age of 57 years, it can affect patients of any age and does not favor one gender over the other.
**Variable and attribute (research)** Variable and attribute (research): In science and research, an attribute is a quality of an object (person, thing, etc.). Attributes are closely related to variables. A variable is a logical set of attributes. Variables can "vary" – for example, be high or low. How high, or how low, is determined by the value of the attribute (and in fact, an attribute could be just the word "low" or "high"). (For example see: Binary option) While an attribute is often intuitive, the variable is the operationalized way in which the attribute is represented for further data processing. In data processing, data are often represented by a combination of items (objects organized in rows) and multiple variables (organized in columns). Variable and attribute (research): Values of each variable statistically "vary" (or are distributed) across the variable's domain. A domain is the set of all possible values that a variable is allowed to have. The values are ordered in a logical way and must be defined for each variable. Domains can be bigger or smaller. The smallest possible domains belong to variables that can have only two values, also called binary (or dichotomous) variables. Bigger domains belong to non-dichotomous variables and those with a higher level of measurement. (See also domain of discourse.) Semantically, greater precision can be obtained when considering an object's characteristics by distinguishing 'attributes' (characteristics that are attributed to an object) from 'traits' (characteristics that are inherent to the object). Examples: Age is an attribute that can be operationalized in many ways. It can be dichotomized so that only two values – "old" and "young" – are allowed for further data processing. In this case the attribute "age" is operationalized as a binary variable. If more than two values are possible and they can be ordered, the attribute is represented by an ordinal variable, such as "young", "middle age", and "old". It can also be operationalized with numeric values, such as 1, 2, 3, ..., 99. The "social class" attribute can be operationalized in similar ways to age, including "lower", "middle" and "upper class", and each class could be differentiated between upper and lower, thus changing the three attributes into six (see the model proposed by William Lloyd Warner), or it could use different terminology (such as the working class, as in the model by Gilbert and Kahl).
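To make the idea of operationalization concrete, the following is a minimal sketch in Python; the function names and the cut-off points (30 and 60) are illustrative assumptions and not taken from the article.

```python
# Illustrative sketch: operationalizing the attribute "age" as different variables.
# The cut-off points (30, 60) and category labels are arbitrary assumptions.

def as_binary(age: int) -> str:
    """Dichotomous variable: domain = {"young", "old"}."""
    return "young" if age < 60 else "old"

def as_ordinal(age: int) -> str:
    """Ordinal variable: domain = {"young", "middle age", "old"}, logically ordered."""
    if age < 30:
        return "young"
    if age < 60:
        return "middle age"
    return "old"

def as_numeric(age: int) -> int:
    """Numeric variable: the attribute is carried over directly as a number."""
    return age

if __name__ == "__main__":
    ages = [18, 45, 72]          # items (rows); "age" is one variable (column)
    for a in ages:
        print(a, as_binary(a), as_ordinal(a), as_numeric(a))
```

The same attribute thus yields variables with domains of different sizes, from the smallest (binary) to a larger numeric domain, mirroring the examples in the text.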
**Pulmonary valve stenosis** Pulmonary valve stenosis: Pulmonary valve stenosis (PVS) is a heart valve disorder. Blood going from the heart to the lungs passes through the pulmonary valve, whose purpose is to prevent blood from flowing back to the heart. In pulmonary valve stenosis this opening is too narrow, leading to a reduction in the flow of blood to the lungs. While the most common cause of pulmonary valve stenosis is congenital heart disease, it may also be due to a malignant carcinoid tumor. Both stenosis of the pulmonary artery and pulmonary valve stenosis are forms of pulmonic stenosis (nonvalvular and valvular, respectively), but pulmonary valve stenosis accounts for 80% of pulmonic stenosis. PVS was the key finding that led Jacqueline Noonan to identify the syndrome now called Noonan syndrome. Symptoms and signs: Symptoms consistent with pulmonary valve stenosis include heart murmur, cyanosis, dyspnea, dizziness, upper thorax pain, and developmental disorders. Cause: A very high percentage of cases of pulmonary valve stenosis are congenital; right ventricular outflow is hindered or obstructed by the stenosis. The cause is in turn divided into valvular, external and intrinsic (when it is acquired). Pathophysiology: The pathophysiology of pulmonary valve stenosis consists of the valve leaflets becoming too thick and therefore failing to separate from one another, which can cause high pulmonary pressure and pulmonary hypertension. This, however, does not mean the cause is always congenital. The left ventricle can be changed physically; these changes are a direct result of right ventricular hypertrophy. Once the obstruction is relieved, the left ventricle can return to normal. Diagnosis: The diagnosis of pulmonary valve stenosis can be made using stethoscopic auscultation of the heart, which can reveal a systolic ejection murmur that is best heard at the second left intercostal space. Transthoracic or transesophageal echocardiography can provide a more accurate diagnosis. Obstetric ultrasonography can be useful for the in utero diagnosis of pulmonary valve stenosis and other congenital cardiovascular defects such as Tetralogy of Fallot. Diagnosis: Other conditions to consider in the differential diagnosis of pulmonic valvular stenosis include infundibular stenosis and pulmonary artery stenosis. Treatment: In terms of treatment for pulmonary valve stenosis, valve replacement or surgical repair (depending on whether the stenosis is in the valve or the vessel) may be indicated. If the valve stenosis is of congenital origin, balloon valvuloplasty is another option, depending on the case. Valves made from animal or human tissue are used for valve replacement; in adults, metal valves can also be used. Epidemiology: The epidemiology of pulmonary valve stenosis can be summed up by the congenital aspect, which accounts for the majority of cases; in broad terms PVS is rare in the general population.
**Industrial mineral** Industrial mineral: Industrial resources (minerals) are geological materials that are mined for their commercial value, which are not fuel (fuel minerals or mineral fuels) and are not sources of metals (metallic minerals) but are used in the industries based on their physical and/or chemical properties. They are used in their natural state or after beneficiation either as raw materials or as additives in a wide range of applications. Examples and applications: Typical examples of industrial rocks and minerals are limestone, clays, sand, gravel, diatomite, kaolin, bentonite, silica, barite, gypsum, and talc. Some examples of applications for industrial minerals are construction, ceramics, paints, electronics, filtration, plastics, glass, detergents and paper. In some cases, even organic materials (peat) and industrial products or by-products (cement, slag, silica fume) are categorized under industrial minerals, as well as metallic compounds mainly utilized in non-metallic form (as an example most titanium is utilized as an oxide TiO2 rather than Ti metal). Examples and applications: The evaluation of raw materials to determine their suitability for use as industrial minerals requires technical test-work, mineral processing trials and end-product evaluation; free to download evaluation manuals are available for the following industrial minerals: limestone, flake graphite, diatomite, kaolin, bentonite and construction materials. These are available from the British Geological Survey external link 'Industrial Minerals in BGS' with regular industry news and reports published in Industrial Minerals magazine. List of industrial minerals by name: Aggregates Alunite Asbestos Asphalt, Natural Ball clays Baryte Bentonite / Diatomite / Fuller's earth Borates Brines Carbonatites Corundum Diamond Dimension stone Feldspar and Nepheline - Syenite Fluorspar Garnet Gem mineral Granite Graphite Gypsum Halite Kaolin Kyanite / Sillimanite / Andalusite Limestone / Dolomite Marble Mica Mirabilite Natron Nahcolite Novaculite Olivine Perlite Phosphate Potash –Potassium minerals Pumice Quartz Slate Silica sand / Tripoli Sulfur Talc Vermiculite Wollastonite Zeolites
**Ternary search tree** Ternary search tree: In computer science, a ternary search tree is a type of trie (sometimes called a prefix tree) where nodes are arranged in a manner similar to a binary search tree, but with up to three children rather than the binary tree's limit of two. Like other prefix trees, a ternary search tree can be used as an associative map structure with the ability for incremental string search. However, ternary search trees are more space efficient compared to standard prefix trees, at the cost of speed. Common applications for ternary search trees include spell-checking and auto-completion. Description: Each node of a ternary search tree stores a single character, an object (or a pointer to an object depending on implementation), and pointers to its three children conventionally named equal kid, lo kid and hi kid, which can also be referred to, respectively, as the middle (child), lower (child) and higher (child). A node may also have a pointer to its parent node as well as an indicator as to whether or not the node marks the end of a word. The lo kid pointer must point to a node whose character value is less than the current node's. The hi kid pointer must point to a node whose character is greater than the current node's. The equal kid points to the next character in the word. Description: The figure below shows a ternary search tree with the strings "cute", "cup", "at", "as", "he", "us" and "i":

          c
        / | \
       a  u  h
       |  |  | \
       t  t  e  u
      /  / |   / |
     s  p  e   i  s

As with other trie data structures, each node in a ternary search tree represents a prefix of the stored strings. All strings in the middle subtree of a node start with that prefix. Operations: Insertion Inserting a value into a ternary search tree can be defined recursively or iteratively, much as lookups are defined. This recursive method is continually called on nodes of the tree given a key which gets progressively shorter by pruning characters off the front of the key. If this method reaches a node that has not been created, it creates the node and assigns it the character value of the first character in the key. Whether a new node is created or not, the method checks to see if the first character in the string is greater than or less than the character value in the node and makes a recursive call on the appropriate node as in the lookup operation. If, however, the key's first character is equal to the node's value then the insertion procedure is called on the equal kid and the key's first character is pruned away. Like binary search trees and other data structures, ternary search trees can become degenerate depending on the order of the keys. Inserting keys in alphabetical order is one way to attain the worst possible degenerate tree. Inserting the keys in random order often produces a well-balanced tree. Operations: Search To look up a particular node or the data associated with a node, a string key is needed. A lookup procedure begins by checking the root node of the tree and determining which of the following conditions has occurred. If the first character of the string is less than the character in the root node, a recursive lookup can be called on the tree whose root is the lo kid of the current root. Similarly, if the first character is greater than the character in the current node, then a recursive call can be made to the tree whose root is the hi kid of the current node.
Operations: As a final case, if the first character of the string is equal to the character of the current node then the function returns the node if there are no more characters in the key. If there are more characters in the key then the first character of the key must be removed and a recursive call is made given the equal kid node and the modified key. Operations: This can also be written in a non-recursive way by using a pointer to the current node and a pointer to the current character of the key. Operations: Deletion The delete operation consists of searching for a key string in the search tree and finding a node, call it firstMid, such that the path from the middle child of firstMid to the end of the search path for the key string has no left or right children. This would represent a unique suffix in the ternary tree corresponding to the key string. If there is no such path, this means that the key string is either fully contained as a prefix of another string, or is not in the search tree. Many implementations make use of an end-of-string character to ensure only the latter case occurs. The path is then deleted from firstMid.mid to the end of the search path. In the case that firstMid is the root, the key string must have been the last string in the tree, and thus the root is set to null after the deletion. Running time: The running time of ternary search trees varies significantly with the input. Ternary search trees run best when given several similar strings, especially when those strings share a common prefix. Alternatively, ternary search trees are effective when storing a large number of relatively short strings (such as words in a dictionary). Running time: Running times for ternary search trees are similar to binary search trees, in that they typically run in logarithmic time, but can run in linear time in the degenerate (worst) case. Further, the size of the strings must also be kept in mind when considering runtime. For example, in the search path for a string of length k, there will be k traversals down middle children in the tree, as well as a logarithmic number of traversals down left and right children in the tree. Thus, in a ternary search tree on a small number of very large strings the lengths of the strings can dominate the runtime. Comparison to other data structures: Tries While being slower than other prefix trees, ternary search trees can be better suited for larger data sets due to their space-efficiency. Comparison to other data structures: Hash maps Hash tables can also be used in place of ternary search trees for mapping strings to values. However, hash maps also frequently use more memory than ternary search trees (but not as much as tries). Additionally, hash maps are typically slower at reporting a string that is not in the same data structure, because they must compare the entire string rather than just the first few characters. There is some evidence that shows ternary search trees running faster than hash maps. Additionally, hash maps do not allow for many of the uses of ternary search trees, such as near-neighbor lookups. Comparison to other data structures: DAFSAs (deterministic acyclic finite state automaton) If storing dictionary words is all that is required (i.e., storage of information auxiliary to each word is not required), a minimal deterministic acyclic finite state automaton (DAFSA) would use less space than a trie or a ternary search tree.
This is because a DAFSA can compress identical branches from the trie which correspond to the same suffixes (or parts) of different words being stored. Uses: Ternary search trees can be used to solve many problems in which a large number of strings must be stored and retrieved in an arbitrary order. Some of the most common or most useful of these are: anywhere a trie could be used but a less memory-consuming structure is preferred; as a quick and space-saving data structure for mapping strings to other data; to implement auto-completion; as a spell checker; for near-neighbor searching (of which spell-checking is a special case); as a database, especially when indexing by several non-key fields is desirable; and in place of a hash table.
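To make the insertion and search procedures described above concrete, the following is a minimal sketch in Python; the identifier names (such as TSTNode) and the purely recursive style are illustrative choices, not part of the article.

```python
class TSTNode:
    """One node of a ternary search tree: a character, an optional value,
    and lo / eq / hi children (lower, equal/middle, higher)."""
    def __init__(self, char):
        self.char = char
        self.value = None                  # payload; also marks the end of a stored key
        self.lo = self.eq = self.hi = None

def insert(node, key, value):
    """Insert key (a non-empty string) with an associated value; return the (sub)tree root."""
    char = key[0]
    if node is None:
        node = TSTNode(char)               # create the node if this path does not exist yet
    if char < node.char:
        node.lo = insert(node.lo, key, value)
    elif char > node.char:
        node.hi = insert(node.hi, key, value)
    elif len(key) > 1:
        node.eq = insert(node.eq, key[1:], value)   # prune the first character, descend the middle child
    else:
        node.value = value                 # last character of the key: store the value here
    return node

def search(node, key):
    """Return the value stored for key, or None if the key is absent."""
    if node is None or not key:
        return None
    char = key[0]
    if char < node.char:
        return search(node.lo, key)
    if char > node.char:
        return search(node.hi, key)
    if len(key) > 1:
        return search(node.eq, key[1:])
    return node.value

# Usage: build the example tree from the figure above and look up a few keys.
root = None
for word in ["cute", "cup", "at", "as", "he", "us", "i"]:
    root = insert(root, word, True)
print(search(root, "cup"))   # True
print(search(root, "cat"))   # None
```

The sketch stores the value on the node holding the last character of a key, mirroring the end-of-word indicator mentioned in the description; deletion and iterative variants would follow the same node layout.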
**Sunbreak** Sunbreak: A sunbreak is a natural phenomenon in which sunlight obscured over a relatively large area penetrates the obscuring material in a localized space. The typical example is sunlight shining through a hole in cloud cover. A sunbreak piercing clouds normally produces a visible shaft of light reflected by atmospheric dust and/or moisture, called a sunbeam. Another form of sunbreak occurs when sunlight passes through a gap temporarily aligned with the position of the sun into an area otherwise shadowed by surrounding large buildings. Sunbreak: The word is considered by some to have origins in Pacific Northwest English. In art: Artists such as cartoonists and filmmakers often use sunbreaks to show protection or relief being brought upon an area of land by God or a receding storm.
**WBXML** WBXML: WAP Binary XML (WBXML) is a binary representation of XML. It was developed by the WAP Forum as part of the Wireless Application Protocol family of standards and has been maintained since 2002 by the Open Mobile Alliance as a standard allowing XML documents to be transmitted in a compact manner over mobile networks; it was also submitted to the World Wide Web Consortium as a proposed standard. The MIME media type application/vnd.wap.wbxml has been defined for documents that use WBXML. WBXML: WBXML is used by a number of mobile phones. Usage includes Exchange ActiveSync for synchronizing device settings, address book, calendar, notes and emails; SyncML for transmitting address book and calendar data; Wireless Markup Language; Wireless Village; OMA DRM for its rights language; and over-the-air programming for sending network settings to a phone.
**Video bingo** Video bingo: Video bingo, or an electronic bingo machine, is a type of slot machine or amusement-with-prize machine (AWP) on which, instead of the typical reel-style game play, one or more bingo cards are played. Classes and styles: Video bingo machines come in both Class II and Class III formats; the name is not a reference to the Class II bingo machine, a term that refers to the software's legal classification. Because it is not exclusive to Class II gaming, it is sometimes referred to as a theme game among slot manufacturers. Regulation: Video bingo machines in casinos in the United States are regulated by state or Indian gaming agencies. Since video bingo machines are available in Class II and Class III formats, they are subject to the Indian Gaming Regulatory Act. Also, the operation and licensing of video bingo machines are regulated separately by local state law. According to Louisiana Chapter RS 4:724, video bingo machines are available to the public, and these machines must meet a list of requirements concerning prize size, license type and locations allowed for placement. Under North Carolina legislation, video poker and video bingo machines became illegal as of July 1, 2007; it is also prohibited to store or manufacture such machines within the state. Manufacturers: Slot manufacturers operating in the United States which sell video bingo slot machines include Zitro.
**Internet Explorer** Internet Explorer: Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated as IE or MSIE) is a deprecated (or, for most modern Windows editions, discontinued) series of graphical web browsers developed by Microsoft for the Windows line of operating systems. While IE has been discontinued on most Windows editions, it remains supported on certain editions of Windows, such as Windows 10 LTSB/LTSC. It was first released in 1995 as part of the add-on package Plus! for Windows 95. Later versions were available as free downloads or in service packs and were included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. New feature development for the browser was discontinued in 2016, and support ended on June 15, 2022, in favor of its successor, Microsoft Edge. Internet Explorer: Internet Explorer was once the most widely used web browser, attaining a peak of 95% usage share by 2003. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launches of Firefox (2004) and Google Chrome (2008) and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox, versions for platforms Microsoft no longer supports (Internet Explorer for Mac and Internet Explorer for UNIX, i.e. Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE, Windows Phone, and, previously, based on Internet Explorer 7, for Windows Phone 7. Internet Explorer: The browser has been scrutinized throughout its development for its use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and for security and privacy vulnerabilities, and the United States and the European Union have alleged that the integration of Internet Explorer with Windows has been to the detriment of fair browser competition. Internet Explorer: Internet Explorer 7 is still supported on Windows Embedded Compact 2013, and Internet Explorer lives on until at least 2029 through IE Mode, a feature of Microsoft Edge that enables Edge to display web pages using Internet Explorer 11's Trident layout engine and other core components. Through IE Mode, the underlying technology of Internet Explorer 11 partially exists on versions of Windows that do not support IE11 as a proper application, including Windows 11, Windows Server Insider Build 22463 and Windows Server Insider Build 25110. History: Internet Explorer 1 The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to former project lead Ben Slivka, used source code from Spyglass, Inc. Mosaic, an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser.
In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. By including it free of charge with their operating system, they did not have to pay royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997.Microsoft was sued by SyNet Inc. in 1996, for trademark infringement, claiming it owned the rights to the name "Internet Explorer." It ended with Microsoft paying $5 million to settle the lawsuit. History: Internet Explorer 2 Internet Explorer 2 is the second major version of Internet Explorer, released on November 22, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1. Internet Explorer 3 Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996, for Microsoft Windows and on January 8, 1997, for Apple Mac OS. Internet Explorer 4 Internet Explorer 4 is the fourth major version of Internet Explorer, released in September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX. It was the first version of Internet Explorer to use the Trident web engine. Internet Explorer 5 Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999, for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1). Internet Explorer 6 Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001, for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003. Internet Explorer 7 Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006, for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009. IE7 introduces tabbed browsing. Internet Explorer 8 Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (later default was Internet Explorer 11) and Windows Server 2008 R2. Internet Explorer 9 Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011, for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update. Internet Explorer 10 Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012, and is the default web browser for Windows 8 and Windows Server 2012. It became available for Windows 7 SP1 and Windows Server 2008 R2 SP1 in February 2013. History: Internet Explorer 11 Internet Explorer 11 is featured in Windows 8.1, Windows Server 2012 R2 and Windows RT 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs. 
It features a major update to its developer tools, enhanced scaling for high DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning, and HTML5 full screen, and is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions. Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks. Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE." It also announces compatibility with Gecko (the browser engine of Firefox). History: Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013. Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard (the only still-supported edition of Windows 8) in April 2019. History: End of life Microsoft Edge was officially unveiled on January 21, 2015, as "Project Spartan." On April 29, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser in Windows 10. However, Internet Explorer remained the default web browser on the Windows 10 Long Term Servicing Channel (LTSC) and on Windows Server until 2021, primarily for enterprise purposes. Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other legacy web technologies. The browser's MSHTML rendering engine also remains for compatibility reasons. History: Additionally, Microsoft Edge shipped with the "Internet Explorer mode" feature, which enables support for legacy internet applications. This is possible through use of the Trident MSHTML engine, the rendering code of Internet Explorer. Microsoft has committed to supporting Internet Explorer mode at least through 2029, with a one-year notice before it is discontinued. With the release of Microsoft Edge, the development of new features for Internet Explorer ceased. Internet Explorer 11 was the final release, and Microsoft began the process of deprecating Internet Explorer. During this process, it will still be maintained as part of Microsoft's support policies. Since January 12, 2016, only the latest version of Internet Explorer available for each version of Windows has been supported. At the time, nearly half of Internet Explorer users were using an unsupported version. In February 2019, Microsoft Chief of Security Chris Jackson recommended that users stop using Internet Explorer as their default browser. Various websites have dropped support for Internet Explorer. On June 1, 2020, the Internet Archive removed Internet Explorer from its list of supported browsers, due to the browser's dated nature. Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021.
WordPress also dropped support for the browser in July 2021.Microsoft disabled the normal means of launching Internet Explorer in Windows 11, but it is still possible for users to launch the browser from the Control Panel's browser toolbar settings or via PowerShell.On June 15, 2022, Internet Explorer 11 support ended for the Windows 10 Semi-Annual Channel (SAC). Users on these versions of Windows 10 were redirected to Microsoft Edge starting on February 14, 2023, and visual references to the browser (such as icons on the taskbar) would have been removed on June 13, 2023. However, on May 19, 2023 various organizations disapproved, leading Microsoft to withdraw the change. History: Other versions of Windows that were still supported at the time were unaffected. Specifically, Windows 7 ESU, Windows 8.x, Windows RT; Windows Server 2008/R2 ESU, Windows Server 2012/R2 and later; and Windows 10 LTSB/LTSC continued to receive updates until their respective end of life dates.On other versions of Windows, Internet Explorer will still be supported until their own end of support dates. IE7 will be supported until October 10, 2023 alongside the end of support for Windows Embedded Compact 2013, while IE9 will be supported until January 9, 2024 alongside the end of ESU support for Azure customers on Windows Server 2008. Barring additional changes to the support policy, Internet Explorer 11 will be supported until January 13, 2032, concurrent with the end of support for Windows 10 IoT Enterprise LTSC 2021. Features: Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time. Standards support Internet Explorer, using the MSHTML (Trident) browser engine: Supports HTML 4.01, parts of HTML5, CSS Level 1, Level 2, and Level 3, XML 1.0, and DOM Level 1, with minor implementation gaps. Fully supports XSLT 1.0 as well as an obsolete Microsoft dialect of XSLT often referred to as WD-xsl, which was loosely based on the December 1998 W3C Working Draft of XSL. Support for XSLT 2.0 lies in the future: semi-official Microsoft bloggers have indicated that development is underway, but no dates have been announced. Almost full conformance to CSS 2.1 has been added in the Internet Explorer 8 release. The MSHTML browser engine in Internet Explorer 9 in 2011, scored highest in the official W3C conformance test suite for CSS 2.1 of all major browsers. Supports XHTML in Internet Explorer 9 (MSHTML Trident version 5.0). Prior versions can render XHTML documents authored with HTML compatibility principles and served with a text/html MIME-type. Features: Supports a subset of SVG in Internet Explorer 9 (MSHTML Trident version 5.0), excluding SMIL, SVG fonts and filters.Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript. Features: Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by W3C. Non-standard extensions Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. 
This has resulted in several web pages that appear broken in standards-compliant web browsers and has introduced the need for a "quirks mode" to allow for rendering improper elements meant for Internet Explorer in these other browsers. Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers. Features: These include the innerHTML property, which provides access to the HTML string within an element, was part of IE 5, and was standardized as part of HTML 5 roughly 15 years later, after all other browsers had implemented it for compatibility; the XMLHttpRequest object, which allows the sending of an HTTP request and receiving of an HTTP response, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML. Features: Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects the HTML elements with JScript behaviors (known as HTML Components, HTC); the HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL); and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9. Other non-standard behaviors include: support for vertical text, but in a syntax different from the W3C CSS3 candidate recommendation; support for a variety of image effects and page transitions, which are not found in W3C CSS; support for obfuscated script code, in particular JScript.Encode; and support for embedding EOT fonts in web pages. Features: Favicon Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files. Features: Usability and accessibility Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added respectively in Internet Explorer 6 and Internet Explorer 7. Tabbed browsing can also be added to older versions by installing MSN Search Toolbar or Yahoo Toolbar. Features: Cache Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file, known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc. Prior to IE7, clearing the cache used to clear the index, but the files themselves were not reliably removed, posing a potential security and privacy risk.
In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes. Features: Caching has been improved in IE9. Features: Group Policy Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication. Architecture: Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe: WinInet.dll is the protocol handler for HTTP, HTTPS, and FTP. It handles all network communication over these protocols. Architecture: URLMon.dll is responsible for MIME-type handling and download of web content, and provides a thread-safe wrapper around WinInet.dll and other protocol implementations. Architecture: MSHTML.dll houses the MSHTML (Trident) browser engine introduced in Internet Explorer 4, which is responsible for displaying the pages on-screen and handling the Document Object Model (DOM) of the web pages. MSHTML.dll parses the HTML/CSS file and creates the internal DOM tree representation of it. It also exposes a set of APIs for runtime inspection and modification of the DOM tree. The DOM tree is further processed by a browser engine which then renders the internal representation on screen. Architecture: IEFrame.dll contains the user interface and window of IE in Internet Explorer 7 and above. ShDocVw.dll provides the navigation, local caching and history functionalities for the browser. Architecture: BrowseUI.dll is responsible for rendering the browser user interface such as menus and toolbars.Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged-in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime that allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting. Architecture: Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level, each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. 
Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process as it will not be constrained by protected mode. Extensibility: Internet Explorer exposes a set of Component Object Model (COM) interfaces that allows add-ons to extend the functionality of the browser. Extensibility is divided into two types: browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats. It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls, which run on Windows only but have vast potential to extend the content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally or directly by a web site. Extensibility: Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component, the Add-on Performance Advisor. Add-on Performance Advisor shows a notification when one or more installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all. In addition, Windows RT cannot download or install ActiveX controls at all, although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer. Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells. Security: Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions. Security: Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing the accidental installation of malware. Security: Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected.
Security: In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited. Security: Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase. Security: On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords". Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update. In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs. In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox. A 2017 browser security white paper by X41 D-Sec comparing Google Chrome, Microsoft Edge, and Internet Explorer 11 came to similar conclusions, also based on sandboxing and support of legacy web technologies. Security: Security vulnerabilities Internet Explorer has been subjected to many security vulnerabilities and concerns, such that the volume of criticism for IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than the viewing of a malicious web page to install themselves. This is known as a "drive-by install". There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert. Security: A number of security flaws affecting IE originated not in the browser itself, but in ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities. Security: In 2008, Internet Explorer had a number of published security vulnerabilities.
According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year. Security: According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability which, dating back to 2008, had by then not been fixed for at least six hundred days. Microsoft said that it had known about this vulnerability, but that it was of exceptionally low severity, as the victim web site must be configured in a peculiar way for this attack to be feasible at all. In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer. Security: Vulnerability exploited in attacks on U.S. firms In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2). The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer. The Australian and French governments issued a similar warning a few days later. Security: Major vulnerability across versions On April 26, 2014, Microsoft issued a security advisory relating to CVE-2014-1776 (use-after-free vulnerability in Microsoft Internet Explorer 6 through 11), a vulnerability that could allow "remote code execution" in Internet Explorer versions 6 to 11. On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system. US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug was fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and advising users to take the additional step of ensuring their antivirus software is up to date. Symantec, a cyber security firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP." The vulnerability was resolved on May 1, 2014, with a security update. Market adoption and usage share: The adoption rate of Internet Explorer seems to be closely related to that of Microsoft Windows, as it is the default web browser that comes with Windows. Since the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, adoption accelerated greatly: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the infamous 'first browser war' against Netscape. Netscape Navigator was the dominant browser during 1995 and until 1997, but rapidly lost share to IE starting in 1998, and eventually slipped behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. The case was eventually won by AOL, but by then it was too late, as Internet Explorer had already become the dominant browser.
Market adoption and usage share: Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape. Market adoption and usage share: Firefox 1.0 had surpassed Internet Explorer 5 by early 2005, with Firefox 1.0 at 8 percent market share. Approximate usage over time is based on various usage share counters, averaged for the year overall, for the fourth quarter, or for the last month in the year, depending on the availability of references. According to StatCounter, Internet Explorer's market share fell below 50% in September 2010. In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter. Market adoption and usage share: Industry adoption Browser Helper Objects are also used by many search engine companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications. Removal: While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one. Removal: The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including the Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supports. Impersonation by malware: The popularity of Internet Explorer led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembled the real Internet Explorer but had fewer buttons and no search bar. If a user attempted to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this browser would be loaded instead. It also displayed a fake error message, claiming that the computer was infected with malware and that Internet Explorer had entered "Emergency Mode". It blocked access to legitimate sites such as Google if the user tried to access them.
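The DOM extensions mentioned in the Features section above (innerHTML, XMLHttpRequest and the designMode editing switch) were eventually standardized and now work in every major browser. A minimal TypeScript sketch of their typical use is given below; the element IDs and the request URL are illustrative placeholders rather than anything taken from this article.

```typescript
// Minimal sketch (browser TypeScript) of three APIs that originated as IE DOM
// extensions and were later standardized: innerHTML, XMLHttpRequest, designMode.
// "output", "editor" and "/api/greeting" are hypothetical placeholders.

function showGreeting(): void {
  const xhr = new XMLHttpRequest();          // originated in IE 5 as an ActiveX object
  xhr.open("GET", "/api/greeting");
  xhr.onload = () => {
    const target = document.getElementById("output");
    if (target !== null) {
      // innerHTML: read/write the serialized HTML inside an element.
      target.innerHTML = `<strong>${xhr.responseText}</strong>`;
    }
  };
  xhr.send();
}

function enableRichTextEditing(): void {
  // designMode: turns a whole (i)frame document into an editable rich-text surface.
  const frame = document.getElementById("editor") as HTMLIFrameElement | null;
  if (frame?.contentDocument) {
    frame.contentDocument.designMode = "on";
  }
}

showGreeting();
enableRichTextEditing();
```

In IE 5 and 6 the same request object had to be created as an ActiveX object (for example via new ActiveXObject("Msxml2.XMLHTTP")), which is why feature detection for XMLHttpRequest was a common pattern in scripts of that era.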
**Rock festival** Rock festival: A rock festival is an open-air rock concert featuring many different performers, typically spread over two or three days and having a campsite and other amenities and forms of entertainment provided at the venue. Some festivals are singular events, while others recur annually in the same location. Occasionally, a festival will focus on a particular genre (e.g., folk, heavy metal, world music), but many attempt to bring together a diverse lineup to showcase a broad array of popular music trends. History: Initially, some of the earliest rock festivals were built on the foundation of pre-existing jazz and blues festivals, but quickly evolved to reflect the rapidly changing musical tastes of the time. For example, the United Kingdom's National Jazz Festival was launched in Richmond from 26 to 27 August 1961. The first three of these annual outdoor festivals featured only jazz music, but by the fourth "Jazz & Blues Festival" in 1964, a shift had begun that incorporated some blues and pop artists into the lineup. In 1965, for the first time the event included more blues, pop and rock acts than jazz, and by 1966, when the event moved to the town of Windsor, the rock and pop acts clearly dominated the jazz artists.A similar, though more rapid, evolution occurred with Jazz Bilzen, a solely jazz festival that was inaugurated in 1965 in the Belgian city of Bilzen. The 1966 festival still featured mostly jazz acts. However, by the time of the third festival from 25 to 27 August 1967, rock and pop acts had edged out most of the jazz bands and become the main attraction.In the United States, rock festivals seemed to spring up with a more self-defined musical identity. Preceded by several precursor events in the San Francisco area, the first two rock festivals in the US were staged in northern California on consecutive weekends in the summer of 1967: the KFRC Fantasy Fair & Magic Mountain Music Festival on Mount Tamalpais (10–11 June) and the Monterey International Pop Festival (16–17 June). History: The concept caught fire and spread quickly as rock festivals took on a unique identity and attracted significant media attention around the world. By 1969, promoters were staging dozens of them. According to Bill Mankin, in their dawning age rock festivals were important socio-cultural milestones: "… it would not be an exaggeration to say that, over a few short years, rock festivals played a unique, significant – and underappreciated – role in fueling the countercultural shift that swept not only America but many other countries [during the 1960s]. It seems fitting… that one of the most enduring labels for the entire generation of that era was derived from a rock festival: the 'Woodstock Generation'."Reflecting their musical diversity and the then-common term 'pop music', for the first few years, particularly in the US, many rock festivals were called 'pop festivals'. This also served to distinguish them among the ticket-buying public from other, pre-existing types of music festivals such as jazz and folk festivals. By the end of 1972, the term 'pop festival' had virtually disappeared as festival promoters adopted more creative, unique and location-specific names to identify and advertise their events. 
While it was still in vogue, however, over-zealous promoters eager to capitalize on the festival concept made the most of it, with some using the term "Pop Festival" or "Rock Festival" to advertise events held on a single day or evening, often indoors, and featuring only a handful of acts. Today, rock festivals are usually open-air concerts spread out over two or more days, and many of the annual events are sponsored by the same organization. Features: Production and financing Several of the early rock festival organizers of the 1960s, such as Chet Helms, Tom Rounds, Alex Cooley and Michael Lang, helped create the blueprint for large-scale rock festivals in the United States, as did promoters such as Wally Hope in the United Kingdom. In various countries, the organizers of rock festivals have faced legal action from authorities, in part because such festivals have attracted large counterculture elements. In 1972, the Mar Y Sol Pop Festival in Manatí, Puerto Rico, attracted an estimated 30,000–35,000 people, and an arrest warrant was issued for promoter Alex Cooley, who avoided arrest by leaving the island before the festival was over. British Free Festival organizers Ubi Dwyer and Sid Rawle were imprisoned for attempting to promote a 1975 Windsor Festival. The British police would later outright attack free festival attendees at the 1985 Battle of the Beanfield. Features: Festivals may require millions of US dollars to be organized, with the money often gathered through fundraising and angel investors. Stages and sound systems While rock concerts typically feature a small lineup of rock bands playing a single stage, rock festivals often grow large enough to require several stages or venues with live bands playing concurrently. As rock music has increasingly been fused with other genres, sometimes stages will be devoted to a specific genre and may in turn become known and large enough to be seen as festivals themselves, such as The Glade at the famous Glastonbury Festival in England. Features: Advances in sound reinforcement systems beginning in the 1960s enabled larger and larger rock festival audiences to hear the performers' music with much better clarity and volume. The best example was the pioneering work of Bill Hanley, known as the "father of festival sound", who provided the sound systems for numerous rock festivals including Woodstock. Other examples included the Wall of Sound invented in the 1970s to allow the Grateful Dead to play to larger audiences. Features: Camping and crowd control Many festivals offer camping, either because lodging in the area is insufficient to support the crowd, or to allow easy multi-day access to the festival's features. Festival planning and logistics are frequently a focus of the media; some festivals, such as the heavily commercialized Woodstock 1999, were crowd control disasters, with insufficient water and other resources provided to audiences. Many early rock festivals successfully relied on volunteers for crowd control, for example individuals like Wavy Gravy and biker groups such as the Hells Angels and Grim Reapers Motorcycle Club. Gravy in particular called his security group the "Please Force," a reference to their non-intrusive tactics at keeping order, e.g., "Please don't do that, please do this instead". When asked by the press — who were the first to inform him that he and the rest of his commune were handling security — what kind of tools he intended to use to maintain order at Woodstock in 1969, his response was "Cream pies and seltzer bottles."
Other rock festivals hire private security or local police departments for crowd control, with varying degrees of success. Traveling festivals: A recent innovation is the traveling rock festival where many musical acts perform at multiple locations during a tour. Successful festivals are often held in subsequent years. The following is an incomplete list. Current rock festivals: The following is a list of festivals that predominantly feature rock genres that take place on a regular basis. Most are held at the same location on an annual basis. Some, like Farm Aid are held at different venues with each incarnation. Current rock festivals: Africa Botswana Overthrust Winter Metal Mania Festival (Ghanzi) Asia China Beijing Pop Festival (Beijing) Midi Modern Music Festival (Beijing) Modern Sky Festival (Beijing) India Bangalore Open Air (Karnataka) Thunderstrock Festival (Ranchi, Jharkhand) Indonesia Brotherground (Surabaya) Hammersonic Festival (Jakarta) Hellprint (Bandung) Rock In Celebes (Makassar) Rock In Solo (Surakarta, Central Java) Japan BLARE FEST (Nagoya) Download Festival (Chiba City) Fuji Rock Festival (Naeba) Loud Park Festival Rising Sun Rock Festival (Ishikari) Rock In Japan (Hitachinaka) Summer Sonic Festival (Chiba and Osaka) South Korea ETPFEST (Seoul) Jisan Valley Rock Festival (Icheon) Pentaport Rock Festival (Incheon) Rest of Asia Hohaiyan Rock Festival (Gongliao, Taiwan) Noise Metal Fest (Ulaanbaatar, Mongolia) Rock The World Festival (Kuala Lumpur, Malaysia) Silence Fest (Kathmandu, Nepal) Europe Austria Donauinselfest (Vienna, Austria) FM4 Frequency Festival (Salzburg, Austria) Kaltenbach Open Air (Styria, Austria) Nova Rock (Nickelsdorf, Austria) Belgium Alcatraz Hard Rock & Metal Festival (Kortrijk) Blast from the Past Festival (Kuurne) Graspop Metal Meeting (Dessel) Huginns Awakening Fest (Oostende) Headbanger's Balls Fest (Izegem) Ieperfest (Ypres) Metalworksfest (Kuurne) Czech Republic Brutal Assault (Jaroměř) Fluff Fest (Rokycany) Masters of Rock (Vizovice) Metalfest (Plzeň) Mighty Sounds (Tábor) Obscene Extreme (Trutnov) Rock for People (Hradec Králové) Trutnov Open Air Music Festival (Trutnov) Denmark Aalborg Metal Festival (Aalborg) Copenhell (Copenhagen, Zealand) Metal Magic Festival (Fredericia, Jutland) Royal Metal Fest (Aarhus, Jutland) Viborg Metal Festival (Viborg, Jutland) Estonia Hard Rock Laager (Vana-Vigala, Märjamaa Parish) Howls of Winter (Tallinn) Barbar Feast (Saula, Kose Parish) Finland France Eurockéennes (Belfort) Hellfest (Clisson) Motocultor Festival (Brittany) Plane'R Fest (Colombier-Saugnieu) Rock en Seine (Saint-Cloud) Sylak Open Air (Saint-Maurice-de-Gourdans) Germany Hungary Sziget Festival (Budapest, Hungary) Italy Agglutination Metal Festival (Chiaromonte,Italy) Rock in Roma (Rome, Italy) Lithuania Devilstone (Anykščiai) Kilkim Žaibu (Varniai) The Netherlands Amsterdam Metalfest (Amsterdam) Dynamo Open Air (Eindhoven) Fortarock (Nijmegen) Into the Grave (Leeuwarden) MidsummerProg (Valkenburg) Netherlands Deathfest (Eindhoven) ProgPower Europe (Baarlo) Roadburn Festival (Tilburg) Zwarte Cross (Lichtenvoorde) Norway Bukta Tromsø Open Air Festival (Tromsø) Garasjefestival (Reinsvoll) Inferno Metal Festival (Oslo) Karmøygeddon Metal Festival (Kopervik) Poland Pol'and'Rock Festival (Kostrzyn) Siekiera fest (Wrocław) Portugal HARDMETALFEST (Mangualde) Laurus Nobilis Music (Vila Nova de Famalicão) MEO Marés Vivas (Vila Nova de Gaia) Paredes de Coura Festival (Paredes de Coura) Rock in Rio (Lisbon) SWR Barroselas Metalfest (Barroselas) Vagos 
Metal Fest (Calvão (Vagos)) Vagos Open Air (Lisboa) Vilar de Mouros Festival (Vilar de Mouros) Romania Artmania Festival (Sibiu) Festivalul Celtic Transilvania (Beclean) Maximum Rock Festival (Bucuresti) Metal Gates Festival (Bucharest) Metalhead Meeting (Bucharest) Posada Rock Fest (Câmpulung) Rockstadt Extreme Festival (Rasnov) Russia Big Gun (Moscow) Dobrofest (Yaroslavl Oblast) Metal Over Russia (Moscow) Nashestvie (Tver Oblast) Spain Resurrection Festival (Viveiro) Rock Fest Barcelona (Barcelona) Leyendas del Rock (Villena) Sweden Gefle Metal Festival (Gävle) High 5ive Summer Fest (Stockholm) House of Metal (Umeå) Huskvarna Metal Fest (Jönköping) Sweden Rock Festival (Sölvesborg) United Kingdom Rest of Europe Eistnaflug (Neskaupstaður, Iceland) Gitarijada (Zaječar, Serbia) Hills of Rock (Sofia and Plovdiv, Bulgaria) Metaldays (Tolmin, Slovenia) Paléo Festival (Nyon, Switzerland) Rockmaraton (Dunaújváros, Hungary) Rockwave Festival (Athens, Greece) Slane Festival (Slane, Ireland) Zobens un Lemess (Bauska, Latvia) North America Canada Festival d'été de Québec (Quebec city) Hyperspace Metal Festival (Vancouver) Heavy Montréal (Montreal) Osheaga (Montreal) Montebello Rock (Montebello, Quebec) Rockin' the Fields of Minnedosa (Manitoba) Rock the River (Saskatoon, Saskatchewan) Mexico Cumbre Tajín (Veracruz) Hell & Heaven Metal Fest (Mexico City) Vive Latino (Mexico City) Corona Capital (Mexico City) Pal Norte (Nuevo León) United States Oceania Australia Blacken Open Air (Hale, Northern Territory) Byron Bay Bluesfest (Byron Bay) Golden Plains Festival (Meredith, Victoria) Good Things Festival (Melbourne, Sydney, Brisbane) Unify Gathering (South Gippsland, Victoria) Woodford Folk Festival (Queensland, Australia) South America Argentina Cosquín Rock (Córdoba) Quilmes Rock (Buenos Aires) Brazil Armageddon Metal Festival (Joinville) Rock in Rio (Rio de Janeiro) RockOut Fest (São Paulo) Chile The Metal Fest Chile (Castro, Chiloé Island) Rock en Conce (Concepción, Chile) Rest of South America ReciclArte (San Bernardino, Paraguay) Rock al Parque (Bogotá, Chile) Vivo x el Rock (Lima, Peru) Montevideo Rock (Montevideo, Uruguay)
**Geneforge 3** Geneforge 3: Geneforge 3 is the third video game in the Geneforge series of role-playing video games created by Spiderweb Software. Gameplay: The games are played in a 45° axonometric view and feature turn-based combat. The lands are split up into small areas, which can be traveled through using a world map. During combat, each warrior gets a certain number of action points, which are spent moving, attacking, casting spells, and using items. At the beginning of the game, the player chooses a type of Shaper to be. The three types are Shapers, Guardians, and Agents. Players gain a number of skill points by gaining a level, which can be spent on improving one or more of the character's abilities. Gameplay: The Geneforge game engine has been revamped in this sequel, debatably improving gameplay in some instances and making others more cumbersome to deal with. No new creations or spells are available to players in Geneforge 3; however, a number of different features have been added. For instance, there are two NPCs who will join the party, interject comments upon situations, and possibly leave if the player does something they disagree with. Their names are Alwan and Greta. A new forging system has also been added, allowing players to create powerful artifacts or enhance existing items. Gameplay: Unlike the previous two games, Geneforge 3 offers only two sides to choose from in the ensuing conflict. Players cannot get very far before being forced to choose a side, although they can change sides with some success fairly late in the game if they so desire, whereas the previous games could be completed without ever actually taking sides. Plot: The player begins as an apprentice learning the arts of Shaping. While attending school on Greenwood Isle, the player character is awoken when the school is attacked. Fortunately, two Shapers are ready to join the player character and help them survive in the world outside. Plot: Alwan is a loyal Shaper Guardian who has only one skill, that of using his iron sword. He is trained as a Guardian, to obey without questioning. Because there is no one for him to obey, he is at first disoriented by the attack and does not know what to do. The player can get him to join only after obtaining permission from the servant mind in the school. Plot: Greta is an outcast from the school because she started to sympathise with the Shapers' creations. She was an Agent, skilled in magic (she starts with Firebolt, but the player can get teachers to teach her other spells later in the game) and battle arts (also a sword). She is living in the village outside the school, aptly named South End. She consents to joining the player's group without any conditions. Plot: It is discovered that a traitor Shaper named Litalia had orchestrated the attack on the school and other strikes against Shaper communities. She and others, including a former teacher at the player's school, believe that the Shapers are tyrannical rulers who make the lives of their creations miserable and should be stopped by extreme measures. The rebellion has been creating rogue spawners throughout the Ashen Isles, summoning creations that are causing chaos and attacking the Shapers and those who serve them. The player can choose between fighting for Litalia and her comrades, or allying with Lord Rahul and the Shapers and stifling the insurgency. Plot: Either Alwan or Greta will leave the player's group depending on which faction the player joins.
Greta will leave if the player joins the Shapers, because she thinks the player is inhumane, and Alwan will leave if the player joins the rebellion, because he thinks the player is disloyal.
**Vinblastine** Vinblastine: Vinblastine (VBL), sold under the brand name Velban among others, is a chemotherapy medication, typically used with other medications, to treat a number of types of cancer. These include Hodgkin's lymphoma, non-small cell lung cancer, bladder cancer, brain cancer, melanoma, and testicular cancer. It is given by injection into a vein. Most people experience some side effects. Commonly it causes a change in sensation, constipation, weakness, loss of appetite, and headaches. Severe side effects include low blood cell counts and shortness of breath. It should not be given to people who have a current bacterial infection. Use during pregnancy will likely harm the baby. Vinblastine works by blocking cell division. Vinblastine was isolated in 1958. An example of a natural herbal remedy that has since been developed into a conventional medicine, vinblastine was originally obtained from the Madagascar periwinkle. It is on the World Health Organization's List of Essential Medicines. Medical uses: Vinblastine is a component of a number of chemotherapy regimens, including ABVD for Hodgkin lymphoma. It is also used to treat histiocytosis according to the established protocols of the Histiocytosis Association. Side effects: Adverse effects of vinblastine include hair loss, loss of white blood cells and blood platelets, gastrointestinal problems, high blood pressure, excessive sweating, depression, muscle cramps, vertigo and headaches. As a vesicant, vinblastine can cause extensive tissue damage and blistering if it escapes from the vein from improper administration. Pharmacology: Vinblastine is a vinca alkaloid and a chemical analogue of vincristine. It binds tubulin, thereby inhibiting the assembly of microtubules. Vinblastine treatment causes M phase specific cell cycle arrest by disrupting microtubule assembly and proper formation of the mitotic spindle and the kinetochore, each of which is necessary for the separation of chromosomes during anaphase of mitosis. Toxicities include bone marrow suppression (which is dose-limiting), gastrointestinal toxicity, potent vesicant (blister-forming) activity, and extravasation injury (forms deep ulcers). Vinblastine paracrystals may be composed of tightly packed unpolymerized tubulin or microtubules. Vinblastine is reported to be an effective component of certain chemotherapy regimens, particularly when used with bleomycin and methotrexate in VBM chemotherapy for Stage IA or IIA Hodgkin lymphomas. Pharmacology: The inclusion of vinblastine allows for lower doses of bleomycin and reduced overall toxicity, with larger resting periods between chemotherapy cycles. Pharmacology: Mechanism of action Microtubule-disruptive drugs like vinblastine, colcemid, and nocodazole have been reported to act by two mechanisms. At very low concentrations they suppress microtubule dynamics, and at higher concentrations they reduce microtubule polymer mass. Recent findings indicate that they also produce microtubule fragments by stimulating microtubule minus-end detachment from their organizing centers. Dose-response studies further indicate that enhanced microtubule detachment from spindle poles correlates best with cytotoxicity. But research into the mechanism is still ongoing, as recent studies also show vinblastine inducing apoptosis that is phase-independent in certain leukemias. Pharmacology: Pharmacokinetics Vinblastine appears to be a peripherally selective drug due to limited brain uptake caused by binding to P-glycoprotein.
Isolation and synthesis: Vinblastine may be isolated from the Madagascar periwinkle (Catharanthus roseus), its only known biological producer, along with several of its precursors, catharanthine and vindoline. The biosynthetic pathway for vinblastine and its precursors, particularly catharanthine, still remains partially unknown. Extraction is costly and yields of vinblastine and its precursors are low, although procedures for rapid isolation with improved yields, avoiding auto-oxidation, have been developed. Enantioselective synthesis has been of considerable interest in recent years, as the natural mixture of isomers is not an economical source for the required C16'S, C14'R stereochemistry of biologically active vinblastine. Initially, the approach depends upon an enantioselective Sharpless epoxidation, which sets the stereochemistry at C20. The desired configuration around C16 and C14 can then be fixed during the ensuing steps. In this pathway, vinblastine is constructed by a series of cyclization and coupling reactions which create the required stereochemistry. The overall yield may be as great as 22%, which makes this synthetic approach more attractive than extraction from natural sources, whose overall yield is about 10%. Stereochemistry is controlled through a mixture of chiral agents (Sharpless catalysts) and reaction conditions (temperature and selected enantiopure starting materials). Due to the difficulty of stereochemical constraints in total synthetic processes, other semi-synthetic methods from the precursors catharanthine and vindoline continue to be developed. History: Vinblastine was first isolated by Robert Noble and Charles Thomas Beer at the University of Western Ontario from the Madagascar periwinkle plant. Vinblastine's utility as a chemotherapeutic agent was first suggested by its effect on the body when an extract of the plant was injected into rabbits to study the plant's supposed anti-diabetic effect. (A tea made from the plant was a folk remedy for diabetes.) The rabbits died from a bacterial infection, due to a decreased number of white blood cells, so it was hypothesized that vinblastine might be effective against cancers of the white blood cells such as lymphoma.
**MediaMan** MediaMan: MediaMan is a general purpose collection organizer software for establishing a personal database of media collections (DVDs, CDs, books, etc.) developed by He Shiming. Debuted in 2004 as freeware, MediaMan was the first software in its genre to offer a general purpose organizer, as people usually had to pay for two licenses, one for a book organizer and one for a video organizer. The license of MediaMan was freeware until late 2006, when the author decided to switch to shareware with a price of $39.95 for each license. Amazon Web Services (later called E-Commerce Service and Product Advertising API) was used to retrieve product information automatically during the import process in MediaMan, which means it is also a part of the Amazon Associates program. However, the latest version of MediaMan (the v3.10 series) no longer uses this API due to the efficiency guidelines introduced in October 2010. MediaMan is also known as a Windows alternative to Mac OS X's Delicious Library. Software development seems to have stalled with the last release, a beta of MediaMan 4.0, back in December 2013. There have been a growing number of bugs in the software that have made the program unusable for some. Communications with the developer have stopped, development and bug fixes have ceased, and the site has gone offline.
**RIMS2** RIMS2: Regulating synaptic membrane exocytosis protein 2 is a protein that in humans is encoded by the RIMS2 gene. Interactions: RIMS2 has been shown to interact with YWHAH, RAPGEF4, and UNC13A.
**Adenylyl-(glutamate—ammonia ligase) hydrolase** Adenylyl-(glutamate—ammonia ligase) hydrolase: In enzymology, an adenylyl-[glutamate—ammonia ligase] hydrolase (EC 3.1.4.15) is an enzyme that catalyzes the chemical reaction adenylyl-[L-glutamate:ammonia ligase (ADP-forming)] + H2O ⇌ adenylate + [L-glutamate:ammonia ligase (ADP-forming)]. Thus, the two substrates of this enzyme are adenylyl-[L-glutamate:ammonia ligase (ADP-forming)] and H2O, whereas its two products are adenylate and [L-glutamate:ammonia ligase (ADP-forming)]. This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric diester bonds. The systematic name of this enzyme class is adenylyl-[L-glutamate:ammonia ligase (ADP-forming)] adenylylhydrolase. Other names in common use include adenylyl-[glutamine synthetase] hydrolase and adenylyl(glutamine synthetase) hydrolase.
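For readability, the reaction above can also be typeset compactly; the following is a sketch in which "GS" abbreviates L-glutamate:ammonia ligase (ADP-forming), i.e. glutamine synthetase, and adenylate is written as AMP (these abbreviations are introduced here, not taken from the article):

```latex
% Requires amsmath. GS = L-glutamate:ammonia ligase (ADP-forming), i.e. glutamine synthetase.
% Adenylate is written as AMP.
\[
  \text{adenylyl-GS} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{AMP} + \text{GS}
\]
```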
**Confirmation holism** Confirmation holism: In philosophy of science, confirmation holism, also called epistemological holism, is the view that no individual statement can be confirmed or disconfirmed by an empirical test, but rather that only a set of statements (a whole theory) can be so. It is attributed to Willard Van Orman Quine, who motivated his holism by extending Pierre Duhem's problem of underdetermination in physical theory to all knowledge claims. Duhem's idea was, roughly, that no theory of any type can be tested in isolation but only when embedded in a background of other hypotheses, e.g. hypotheses about initial conditions. Quine thought that this background involved not only such hypotheses but also our whole web of belief, which, among other things, includes our mathematical and logical theories and our scientific theories. This last claim is sometimes known as the Duhem–Quine thesis. A related claim made by Quine, though contested by some (see Adolf Grünbaum 1962), is that one can always protect one's theory against refutation by attributing failure to some other part of our web of belief. In his own words, "Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system." Underdetermination in physical theory: By 1845 astronomers had found that the orbit of the planet Uranus around the Sun departed from expectations. Not concluding that Newton's law of universal gravitation was flawed, however, astronomers John Couch Adams and Urbain Le Verrier independently predicted a new planet, eventually known as Neptune, and even calculated its weight and orbit through Newton's theory. And yet even this empirical success did not verify Newton's theory. Underdetermination in physical theory: Le Verrier soon reported that Mercury's perihelion—the point of its orbital ellipse nearest to the Sun—advanced each time Mercury completed an orbit, a phenomenon not predicted by Newton's theory. Astrophysicists were nevertheless so confident in that theory that they predicted a new planet, named Vulcan, which a number of astronomers subsequently claimed to have seen. In 1905, however, Einstein's special theory of relativity claimed that space and time are both relative, refuting the very framework of Newton's theory that claimed that space and time were both absolute. Underdetermination in physical theory: In 1915, Einstein's general theory of relativity newly explained gravitation while precisely predicting Mercury's orbit. In 1919, astrophysicist Arthur Eddington led an expedition to test Einstein's prediction of the Sun's mass reshaping spacetime in its vicinity. The Royal Society announced confirmation—accepted by physicists as the fall of Newton's theory. Yet few theoretical physicists believe general relativity is a fundamentally accurate description of gravitation, and instead seek a theory of quantum gravity. Total vs. partial holism: Some scholars, like Quine, argue that if a prediction that a theory makes comes out true, then the corresponding piece of evidence confirms the whole theory and even the whole framework within which that theory is embedded. Some have questioned this radical or total form of confirmational holism. If total holism were true, they argue, it would lead to absurd consequences like the confirmation of arbitrary conjunctions.
For example, if the general theory of relativity is confirmed by the perihelion of Mercury then, according to total holism, the conjunction of the general theory of relativity with the claim that the moon is made of cheese also gets confirmed. More controversially, the two conjuncts are meant to be confirmed in equal measure. Total vs. partial holism: The critics of total holism do not deny that evidence may spread its support far and wide. Rather, they deny that it always spreads its support to the whole of any theory or theoretical framework that entails or probabilistically predicts the evidence. This view is known as partial holism. One early advocate of partial confirmational holism is Adolf Grünbaum (1962). Another is Ken Gemes (1993). The latter provides refinements to the hypothetico-deductive account of confirmation, arguing that a piece of evidence may be confirmationally relevant only to some content parts of a hypothesis. A third critic is Elliott Sober (2004). He considers likelihood comparisons and model selection ideas. More recently, and in a similar vein, Ioannis Votsis (2014) argues for an objectivist account of confirmation, according to which monstrous hypotheses, i.e., roughly, hypotheses that are put together in an ad hoc or arbitrary way, have internal barriers that prevent the spread of confirmation between their parts. Thus, even though the conjunction of the general theory of relativity with the claim that the moon is made of cheese gets confirmed by the perihelion of Mercury, since the latter is entailed by the conjunction, the confirmation does not spread to the conjunct that the moon is made of cheese. In other words, it is not always the case that support spreads to all the parts of a hypothesis, and even when it does, it is not always the case that it spreads to the different parts in equal measure.
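The worry about arbitrary conjunctions can be put schematically. The following is a standard way of stating the "tacking" problem for a simple hypothetico-deductive account of confirmation; the letters H, E and X are schematic (a hypothesis, an observed consequence, and an arbitrary extra claim), not symbols taken from the article:

```latex
% Requires amsmath. On a simple hypothetico-deductive (HD) account,
% evidence E confirms any hypothesis that entails E; entailment is monotonic,
% so an arbitrary conjunct X can be "tacked on" and still be confirmed.
\[
  H \models E
  \;\Longrightarrow\;
  (H \wedge X) \models E
  \;\Longrightarrow\;
  E \text{ HD-confirms } H \wedge X \text{ for an arbitrary conjunct } X.
\]
```

Partial holists such as Gemes and Votsis can be read as restricting the final step, so that confirmation reaches only the relevant content parts of the conjunction.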
**Delamanid** Delamanid: Delamanid, sold under the brand name Deltyba, is a medication used to treat tuberculosis. Specifically it is used, along with other antituberculosis medications, for active multidrug-resistant tuberculosis. It is taken by mouth. Common side effects include headache, dizziness, and nausea. Other side effects include QT prolongation. It has not been studied in pregnancy as of 2016. Delamanid works by blocking the manufacture of mycolic acids, thus destabilising the bacterial cell wall. It is in the nitroimidazole class of medications. Delamanid was approved for medical use in 2014 in Europe, Japan, and South Korea. It is on the World Health Organization's List of Essential Medicines. As of 2016 the Stop TB Partnership had an agreement to get the medication for US$1,700 per six months for use in more than 100 countries. Medical uses: Delamanid is used, along with other antituberculosis medications, for active multidrug-resistant tuberculosis. Adverse effects: Common side effects include headache, dizziness, and nausea. Other side effects include QT prolongation. It has not been studied in pregnancy as of 2016. Interactions: Delamanid is metabolised by the liver enzyme CYP3A4; therefore strong inducers of this enzyme can reduce its effectiveness. History: In phase II clinical trials, the drug was used in combination with standard treatments, such as four or five of the drugs ethambutol, isoniazid, pyrazinamide, rifampicin, aminoglycoside antibiotics, and quinolones. Healing rates (measured as sputum culture conversion) were significantly better in patients who additionally took delamanid. The European Medicines Agency (EMA) recommended conditional marketing authorization for delamanid in adults with multidrug-resistant pulmonary tuberculosis without other treatment options because of resistance or tolerability. The EMA considered that the data showed the benefits of delamanid outweigh the risks, but that additional studies were needed on its long-term effectiveness. Society and culture: The medication was not readily available globally as of 2015. It was believed that pricing would be similar to bedaquiline, which for six months is approximately US$900 in low income countries, US$3,000 in middle income countries, and US$30,000 in high income countries. As of 2016 the Stop TB Partnership had an agreement to get the medication for US$1,700 per six months.
**Functional symptom** Functional symptom: A functional symptom is a medical symptom with no known physical cause. In other words, there is no structural or pathologically defined disease to explain the symptom. The use of the term 'functional symptom' does not assume psychogenesis, only that the body is not functioning as expected. Functional symptoms are increasingly viewed within a framework in which 'biological, psychological, interpersonal and healthcare factors' should all be considered to be relevant for determining the aetiology and treatment plans.Historically, there has often been fierce debate about whether certain problems are predominantly related to an abnormality of structure (disease) or are psychosomatic in nature, and what are at one stage posited to be functional symptoms are sometimes later reclassified as organic, as investigative techniques improve. It is well established that psychosomatic symptoms are a real phenomenon, so this potential explanation is often plausible, however the commonality of a range of psychological symptoms and functional weakness does not imply that one causes the other. For example, symptoms associated with migraine, epilepsy, schizophrenia, multiple sclerosis, stomach ulcers, chronic fatigue syndrome, Lyme disease and many other conditions have all tended historically at first to be explained largely as physical manifestations of the patient's psychological state of mind; until such time as new physiological knowledge is eventually gained. Another specific example is functional constipation, which may have psychological or psychiatric causes. However, one type of apparently functional constipation, anismus, may have a neurological (physical) basis. Functional symptom: Whilst misdiagnosis of functional symptoms does occur, in neurology, for example, this appears to occur no more frequently than of other neurological or psychiatric syndromes. However, in order to be quantified, misdiagnosis has to be recognized as such, which can be problematic in such a challenging field as medicine. A common trend is to see functional symptoms and syndromes such as fibromyalgia, irritable bowel syndrome and functional neurological symptoms such as functional weakness as symptoms in which both biological and psychological factors are relevant, without one necessarily being dominant. Weakness: Functional weakness is weakness of an arm or leg without evidence of damage or a disease of the nervous system. Patients with functional weakness experience symptoms of limb weakness which can be disabling and frightening such as problems walking or a 'heaviness' down one side, dropping things or a feeling that a limb just doesn't feel normal or 'part of them'. Functional weakness may also be described as functional neurological symptom disorder (FNsD), Functional Neurological Disorder (FND) or functional neurological symptoms. If the symptoms are caused by a psychological trigger, it may be diagnosed as 'dissociative motor disorder' or conversion disorder (CD). Weakness: To the patient and the doctor it often looks as if there has been a stroke or have symptoms of multiple sclerosis. However, unlike these conditions, with functional weakness there is no permanent damage to the nervous system which means that it can get better or even go away completely. Weakness: The diagnosis should usually be made by a consultant neurologist so that other neurological causes can be excluded. 
The diagnosis should be made on the basis of positive features in the history and the examination (such as Hoover's sign). It is dangerous to make the diagnosis simply because tests are normal. Neurologists misdiagnose about 5% of the time (which is the same as for many other conditions). The most effective treatment is physiotherapy; however, it is also helpful for patients to understand the diagnosis, and some may find CBT helps them to cope with the emotions associated with being unwell. For those with conversion disorder, psychological therapy is key to their treatment, as it is emotional or psychological factors which are causing their symptoms. Weakness: Giveway weakness Giveway weakness (also "give-away weakness", "collapsing weakness", etc.) refers to a symptom where a patient's arm or leg can initially provide resistance against an examiner's touch, but then suddenly "gives way" and provides no further muscular resistance.
**Bog spavin** Bog spavin: Bog spavin is a swelling of the tibiotarsal joint of the horse's hock which, in itself, does not cause lameness. The joint becomes distended by excess synovial fluid and/or thickened synovial tissue bringing about a soft, fluctuant swelling on the front of the joint, as well as in the medial and lateral plantar pouches. Bog spavin is generally an indication of underlying pathology within the joint. Causes: Bog spavin is a physical finding, and does not directly create lameness. Causes include synovitis (inflammation of the lining of the joint capsule), degenerative joint disease, or excessive strain of the joint capsule. In horses younger than the age of three, most cases of bog spavin are caused by a defect in the tibiotarsal joint, while in older, fully mature horses, it is most likely because of chronic strain of the joint capsule. Infection of the joint causes a severe synovitis, and should be treated as an emergency. Many horses with bog spavin will not be lame. However, bog spavin can be a sign that the horse has joint disease, which is a very significant finding. Usually lameness will occur if the workload of the horse is increased. Bog spavin should not be treated lightly, and it is best to have a veterinarian examine the horse to find the cause, even if the horse does not appear lame. Causes: Unlike bone spavin, bog spavin does not show any changes to the bone itself. For this reason it is considered to be of no interest to those studying animal paleopathology (Baker and Brothwell, 1980). Management: A veterinarian will usually radiograph the hocks of the horse to check for bony changes as it is important to address the underlying cause of the joint distension. It's important to have a veterinarian perform an equine prepurchase exam to identify an existing condition such as a bog spavin. If the bog spavin is drained then it will simply refill unless the underlying cause has healed or been treated. In many cases it may be difficult to achieve resolution of the distension. Treatment may involve injection of corticosteroids or hyaluronan into the joint and some cases may require arthroscopic surgery. Rest or controlled exercise is often indicated. Sources: Baker, J, and Brothwell, D. 1980. Animal Diseases in Archaeology. London: Academic Press. King, Christine, and Mansmann, Richard. 1997. Equine Lameness. Equine Research, Inc. pp. 835–836.
**Qteros** Qteros: Qteros, Inc. is an American energy company researching the production of cellulosic ethanol from a variety of non-food feedstock sources including corn stover, corn cobs, switchgrass, and sugar cane bagasse. Qteros's process combines proprietary science and microbiology that enables a simplified biomass-to-ethanol conversion. Their proprietary microorganism is the Q Microbe® (Clostridium phytofermentans). In January 2011, Qteros announced its partnership with Praj Industries of India, a publicly traded builder of ethanol plants. Praj Industries has built about 70% of the 400 ethanol mills in India. In November 2011, the company reduced staff and sought additional financing. Change of Management and Ownership: In late 2012, Qteros went through a change of management and ownership, with founding COO Stephen Rogers taking over the role of CEO. Qteros continues to develop and scale its technology in collaboration with significant international partners.
**Photosynthesis system** Photosynthesis system: Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle. How photosynthesis systems function: Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where CO2 is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of CO2 induces more atmospheric CO2 to diffuse through stomata into the air spaces of the leaf. While stomata are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of CO2 and water vapor that is measured as a proxy of photosynthetic rate. How photosynthesis systems function: The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate a miniature disposable compressed gas cylinder and gas supply pipes. This is because external air has natural fluctuations in CO2 and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the CO2 and water vapour by passage over soda lime and Drierite, then add CO2 at a controlled rate to give a stable CO2 concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured. How photosynthesis systems function: The leaf to be analysed is placed in the leaf chamber. The CO2 concentration is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. CO2 in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the CO2 concentration. Modern IRGAs take account of the fact that H2O absorbs energy at similar wavelengths as CO2. Modern IRGAs may either dry the gas sample to a constant water content or incorporate both a CO2 and a water vapour IRGA to assess the difference in CO2 and water vapour concentrations in air between the chamber entrance and outlet. The Liquid Crystal Display on the console displays measured and calculated data. The console may have a PC card slot. The stored data can be viewed on the LCD display, or sent to a PC. Some photosynthesis systems allow communication over the internet using standard internet communication protocols. How photosynthesis systems function: Modern photosynthetic systems may also be designed to measure leaf temperature, chamber air temperature, PAR (photosynthetically active radiation), and atmospheric pressure. These systems may calculate water use efficiency (A/E), stomatal conductance (gs), intrinsic water use efficiency (A/gs), and sub-stomatal CO2 concentration (Ci). Chamber and leaf temperatures are measured with a thermistor sensor. Some systems are also designed to control environmental conditions. How photosynthesis systems function: A simple and general (balanced) equation for photosynthesis is: 6CO2 + 6H2O + (light energy) → C6H12O6 + 6O2 'Open' systems or 'closed' systems: There are two distinct types of photosynthetic system: 'open' or 'closed'.
This distinction refers to whether or not the atmosphere of the leaf-enclosing chamber is renewed during the measurement. In an 'open' system, air is continuously passed through the leaf chamber to maintain CO2 in the leaf chamber at a steady concentration. The leaf to be analysed is placed in the leaf chamber. The main console supplies the chamber with air at a known rate and with known concentrations of CO2 and H2O. The air is directed over the leaf, then the CO2 and H2O concentrations of the air leaving the chamber are determined. The outgoing air will have a lower CO2 concentration and a higher H2O concentration than the air entering the chamber. The rate of CO2 uptake is used to assess the rate of photosynthetic carbon assimilation, while the rate of water loss is used to assess the rate of transpiration. Since CO2 uptake and H2O release both occur through the stomata, high rates of CO2 uptake are expected to coincide with high rates of transpiration, and high rates of CO2 uptake and H2O loss indicate high stomatal conductance. Because the atmosphere is renewed, 'open' systems are not seriously affected by outward gas leakage or by adsorption or absorption by the materials of the system. In contrast, in a 'closed' system, the same atmosphere is continuously measured over a period of time to establish rates of change in the parameters: the CO2 concentration in the chamber decreases, while the H2O concentration increases. A closed system is therefore less tolerant of leakage and of material adsorption or absorption. Calculating photosynthetic rate and related parameters: Calculations used in 'open' systems: For CO2 to diffuse into the leaf, stomata must be open, which permits the outward diffusion of water vapour. Therefore, the conductance of stomata influences both photosynthetic rate (A) and transpiration (E), and the usefulness of measuring A is enhanced by the simultaneous measurement of E. The internal CO2 concentration (Ci) is also quantified, since Ci is an indicator of the availability of the primary substrate (CO2) for A. Carbon assimilation (A) is determined by measuring the rate at which the leaf assimilates CO2. The change in CO2 is calculated as the CO2 concentration of air flowing into the leaf chamber (in μmol mol−1) minus that of air flowing out of the leaf chamber (in μmol mol−1). The photosynthetic rate (the rate of CO2 exchange in the leaf chamber) is this difference in CO2 concentration across the chamber multiplied by the molar flow of air per m2 of leaf area (mol m−2 s−1). Calculating photosynthetic rate and related parameters: The change in H2O vapour pressure is the water vapour pressure of air leaving the leaf chamber (in mbar) minus the water vapour pressure of air entering the leaf chamber (in mbar). The transpiration rate is this differential water vapour pressure (mbar) multiplied by the flow of air into the leaf chamber per square metre of leaf area (mol s−1 m−2), divided by atmospheric pressure (mbar). Calculating photosynthetic rate and related parameters: Calculations used in 'closed' systems: A leaf is placed in the leaf chamber, with a known area of leaf enclosed. Once the chamber is closed, the carbon dioxide concentration gradually declines. When the concentration decreases past a first set point a timer is started, and it is stopped as the concentration passes a second set point. The difference between these concentrations gives the change in carbon dioxide in ppm.
Net photosynthetic rate, in micrograms of carbon dioxide s−1, is given by (V · p · 0.5 · FSD · 99.7) / t, where V = the chamber volume in litres, p = the density of carbon dioxide in mg cm−3, FSD = the carbon dioxide concentration in ppm corresponding to the change in carbon dioxide in the chamber, and t = the time in seconds for the concentration to decrease by the set amount. Net photosynthesis per unit leaf area is derived by dividing net photosynthetic rate by the leaf area enclosed by the chamber. Applications: Since photosynthesis, transpiration and stomatal conductance are an integral part of basic plant physiology, estimates of these parameters can be used to investigate numerous aspects of plant biology. The plant-scientific community has generally accepted photosynthetic systems as reliable and accurate tools to assist research. There are numerous peer-reviewed articles in scientific journals which have used a photosynthetic system. To illustrate the utility and diversity of applications of photosynthetic systems, brief descriptions of research using them follow. Researchers from the Technion - Israel Institute of Technology and a number of US institutions studied the combined effects of drought and heat stress on Arabidopsis thaliana. Their research suggests that the combined effects of heat and drought stress cause sucrose to serve as the major osmoprotectant. Applications: Plant physiologists from the University of Putra Malaysia and the University of Edinburgh investigated the relative effects of tree age and tree size on the physiological attributes of two broadleaf species. A photosynthetic system was used to measure photosynthetic rate per unit of leaf mass. Researchers at the University of California-Berkeley found that water loss from leaves of Sequoia sempervirens is ameliorated by heavy fog in the Western US. Their research suggests that fog may help the leaves retain water and enable the trees to fix more carbon during active growth periods. Applications: The effect of CO2 enrichment on the photosynthetic behavior of an endangered medicinal herb was investigated by a team at Garhwal University, India. Photosynthetic rate (A) was stimulated during the first 30 days, then significantly decreased. Transpiration rate (E) decreased significantly throughout the CO2 enrichment, whereas stomatal conductance (gs) was significantly reduced initially. Overall, it was concluded that the medicinally important part of this plant showed increased growth. Applications: Researchers at the University of Trás-os-Montes and Alto Douro, Portugal grew grapevines in outside plots and in open-top chambers which elevated the level of CO2. A photosynthetic system was used to measure CO2 assimilation rate (A), stomatal conductance (gs), transpiration rate (E), and the internal CO2 concentration/ambient CO2 ratio (Ci/Ca). The environmental conditions inside the chambers caused a significant reduction in yield. Applications: A study of nickel bioremediation involving poplar (Populus nigra), conducted by researchers at the Bulgarian Academy of Sciences and the National Research Council of Italy (Consiglio Nazionale delle Ricerche), found that Ni-induced stress reduced photosynthesis rates, and that this effect was dependent upon leaf Ni content. In mature leaves, Ni stress led to emission of cis-β-ocimene, whereas in developing leaves it led to enhanced isoprene emissions.
Applications: Plant physiologists in Beijing measured photosynthetic rate, transpiration rate and stomatal conductance in plants which accumulate metal and those that do not. Seedlings were grown in the presence of 200 or 400 μM CdCl2. This was used to elucidate the role of antioxidative enzymes in the adaptive responses of metal-accumulators and non-accumulators to cadmium stress. Applications: In a study of the drought resistance and salt tolerance of a rice variety, researchers at the National Center of Plant Gene Research and Huazhong Agricultural University in Wuhan, China found that a transgenic rice variety showed greater drought resistance than a conventional variety. Overexpression of the stress response gene SNAC1 led to reduced water loss, but no significant change in photosynthetic rate. Applications: A Canadian team examined the dynamic responses of stomatal conductance (gs) and net photosynthesis (A) to a progressive drought in nine poplar clones with contrasting drought tolerance. gs and A were measured using a photosynthetic system. Plants were either well-watered or drought-preconditioned. Researchers at Banaras Hindu University, India, investigated the potential of sewage sludge to be used in agriculture as an alternative disposal technique. Agricultural soil growing rice had sewage sludge added at different rates. Rates of photosynthesis and stomatal conductance of the rice were measured to examine the biochemical and physiological responses to sewage addition. Applications: Researchers from Lancaster University, the University of Liverpool, and the University of Essex, UK, measured isoprene emission rates from an oil palm tree. Samples were collected using a photosynthetic system that controlled PAR and leaf temperature (1000 μmol m−2 s−1; 30 °C). It had been thought that PAR and temperature are the main controls of isoprene emission from the biosphere; this research showed that isoprene emissions from the oil palm are under strong circadian control. Applications: The ecophysiological diversity and the breeding potential of wild coffee populations in Ethiopia were evaluated in a thesis submitted to the Rheinische Friedrich-Wilhelms-University of Bonn, Germany. Complementary field and garden studies of populations native to a range of climatic conditions were examined. Plant ecophysiological behavior was assessed by a number of system parameters, including gas exchange, which was measured using a photosynthetic system. Applications: A collaborative project between researchers at the University of Cambridge, UK, the Australian Research Council Center of Excellence, and the Australian National University resulted in validation of a model that describes carbon isotope discrimination for crassulacean acid metabolism, using Kalanchoe daigremontiana. Instruments of this type can also be used as a standard for plant stress measurement; difficult-to-measure types of plant stress, such as cold stress and water stress, can be quantified with this type of instrumentation.
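The gas-exchange calculations described earlier in this article (the open-system CO2 drawdown and transpiration relations, and the closed-system formula quoted above) can be illustrated with a short, self-contained sketch. This is a minimal illustration only, not any instrument manufacturer's firmware: the function names and the sample numbers are hypothetical, the closed-system constants are taken verbatim from the formula as quoted, and units follow the text (μmol mol−1 for CO2, mbar for water vapour pressure, mol m−2 s−1 for molar flow per unit leaf area).

```python
# Minimal sketch of the gas-exchange calculations described in this article.
# All input values are hypothetical; units follow the article text.

def open_system_assimilation(co2_in_umol_mol, co2_out_umol_mol, flow_mol_m2_s):
    """Open system: photosynthetic rate A = CO2 drawdown across the chamber
    multiplied by the molar flow of air per m2 of leaf area."""
    return (co2_in_umol_mol - co2_out_umol_mol) * flow_mol_m2_s  # umol CO2 m-2 s-1

def open_system_transpiration(h2o_out_mbar, h2o_in_mbar, flow_mol_m2_s, pressure_mbar):
    """Open system: transpiration rate E = vapour-pressure rise across the chamber
    times the molar flow per m2 of leaf area, divided by atmospheric pressure."""
    return (h2o_out_mbar - h2o_in_mbar) * flow_mol_m2_s / pressure_mbar  # mol H2O m-2 s-1

def closed_system_net_photosynthesis(V_litres, p_mg_cm3, FSD_ppm, t_seconds):
    """Closed system: net photosynthetic rate (ug CO2 s-1) using the expression
    (V * p * 0.5 * FSD * 99.7) / t exactly as quoted in the text."""
    return (V_litres * p_mg_cm3 * 0.5 * FSD_ppm * 99.7) / t_seconds

if __name__ == "__main__":
    A = open_system_assimilation(400.0, 385.0, flow_mol_m2_s=0.5)   # 15 umol/mol drawdown
    E = open_system_transpiration(22.0, 15.0, flow_mol_m2_s=0.5, pressure_mbar=1013.0)
    print(f"Open system:  A = {A:.1f} umol m-2 s-1, E = {E:.4f} mol m-2 s-1, "
          f"A/E = {A / E:.0f} umol CO2 per mol H2O")
    rate = closed_system_net_photosynthesis(V_litres=1.0, p_mg_cm3=0.0018,
                                            FSD_ppm=50.0, t_seconds=120.0)
    leaf_area_m2 = 0.0025  # leaf area enclosed by the chamber
    print(f"Closed system: {rate:.3f} ug CO2 s-1 "
          f"({rate / leaf_area_m2:.1f} ug CO2 m-2 s-1 per unit leaf area)")
```

The A/E ratio printed by the first line corresponds to the water use efficiency parameter mentioned earlier in the article.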
**Wadding** Wadding: Wadding is a disc of material used in guns to seal gas behind a projectile (a bullet or ball), or to separate the propellant from loosely packed shots. Wadding can be crucial to a gun's efficiency, since any gas that leaks past a projectile as it is being fired is wasted. A harder or more carefully designed item which serves this purpose is often called a sabot. Wadding for muzzleloaders is typically a small piece of cloth, or paper wrapping from the cartridge. Shotguns: In shotgun shells, the wadding is actually a semi-flexible cup-shaped sabot designed to hold numerous much smaller-diameter sub-projectiles (i.e. shots), and is launched out together as one payload-carrying projectile. This minimizes chaotic collisions of the shots with the bore wall and with each other, allowing the internal ballistics to be more consistent. After leaving the muzzle, the wadding loosens and opens up in flight, allowing the much denser shots to be inertially released and scattered. The same function is served when shooting slugs. Model rockets: Wadding is used in model rockets to protect the parachute when it ejects. Without the recovery wadding, the parachute would melt because the ejection is by a small solid-fuel engine, which gets so hot that it melts the glue almost immediately. Effects: Burning wadding may have ignited the fire that led to the explosion that destroyed the Orient at the Battle of the Nile (q.v.). The father of Robert Morris, "Financier of the American Revolution," died as the result of being wounded by flying wadding from a ship's gun that was fired in his honor.
**Algebraic cycle** Algebraic cycle: In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety. Algebraic cycle: The most trivial case is codimension-zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is that of codimension-one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space. Algebraic cycle: While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher-codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant N such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most N. David Mumford proved that, on a smooth complete complex algebraic surface S with positive geometric genus, the analogous statement for the group CH^2(S) of rational equivalence classes of codimension-two cycles in S is false. The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group H^2(S) contains transcendental information, and in effect Mumford's theorem implies that, despite CH^2(S) having a purely algebraic definition, it shares transcendental information with H^2(S). Mumford's theorem has since been greatly generalized. The behavior of algebraic cycles ranks among the most important open questions in modern mathematics. The Hodge conjecture, one of the Clay Mathematics Institute's Millennium Prize Problems, predicts that the topology of a complex algebraic variety forces the existence of certain algebraic cycles. The Tate conjecture makes a similar prediction for étale cohomology. Alexander Grothendieck's standard conjectures on algebraic cycles yield enough cycles to construct his category of motives and would imply that algebraic cycles play a vital role in any cohomology theory of algebraic varieties. Conversely, Alexander Beilinson proved that the existence of a category of motives implies the standard conjectures. Additionally, cycles are connected to algebraic K-theory by Bloch's formula, which expresses groups of cycles modulo rational equivalence as the cohomology of K-theory sheaves. Definition: Let X be a scheme of finite type over a field k. An algebraic r-cycle on X is a formal linear combination ∑ n_i [V_i] of r-dimensional closed integral k-subschemes of X. The coefficient n_i is the multiplicity of V_i. The set of all r-cycles is the free abelian group Z_r(X) = ⨁_V Z·[V], where the sum is over the r-dimensional closed integral subschemes V of X. The groups of cycles for varying r together form the group Z_*(X) = ⨁_r Z_r(X). Definition: This is called the group of algebraic cycles, and any element is called an algebraic cycle.
A cycle is effective or positive if all its coefficients are non-negative. Definition: Closed integral subschemes of X are in one-to-one correspondence with the scheme-theoretic points of X under the map that, in one direction, takes each subscheme to its generic point, and in the other direction, takes each point to the unique reduced subscheme supported on the closure of the point. Consequently Z_*(X) can also be described as the free abelian group on the points of X. Definition: A cycle α is rationally equivalent to zero, written α ∼ 0, if there are a finite number of (r+1)-dimensional subvarieties W_i of X and non-zero rational functions r_i ∈ k(W_i)× such that α = ∑ [div_{W_i}(r_i)], where div_{W_i} denotes the divisor of a rational function on W_i. The cycles rationally equivalent to zero form a subgroup Z_r(X)_rat ⊆ Z_r(X), and the group of r-cycles modulo rational equivalence is the quotient A_r(X) = Z_r(X) / Z_r(X)_rat. Definition: This group is also denoted CH_r(X). Elements of the group A_*(X) = ⨁_r A_r(X) are called cycle classes on X. Cycle classes are said to be effective or positive if they can be represented by an effective cycle. If X is smooth, projective, and of pure dimension N, the above groups are sometimes reindexed cohomologically as Z^{N−r}(X) = Z_r(X) and A^{N−r}(X) = A_r(X). In this case, A^*(X) is called the Chow ring of X because it has a multiplication operation given by the intersection product. Definition: There are several variants of the above definition. We may substitute another ring for the integers as our coefficient ring. The case of rational coefficients is widely used. Working with families of cycles over a base, or using cycles in arithmetic situations, requires a relative setup. Let ϕ : X → S, where S is a regular Noetherian scheme. An r-cycle is a formal sum of closed integral subschemes of X whose relative dimension is r; here the relative dimension of Y ⊆ X is the transcendence degree of k(Y) over the function field of the closure of ϕ(Y), minus the codimension of the closure of ϕ(Y) in S. Definition: Rational equivalence can also be replaced by several other coarser equivalence relations on algebraic cycles. Other equivalence relations of interest include algebraic equivalence, homological equivalence for a fixed cohomology theory (such as singular cohomology or étale cohomology), numerical equivalence, as well as all of the above modulo torsion. These equivalence relations have (partially conjectural) applications to the theory of motives. Flat pullback and proper pushforward: There is a covariant and a contravariant functoriality of the group of algebraic cycles. Let f : X → X' be a map of varieties. If f is flat of some constant relative dimension (i.e. all fibers have the same dimension), we can define, for any subvariety Y' ⊂ X': f^*([Y']) = [f^{−1}(Y')], which by assumption has the same codimension as Y'. Conversely, if f is proper, for Y a subvariety of X the pushforward is defined to be f_*([Y]) = n[f(Y)], where n is the degree of the extension of function fields [k(Y) : k(f(Y))] if the restriction of f to Y is finite, and 0 otherwise. By linearity, these definitions extend to homomorphisms of abelian groups f^* : Z_k(X') → Z_{k+n}(X), where n is the relative dimension of f, and f_* : Z_k(X) → Z_k(X') (the latter by virtue of the convention above). See Chow ring for a discussion of the functoriality related to the ring structure.
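As a standard worked illustration of these definitions (textbook material, not drawn from this article itself): on the projective line every zero-cycle of degree zero is rationally equivalent to zero, and the proper pushforward of a point is weighted by a residue-field degree.

```latex
% Rational equivalence on the projective line P^1 over a field k.
% Take W = P^1 itself (an (r+1)-dimensional subvariety for r = 0) and the
% coordinate function t in k(P^1)^x; its divisor is
\[
  \operatorname{div}_{\mathbb{P}^1}(t) = [0] - [\infty],
  \qquad\text{so } [0] \sim [\infty].
\]
% Hence every degree-zero 0-cycle is rationally equivalent to zero, and
\[
  A_0(\mathbb{P}^1) = Z_0(\mathbb{P}^1)\big/ Z_0(\mathbb{P}^1)_{\mathrm{rat}}
  \cong \mathbb{Z},
\]
% generated by the class of a single point.
%
% Proper pushforward of a point: if f : C -> C' is a finite morphism of curves,
% x a closed point of C and y = f(x), then by the definition above
\[
  f_*([x]) = [k(x) : k(y)]\,[y].
\]
```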
**LAT2** LAT2: Linker for activation of T-cells family member 2 is a protein that in humans is encoded by the LAT2 gene. This gene is one of the contiguous genes at 7q11.23 commonly deleted in Williams syndrome, a multisystem developmental disorder. This gene consists of at least 14 exons, and its alternative splicing generates 3 transcript variants, all encoding the same protein.
**X-Men in other media** X-Men in other media: The X-Men are a fictional superhero team created by Marvel Comics that appear in comic books and other forms of media. Television: Animation 1960s The X-Men made their first animated appearance on The Marvel Super Heroes TV series in 1966, with Professor X commanding the original X-Men line-up of Cyclops, Beast, Marvel Girl, Angel, and Iceman. In this episode the X-Men are not referred to as the X-Men but rather as the Allies for Peace. Television: 1980s The X-Men guest-starred in several episodes of Spider-Man and His Amazing Friends, starting with a flashback in "The Origin of Iceman". X-Men member Sunfire appeared in a later episode teaming up with the Amazing Friends. The X-Men's next appearance was in "A Firestar is Born", which included appearances from Professor X, Storm, Angel, Cyclops, Wolverine and Juggernaut. The X-Men returned the following season in "The X-Men Adventure", with appearances from Professor X, Cyclops, Kitty Pryde (as Sprite), Storm, Nightcrawler, Colossus and Thunderbird. Television: In 1989, Marvel Productions produced a half-hour X-Men pilot episode titled X-Men: Pryde of the X-Men. It related the story of Kitty Pryde's first adventure with Professor X, Cyclops, Storm, Wolverine, Colossus, Nightcrawler, and Dazzler as they fought against Magneto, the White Queen, Juggernaut, the Blob, Pyro and Toad. The series was never picked up, but the single episode aired infrequently in syndication during the Marvel Action Universe series and was released on video in 1990. Television: 1990s In 1992, Fox, 20th Century Fox Television, and Fox Kids launched an X-Men animated series with the roster of Cyclops, Wolverine, Rogue, Storm, Beast, Gambit, Jubilee, Jean Grey and Professor X, with Morph making occasional appearances. The two-part pilot episode, "Night of the Sentinels", began a five-season series, which ended in 1997. The X-Men guest-starred on Spider-Man in the episodes "The Mutant Agenda" and "Mutants Revenge", in which Spider-Man seeks Professor X's help. Storm would later guest-star in the Secret Wars arc. In 1995, Cyclops, Jean Grey, Gambit, Wolverine, Storm, and Juggernaut, along with the Scarlet Spider, made cameos in the Fantastic Four series, in "Nightmare in Green". Television: 2000s In 2000, The WB Network launched X-Men: Evolution, which portrayed many of the X-Men as teenagers attending a regular public high school while training to control their powers at Xavier's School for Gifted Youngsters. The series ended in 2003 after its fourth season. The main cast of young X-Men comprised Cyclops, Jean Grey, Spyke, Rogue, Kitty Pryde (as Shadowcat), and Nightcrawler. Their adult mutant mentors included Professor X, Storm, Wolverine, and Beast. The series also featured the New Mutants starting in the second season, consisting of Boom Boom, Sunspot, Iceman, Wolfsbane, Magma, Multiple, Jubilee, Berzerker and Cannonball. Angel also makes appearances. Colossus, X-23 and Gambit appear as villains in this incarnation. The series was released by Warner Bros. Television Studios instead of being released by 20th Century Fox Television. Television: In 2003, the X-Men and mutant-kind were mentioned in an episode of the short-lived CGI series Spider-Man: The New Animated Series, "The Party". Peter Parker is quoted as saying, "I bet the X-Men get to go to parties." Soon after, he is ambushed by a group of police officers, one of them calling him a "mutant freak". Wolverine and the X-Men debuted in the United States on January 23, 2009.
It featured Wolverine, Emma Frost, Cyclops, Beast, Storm, Kitty Pryde (as Shadowcat), Iceman, Rogue, Nightcrawler, Angel/Archangel, Jean Grey and Professor X. The show was cancelled after just one season. Television: The X-Men appeared on Cartoon Network's The Super Hero Squad Show. Unlike their comic book counterparts, mutants are not discriminated against in Super Hero City, resulting in Professor X opening "Mutant High", where his students are peacefully tutored by the Professor while helping other heroes defend the city from the villains of Villainville, led by Doctor Doom. The X-Men are featured heavily in the episode "Mysterious Mayhem at Mutant High!" Wolverine and later Scarlet Witch both appear as main members of the titular "Super Hero Squad", while the series also includes appearances from Cyclops, Iceman, Jean Grey, Kitty Pryde, Lockheed, Colossus, Storm, Professor X, X-23, and Firestar. Television: 2010s As part of a four-series collaboration between the Japanese animation house Madhouse and Marvel, the X-Men and Wolverine each starred in a separate 12-episode anime series that premiered in Japan on Animax and in the United States on G4 in 2011. The X-Men series deals with the X-Men coming to Japan to investigate the disappearance of Armor. The antagonists are the U-Men. It featured Cyclops, Wolverine, Storm, Beast, Emma Frost, Armor and Charles Xavier, as well as frequent flashbacks with Jean Grey. Other X-Men like Colossus and Rogue made cameo appearances in the finale. Television: 2020s On November 12, 2021, Marvel announced a revival of the 1992–1997 animated series titled X-Men '97, produced by Marvel Studios and set to be released in 2023 on Disney+. Several cast members from the original animated series are set to reprise their roles along with new cast members. Beau DeMayo was announced as the head writer and executive producer for the upcoming series, with director Larry Houston and original-series showrunners and producers Eric and Julia Lewald serving as consultants. Television: Live action In 1996, the TV movie Generation X aired on the Fox Network. Initially a television pilot, it was later broadcast as a television film. It is based on the Marvel comic book series Generation X. The film featured Banshee and Emma Frost as the headmasters of Xavier's School for Gifted Youngsters, along with M, Skin, Mondo and Jubilee. The team battled a mad scientist who used a machine to develop psychic powers. Television: In October 2015, 20th Century Fox Television announced that FX had ordered a pilot titled Legion. The series tells the story of David Haller, who is diagnosed as schizophrenic, but following a strange encounter is confronted with the possibility that the voices he hears and the visions he sees might be real. The first season premiered in February 2017, and the series ended with its third season. Television: The Gifted is a 20th Century Fox Television series that focuses on two parents who discover their children possess mutant powers. Forced to go on the run from a hostile government, the family joins up with an underground network of mutants and must fight to survive. While the X-Men have been disbanded in the series, the underground network of mutants features comic regulars Blink, Polaris, Thunderbird and the Stepford Cuckoos. Fox ended the series after two seasons. Motion comics: Marvel produced motion comics based on Astonishing X-Men, releasing them on Hulu, iTunes, the PlayStation Store and other video services.
These animated episodes were released on DVD through Shout! Factory. It has been announced that Marvel Knights Animation will continue animating Joss Whedon and John Cassaday's run, starting with the second storyline of the series, Astonishing X-Men: Dangerous. The titles in the series include: Astonishing X-Men: Gifted (2009), Astonishing X-Men: Dangerous (April 2012), Astonishing X-Men: Torn (August 2012), and Astonishing X-Men: Unstoppable (November 2012). Film: X-Men: Darktide (2006) In 2006, Minimates released a short animated brickfilm, X-Men: Darktide, on DVD with a box set of figures. The story involved the X-Men battling the Brotherhood at an oil rig. The team consists of Cyclops, Jean Grey, Archangel, Wolverine, the Beast, Xavier and Storm. The Brotherhood team is Mystique, Magneto and Juggernaut. 20th Century Fox franchise From 2000 to 2020, 20th Century Fox released thirteen superhero films as part of the X-Men film series. Film: The first three films focus on the conflict between Professor Xavier and Magneto, who have opposing views on humanity's relationship with mutants. While Xavier believes humanity and mutants can coexist, Magneto believes a war is coming, which he intends to fight and win. The Bryan Singer-directed X-Men was released on July 14, 2000, with the team roster of Professor X (Patrick Stewart), Cyclops (James Marsden), Wolverine (Hugh Jackman), Storm (Halle Berry) and Jean Grey (Famke Janssen). Singer returned for the sequel X2, released on May 2, 2003, with Rogue (Anna Paquin), Iceman (Shawn Ashmore), and Nightcrawler (Alan Cumming) joining the team. Singer was replaced by Brett Ratner for X-Men: The Last Stand, released on May 26, 2006, with Beast (Kelsey Grammer), Angel (Ben Foster), Shadowcat (Elliot Page) and Colossus (Daniel Cudmore) joining. Critics praised Singer's films for their dark, realistic tone and their focus on prejudice as a subtext. Although Ratner's film was met with mixed reviews, it out-grossed both of its predecessors. A sequel tetralogy that served as a prequel to the original trilogy started with X-Men: First Class. Following a young Professor X (James McAvoy), Magneto (Michael Fassbender), Beast (Nicholas Hoult), Mystique (Jennifer Lawrence), Havok (Lucas Till) and Banshee (Caleb Landry Jones) as the original team, the film was directed by Matthew Vaughn and released on June 3, 2011. X-Men: Days of Future Past, a sequel to both the original trilogy and X-Men: First Class, with Singer returning to direct, was released on May 23, 2014. The film centered around the original trilogy members using time travel to gain help from their younger counterparts of the prequel tetralogy. X-Men: Apocalypse was released on May 27, 2016, with Mystique leading the team of Beast, Quicksilver (Evan Peters), Storm (Alexandra Shipp), Nightcrawler (Kodi Smit-McPhee), Cyclops (Tye Sheridan) and Jean Grey (Sophie Turner). The tetralogy concluded with a fourth film, Dark Phoenix, written and directed by Simon Kinberg and released on June 7, 2019, which featured the same roster as Apocalypse. Three spin-off films focusing on Wolverine were also released: X-Men Origins: Wolverine, an origin story of Wolverine directed by Gavin Hood, was released on May 1, 2009, followed by The Wolverine, directed by James Mangold and set in Japan, released on July 26, 2013. The series concluded with Logan, once again directed by Mangold and released on March 3, 2017; the film was set in 2029. Two further spin-offs centering around Deadpool were released in 2016 and 2018.
Deadpool, which features Colossus (played by Andre Tricoteux and voiced by Stefan Kapičić) and his X-Men trainee Negasonic Teenage Warhead (Brianna Hildebrand), was released on February 12, 2016, while Deadpool 2 was released on May 18, 2018, with returning X-Men members Colossus and Negasonic Teenage Warhead and new member Yukio (Shiori Kutsuna) helping Deadpool (Ryan Reynolds) as an X-Men trainee. The film also features cameo appearances by Professor X, Cyclops, Quicksilver, Storm, Nightcrawler and Beast from the sequel tetralogy. An adaptation of X-Force was also in development at 20th Century Fox, with Jeff Wadlow writing and Drew Goddard directing. Another spin-off and the final film of the 20th Century Fox franchise, The New Mutants, was released on August 28, 2020, directed by Josh Boone, who also co-wrote the screenplay with Knate Gwaltney. Film: Marvel Cinematic Universe Marvel Studios launched the Marvel Cinematic Universe (MCU) in 2008, focused on the Avengers and their related characters, whose film rights they still owned. Marvel was then bought by Disney in 2009, but could not use the X-Men or other mutants, as their film rights still resided with 20th Century Fox. However, an alternate version of the post-credits scene in Iron Man (2008) had Nick Fury specifically mention "assorted mutants" in regard to the larger universe he and Tony Stark were a part of. Quicksilver and the Scarlet Witch were an odd case, as they had strong ties with both the Avengers and the X-Men. The studios negotiated a deal so that they could share the characters' film rights, on the stipulation that Marvel Studios would be unable to make reference to their background as mutants or as Magneto's children, and that 20th Century Fox could not allude to their history as Avengers members. While Pietro Maximoff / Quicksilver would only appear in two MCU films, Captain America: The Winter Soldier (2014) and Avengers: Age of Ultron (2015), Wanda Maximoff / Scarlet Witch would go on to become a semi-regular character, appearing in six films in the franchise in addition to headlining her own eponymously titled television series. On December 14, 2017, Disney announced its intent to acquire 21st Century Fox's film and television studios, which would thereby result in the film rights to the X-Men and associated characters reverting to Marvel Studios. Disney CEO Bob Iger later confirmed that the X-Men would be integrated into the MCU alongside the Fantastic Four, Silver Surfer and Deadpool. The acquisition was completed on March 20, 2019. Film: On July 20, 2019, during San Diego Comic-Con, Marvel Studios head Kevin Feige announced that a film centered on mutants, set in the Marvel Cinematic Universe, is in development. When asked if the film will be X-Men-titled, Feige said that the terms "X-Men" and "Mutants" are interchangeable, and said that the MCU's take on the franchise will differ from 20th Century Fox's. After the deal, Charles Xavier / Professor X became the first mutant character to appear in the MCU. He appeared in the film Doctor Strange in the Multiverse of Madness (2022) as an alternate version of Professor X from Earth-838 and as the leader of that universe's Illuminati, who alongside its other members meets Doctor Strange and puts him on trial for his travel through the Multiverse. He is later killed by the Scarlet Witch of Earth-616 while attempting to rescue her Earth-838 counterpart. Later that year, the MCU streaming series Ms.
Marvel also made reference to the X-Men; in the series finale "No Normal", Kamala Khan is told by her friend Bruno that her genetics have a "mutation", underscored by an excerpt of the theme music from the 1992 X-Men series. In the She-Hulk: Attorney at Law episode "Superhuman Law", a news article on a blog site browsed by Jennifer Walters alludes to Wolverine, mentioning a man who fought in a bar brawl with metal claws. Additionally, the end credits of the episode "Mean, Green, and Straight Poured into These Jeans" depict a graphic of Augustus Pugliese showing off his sneaker collection to Nikki Ramos, one pair of which is directly inspired by the color scheme of Wolverine's classic costume, and the team is mentioned by Jennifer in the season finale "Whose Show Is This?". Film: Deadpool 3 has begun filming as the first entry in Phase Six, with Shawn Levy directing and Reynolds reprising his role. Hugh Jackman, Kapičić, Hildebrand, and Kutsuna are set to reprise their roles as Wolverine, Colossus, Negasonic Teenage Warhead, and Yukio, respectively. The film is set for release on November 8, 2024. Video games: Early X-Men games The first X-Men video game, titled The Uncanny X-Men, was released by LJN for the Nintendo Entertainment System. That same year (1989) a computer game called X-Men: Madness in Murderworld was released. Another title, X-Men II: The Fall of the Mutants, was released the year after. Konami created an X-Men arcade game in 1992, which featured six playable X-Men characters: Colossus, Cyclops, Dazzler, Nightcrawler, Storm and Wolverine. In 1992, the X-Men teamed with Spider-Man for Spider-Man and the X-Men in Arcade's Revenge, released for the Super NES, Genesis, Game Gear and Game Boy. The following years saw the games X-Men: Gamesmaster's Legacy and X-Men: Mojo World released for the Sega Game Gear. The X-Men made a few appearances in Spider-Man 2: Enter Electro: Professor X and Rogue run a Danger Room simulation for the player to train in, and Beast appears in the first level to demonstrate the controller functions to the player. In the 1990s, Sega released two X-Men video games for its Genesis: X-Men and X-Men 2: Clone Wars. Wolverine starred in a solo game in 1994 for both the Super NES and Genesis titled Wolverine: Adamantium Rage. That same year, the X-Men appeared in the X-Men: Mutant Apocalypse game for the Super NES. X2: Wolverine's Revenge was a stealth-action game for the sixth generation of video game consoles starring Wolverine as the only playable character. It was released on April 14, 2003, produced by Marvel Games, Gene Pool and Activision, and released by 20th Century Fox on PlayStation 2, GameCube, Xbox, Microsoft Windows and Game Boy Advance. Video games: Fighting games The X-Men are featured in many 2-D and 3-D fighting games. Video games: In order of release: X-Men: Children of the Atom (Capcom, 1994) Marvel Super Heroes (Capcom, 1995) X-Men vs. Street Fighter (Capcom, 1996) Marvel Super Heroes vs. Street Fighter (Capcom, 1997) Marvel vs. Capcom (Capcom, 1998) X-Men: Mutant Academy (Activision/Paradox Development, 2000) Marvel vs. Capcom 2: New Age of Heroes (Capcom, 2000) X-Men: Mutant Academy 2 (Activision/Paradox Development, 2001) X-Men: Next Dimension (Activision/Paradox Development, 2002) Marvel vs. Capcom 3: Fate of Two Worlds (Capcom, 2011) Ultimate Marvel vs.
Capcom 3 (Capcom, 2011) Marvel: Contest of Champions (Kabam, 2014) Marvel: Future Fight (Netmarble, 2015) Film-based games To coincide with the release of the third film, 20th Century Fox, Activision, and Marvel Games released X-Men: The Official Game, which filled in gaps between the two Fox X-Men films X2: X-Men United and X-Men: The Last Stand, such as explaining Nightcrawler's absence. Video games: X-Men Legends and Marvel: Ultimate Alliance X-Men Legends and its sequel X-Men Legends II: Rise of Apocalypse are games that featured multiple X-Men as playable characters. Every installment of Marvel: Ultimate Alliance has featured X-Men among its numerous playable characters: Deadpool, Iceman, Storm, and Wolverine are playable in the major Marvel video game Marvel: Ultimate Alliance. Colossus is playable on the Xbox 360, Wii and PS3 versions of the game, and Jean Grey is playable on the GBA version. Cyclops, Jean Grey, Nightcrawler, Professor X, and Psylocke appear as NPCs on all versions, while Beast, Forge, Karma and Dr. Moira MacTaggert are mentioned by different characters. In addition, during a cut-scene, Beast, Colossus, Cyclops, Gambit, Magneto, Professor Xavier, Psylocke, and Shadowcat are seen defeated by Doctor Doom alongside the Hulk. Xbox 360 owners were later able to download eight new playable characters for the game, including X-Men heroes and villains: Cyclops, Magneto, Nightcrawler and Sabretooth. In Marvel: Ultimate Alliance 2, Wolverine, Deadpool, Iceman, Storm, Gambit, and Jean Grey are featured as playable characters, while Cyclops and Psylocke are exclusive to the PS2, PSP and Wii versions and Colossus appears as an NPC. In the briefing that follows the Wakanda incident, Captain America and Iron Man mention that the other X-Men members have been absorbed into The Fold. Psylocke, Cable, Magneto and the Juggernaut were later added as downloadable characters for Marvel: Ultimate Alliance 2. Wolverine, Storm, Nightcrawler, Psylocke, Deadpool, and Magneto appear as playable characters in Marvel Ultimate Alliance 3: The Black Order, while Mystique and Juggernaut appear as bosses. Cyclops, Colossus, Beast, and Professor X appear on a portrait in the X-Mansion when Magneto attacks it in the X-Men trailer; the former two are playable DLC characters, while the other two also appear as non-playable helper characters. Books: Science of the X-Men, by Linc Yaco and Karen Haber, explains how different superpowers would work and how such abilities would affect the people who have them. The mutants featured include Quicksilver, Wolverine, Shadowcat, and Nightcrawler. Several X-Men novels have been published.
**Uranium–thorium dating** Uranium–thorium dating: Uranium–thorium dating, also called thorium-230 dating, uranium-series disequilibrium dating or uranium-series dating, is a radiometric dating technique established in the 1960s which has been used since the 1970s to determine the age of calcium carbonate materials such as speleothem or coral. Unlike other commonly used radiometric dating techniques such as rubidium–strontium or uranium–lead dating, the uranium–thorium technique does not measure accumulation of a stable end-member decay product. Instead, it calculates an age from the degree to which secular equilibrium has been restored between the radioactive isotope thorium-230 and its radioactive parent uranium-234 within a sample. Background: Thorium is not soluble in natural water under conditions found at or near the surface of the earth, so materials grown in or from this water do not usually contain thorium. In contrast, uranium is soluble to some extent in all natural water, so any material that precipitates or is grown from such water also contains trace uranium, typically at levels of between a few parts per billion and a few parts per million by weight. As time passes after such material has formed, uranium-234 in the sample, with a half-life of 245,000 years, decays to thorium-230. Thorium-230 is itself radioactive, with a half-life of 75,000 years, so instead of accumulating indefinitely (as is, for instance, the case for the uranium–lead system), thorium-230 approaches secular equilibrium with its radioactive parent uranium-234. At secular equilibrium, the number of thorium-230 decays per year within a sample is equal to the number of thorium-230 atoms produced, which also equals the number of uranium-234 decays per year in the same sample. History: In 1908, John Joly, a professor of geology at Trinity College Dublin, found higher radium contents in deep sediments than in those of the continental shelf, and suspected that detrital sediments scavenged radium out of seawater. Piggot and Urry found in 1942 that radium excess corresponded with an excess of thorium. It took another 20 years until the technique was applied to terrestrial carbonates (speleothems and travertines). In the late 1980s, the method was refined by mass spectrometry. After Viktor Viktorovich Cherdyntsev's landmark book about uranium-234 had been translated into English, U-Th dating came to widespread research attention in Western geology. Methods: U-series dating is a family of methods which can be applied to different materials over different time ranges. Each method is named after the isotopes measured to obtain the date, mostly a daughter and its parent. Eight such methods exist. The 234U/238U method is based on the fact that 234U is dissolved preferentially over 238U because, when a 238U atom decays by emitting an alpha particle, the daughter atom is displaced from its normal position in the crystal by atomic recoil. This produces a 234Th atom which quickly becomes a 234U atom. Once the uranium is deposited, the ratio of 234U to 238U goes back down to its secular equilibrium value (at which the radioactivities of the two are equal), with the distance from equilibrium decreasing by a factor of 2 every 245,000 years.
Methods: A material balance gives, for some unknown constant A, these expressions for the activity ratios (with square brackets denoting activity ratios relative to 238U, and assuming that the 230Th starts at zero): [234U/238U] = 1 + A·2^(−t/245000) and [230Th/238U] = 1 − 2^(−t/75000) + (245000/(245000 − 75000))·A·(2^(−t/245000) − 2^(−t/75000)). We can solve the first equation for A in terms of the unknown age t: A = ([234U/238U] − 1)·2^(t/245000). Putting this into the second equation gives an equation to be solved for t: [230Th/238U] = 1 − 2^(−t/75000) + (245000/(245000 − 75000))·([234U/238U] − 1)·(1 − 2^(t/245000 − t/75000)). Unfortunately there is no closed-form expression for the age t, but it is easily found using equation-solving algorithms. Dating limits: Uranium–thorium dating has an upper age limit of somewhat over 500,000 years, defined by the half-life of thorium-230, the precision with which one can measure the thorium-230/uranium-234 ratio in a sample, and the accuracy to which one knows the half-lives of thorium-230 and uranium-234. Using this technique to calculate an age, the ratio of uranium-234 to its parent isotope uranium-238 must also be measured. Precision: U-Th dating yields the most accurate results if applied to precipitated calcium carbonate, that is in stalagmites, travertines, and lacustrine limestones. Bone and shell are less reliable. Mass spectrometry can achieve a precision of ±1%. Conventional alpha counting's precision is ±5%. Mass spectrometry also uses smaller samples.
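Since, as noted above, there is no closed-form expression for t, the age equation can be solved numerically. The sketch below is illustrative only, not a published laboratory procedure: it uses the base-2 form of the equations given above, the measured activity ratios are invented, and the bisection root-finder is a generic choice.

```python
# Numerical solution of the U-Th age equation (illustrative sketch).
# Activity ratios are relative to 238U; half-lives in years (234U: 245,000; 230Th: 75,000).

T234, T230 = 245_000.0, 75_000.0

def th230_u238_predicted(t, u234_u238_measured):
    """Predicted [230Th/238U] at age t, given the measured [234U/238U] today."""
    A = (u234_u238_measured - 1.0) * 2.0 ** (t / T234)   # initial 234U excess
    return (1.0 - 2.0 ** (-t / T230)
            + (T234 / (T234 - T230)) * A
              * (2.0 ** (-t / T234) - 2.0 ** (-t / T230)))

def uth_age(th230_u238_measured, u234_u238_measured, t_max=500_000.0):
    """Bisection on t in [0, t_max]; the predicted ratio increases with age."""
    lo, hi = 0.0, t_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if th230_u238_predicted(mid, u234_u238_measured) < th230_u238_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Hypothetical measured activity ratios for a carbonate sample:
    age = uth_age(th230_u238_measured=0.65, u234_u238_measured=1.10)
    print(f"U-Th age ≈ {age:,.0f} years")   # roughly 95,000 years for these inputs
```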
**Twister ribozyme** Twister ribozyme: The twister ribozyme is a catalytic RNA structure capable of self-cleavage. The nucleolytic activity of this ribozyme has been demonstrated both in vivo and in vitro, and it has one of the fastest catalytic rates of naturally occurring ribozymes with similar function. The twister ribozyme is considered to be a member of the small self-cleaving ribozyme family, which includes the hammerhead, hairpin, hepatitis delta virus (HDV), Varkud satellite (VS), and glmS ribozymes. Discovery: In contrast to in vitro selection methods, which have aided in identifying several classes of catalytic RNA motifs, the twister ribozyme was discovered by a bioinformatics approach as a conserved RNA structure of unknown function. The hypothesis that it functions as a self-cleaving ribozyme was suggested by the similarity between genes nearby to twister ribozymes and genes nearby to hammerhead ribozymes; indeed, the genes located nearby to these two self-cleaving ribozyme classes overlap significantly. Researchers named the newly found motif 'twister' because of its resemblance to the Egyptian hieroglyph 'twisted flax'. Structure: The basic structure of the Oryza sativa twister ribozyme was crystallographically determined at atomic resolution in 2014. The active site of the twister ribozyme is centered in a double pseudoknot, facilitating a compact fold through two long-range tertiary interactions, in partnership with a helical junction. Magnesium is important for secondary structure stabilization of the ribozyme. Catalytic Mechanism: Similar to other nucleolytic ribozymes, the twister ribozyme selectively cleaves phosphodiester bonds, through an SN2-related mechanism, into a 2',3'-cyclic phosphate and a 5'-hydroxyl product. Both experimental and modelling evidence support a concerted general acid-base catalysis involving highly conserved adenine (A1) and guanine (G33) bases, where N3 of A1 acts as the proton donor and G33 as the general base. The twister ribozyme generates catalytic activity by specifically orienting the to-be-cleaved P–O bond for in-line nucleophilic attack within the active site. Currently, it is known that the rate of reaction of the twister ribozyme depends on both pH and temperature. Replacement of the pro-S non-bridging oxygen of the scissile phosphate with sulfur leads to reduced self-cleavage rates, suggesting that the mechanism is not reliant on bound magnesium. Rescue of the thio-substituted derivative by cadmium cations indicates that divalent metal ions play a role in rate enhancement. A likely mechanism for this is stabilization of the transition state by reducing electrostatic strain on the substrate strand from the growing negative charge during cleavage. Prevalence in Nature: The twister ribozyme motif is relatively common in nature, with 2,700 examples observed across bacteria, fungi, plants, and animals. As with hammerhead ribozymes, some eukaryotes contain large numbers of twister ribozymes. In the most extreme known example, there are 1,051 predicted twister ribozymes in Schistosoma mansoni, an organism that also contains many hammerhead ribozymes. In bacteria, twister ribozymes occur near gene classes that are also commonly associated with bacterial hammerhead ribozymes. Currently, there is no understood biological function associated with the twister ribozyme.