Minmi was an armored herbivore (plant-eater) that lived in Australia during the early Cretaceous period, between about 119 and 113 million years ago. Minmi was about 6½ feet (2 meters) long, and stood about 3 feet (1 meter) tall at the shoulder. Its body, head and tail were protected by a variety of types of bony armor. Its hind legs were longer than its front limbs, and from fossilized tracks, scientists believe that Minmi was relatively slow-moving. Australian Dinosaurs - Minmi lived in Australia. Minmi was a genus of dinosaur. "Minmi" is named after the Minmi Crossing in Australia. Minmi was a member of the Ornithischia ("bird-hipped") order of dinosaurs. This means that although Minmi was not closely related to birds, it did have similarly shaped pelvic bones. Minmi lived between about 119 million years ago and 113 million years ago, during the Cretaceous period. Minmi was a herbivore (plant-eater). Minmi was about 6½ feet (2 meters) long, and 3 feet (1 meter) tall. Minmi weighed about 500 pounds (225 kilograms). Minmi was protected by several types of bony armor covering its head and body, and along the tail.
Around the year 1850 neo-Gothicism was maturing and increasingly became an almost exclusively Roman Catholic style. For the next four decades only a few Catholic churches were designed in other styles, until at the end of the century neo-Romanesque became more important. The Protestants, on the other hand, started to look for a style of their own, from then on rarely building in a pure neo-Gothic style, although elements of Gothicism continued to be used in combination with other styles. First there was Eclecticism, a style in which elements of various styles were mixed, of which Romanesque was the most prominent. The reformed church in Gorinchem (ZH) from 1849-1851 is a typical example. Here neo-Classicism is mixed with elements taken from Romanesque architecture. The contrast with the medieval Gothic tower could hardly be bigger. Even more eclectic is the Keizersgrachtkerk in Amsterdam (NH), built in 1888-1890 for the Gereformeerden, a split-off branch of the reformed church. The church, which was designed by G.B. and A. Salm, shows influences of early Gothicism and Venetian Renaissance. After ca. 1875 this Eclecticism gradually vanished from secular architecture, where it was replaced by neo-Renaissance; another eclectic style, but this time based on Mannerism, the indigenous variant of the Renaissance of the late 16th and early 17th centuries. Neo-Renaissance was favoured by the government as well, as it referred to what was then believed to be the most glorious moment in the history of the Netherlands: the war against Spain and the foundation of the Protestant Republic. The style was soon adopted for Protestant churches of all denominations until it was replaced by Rationalism at the end of the century. The reformed Nieuwe Kerk in Scheveningen (ZH) is a classic example of a neo-Renaissance church, mostly built out of brick and with many details taken from Mannerist architecture. The tower, for instance, was inspired by the work of Hendrick de Keyser.
It dates from 1893 and was designed by Roelof Kuipers, brother of Tjeerd Kuipers, who also designed his first churches in this style. Protestantism was divided into several different denominations, but neo-Renaissance became popular with most of them. The Remonstrant church in Haarlem (NH) is only one example. In several ways it is very similar to many houses from this period. Despite the preference for neo-Gothicism, a few Catholic architects did design in other styles. Even P.J.H. Cuypers did: he designed the basilica in Oudenbosch (NB), a scaled-down copy of St. Peter's in Rome, built in 1867-1880. Of more architectural importance, however, is the St. Nicolaas in Amsterdam (NH). Its architect, A.C. Bleys, actually used the 'Protestant' neo-Renaissance style for some of his churches, although mixed with Baroque elements, which is why his style is often referred to as neo-Baroque. Another style that was of some importance to Catholic architecture was neo-Romanogothicism, based on the late-Romanesque architecture of the Rhineland. The first church in this style was the O.L. Vrouw Geboorte in Ohé en Laak (L), designed by A.C. Bolsius and dating from 1865-1867. In the 1880s C. Weber designed several churches in a similar but much more refined style.
These three words succinctly describe our core business. At UtiliWorks, we pride ourselves on being a full-service consulting firm, attuned to the needs of today’s utilities. We provide thorough consultation and recommendations, based solely on the unique needs of your utility. UtiliWorks Consulting is a professional services advisory firm that specializes in smart utilities and smart city initiatives. Founded in 2005, we have worked with over 85 clients across the United States and abroad. Together with our clients, UtiliWorks advances business and technology solutions that strategically enhance operations for utilities and their cities. Our people, processes, and analysis tools work in conjunction to lower costs, reduce risk, and ensure benefits capture for each technology implementation. The UtiliWorks Advantage™ is the guiding principle behind the success of our work and a proven methodology for assessing and delivering technology projects for our clients. Over the past 15 years, UtiliWorks has worked across the United States and abroad, providing benefits for our clients regardless of their geography or topology. Our recommendations are informed by a detailed review of your service area and unique characteristics.
How to get rid of duplicate values in an array in Perl 6? Basically we show various ways one can take a list of values and return a sublist of the same values after eliminating the duplicates. Specifically, there is a related article: Unique values in an array in Perl 5. With Perl 6 it's quite easy to eliminate duplicate values from a list, as there is a built-in called unique that will do the job. If you long for the behavior of the Unix uniq command, which only eliminates consecutive duplicates, then you might be happy to hear that Perl 6 has a function for that as well. It is called squish. (Learning Perl 6 has the advantage that you also learn new words in English :). There is, however, another slightly related tool called a Bag: the Bag method counts the number of occurrences of each element and creates a Bag out of them. One should start every Perl 6 script by asking for v6; or v6.c; version 6 of Perl. It is important in order to avoid strange error messages when someone runs it with perl 5 by mistake. You can add .perl to almost every kind of variable and get back a representation of the data in it. Very handy for debugging. say() is a built-in function in Perl 6 similar to say() in Perl 5.10, though not identical. It prints to the screen, appending a newline at the end. qw// returns the individual string values from a space-separated list of items. In Perl 5 the qw() operator was used for this. ... and some people will prefer to add the .say method call at the end of the expression, though I think this isn't as clear as having the say at the beginning.
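For readers who don't know Perl 6, the three tools can be sketched with rough Python analogues (these are Python's own idioms standing in for Perl 6's unique, squish and .Bag, not the Perl 6 code itself):

```python
from itertools import groupby
from collections import Counter

values = ["a", "b", "a", "a", "c", "b", "b", "a"]

# Rough analogue of Perl 6's unique: drop duplicates, keep first occurrences in order.
uniq = list(dict.fromkeys(values))
print(uniq)       # ['a', 'b', 'c']

# Rough analogue of Perl 6's squish (like Unix uniq): collapse only consecutive duplicates.
squished = [key for key, _group in groupby(values)]
print(squished)   # ['a', 'b', 'a', 'c', 'b', 'a']

# Rough analogue of a Perl 6 Bag: count the occurrences of each element.
bag = Counter(values)
print(bag["a"], bag["b"], bag["c"])   # 4 3 1
```

Note how groupby, like squish and Unix uniq, leaves the second run of "a" and "b" in place because only adjacent duplicates are collapsed.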
Write a function coin_flip that accepts as its parameter an input file name. Assume that the input file data represents results of sets of coin flips that are either heads (H) or tails (T) in either upper or lower case, separated by at least one space. Your function should consider each line to be a separate set of coin flips and should output to the console the number of heads and the percentage of heads in that line, rounded to the nearest tenth. If this percentage is more than 50%, you should print a "You win" message. For example, consider the following input file:

H T H H T
T t t T h H
h

For the input above, your function should produce the following output:

3 heads (60.0%)
You win!
2 heads (33.3%)
1 heads (100.0%)
You win!

The format of your output must exactly match that shown above. You may assume that the input file contains at least 1 line of input, that each line contains at least one token, and that no tokens other than h, H, t, or T will be in the lines.
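The exercise does not fix a language; a minimal Python sketch that satisfies the spec might look like the following (the name coin_flip comes from the prompt; everything else is an implementation choice):

```python
def coin_flip(filename):
    """Print the heads count and heads percentage for each line of coin flips."""
    with open(filename) as f:
        for line in f:
            tokens = line.split()
            if not tokens:  # spec guarantees a token per line; skip blanks defensively
                continue
            heads = sum(1 for t in tokens if t.lower() == "h")
            pct = round(100 * heads / len(tokens), 1)
            print(f"{heads} heads ({pct}%)")
            if heads * 2 > len(tokens):  # strictly more than 50% heads
                print("You win!")
```

Comparing heads * 2 > len(tokens) rather than the rounded percentage keeps the "more than 50%" test exact even when rounding lands on 50.0. Run on the sample file above, this reproduces the sample output.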
An example of an image macro, once a common type of internet meme. An Internet meme, commonly known as just a meme (/miːm/ MEEM), is an activity, concept, catchphrase, or piece of media that spreads, often as mimicry or for humorous purposes, from person to person via the Internet. An Internet meme usually takes the form of an image (traditionally an image macro), GIF or video. It may be just a word or phrase, sometimes including intentional misspellings (such as in lolcats) or corrupted grammar (such as in doge and all your base are belong to us). These small movements tend to spread from person to person via social networks, blogs, direct email, or news sources. They may relate to various existing Internet cultures or subcultures, often created or spread on various websites. Fads and sensations tend to grow rapidly on the Internet because the instant communication facilitates word-of-mouth transmission. Some examples include posting a photo of people lying down in public places (called "planking") and uploading a short video of people dancing to the Harlem Shake. The word meme was coined by Richard Dawkins in his 1976 book The Selfish Gene as an attempt to explain the way cultural information spreads; the concept of the Internet meme was first proposed by Mike Godwin in the June 1993 issue of Wired. In 2013, Dawkins characterized an Internet meme as being a meme deliberately altered by human creativity—distinguished from biological genes and his own pre-Internet concept of a meme, which involved mutation by random change and spreading through accurate replication as in Darwinian selection. Dawkins explained that Internet memes are thus a "hijacking of the original idea", the very idea of a meme having mutated and evolved in this new direction. 
Furthermore, Internet memes carry an additional property that ordinary memes do not: Internet memes leave a footprint in the media through which they propagate (for example, social networks) that renders them traceable and analyzable. In the early days of the Internet, memes were primarily spread via email or Usenet discussion communities. Messageboards and newsgroups were also popular because they allowed a simple method for people to share information or memes with a diverse population of internet users in a short period. They encourage communication between people, and thus between meme sets, that do not normally come in contact. Furthermore, they actively promote meme-sharing within the messageboard or newsgroup population by asking for feedback, comments, opinions, etc. This format is what gave rise to early internet memes, like the Hampster Dance. Another factor in the increased meme transmission observed over the internet is its interactive nature. Print matter, radio, and television are all essentially passive experiences requiring the reader, listener, or viewer to perform all necessary cognitive processing; in contrast, the social nature of the Internet allows phenomena to propagate more readily. Many phenomena are also spread via web search engines, internet forums, social networking services, social news sites, and video hosting services. Much of the Internet's ability to spread information is assisted by results found through search engines, which can allow users to find memes even with obscure information. Internet memes grew as a concept in the mid-1990s. At the time, memes were just short clips that were shared between people in Usenet forums. As the internet evolved, so did memes. When YouTube was released in 2005, video memes became popular. Around this time, rickrolling became popular and the link to this video was sent around via email or other messaging sites.
Video sharing also created memes such as "Turn Down For What" and the "Harlem Shake". As social media websites such as Twitter and Facebook started appearing, it was now easy to share GIFs and image macros to a large audience. Meme generator websites were created to let users create their own memes out of existing templates. Memes during this time could remain popular for a long time, from a few months to a decade, which contrasts with the short lifespan of modern memes. An Internet meme may stay the same or may evolve over time, by chance or through commentary, imitations, parody, or by incorporating news accounts about itself. Internet memes can evolve and spread extremely rapidly, sometimes reaching worldwide popularity within a few days. Internet memes usually are formed from some social interaction, pop culture reference, or situations people often find themselves in. Their rapid growth and impact have caught the attention of both researchers and industry. Academically, researchers model how they evolve and predict which memes will survive and spread throughout the Web. Commercially, they are used in viral marketing where they are an inexpensive form of mass advertising. One empirical approach studied meme characteristics and behavior independently from the networks in which they propagated, and reached a set of conclusions concerning successful meme propagation. For example, the study asserted that Internet memes not only compete for viewer attention, generally resulting in a shorter life, but also, through user creativity, memes can collaborate with each other and achieve greater survival. Also, paradoxically, an individual meme that experiences a popularity peak significantly higher than its average popularity is not generally expected to survive unless it is unique, whereas a meme with no such popularity peak keeps being used together with other memes and thus has greater survivability.
Multiple opposing studies on media psychology and communication have aimed to characterise and analyse the concept and its representations in order to make it accessible for academic research. Thus, Internet memes can be regarded as a unit of information which replicates via the Internet. This unit can replicate or mutate. This mutation, rather than being generational, follows a viral pattern, generally giving Internet memes a short life. Other theoretical problems with Internet memes are their behaviour, their type of change, and their teleology. Writing for The Washington Post in 2013, Dominic Basulto asserted that with the growth of the Internet and the practices of the marketing and advertising industries, memes have come to transmit fewer snippets of human culture that could survive for centuries as originally envisioned by Dawkins, and instead transmit banality at the expense of big ideas. Dank memes are a subgenre of memes usually involving meme formats but in a different way to image macros. The term "dank", which means "a cold, damp place", was later adapted by marijuana smokers to refer to high-quality marijuana, then became an ironic term for a type of meme, and also became synonymous with "cool". This term originally meant a meme that was significantly different from the norm, but is now used mainly to differentiate these modern types of memes from other, older types such as image macros. Dank memes can also refer to "exceptionally unique or odd" memes. They have been described as "internet in-jokes" that are "so played out that they become funny again" or are "so nonsensical that they are hilarious". The formats are usually from popular television shows, movies, or video games (such as SpongeBob and The Simpsons) and users then add humorous text and images over it. One example of a "dank" meme is the "Who Killed Hannibal", which is made of two frames from a 2013 episode of The Eric Andre Show.
The meme features the host Andre shooting his co-host Buress in the first frame and then blaming someone else in the second. This was then adapted to other situations, such as baby boomers blaming millennials for problems that they allegedly caused. Dank memes can also stem from interesting real-life images that get shared or remixed many times. So-called "moth" memes (often stylized with diacritics on the "o": "möth") came about after a Reddit user posted a close up picture of a moth that they had found outside their window onto the r/creepy subreddit. This image of a moth became popular, and began to be used in memes. These moth memes usually revolved around the moth wanting to find a lamp. According to Chris Grinter, a lepidopterist from the California Academy of Sciences, these memes took off because people find moths' attraction to lamps quite strange and this phenomenon is still not completely explained by science. Many modern memes stem from nonsense or otherwise unrelated phrases that are then repeated and placed onto other formats. One example of this is "they did surgery on a grape," from a video of a da Vinci Surgical System performing test surgery on a grape. People sharing the post tended to add the same caption to it ("they did surgery on a grape"), and eventually created a satirical image with several layers of captions on it. Memes such as this one continue to propagate as people start to include the phrase in different, otherwise unrelated memes. The increasing trend towards irony in meme culture has resulted in absurdist memes not unlike postmodern art. Many internet memes have several layers of meaning built off of other memes, not being understandable unless the viewer has seen all previous memes. "Deep-fried" memes, memes that have been distorted and run through several filters, are often counter-culture and strange to one not familiar with them. 
An example of these memes is the "E" meme, a picture of Markiplier photoshopped onto Lord Farquaad from the movie Shrek, photoshopped into a scene from Mark Zuckerberg's hearing in Congress. "Surreal" memes are based on the idea of increasing layers of irony so that they are not understandable by popular culture or corporations. The strange irony has been discussed in the Washington Post article "Why is millennial humor so weird?" as a disconnect from how millennials and other generations conceive of humor; the article itself also became a meme where people photoshopped examples of deep-fried and surreal memes onto the article to make fun of the point of the article and the abstraction of meme culture. After the success of the application Vine, a new format of memes was created in the form of short videos and scripted sketches. Vine, in spite of its closure in early 2017, has still retained success through uploads of viral vines onto other sharing social media sites such as Twitter and YouTube. Users on said websites will often upload Vine "compilations", sometimes relating to a theme assigned to the vines or just a collection of assorted vine videos. Public relations, advertising, and marketing professionals have embraced Internet memes as a form of viral marketing and guerrilla marketing to create marketing "buzz" for their product or service. The practice of using memes to market products or services is known as memetic marketing. Internet memes are seen as cost-effective, and because they are a (sometimes self-conscious) fad, they are therefore used as a way to create an image of awareness or trendiness. To this end, businesses have taken to attempting two methods of using memes to increase publicity and sales of their company; either creating a meme or attempting to adapt or perpetuate an existing one. Marketers, for example, use Internet memes to create interest in films that would otherwise not generate positive publicity among critics. 
The 2006 film Snakes on a Plane generated much publicity via this method. Used in the context of public relations, the term would be more of an advertising buzzword than a proper Internet meme, although there is still an implication that the interest in the content is for purposes of trivia, ephemera, or frivolity rather than straightforward advertising and news. Examples of memetic marketing include the FreeCreditReport.com singing ad campaign, the "Nope, Chuck Testa" meme from an advertisement for taxidermist Chuck Testa, Wilford Brimley saying "Diabeetus" from Liberty Medical and the Dumb Ways to Die public announcement ad campaign by Metro Trains Melbourne.
Are nail ridges in fingernails a sign of a health problem? It depends on the direction of the nail ridges. Vertical nail ridges, which are fairly common, extend from the cuticle to the tip of your nail. They often become more numerous or prominent with age, possibly due to variations in cell turnover within your nail. If your fingernails change color or you develop horizontal nail ridges across your nails, talk to your doctor. These changes could indicate an underlying health condition.
While other musicians have trouble crossing over to a different musical genre, Bob Dylan has helped define numerous musical styles that have drawn crowds on tour dates spanning half a century. During the early 60s, Dylan pioneered folk music that would smirk in the face of the establishment and become the voice of a generation. In the latter half of the 60s, Bob Dylan smirked in the face of folk fans by adopting a hard-rocking, electric musical style that added a new dimension to rock music. Since the late 80s, the musician has been playing a continuous string of concert dates on his Never Ending Tour, including tour dates in 2011. Robert Allen Zimmerman was born in Duluth, Minnesota, on May 24, 1941. During his one year at the University of Minnesota, Bob became interested in folk music and, inspired by both Woody Guthrie and Dylan Thomas, Robert Zimmerman moved to New York City and became Bob Dylan. Bob Dylan's 1962 self-titled debut wasn't commercially successful, but he would gain more notoriety during a trip to the UK later in the year. There he debuted "Blowin' in the Wind" and, upon his return to the US, became more involved in the civil rights movement with fellow musician Joan Baez. This showed in many songs on 1963's The Freewheelin' Bob Dylan, which became a musical awakening for audiences. Bob Dylan garnered even more attention through tour dates with Joan Baez and the release of The Times They Are a-Changin' in 1964. However, shortly after the album's release, Dylan became increasingly frustrated with the constraints and expectations imposed upon him by the folk scene and took a huge stylistic leap with Bringing It All Back Home in 1965. The album saw Dylan's first use of electric instruments on an album, and was followed by the use of electric guitars during a tour date at the Newport Folk Festival.
Dylan continued to develop his wild electric sound on the albums Highway 61 Revisited and Blonde on Blonde. After being injured in a motorcycle accident in 1966, Bob Dylan shrank from the public eye while continuing to release albums that experimented with country, rock, and blues. Bob Dylan experienced huge success as part of the supergroup the Traveling Wilburys, which he co-founded with George Harrison, Tom Petty, Roy Orbison, and Jeff Lynne. After two hit albums with the band, Dylan followed with the critically acclaimed Oh Mercy in 1989. His next big hit came in 1997 with Time Out of Mind, which saw Dylan nail down an acclaimed musical style and won three Grammys. With at least one album conquering each decade, Bob Dylan released Modern Times to critical and commercial acclaim in 2006. This was followed by Together Through Life in 2009 and The Witmark Demos in 2010, along with tour dates that proved the music of Bob Dylan will remain timeless. Bob Dylan's Never Ending Tour marches on with summer tour dates in 2011. Mr. Dylan's concert schedule will focus on Europe until July 14, when he will return to the United States. Bob Dylan will visit most of the country before his 2011 tour dates end on August 20 in Bangor, ME, so make sure to check ticket info on Eventful soon.
The G20 countries have committed to subjecting the global financial system to far-reaching reforms. Numerous measures have significantly increased the resilience of systemically important banks. Financial stability is a global issue, and global coordination is crucial. However, the relevant legislative and regulatory changes are largely being implemented at national level. An important link in this process is the international Financial Stability Board (FSB), which works closely with national regulators. Three main objectives are being pursued in the area of financial stability: making financial institutions more resilient, solving the too-big-to-fail issue and strengthening regulations in connection with shadow banking. A number of efforts linked to resilience are underway at the international level; these are aimed at strengthening the stability of financial institutions. The core element is the Basel III regulatory framework, which builds on the Basel II framework issued by the Basel Committee on Banking Supervision (BCBS), the committee that has established the principles of effective banking supervision. Basel III sets out stricter requirements for the risk-weighted capital of banks and supplements these with the leverage ratio, a simpler, non-risk-based measure. A third element is a set of new standards that define minimum requirements for liquidity. These measures strengthen the resilience of individual institutions and of the entire banking sector in a crisis. In a high-profile speech in November 2014, the Governor of the Bank of England, Mark Carney, estimated that the capital requirements for the majority of banks had increased seven-fold over the past seven years, and those for the major global banks ten-fold over the same period. Many of these institutions implemented the new requirements ahead of schedule.
Our view: The Basel Committee on Banking Supervision published a discussion paper in the second half of 2013 to initiate discussion on the future design of the regulatory framework. In its paper titled "The regulatory framework: balancing risk sensitivity, simplicity and comparability", the Committee makes a number of suggestions on how to simplify capital standards and improve the comparability of results. UBS commented comprehensively on the discussion paper; the response sets out UBS's view on prudential regulation. At a G20 meeting in the fall of 2014, the Basel Committee published the timeline for implementing the individual measures. Additional requirements apply to financial institutions that are classified as systemically important due to their size, market significance and interconnectedness. In particular, systemically important banks must develop workable measures that, in the event of a crisis, will ensure that the bank can be stabilized or, if this does not work, liquidated without the involvement of the taxpayer. A further element is a more stable financial market infrastructure (see also under “Financial market regulation”). In November 2015 the Financial Stability Board (FSB) issued the final Total Loss-Absorbing Capacity (TLAC) standard for global systemically important banks (G-SIBs). The TLAC standard has been designed so that failing G-SIBs will have sufficient loss-absorbing and recapitalisation capacity available in resolution for authorities to implement an orderly resolution. UBS supports international efforts to increase financial stability and avoid the risk of banks that are systemically 'too big to fail' (TBTF) requiring future taxpayer bailouts. Switzerland responded promptly and comprehensively to the TBTF challenge. In its evaluation report of February 2015, the Federal Council stated that, by international comparison, the assessment of the Swiss approach is positive overall.
UBS has built up its capital considerably, reduced risks significantly and greatly reduced its balance sheet; it is one of the best capitalized major banks in the world (UBS Investor Relations). UBS has introduced the organizational measures necessary to ensure the continuation, in a potential crisis, of the functions that are of systemic importance for Switzerland. New legal structure: in order to increase the Group's resolvability, UBS has established a Group holding company based on a 1:1 share swap. Certain parts of the Swiss business will be transferred to a new Swiss subsidiary, UBS Switzerland AG. In October 2015 the Swiss Federal Council adopted the parameters for amendments to the current too-big-to-fail provisions, further enhancing the resilience of systemically important banks. On a relative basis, the Swiss regime is now by far the most demanding in the world. The basic requirement is 4.5% for the leverage ratio and 12.9% for risk-weighted assets. For the two big banks this results in going concern requirements of 5% overall for the leverage ratio and 14.3% overall for risk-weighted assets. These going concern requirements are mirrored by gone concern requirements: the two big banks must additionally hold 5% for the leverage ratio and 14.3% for risk-weighted assets, to be fulfilled in principle with bail-in instruments. The new requirements must be met by the end of 2019. In the area of shadow banking, the FSB has established an annual monitoring exercise to track financial sector developments outside the banking system and to assess the global trends and potential risks the shadow banking system poses to the stability of the financial system.
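To see how the Swiss calibration combines, the following sketch computes the binding going and gone concern capital amounts for a hypothetical bank. The percentages (5% leverage ratio, 14.3% risk-weighted, each applied twice) come from the text above; the balance-sheet figures are invented:

```python
# Sketch of the 2015 Swiss TBTF calibration for the two big banks,
# as described in the text: going concern requirements of 5% (leverage ratio)
# and 14.3% (risk-weighted assets), mirrored by gone concern requirements
# of the same size. Balance-sheet inputs are hypothetical.

def tbtf_requirement(leverage_exposure: float, rwa: float,
                     lr_pct: float = 0.05, rwa_pct: float = 0.143) -> float:
    """Return the binding capital amount: whichever of the two
    constraints (leverage-based or risk-weight-based) demands more."""
    return max(lr_pct * leverage_exposure, rwa_pct * rwa)

# Invented example bank: CHF 900bn leverage exposure, CHF 250bn RWA.
going = tbtf_requirement(900.0, 250.0)   # going concern capital
gone = tbtf_requirement(900.0, 250.0)    # mirrored gone concern (bail-in) capacity

print(f"Going concern capital needed:  CHF {going:.2f}bn")
print(f"Gone concern (bail-in) needed: CHF {gone:.2f}bn")
print(f"Total loss-absorbing capacity: CHF {going + gone:.2f}bn")
```

For these invented figures the leverage-based constraint (5% of 900 = 45) exceeds the risk-weighted one (14.3% of 250 = 35.75), so the leverage ratio binds; for a bank with riskier assets the reverse can hold.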
The FSB has also issued policies to strengthen oversight and regulation of the shadow banking system, together with an overview of regulation in the areas of capital and liquidity. If you have further questions about financial market stability, please contact us.
Create a best-in-class B2B website design that assists in employee recruitment and client acquisition and provides a client portal for online financial management and account access. MC provided a total branding solution, from logo design to site redesign to creating a user-friendly portal interface. MC shifted the accounting website from an anemic company billboard to a robust marketing tool focused on relevant, client-centered tools. The new website has transformed how FFF speaks to its clients and serves their needs, and underscores its client-centric service focus. The site also provides a great recruitment tool for new and seasoned talent to join the firm.
Living in space is not all work and no play. Astronauts like to have fun, too. If you're going to work on the space shuttle for a week or two, it is certainly okay to look out the window, play with your food or tease your crewmates once in a while. If you're staying on the International Space Station for a few months, fun is an essential ingredient in the quality of life. Astronauts need a break from their busy schedules when they are orbiting Earth. Days or even months of straight work are certain to cause stress among space workers. That is why flight planners on Earth schedule time during each day so astronauts can relax, exercise and have some fun. Shuttle and station crewmembers even manage to have fun while working. Experiments in space sometimes involve ordinary toys and how microgravity affects them. A popular pastime while orbiting the Earth is simply looking out the window. Astronauts onboard the space shuttle can look out the cockpit windows and watch the Earth below or the deep blackness of space. Inside the International Space Station, crewmembers have numerous windows they can look out. Astronauts often comment on their fascination and awe as they look at the Earth spinning beneath them with its multiple shades and textures. Sunsets and sunrises are also spectacular, occurring every 45 minutes above the Earth's atmosphere. Check out an orbital sunset and lightning from the space shuttle. Onboard the space station, crewmembers have many opportunities to relax and play. Like most people who work full time, they get weekends off. On any given day, crewmembers can watch movies, read books, play cards and talk to their families. They have an exercise bike, a treadmill and various other equipment to help keep their bodies in shape. During their off time, they certainly take time out to play games and generally have a good time. What are some of the fun things astronauts do in space?
Play basketball in the shuttle's payload bay. Play video games in the cockpit. Race through the space station. Expedition Two Commander Yury Usachev and Flight Engineer James Voss 'play ball'. The STS-98 crew and the Expedition Two crew perform somersaults. Check out some disco dancing onboard the shuttle during STS-92. Astronauts run circles inside the International Space Station.
When is the next solar eclipse in Canada? The following table lists all solar eclipses whose path crosses Canada. It is essential that the eclipse path actually touches the country: solar eclipses that are visible only as partial eclipses are not considered in the table below. In other words, the table shows all solar eclipses whose totality or annularity can be seen in Canada. The next partial solar eclipse in Canada is in 779 days, on Thursday, 06/10/2021. The next total solar eclipse in Canada is in 1812 days, on Monday, 04/08/2024. The next annular solar eclipse in Canada is in 779 days, on Thursday, 06/10/2021.
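The "in N days" counters above are plain calendar arithmetic. Working backwards from the counts given, the page appears to have been generated on 23 April 2019; that date is an inference from the numbers, not stated in the source. A short sketch:

```python
from datetime import date

# The page's "in N days" counters are simple date differences.
# 2019-04-23 is inferred from the counts in the text (779 days to
# 2021-06-10, 1812 days to 2024-04-08) - an assumption, not a stated fact.
generated_on = date(2019, 4, 23)

eclipses = {
    "annular": date(2021, 6, 10),   # 06/10/2021 in the text's MM/DD/YYYY format
    "total": date(2024, 4, 8),      # 04/08/2024
}

for kind, when in eclipses.items():
    days = (when - generated_on).days
    print(f"Next {kind} solar eclipse: in {days} days, a {when.strftime('%A')}")
```

Run against the inferred generation date, this reproduces both the day counts (779 and 1812) and the weekdays (Thursday and Monday) quoted in the text.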
What is Gulliver's Travels (1996) about? An all-star adaptation of Jonathan Swift's satirical tale about a normal man who, after returning home following eight years of absence, relates fantastical tales about how he was thought to be a giant in the Land of Lilliput, but was only six inches high in the Land of Brobdingnag. He also tells of his visit to the floating island of Laputa, populated by scientists so obsessed with reason that they act with no common sense. Finally, he tells of his journey to the land where his disturbing likeness to the bestial Yahoos and his inferiority to the intelligent horses there makes him question the very worth of his humanity.
As pre-arranged combative forms, kata played a significant role in the training of the classical Japanese warrior. The earliest kata we are familiar with began to appear during the late-Kamakura to early-Muromachi period although we know little about them except a few of their names. Kata, in fact, are still being created today. However, in the classical martial traditions (koryu) these combative forms varied greatly among the myriad traditions and, in an historical and hoplological perspective, not all kata were equal. Generally speaking there were at least three categories of kata developed in the classical systems: 1) those forms which were designed by warriors who, having survived battle and/or personal duel, encoded their successful strategies as pre-arranged combative scenarios--they were often seen as divinely inspired by a particular deity; 2) those forms which were created by warriors, most without battle experience, in the peaceful years of the Tokugawa Shogunate or later; and 3) those forms which were extrapolated from earlier forms in order to teach basic and intermediate combative technique or to cover variations in earlier combative scenarios. In the case of this first category, some warriors--martial geniuses--were able, in the midst of battle or at locations of spiritual power, to intuit and create highly effective strategies and tactics for combat. The strategies (heihō) were not simply techniques in the sense of manipulating a weapon. They were methods requiring psycho-physical perfection; a supreme synergy of body, breath, and mind in a unified whole. This synergy would empower the warrior with the ability to defeat an enemy with what might often appear to an observer as the simplest of movements. While we may analyze these strategies through our own cognitive abilities, they were not designed constructions arrived at through normal cognition.
They were, instead, intuited in the heat of battle or as the culmination of exhaustive, protracted religious austerities. Also, these strategies were neither applied through normal, cognitive consciousness, nor were they taught through normal intellectual-pedagogical means. A master teacher passed them on to a disciple in a way that required the student to use intuition under stressful conditions; in several martial traditions this was accomplished in front of altars indicating a line of direct transmission from the divine. Finally, and probably due to the influence of Buddhism--especially Rinzai Zen--many of these early, classical kata were constructed, both in name and pedagogy, in the form of riddles. The Zen kōan was a teaching method popular in Rinzai Zen, and its intent was to force the student to intuit an answer under stressful situations. Some warriors, such as Kamiizumi Ise-no-Kami, took phrases directly from collections of Zen kōan and applied them as names of kata. The second type of kata--those created by samurai, some as headmasters of older schools, others as founders of new schools--were intended to serve the same purpose as the earlier forms. However, with the evolution of the warrior's art and capabilities during the years of Tokugawa peace, these forms often lack the depth and vigor of their Sengoku-period predecessors. The third type of kata, as noted above, often had no pretension of being battlefield inspired. They are a mixed bag, many limited to the repetitive teaching of specific techniques, and, during the mid- to late-Tokugawa period, were often aimed at success in sportive, competitive matches with other schools (taryujiai). This process is still in play today. Many classical ryu which have come down to us today contain kata of all three types. This article is an excerpt from David Hall's forthcoming book on Classical Japanese Combative Culture. Copyright ©2008 David A. Hall. All rights reserved. David A. Hall earned an M.A. in Asian Studies from the University of Hawaii in 1977 and a Ph.D. in Buddhist Studies/Military History from the University of California, Berkeley in 1990. He began training in martial arts in 1965 in the US and later on Okinawa as a student of karate and later aikido. In 1975 he began training under Donn F. Draeger in Shindo Muso Ryu in Hawaii. Moving to Japan in 1977, he continued studying Shindo Muso Ryu at the Rembukan Dojo under Shimizu Takaji, and joined the Kashima Shinden Jiki Shinkage Ryu under Namiki Yasushi, the 18th headmaster, in 1978. He also began formally training in Yagyu Shinkage Ryu heihō in 1985, under 21st headmaster Yagyu Nobuharu. David continues to train and teach Shindo Muso Ryu, Jiki Shinkage Ryu, and Yagyu Shinkage Ryu under the auspices of the Hobyokai/Hobyokan in Rockville, MD. In addition to his academic and martial studies, David was ordained as a priest of the Japanese Buddhist Tendai School in 1978 and completed a rigorous training program under Professor Masao Ichishima at the Tamon-in temple. He later integrated this training in his academic research at U.C. Berkeley, where he produced a dissertation entitled Marishiten: Buddhism and the Warrior Goddess in 1990 (a version of which appeared in the collection Koryu Bujutsu), currently under revision for popular press publication. After the death of Donn F.
Draeger in 1982, David collaborated for ten years with Hunter Armstrong in running the International Hoplology Society. From 1983 to 1993 he also co-edited Hoplos: Journal of the International Hoplology Society. David is currently a professor at Montgomery College in Maryland where he is also Director of CyberWATCH--a National Science Foundation supported center dealing with Information Assurance and Security.
We have considered the internal and external evidence for the authorship of Hebrews. Now we will consider the most probable possibilities for the authorship of Hebrews. The first suggestion worthy of consideration is the apostle Paul. Although external evidence for Pauline authorship is stronger than for any other suggested author, modern scholars since the Reformation have almost unanimously denied that Paul wrote the Epistle on the basis of internal evidence. There are several reasons why Paul could have been the author of the Epistle. First, the circumstances in the closing verses of Hebrews 13 are very similar to those in the acknowledged Pauline letters (Lightfoot 1976 20). Paul and Timothy were very close associates for many years, which could easily explain the remark in 13:23. Also, the author asks for his readers to "pray for us, for we are sure that we have a clear conscience ..." (13:18), while Paul asks for his readers to pray for him and often refers to a clean conscience (Rom. 15:30; 2 Cor. 11:1; Acts 23:1; 24:16; 2 Cor. 11:2; 1 Tim. 3:9; 2 Tim. 1:3). The author also asked the Hebrews that he might be restored to them sooner (13:19), while Paul wrote to the Philippians and Philemon in the same manner (Phile. 22; Phil. 1:24-25). Finally, expressions like "the God of peace" (13:20) and "grace be with you all" (13:25) also appear in the writings of Paul (Rom. 15:33; 1 Thess. 5:28; 2 Thess. 3:18). The second reason in favor of Pauline authorship is that the ideas in Hebrews are also very similar to those found in the Pauline letters (Lightfoot 1976 20). Paul places heavy stress on Christology and the two covenants in his various letters, and these topics are of paramount concern to the writer of Hebrews. Shackelford (1987 395) lists some of the examples of similar thought: Christ given a name above every name (1:4; Eph. 1:21; Phil. 2:9), the law given by angels (2:2; Gal. 3:19), the weakness of the law (7:18; 8:7; Rom.
8:3), public persecution of saints (10:33; 1 Cor. 4:9), and the heavenly Jerusalem (12:22; Gal. 4:26). Lightfoot (1976 21-22) also lists several terms and phrases in Hebrews which are similar to those found in the Pauline letters.1 Witherington (1991 146-152) has found a number of striking similarities between the book of Galatians and Hebrews, and even suggests that Galatians may have influenced Hebrews if Paul did not write the Epistle. A third reason supporting Pauline authorship is that the construction of the Epistle also follows the same general pattern as Paul's other letters, the doctrinal portion first, followed by an exhortation to duty. Filson (in Witherington 1991 150) points out that Hebrews 13 follows the pattern for letter closings found elsewhere in Paul and in particular in Galatians. In Galatians 6 and Hebrews 13 there is a remarkable fourfold pattern of similarities: (1) injunctions and teaching; (2) benediction; (3) greetings and messages; and (4) final benedictions. On the other side, however, the style and perspective are hardly Paul's; the Greek is hardly Paul's; and the theology is not quite Paul's. Certainly, Hebrews has verbal similarities to Paul, but there are striking theological differences, such as different twists of meaning on faith, on law, on soteriology, on flesh and spirit, on covenant, and on priesthood. Moreover, the lack of emphasis on the resurrection seems telling. Paul is an apostle of the resurrection. Such is not the emphasis of Hebrews. Although at first glance these objections seem formidable, Milligan (1875 13) stresses that the force of these arguments has been greatly overstated. He argues that the time, place, and circumstances have a tremendous influence over the thoughts, feelings, and expressions of an author. He also gives an example of how different the style is between Deuteronomy and Leviticus or the Gospel of John and the Revelation, even though their authorship is relatively undebated.
Another objection to Pauline authorship is that Paul makes no claim to be the author. Guthrie (1982 2666) says, "This is in striking contrast with his practice as we know it from his acknowledged Epistles." It is very puzzling as to why he would have omitted his name if he were the writer. Pantaenus believed that Paul did not sign his name to the Epistle because of reverence to the Lord, who was the true apostle to the Hebrews (Shackelford 1987 394). This view is rather unconvincing though. Shackelford (1987 394) further suggests that Paul knew that the Jews were prejudiced against him because he was an apostle to the Gentiles (Rom. 11:13; cf. Acts 22:22), and he did not sign his name that they might more readily receive the letter. Paul is also rejected as the author of Hebrews on the grounds of apostleship. The author of Hebrews nowhere shows any consciousness of apostolic authority, which was very important to Paul (Guthrie 1982 2666). Shackelford (1987 395) stresses that Paul received his doctrine directly from Christ (Gal. 1:11-12), which would tend to go against what the author states in 2:3-4. However, Milligan (1875 14-15) argues that Paul is simply associating himself with his readers, and that he is referring to Christ's personal ministry on earth, of which Paul was not a witness. The only other serious possibility which has been attested externally is Barnabas. As mentioned above, when Tertullian suggested Barnabas, he seemingly did so not out of personal opinion, but of a common belief which circulated in his time. Henshaw (1952 344) says, "Had Barnabas been the author no difficulty would have been felt anywhere in accepting the Epistle into the Canon, as it would have been the work of an Apostle." Barnabas has several items which are in favor of his authorship. First, Barnabas can be associated with Rome, having accompanied Peter on a visit to that city after they left Corinth, following Claudius' death in A.D. 54 (Hill 1979 145). 
A second reason which supports Barnabas as an author is that his name meant "son of encouragement" (Acts 4:36), and 13:22 may have been designed as a play on words. This would certainly fit in well with Barnabas' known exhortatory skills. Third, Barnabas was a Levite who would have been acquainted with the temple ritual, but Guthrie (1982 2666) argues that this consideration carries little weight because the author of Hebrews is more interested in the biblical cults than in the current ritual, although a Levite would certainly have been deeply concerned about the issues raised in this book. In opposition to this view, Borchert (1985 322) says, "The question remains whether a Cypriot Jew would develop a writing style closely akin to the Alexandrian writers. It is, of course, not impossible because Philo and other Alexandrian writings were known on the island." Fourth, Hill (1979 145) argues, "The situation addressed by the letter to the Hebrews requires that it be written by someone who had already proved himself a mediator in the church, and this Barnabas had certainly done (Acts 9:26-30; 11:22-30; 15:22-39)." However, Guthrie (1982 2666) makes a strong argument that Acts 15:23-24 could not apply to Barnabas for the same reason that it could not apply to Paul, but he does say, "The absence of data regarding the way in which Barnabas became a Christian makes it impossible to be certain." The most confusing aspect concerning Barnabas is the Epistle of Barnabas. Westcott (1892 78) says, "It (Hebrews) may have been written by Barnabas, if the 'Epistle of Barnabas' is apocryphal." Borchert (1985 322) says, "Could it be, then, that as the arguments for canonicity developed, clerics attributed a work of Barnabas to Paul for the purpose of guaranteeing its acceptance in the canon and then in parallel fashion attributed to a lesser work the name of Barnabas so that his tradition would be preserved?"
It seems very weak to assume that the name of someone as prominent as Barnabas would have been left off of an Epistle he actually wrote while being falsely assigned to another one. Lightfoot (1976 24) concludes, "It is probable that Barnabas as author was simply an ancient hypothesis advanced in the absence of any real knowledge on the question. Certainly there is nothing in the Epistle to indicate that Barnabas wrote it." Ever since Luther first suggested Apollos, he has gained tremendous popularity among New Testament scholars, although some consider Apollos nothing more than a "brilliant guess" (Lightfoot 1976 25). Borchert (1985 322) says, "Nevertheless, if one is to conjecture about who wrote Hebrews, it would be difficult to propose a finer candidate." Henshaw (1952 344) says, "There is only one person, of those whom we know, who satisfies all the conditions, namely, Apollos." Montefiore and Lo Bue (in Hurst 1985 505) hold the position that Hebrews was written from Ephesus by Apollos to the Corinthian church between the years 52 and 54. Because Apollos was aware that there was a growing tendency in Corinth to venerate him above Paul, he decided not to accede to Paul's wish that he revisit the church at that time, stating instead that he would come sometime later (1 Cor. 16:12). In lieu of this proposed visit, Apollos sent a letter to the church addressed to the "Hebrews" because, from 2 Corinthians 11:22, there is evidence of Jewish troublemakers at Corinth. Montefiore suggests that instead of following Apollos' advice, the Hebrews took his letter and used it as an example of the wisdom and eloquence which they themselves boasted. They also launched an intense depreciation of Paul because he, they claimed, lacked these qualities (506). Paul's response to this matter is contained in 1 Corinthians 1-4. There are several points which Guthrie (1982 2666) and Lightfoot (1976 26) give in support of Apollos.
First, he was an Alexandrian Jew and therefore could have been well versed in the type of thought current there. This fact would also account for the extensive use of the Septuagint in the Old Testament quotations. Second, Acts mentions his great biblical knowledge and his oratorical gifts, both of which would support the claim of his authorship of Hebrews. Third, Apollos knew Timothy and had a close association with Paul. Fourth, Apollos was "fervent in spirit," a man characterized by boldness of speech. Fifth, Apollos was a man of high reputation in the early church. Sixth, Apollos "spoke and taught accurately the things concerning Jesus." This accords with the subject of the Epistle. R.C.H. Lenski (1946 24) believes that the evidence is simply too strong to deny that Apollos wrote the Epistle. He states that the only evidence lacking that would remove all doubt that Apollos was the author is a New Testament passage that actually places Apollos in Rome. Borchert (1985 322) says, "The major problem with this view is that it seems to lack any sense of antiquity and ... I have the feeling that it is a construct of the last five hundred years." If Apollos had written the Epistle, the absence of any recorded history at Alexandria would be very surprising, because the city was known for its Christian writers of the past--Pantaenus, Clement, and Origen. Another problem with the Apollos suggestion is that no argument concerning style and phraseology is possible because there are no extant writings of Apollos to compare with Hebrews. Hiebert (1977 81) concludes that no decisive evidence against Apollos exists.4 Westcott (1892 78-79) argues that Apollos was probably not the only Alexandrian in the apostolic age who was mighty in the Scriptures, or who possessed these characteristics in more abundance than his contemporaries.
He concludes, "The wide acceptance of the conjecture as a fact is only explicable by our natural unwillingness to frankly confess our ignorance on a matter which excites our interest" (1892 79). Lightfoot (1976 26) has similar reservations, saying, "The hypothesis of Apollos as author has received wide acceptance; but without doubt much of this can be accounted for on the ground that in the search for a positive solution, there seems to be no other place to go." A further suggestion, advanced by Legg, points to Timothy: we should detach merely verses 22-25 of chapter 13, and this not as a fragment, but as a covering letter to the main epistle. The epistle would thus end quite naturally with the first benediction; the covering letter would end with the Pauline benediction, no name being needed as it was written in Paul's own handwriting. The advantages of this suggestion are as follows: First, it accounts for the Pauline characteristics of the last four verses. Second, it suggests an author for whom there is some internal evidence: Timothy was well acquainted with Paul, would have fit the situation in 13:23, and was a Hellenistic Jew. Third, it provides an explanation for the double benediction. Fourth, it provides an explanation as to why the early church believed the author was Paul (2:22). Legg also gives evidence (2:23) that the Epistle can be harmonized with Acts. If we date Hebrews, as many do, around A.D. 63-64, then the most likely destination must be Ephesus, where Timothy had spent a considerable time, according to the Pastoral epistles. Acts 19-20 make it clear that the Ephesians suffered persecution after their conversion, but give no evidence of martyrdoms. This fits Hebrews 10:32ff. and 12:4. This is certainly one of the more interesting suggestions ever put forth. The major problem with the above speculation is that it must assume both the date and the destination, a difficult matter at best.
Legg recognizes this deficiency, but says, "All these however, are mere conjectures but they do show the sort of picture which can be deduced--one at least as convincing as most that are fabricated around the anonymous author of Hebrews" (2:23). Over the years, Luke has found many supporters who base their opinion upon the verbal similarities between Hebrews and Acts, particularly some affinities with Stephen's speech (Guthrie 1982 2666). Westcott (1892 76) remarks, "When every allowance has been made for coincidences which consist in forms of expression which are found also in the Septuagint or in other writers of the New Testament, or in late Greek generally, the likeness is unquestionably remarkable." However, Lightfoot (1976 24-25) adds, "It would be precarious to claim Lukan authorship solely on the grounds of stylistic similarities." Some scholars and early church writers have suggested that Paul wrote the epistle, and Luke translated it into Greek. Borchert (1985 321-322) suggests that this is improbable for two reasons. First, the Greek of Hebrews does not read like translation Greek; and second, Luke-Acts has a very Gentile outlook, while Hebrews has a highly Jewish outlook. Most scholars immediately deny independent Lukan authorship for several stylistic reasons. However, Franz Delitzsch (1978 409-417) has suggested that Luke acted as Paul's secretary, writing down the ideas of Paul in his own style and vocabulary.2 This hypothesis would certainly alleviate the problems of both Pauline and Lukan authorship in a way which could be very feasible. Delitzsch (413) suggests that Paul could have perhaps instructed Luke to let the Hebrews feel the authority of his apostleship as little as possible, and place himself willingly in the background as regarded the original apostles. Milligan (1875 14) feels that the Epistle's difference in style from the other Pauline letters is easy to reconcile, especially if one allows Luke some liberty in phraseology.
The list of possible candidates over the years has indeed been endless. Some scholars have suggested Clement of Rome as a possible author. However, Guthrie (1982 2666) says, "A careful comparison of 1 Clement with Hebrews does not lead to the conviction that they were both written by the same author." Borchert (1985 323) mentions William Ramsay's view that the writer was Philip, the Caesarean deacon, although this view has found little support. Lightfoot (1976 26) briefly mentions Silas (or Silvanus) as a possible candidate because of his close association with the apostle Paul and his known writing activities (1 Thess. 1:1; 2 Thess. 1:1; 1 Pet. 5:12). The suggestion mentioned by Shackelford (1987 395) and Guthrie (1982 2667) of Priscilla is rather interesting, especially in the era of growing feminism. Shackelford states, "The masculine participle, translated as 'to tell' or 'telling' (11:32), excludes a female as a writer; hence, Priscilla cannot be seriously considered." Likewise, Borchert (1985 323) says, "It is hard to make a clear case for a specific feminine touch in this book and the idea that the book lacks an author because the writer was a woman is clearly an argument from silence which can go both ways." Hiebert (1977 80) points out that if there were signs of femininity one would expect to see Deborah instead of Barak mentioned in chapter eleven. Perhaps the intrigue with the author of Hebrews stems from man's desire to solve a mystery. It would, of course, be a wonderful discovery if man were somehow to figure out the real identity of the writer of Hebrews. However, for the time being, scholars can only speculate. Guthrie (1982 2667) concludes, "The only reasonable course is to maintain an open verdict." When all of the evidence is weighed, the argument for Pauline authorship seems to this author as strong as any other candidate.
The hypothesis of Luke acting as Paul's amanuensis (or secretary) seems to give even more potential credibility to the candidate which had the earliest attestation in the first place. If Paul were to be completely discounted, the case for Barnabas as a second choice is also very credible. A. B. Bruce (in Lightfoot 1976, 27) has beautifully summarized the whole matter by saying, "We must be content to remain in ignorance as to the writer of this remarkable work. Nor should we find this difficult. Some of the greatest books of the Bible . . . are anonymous writings. It is meet that this one should belong to the number, for it bears witness in its opening sentence to one who speaks God's final word to men. In presence of the Son, what does it matter who points the way to him? The witness-bearer does not desire to be known. He bids us listen to Jesus and then retires into the background." One must be careful not to unjustly dogmatize any one belief above another. The question of authorship does not affect the doctrinal message taught by the book of Hebrews. Christians have one of the most wonderful pieces of literature before them, and one must not allow this uncertainty to affect his view of the Epistle. 1. Lightfoot (1976, 21) seriously wonders if the parallels have been looked at by scholars of recent years, but he is quick to caution that conclusions concerning parallels should not be drawn without a careful study of the parallels in the Greek text. While the Greek phrases are often similar, they are not necessarily identical. 2. Delitzsch (1978, 412) states, "It cannot, however, well be imagined, especially looking at Paul's other epistles written in captivity, that an epistle from his hand to the Jewish Christians of Palestine would have received exactly this shape and stamp." 3.
Hill (1979, 146) states, "If the case for the authorship of Hebrews by Barnabas may be regarded as cumulatively more convincing than that for any other of the suggested authors, then it is a strong pointer towards the Christian-prophetic origin of the book, for, as we have seen in the chapter on Acts, Barnabas was one of the prophets of the early Christian community" (Acts 13:1). 4. Although many scholars hold the opinion that Apollos or Barnabas wrote the Epistle, some discount Apollos because of the lack of external testimony. For example, Conybeare and Howson (1910, 854) say, "We need not dwell on this opinion, since it is not based on external testimony, and since Barnabas fulfills the requisite conditions almost equally well." 5. Legg (1968, 222) says, "Timothy has usually been excluded from the lists of possibilities which scholars have drawn up, because he is mentioned in the text and obviously would not have written about himself in this way. The present theory, however, removes this obstacle which has, for the most part, prevented scholars from even considering Timothy. The fact which usually rules him out is really the best reason for considering him." 6. Legg (1968, 223) also gives internal evidence that the author was in prison at the time of writing, which would account for the request for prayers made by the author for his release. 7. It must also be mentioned that Peter, Stephen, Aristion, and even Jude have had their advocates over the years. However, all of these writers, like many others, simply have no attestation in order to be seriously considered. They are merely guesses in the dark about a guess in the dark. Borchert, Gerald L. "A Superior Book: Hebrews." Review and Expositor 82 (Summer 1985): 319-323. Delitzsch, Franz. Commentary on the Epistle to the Hebrews. 2 vols. Reprint ed. Minnesota: Klock and Klock, 1978. Filson, Floyd V. "The Epistle to the Hebrews." Journal of Bible and Religion 22 (1954): 20-26. Guthrie, D.
"The Epistle to the Hebrews." International Standard Bible Encyclopedia (1982): 2665-2667. Henshaw, T. "The Epistle to the Hebrews." New Testament Literature in the Light of Modern Scholarship. London: George Allen Ltd., 1952. Hiebert, D. Edmond. "The Non-Pauline Epistles and Revelation." An Introduction to the New Testament, 3. Chicago: Moody Press, 1977. Hill, David. New Testament Prophecy. Atlanta: John Knox Press, 1979. Hurst, L. D. "Apollos, Hebrews, and Corinth: Bishop Montefiore's Theory Examined." Scottish Journal of Theology 38 (1985): 505-513. Legg, John D. "Our Brother Timothy: A Suggested Solution to the Problem of the Authorship of the Epistle to the Hebrews." Evangelical Quarterly 40 (October-December 1968): 220-223. Lenski, R. C. H. An Interpretation of the Epistle to the Hebrews and the Epistle of James. Columbus: Wartburg Press, 1946. Lightfoot, Neil R. Jesus Christ Today. Abilene: Bible Guides, 1976. Milligan, R. The Epistle to the Hebrews. New Testament Commentary, 9. St. Louis: Christian Board of Publication, 1875. Shackelford, Don. "On to Maturity." New Testament Survey: An Introduction and Survey of the New Testament. Searcy, AR: College of Bible and Religion at Harding University, 1987. Westcott, B. F. The Epistle to the Hebrews. 2nd ed. London: Macmillan and Sons, 1892. Witherington, Ben. "The Influence of Galatians on Hebrews." New Testament Studies 37 (1991): 146-152.
Knowing which combination of technologies to use, designing your architecture correctly, and differentiating your app from others: these are the fundamental practices of coding in the age of Windows 8. To start, I'm going to point you to the tools and resources you'll find at Windows Dev Center. After that, I recommend the videos from Microsoft's last BUILD conference. Beyond the basics, I can also give you some advice that might not be immediately obvious from reading the docs. That's what the rest of this article is about. Picking a technology is only one piece of the technical puzzle, however. You also need to be careful about how you structure your app. As an industry, we've moved from client-only to client-server to n-tier to our current favorite: hub-and-spoke. The hub-and-spoke architecture provides the basic structure for most modern mobile apps: at the hub, you've got your server, which provides data and logic synchronized across multiple client spokes, such as Windows Phone, Win8, Web, iOS, Android, etc. Each spoke is designed and optimized for its specific host OS, and the hub is the authoritative holder of the data shared between all clients. This kind of architecture calls for several considerations, including communication channel decisions (pull over HTTP, push over SMS, etc.), offline caching support, per-client data projection and filtering, and so on. As each spoke is added, you'll find that the hub often needs to change to support the particulars, so you'll need to think about the user experience you want and make those hub design decisions accordingly. Specifically, when it comes to Windows 8, you'll want to tailor a hub-and-spoke client for things you've undoubtedly already heard about: a touch-centric UX, Win8 UI style, fast and fluid animations, and so forth. What you might not have heard about is the three primary rules for a high-quality Win8 app: integrate, integrate, integrate!
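The per-client data projection and filtering mentioned above can be sketched in a few lines. This is a minimal illustration of the idea, not any real Windows or hub-service API; the record fields and spoke names are hypothetical.

```python
# Minimal sketch of hub-side, per-client data projection.
# Field names and client names are hypothetical, not from any real API.

FULL_RECORD = {
    "id": 42,
    "title": "Trip photos",
    "thumbnail_url": "http://hub.example/thumb/42",
    "full_image_url": "http://hub.example/full/42",
    "edit_history": ["created", "cropped", "shared"],
}

# Each spoke gets only the fields its UI actually needs.
PROJECTIONS = {
    "phone": ["id", "title", "thumbnail_url"],                    # small screen, slow link
    "win8":  ["id", "title", "thumbnail_url", "full_image_url"],  # richer tile/detail UI
    "web":   ["id", "title", "full_image_url", "edit_history"],
}

def project(record, client):
    """Filter a hub record down to the fields one client spoke consumes."""
    return {k: record[k] for k in PROJECTIONS[client]}

phone_view = project(FULL_RECORD, "phone")
print(sorted(phone_view))  # ['id', 'thumbnail_url', 'title']
```

The point of keeping the projection table at the hub is exactly what the article describes: when a new spoke is added, only the hub's projection logic changes, and the authoritative record stays in one place.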
In prior versions of Windows, we had the Clipboard, which allowed each app to share whatever data it wanted in whatever formats it used. In Windows 8 there is still a Clipboard, of course, but it is only one of many "contracts" for sharing data between apps. For example, the SkyDrive app not only allows you to browse the files in your SkyDrive account, but it's also a File and Folder "provider," so when you want to open a file from a Win8 app, the file and folder dialogs know to let you choose something in SkyDrive as well as on your local computer. The app that's requesting the file doesn't need to know or care that the file is coming from SkyDrive — the operating system takes care of that, in the same way it maintains the Clipboard. Other contracts include Search, Settings, Share, and Contact Picker, just to name a few. In addition to the contracts, which allow you to integrate your app with other installed apps, the Win8 Start Screen also enables a great deal of integration with Live Tiles, Badges, Notifications, and the Lock Screen. Live Tiles are especially important, as they represent a key differentiator of the "Metro style" UI shared between Windows Phone, Windows 8, and the Xbox. And speaking of differentiators, another key thing you should do in your app is just that: differentiate. When you read the guidelines that describe Win8 UI style, you'll hear a great deal about the Windows 8 way of arranging data, using the contracts, providing the right animations, consuming the right gestures, using the right fonts, and so on. However, once you've gotten your head around the Win8 UI style, you'll need to bend or even break some of the rules so that your app stands out from the crowd. Your app must have personality as well as functionality if it's going to be featured as a top app in the Store and earn you the reputation and revenue it deserves.
With Windows 8 and the Windows Store, we're at the beginning of a new era of Windows, which brings devices, touch, and UX design to the foreground — while keeping desktop apps running well, too. The future is going to bring great changes as we learn how to design apps for this new world. Stick around. We're just getting started!
Does stress cause Parkinson’s? My tremor is only noticeable when I am under stress. Although stress may not cause Parkinson’s, it certainly can affect how you feel with the disease, how you cope with and adjust to symptoms, and your tendency to take charge of your condition.
accomplish this. Using a two-screen computer will help you multitask with ease. You could, for example, play a video on one monitor while working on the other. You could dedicate one screen just to email and Skype while comparing products side by side on the other. Additionally, you can get more space for data-intensive projects by viewing information side by side instead of bouncing back and forth between screens. Using a two-screen computer, you no longer have to fumble to find documents or reference materials. Your desk space will become less cramped and you will feel more organized. You’ll be happy to learn that most laptops already allow you to connect one additional monitor, while desktops typically allow two monitors, without the need for extra hardware. In the event that you do need to purchase additional hardware, you can always contact the experts over at multi-monitors.com for advice on what to buy. They have an extremely knowledgeable technician base that can help you pick out exactly what you need, depending on your particular situation. This is extremely helpful for most people, as they generally don’t understand computer hardware that well. Using a two-screen computer could be beneficial for you and your business. It’s time to get connected, get working, and see the difference between using a single-monitor computer and a multiple-monitor, two-screen computer.
The top and bottom faces of this model are regular hexagons. Four side faces are squares, and the other two sides have square pyramids extending from them. One square face lies between the pyramids, and the other three squares are adjacent to one another. In total the model has fourteen faces: eight equilateral triangles, four squares, and two hexagons.
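Those counts are easy to sanity-check with Euler's formula for convex polyhedra (V − E + F = 2), assuming, as the description suggests, that the model is a hexagonal prism with two of its square sides augmented by square pyramids. The sketch below is illustrative arithmetic, not taken from the original text.

```python
# Sanity check of the face/edge/vertex counts with Euler's formula (V - E + F = 2),
# assuming a hexagonal prism with two square faces augmented by square pyramids.

# Plain hexagonal prism: 12 vertices, 18 edges, 8 faces (2 hexagons + 6 squares).
V, E, F = 12, 18, 8

# Each square-pyramid augmentation adds one apex vertex and four edges,
# and swaps one square face for four triangles (a net gain of 3 faces).
for _ in range(2):
    V += 1
    E += 4
    F += 3

faces = {"triangles": 8, "squares": 4, "hexagons": 2}
assert F == sum(faces.values())  # 14 faces, matching the counts in the text
assert V - E + F == 2            # Euler's formula holds for the solid
print(V, E, F)  # 14 26 14
```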
George Frideric Handel (German: Georg Friedrich Händel; pronounced [ˈhɛndəl]; born in Halle, Germany, in 1685) was a prominent German-British Baroque composer, famous for his operas, oratorios, anthems and organ concertos. Handel received his decisive musical training in Halle, Hamburg and Italy before settling in London in 1712. He became a naturalised British subject in 1727. By 1741, Handel's pre-eminence in British music was evident from the honours he had accumulated, including a pension from the court of King George II, the office of Composer of Musick for the Chapel Royal, and—most unusually for a living person—a statue erected in his honour in Vauxhall Gardens. Within a large and varied musical output, Handel was a vigorous champion of Italian opera, which he had introduced to London in 1711 with Rinaldo. He subsequently wrote and presented more than 40 such operas in London's theatres. By the early 1730s public taste was beginning to change. The popular success of John Gay and Johann Christoph Pepusch's The Beggar's Opera (first performed in 1728) had heralded a spate of English-language ballad-operas that mocked the pretensions of Italian opera. With box-office receipts falling, Handel's productions became increasingly reliant on private subsidies from the nobility. Such funding became harder to obtain after the launch in 1730 of the "Opera of the Nobility", a rival company to his own. Handel overcame this challenge, but he spent large sums of his own money to do so. Prospects for Italian opera in London declined during the 1730s. Handel remained committed to the genre, but began to introduce English-language oratorios as occasional alternatives to his staged works. As a young man in Rome in 1707–08, he had written two Italian oratorios at a time when opera performances in the city were temporarily forbidden under papal decree.
His first venture into English oratorio had been Esther, which was written and performed for a private patron in about 1718. In 1732 Handel brought a revised and expanded version of Esther to the King's Theatre, Haymarket, where members of the royal family attended a glittering premiere on 6 May. Its success encouraged Handel to write two more oratorios (Deborah and Athalia). All three oratorios were performed to large and appreciative audiences at the Sheldonian Theatre in Oxford in mid-1733. Undergraduates reportedly sold their furniture to raise the money for the five-shilling tickets. In 1735 Handel received the text for a new oratorio named Saul from its librettist Charles Jennens, a wealthy landowner with musical and literary interests. Because Handel's main creative concern was still with opera, he did not write the music for Saul until 1738, in preparation for his 1738–39 theatrical season. The work, after opening at the King's Theatre in January 1739 to a warm reception, was quickly followed by the less successful oratorio Israel in Egypt (which may also have come from Jennens). Although Handel continued to write and present operas, the trend towards English-language productions became irresistible as the decade ended. After three performances of his last Italian opera Deidamia in January and February 1741, he abandoned the genre. In July 1741 Jennens sent him a new libretto for an oratorio. In a letter dated 10 July to his friend Edward Holdsworth, Jennens wrote: "I hope [Handel] will lay out his whole Genius & Skill upon it, that the Composition may excell all his former Compositions, as the Subject excells every other subject. The Subject is Messiah". In the Christian tradition the figure of the "Messiah" or redeemer is identified with the person of Jesus, known by his followers as the Christ or "Jesus Christ". 
Handel's Messiah has been described by the early-music scholar Richard Luckett as "a commentary on [Jesus Christ's] Nativity, Passion, Resurrection and Ascension", beginning with God's promises as spoken by the prophets and ending with Christ's glorification in heaven. In contrast with most of Handel's oratorios, the singers in Messiah do not assume dramatic roles; there is no single, dominant narrative voice; and very little use is made of quoted speech. In his libretto, Jennens's intention was not to dramatise the life and teachings of Jesus, but to acclaim the "Mystery of Godliness", using a compilation of extracts from the Authorized (King James) Version of the Bible, and from the Psalms included with the Book of Common Prayer (which uses the original translations of Miles Coverdale rather than the later version of the King James Bible's translators). Jennens's letter to Holdsworth of 10 July 1741, in which he first mentions Messiah, suggests that the text was a recent work, probably assembled earlier that summer. As a devout Anglican and believer in scriptural authority, part of Jennens's intention was to challenge advocates of Deism, who rejected the doctrine of divine intervention in human affairs. Shaw describes the text as "a meditation of our Lord as Messiah in Christian thought and belief", and despite his reservations on Jennens's character, concedes that the finished wordbook "amounts to little short of a work of genius". There is no evidence that Handel played any active role in the selection or preparation of the text, such as he did in the case of Saul; it seems, rather, that he saw no need to make any significant amendment to Jennens's work. The music for Messiah was completed in 24 days of swift composition. Having received Jennens's text some time after 10 July 1741, Handel began work on it on 22 August. 
His records show that he had completed Part I in outline by 28 August, Part II by 6 September and Part III by 12 September, followed by two days of "filling up" to produce the finished work on 14 September. The autograph score's 259 pages show some signs of haste such as blots, scratchings-out, unfilled bars and other uncorrected errors, but according to the music scholar Richard Luckett the number of errors is remarkably small in a document of this length. The original manuscript for Messiah is now one of the chief highlights of the British Library's music collection. At the end of his manuscript Handel wrote the letters "SDG"—Soli Deo Gloria, "To God alone the glory". This inscription, taken with the speed of composition, has encouraged belief in the apocryphal story that Handel wrote the music in a fervour of divine inspiration in which, as he wrote the "Hallelujah" chorus, "he saw all heaven before him". Burrows points out that many of Handel's operas, of comparable length and structure to Messiah, were composed within similar timescales between theatrical seasons. The effort of writing so much music in so short a time was not unusual for Handel and his contemporaries; Handel commenced his next oratorio, Samson, within a week of finishing Messiah, and completed his draft of this new work in a month. In accordance with his frequent practice when writing new works, Handel adapted existing compositions for use in Messiah, in this case drawing on two recently completed Italian duets and one written twenty years previously. Thus, Se tu non lasci amore from 1722 became the basis of "O Death, where is thy sting?"; "His yoke is easy" and "And he shall purify" were drawn from Quel fior che all'alba ride (July 1741), and "For unto us a child is born" and "All we like sheep" from Nò, di voi non vo' fidarmi (July 1741).
Handel's instrumentation in the score is often imprecise, again in line with contemporary convention, where the use of certain instruments and combinations was assumed and did not need to be written down by the composer; later copyists would fill in the details. In continental Europe, performances of Messiah were departing from Handel's practices in a different way: his score was being drastically reorchestrated to suit contemporary tastes. In 1786, Johann Adam Hiller presented Messiah with updated scoring in Berlin Cathedral. In 1788 Hiller presented a performance of his revision with a choir of 259 and an orchestra of 87 strings, 10 bassoons, 11 oboes, 8 flutes, 8 horns, 4 clarinets, 4 trombones, 7 trumpets, timpani, harpsichord and organ. In 1789, Mozart was commissioned by Baron Gottfried van Swieten and the Gesellschaft der Associierten to re-orchestrate several works by Handel, including Messiah.[n 5] Writing for a small-scale performance, he eliminated the organ continuo, added parts for flutes, clarinets, trombones and horns, recomposed some passages and rearranged others. The performance took place on 6 March 1789 in the rooms of Count Johann Esterházy, with four soloists and a choir of 12.[n 6] Mozart's arrangement, with minor amendments from Hiller, was published in 1803, after his death.[n 7] The musical scholar Moritz Hauptmann described the Mozart additions as "stucco ornaments on a marble temple". Mozart himself was reportedly circumspect about his changes, insisting that any alterations to Handel's score should not be interpreted as an effort to improve the music. Elements of this version later became familiar to British audiences, incorporated into editions of the score by editors including Ebenezer Prout. The opening Sinfony is composed in E minor for strings, and is Handel's first use in oratorio of the French overture form. 
Jennens commented that the Sinfony contains "passages far unworthy of Handel, but much more unworthy of the Messiah"; Handel's early biographer Charles Burney merely found it "dry and uninteresting". A change of key to E major leads to the first prophecy, delivered by the tenor whose vocal line in the opening recitative "Comfort ye" is entirely independent of the string accompaniment. The music proceeds through various key changes as the prophecies unfold, culminating in the G major chorus "For unto us a child is born", in which the choral exclamations (which include an ascending fourth in "the Mighty God") are imposed on material drawn from Handel's Italian cantata Nò, di voi non vo' fidarmi. Such passages, says the music historian Donald Jay Grout, "reveal Handel the dramatist, the unerring master of dramatic effect". The pastoral interlude that follows begins with the short instrumental movement, the Pifa, which takes its name from the shepherd-bagpipers, or pifferari, who played their pipes in the streets of Rome at Christmas time. Handel wrote the movement in both 11-bar and extended 32-bar forms; according to Burrows, either will work in performance. The group of four short recitatives which follow it introduce the soprano soloist—although often the earlier aria "But who may abide" is sung by the soprano in its transposed G minor form. The final recitative of this section is in D major and heralds the affirmative chorus "Glory to God". The remainder of Part I is largely carried by the soprano in B flat, in what Burrows terms a rare instance of tonal stability. The aria "He shall feed his flock" underwent several transformations by Handel, appearing at different times as a recitative, an alto aria and a duet for alto and soprano before the original soprano version was restored in 1754.
The appropriateness of the Italian source material for the setting of the solemn concluding chorus "His yoke is easy" has been questioned by the music scholar Sedley Taylor, who calls it "a piece of word-painting ... grievously out of place", though he concedes that the four-part choral conclusion is a stroke of genius that combines beauty with dignity. The second Part begins in G minor, a key which, in Hogwood's phrase, brings a mood of "tragic presentiment" to the long sequence of Passion numbers which follows. The declamatory opening chorus "Behold the Lamb of God", in fugal form, is followed by the alto solo "He was despised" in E flat major, the longest single item in the oratorio, in which some phrases are sung unaccompanied to emphasise Christ's abandonment. Luckett records Burney's description of this number as "the highest idea of excellence in pathetic expression of any English song". The subsequent series of mainly short choral movements cover Christ's Passion, Crucifixion, Death and Resurrection, at first in F minor, with a brief F major respite in "All we like sheep". Here, Handel's use of Nò, di voi non vo' fidarmi has Sedley Taylor's unqualified approval: "[Handel] bids the voices enter in solemn canonical sequence, and his chorus ends with a combination of grandeur and depth of feeling such as is at the command of consummate genius only". The opening soprano solo in E major, "I know that my Redeemer liveth", is one of the few numbers in the oratorio that has remained unrevised from its original form. Its simple unison violin accompaniment and its consoling rhythms apparently brought tears to Burney's eyes. It is followed by a quiet chorus that leads to the bass's declamation in D major: "Behold, I tell you a mystery", then the long aria "The trumpet shall sound", marked pomposo ma non allegro ("dignified but not fast"). Handel originally wrote this in da capo form, but shortened it to dal segno, probably before the first performance.
The extended, characteristic trumpet tune that precedes and accompanies the voice is the only significant instrumental solo in the entire oratorio. Handel's awkward, repeated stressing of the fourth syllable of "incorruptible" may have been the source of the 18th-century poet William Shenstone's comment that he "could observe some parts in Messiah wherein Handel's judgements failed him; where the music was not equal, or was even opposite, to what the words required". After a brief solo recitative, the alto is joined by the tenor for the only duet in Handel's final version of the music, "O death, where is thy sting?" The melody is adapted from Handel's 1722 cantata Se tu non lasci amore, and is in Luckett's view the most successful of the Italian borrowings. The duet runs straight into the chorus "But thanks be to God". In 1954 the first recording based on Handel's original scoring was conducted by Hermann Scherchen for Nixa using London forces[n 11]; it was quickly followed by another version, judged scholarly at the time, under Sir Adrian Boult for Decca. By the standards of 21st-century performance, however, Scherchen's and Boult's tempi were still slow, and there was no attempt at vocal ornamentation by the soloists. In 1966 and 1967 two new recordings were regarded as great advances in scholarship and performance practice, conducted respectively by Colin Davis for Philips and Charles Mackerras for HMV. They inaugurated a new tradition of brisk, small scale performances, with vocal embellishments by the solo singers.[n 12] An important recording from 1965 conducted by Otto Klemperer is also available, featuring superstar soloists Elisabeth Schwarzkopf, Nicolai Gedda, and Jerome Hines. 
Among the last notable recordings of older-style performances were Beecham's 1959 recording for RCA Victor with the Royal Philharmonic Orchestra, using the extravagant large-scale orchestration he commissioned from Sir Eugene Goossens (although, according to a letter from Beecham's widow, Lady Shirley Beecham, in the June 1999 edition of Gramophone, the orchestrations were actually written by Leonard Salzedo, 1921-2000); one conducted by Karl Richter for DG in 1973, though it used authentic orchestration;[n 13] and a third based on Prout's 1902 edition of the score, with a 325-voice choir and 90-piece orchestra conducted by Sir David Willcocks in 1995. By 2006, much more was known about "authentic" performance, and many instrumentalists skilled in the period style and equipped with the right instruments were available. Edward Higginbottom produced a new recording, based on the edition of 1751 (Naxos 8.570131). The choir of New College Oxford (men and boys) provided the chorus and soloists: bass, tenor, alto and treble. The orchestra was the Academy of Ancient Music. Several reconstructions of early performances have been recorded: the 1742 Dublin version by Scherchen in 1954 and again in 1959, and by Jean-Claude Malgoire in 1980; and several recordings of the 1754 Foundling Hospital version, including those under Hogwood (1979), Andrew Parrott (1989), and Paul McCreesh. Unorthodox adaptations have included a late 1950s recording conducted by Leonard Bernstein of his own edition, which regrouped and reordered the numbers into a "Christmas section" and an "Easter section".[n 15] In 1973 David Willcocks conducted a set for HMV in which all the soprano arias were sung in unison by the boys of the Choir of King's College, Cambridge, and in 1974, for DG, Mackerras conducted a set of Mozart's reorchestrated version, sung in German.
On the Saturday 11 April 2009 broadcast of BBC Radio 3's CD Review – Building a Library, musicologist Berta Joncus surveyed recordings of Messiah and recommended the 2008 recording by The Sixteen, Harry Christophers (conductor), as the "first choice". ↑ Since its earliest performances the work has often been referred to, incorrectly, as "The Messiah". The article is absent from the proper title. ↑ The description "Sinfony" is taken from Handel's autograph score. ↑ It is possible that Delaney was alluding to the fact that Cibber was, at that time, involved in a scandalous divorce suit. ↑ Anthony Hicks gives a slightly different instrumentation: 14 violins and 6 violas. ↑ Swieten provided Mozart with a London publication of Handel's original orchestration (published by Randal & Abell), as well as a German translation of the English libretto, compiled and created by Friedrich Gottlieb Klopstock and Christoph Daniel Ebeling. ↑ A repeat performance was given at the Esterháza court on 7 April 1789, and between the year of Mozart's death (1791) and 1800 there were four known performances of Mozart's re-orchestrated Messiah in Vienna: 5 April 1795, 23 March 1799, 23 December 1799 and 24 December 1799. ↑ Hiller was long thought to have revised Mozart's scoring substantially before the score was printed. Ebenezer Prout pointed out that the edition was published as "F. G. [sic] Händels Oratorium Der Messias, nach W. A. Mozarts Bearbeitung" – "nach" meaning after rather than in Mozart's arrangement. Prout noted that a Mozart edition of another Handel work, Alexander's Feast, published in accordance with Mozart's manuscript, was printed as "mit neuer Bearbeitung von W. A. Mozart" ("with new arrangement by W. A. Mozart"). When Mozart's original manuscript subsequently came to light, it was found that Hiller's changes were not extensive.
↑ Many of the editions before 1902, including Mozart's, derived from the earliest printed edition of the score, known as the Walsh Edition, published in 1767. ↑ In 1966 an edition by John Tobin was published. More recent editions have included those edited by Donald Burrows (Edition Peters, 1987) and Clifford Bartlett (Oxford University Press, 1999). ↑ The numbers customarily omitted were: from Part II, "Unto which of the angels"; "Let all the angels of God worship Him"; and "Thou art gone up on high"; and from Part III, "Then shall be brought to pass"; "O death, where is thy sting?", "But thanks be to God"; and "If God be for us". ↑ This recording was monophonic and issued on commercial CD by PRT in 1986; Scherchen re-recorded Messiah in stereo in 1959 using Vienna forces; this was issued on LP by Westminster and on commercial CD by Deutsche Grammophon in 2001. Both recordings have appeared on other labels in both LP and CD formats. A copyright-free transfer of the 1954 version (digitized from original vinyl discs by Nixa Records) is available on YouTube: part 1, part 2, part 3. ↑ The Davis set uses a chorus of 40 singers and an orchestra of 39 players; the Mackerras set uses similarly sized forces, but with fewer strings and more wind players. ↑ The Richter set follows the Peters edition of the score edited by Kurt Soldan (1939) and Arnold Schering (1967). ↑ A 1997 recording under Harry Christophers employed a chorus of 19 and an orchestra of 20. In 1993, the Scholars Baroque Ensemble released a version with 14 singers including soloists. ↑ In a review in The Gramophone, Andrew Porter referred to Jens Peter Larsen's observation that Messiah "is 'manifold in its splendours, yet completely balanced, a unity': not selected scenes from the life of Our Lord, but 'a representation of the fulfilment of Redemption through the Redeemer'. 
Part I is the prophecy and realisation of God's plan to send the Redeemer to earth; Part II is the accomplishment of redemption; and Part III 'a Hymn of Thanksgiving for the final overthrow of Death'." This page was last modified on 8 January 2016, at 14:27.
If I connect through a tunnel to my friend's friend's work LAN, using only their LAN IP and their internet connection to host my internet rather than my own, is it possible to trace my physical location (address)? The only address the government will be able to find is the one the ISP has on file for that particular IP, i.e. the work address of my friend's friend, so will it be impossible for them to find my location? The problem with this scenario is that you are relying on Network Address Translation (NAT) to hide your physical location. This assumes that IP Address = Physical Location. Large IP address blocks like 34.*.*.* are owned by top-tier providers, who usually sell an address range like 34.138.*.* to local providers, who assign ranges like 34.138.17.* to a business. Since there are limited IP addresses, most businesses give their employees internal network IP addresses separate from any public Internet IP addresses. The business then keeps track of which internal IP address made which request out to the Internet and routes the response back to that internal IP address. This is called Network Address Translation, or NAT for short. Your friend's work most likely has a log of who logged in over the LAN VPN connection. Therefore the government can ask, or serve a warrant to, your friend's company to see who was on at a specific time. The trail will lead back to your friend, and because his employer likely knows where he lives, it won't be hard to find you. Also, depending on the capabilities of the government, they might have physical address location capabilities that don't require the cooperation of your friend's company. In order to protect yourself from government spying, try a network anonymizer like Tor ( https://www.torproject.org/projects/torbrowser.html.en ). The connection between your ISP-assigned IP address and the "end of the tunnel" IP address is no secret and is known by the ISP.
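As a rough illustration of the NAT bookkeeping described above, here is a toy sketch in Python. Everything in it (addresses, port numbers, function names) is invented for illustration; real NAT lives in the gateway's network stack, not in application code.

```python
# Toy model of a NAT gateway: many internal hosts share one public IP,
# and a translation table routes responses back to the right host.
import itertools

PUBLIC_IP = "34.138.17.5"        # the single address the ISP sees (made up)
_ports = itertools.count(40000)  # next free public-side port

nat_table = {}  # public port -> (internal ip, internal port)

def outbound(internal_ip, internal_port):
    """Map an outgoing connection onto the shared public address."""
    public_port = next(_ports)
    nat_table[public_port] = (internal_ip, internal_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Route a response back to the internal host that made the request."""
    return nat_table[public_port]

ip, port = outbound("192.168.1.23", 51000)  # an employee machine
assert ip == PUBLIC_IP
assert inbound(port) == ("192.168.1.23", 51000)
```

The point of the sketch is that the mapping only exists inside the gateway: from the outside, every request appears to come from the one public IP, which is exactly why the trail leads to the company rather than to you, until their logs are consulted.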
Furthermore, if you log in to a web service using this secure tunnel which you have also logged in to, even once, from your ISP-assigned IP address, your privacy may be compromised. There are numerous tracking methods beyond IP addresses as well. As far as privacy goes: if privacy were measured on a scale from 1 to 10, the extra privacy from tunnel usage may raise the measure from a 5 to a 6. However, why not just use Tor? The friend of a friend's workplace is certainly many degrees closer to you than the nearest Tor entry node.
Why is the 13th (or the 6th) of a Cm chord A, not Ab? This is simply the way these chords are written. But, this has advantages; it means that we always know what interval a 13th (or 7th or 9th or anything else) is going to add to a chord, no matter what the other notes of the chord are. In particular, we know what interval to add even if the tonality or modality of surrounding chords, other parts of the chord itself, or written parts (eg. bass line or melody), might suggest a different interval. The key signature of C minor and the C Natural Minor Scale (Aeolian) have a flattened sixth degree (Ab). Yes, but there are minor modes/scales on C (with a minor triad on the root note, i.e. chord I is Cm) which have MAJOR sixths: C Dorian and C Melodic Minor Ascending are two obvious ones. Tim mentioned these already. It might also be possible that you were reading C…m13 rather than Cm…13, in other words, a C chord with a minor 13th. This would be a completely different chord, and, although we do talk about minor 13th intervals, these would be notated in a chord symbol as b13 (i.e. a flattened 13th - Ab). Lastly, it is worth mentioning where this misunderstanding arises; through trying to work out what intervals are in a chord by using a supposed mode/scale containing these notes, rather than choosing a mode to use which fits the chords (the harmony). The second way is how we actually play. The chord symbols tell us exactly which notes the composer wants in the harmony at any particular point in a piece/song, we then choose a mode for improvisation or creating a melody, bass or harmony line to fit the chord, not the other way around. For instance, Cm13 suggests the notes C Eb G Bb D (F) A (as a 13th chord is built using tertian harmony - sorry if you don't know anything about that yet!), this in turn would suggest that C Dorian would be a good scale to use with it. This is one of the weird bits with chords. With a maj.6 on C, it's C E G A. With a min.6 on C it's C Eb G A. 
It's the chord that's minor, not the 6th interval. The minor 6th interval would be Ab rather than G# anyway. So for the 6th or 13th, which is effectively the same note (possibly in a different octave), the choice of name seems to depend on one's persuasion: it's going to be A natural. It could never have been G in any case, as G is the 5th (or 12th), and that is already in the original triad. On the subject of notes in scales: in the (classical) melodic minor, ascending, the 6th note IS A natural. This is not an exception or something illogical. There are many scales whose 1, 3, and 5 form a minor triad. Some of these have a major 6, some a minor 6. A C minor chord could, for example, appear in the key of B flat, where an A natural occurs in the key signature. People reading chord symbols want to be able to connect the symbol with a specific chord, so we use specific rules about what the symbols mean. The rule is that in a 13th chord, the 13 (6) is major. There's the closely related notation Cm6. Note that if you interpreted Cm6 as having an Ab in it, you'd get C Eb G Ab, which would sound to most people like an Abmaj7 chord, not a C minor triad. Because, in our system of triad-based chord naming, 7ths are minor but other added intervals are major except when stated otherwise. The 'm' in 'Cm13' refers to the basic triad, not to the 13th. And 'Borough' is pronounced 'burrer'. Live with it. And that's really all there is to say about it. When chord-naming developed, no-one was thinking in terms of what scales might be played over them.
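The "stack of thirds" reading of Cm13 can be checked with a quick sketch (Python, purely illustrative; the fixed note list spells everything with flats, so enharmonic spelling is approximate):

```python
# Spell chords from semitone offsets above the root. Flat-only
# spellings, good enough to show which letter the 13 lands on.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Cm13 by tertian stacking: root, b3, 5, b7, 9, 11, 13
# -> semitones 0, 3, 7, 10, 14, 17, 21
CM13_INTERVALS = [0, 3, 7, 10, 14, 17, 21]

def spell(root, intervals):
    i = NOTES.index(root)
    return [NOTES[(i + s) % 12] for s in intervals]

print(spell("C", CM13_INTERVALS))
# ['C', 'Eb', 'G', 'Bb', 'D', 'F', 'A'] -- the 13 is A natural, not Ab
```

Note that 21 semitones above C lands on A natural: the minor quality of the triad (the b3 at 3 semitones) never touches the 13th.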
Buddhism in the West broadly encompasses the knowledge and practice of Buddhism outside Asia in Europe, the Americas, Australia and New Zealand. Occasional intersections between Western civilization and the Buddhist world have been occurring for thousands of years. With the rise of European colonization of Buddhist countries in Asia during the 19th century, detailed knowledge of Buddhism became available to large numbers of people in the West as a result of accompanying scholarly endeavours. One of the oldest images of the Buddha, from the Greco-Buddhist period in Central Asia, 1st-2nd century CE. The Western and Buddhist worlds have occasionally intersected since the distant past. Possibly the earliest encounter was in 334 BCE, early in the history of Buddhism, when Alexander the Great conquered most of Central Asia. The Seleucids and successive kingdoms established Hellenistic influence in the area, interacting with Buddhism introduced from India, producing Greco-Buddhism. The Mauryan Emperor Aśoka (273–232 BCE) converted to Buddhism after his bloody conquest of the territory of Kalinga (modern Orissa) in eastern India during the Kalinga War. Regretting the horrors brought about by the conflict, the Emperor decided to renounce violence. He propagated the faith by building stupas and pillars urging, amongst other things, respect of all animal life and enjoining people to follow the Dharma. Perhaps the finest example of these is the Great Stupa of Sanchi in India. This stupa was constructed in the 3rd century BCE and later enlarged. Its carved gates, called Toran, are considered among the finest examples of Buddhist art in India. He also built roads, hospitals, universities and irrigation systems around the country. He treated his subjects as equals regardless of their religion, politics or caste. The Maurya Empire under Emperor Aśoka was the world's first major Buddhist state. It established free hospitals and free education and promoted human rights.
This period marks the first spread of Buddhism beyond India to other countries. According to the plates and pillars left by Aśoka (the edicts of Aśoka), emissaries were sent to various countries in order to spread Buddhism, as far south as Sri Lanka and as far west as the Greek kingdoms, in particular the neighboring Greco-Bactrian Kingdom, and possibly even farther to the Mediterranean. In the Christian era, Buddhist ideas periodically filtered into Europe via the Middle East. Stories of the Christian saints Barlaam and Josaphat were christianized renditions of the life of Siddhartha Gautama, translated from Indian sources through Persian and Arabic into Greek versions, the religious language being only cosmetically altered along the way. The first direct recorded encounter between European Christians and Buddhists was in 1253, when the king of France sent William of Rubruck as an ambassador to the court of the Mongol Empire. Later, in the 17th century, Mongols practicing Tibetan Buddhism established Kalmykia, the only Buddhist nation in Europe, at the eastern edge of the continent. The Indo-Greek king Menander (155-130 BCE) is the first Western historical figure documented to have converted to Buddhism. The Hellenistic influence in the area, furthered by the Seleucids and the successive Greco-Bactrian and Indo-Greek kingdoms, interacted with Buddhism, as exemplified by the emergence of Greco-Buddhist art, especially within the Gandhara civilization, which covered a large part of modern-day northern Pakistan and eastern Afghanistan. Greek sculptors in the classical tradition came to teach their skills to Indian sculptors, resulting in the distinctive style of Greco-Buddhist art, or Gandhara art, in both stone and stucco in hundreds of Buddhist monasteries which are still being discovered and excavated in this region.
Greco-Buddhism is the cultural merging between the cultures of Hellenism and Buddhism, which developed over a period of close to eight centuries in Central Asia between the 4th century BCE and the 5th century CE. Several instances of interaction between Buddhism and the Roman Empire are documented by Classical and early Christian writers. Roman historical accounts describe an embassy sent by the Indian king Pandion (Pandya?), also named Porus, to Augustus around 13 CE. The embassy was travelling with a diplomatic letter in Greek, and one of its members—called Zarmanochegas—was an Indian religious man (sramana) who burned himself alive in Athens to demonstrate his faith. The event created a sensation and was described by Nicolaus of Damascus, who met the embassy at Antioch, and related by Strabo (XV,1,73) and Dio Cassius. A tomb was made for Zarmanochegas, still visible in the time of Plutarch, which bore the following inscription, "ΖΑΡΜΑΝΟΧΗΓΑΣ ΙΝΔΟΣ ΑΠΟ ΒΑΡΓΟΣΗΣ" ("Zarmanochegas from Barygaza in India"). These accounts at least indicate that Indian religious men (Sramanas, to which the Buddhists belonged, as opposed to Hindu Brahmanas) were visiting Mediterranean countries. However, the term sramana is a general term for an Indian religious man in Jainism, Buddhism, and Ājīvika. It is not clear which religious tradition the man belonged to in this case. During the 19th century, Buddhism (along with other non-European religions and philosophies) came to the attention of Western intellectuals through the work of Christian missionaries, scholars, and imperial civil servants who wrote about the countries in which they worked. In English, Sir Edwin Arnold's book-length poem The Light of Asia (1879), a life of the Buddha, became a best-seller and has remained continuously in print since it first appeared. The Western intellectuals influenced included the German philosopher Schopenhauer, who first read about Buddhism and other Asian religions at an early stage, before he devised his philosophical system.
The American philosopher Henry David Thoreau translated a Buddhist sutra from French into English. There are frequent comparisons between Buddhism and the German philosopher Friedrich Nietzsche, who praised Buddhism in his 1895 work The Anti-Christ, calling it "a hundred times more realistic than Christianity". Robert Morrison believes that there is "a deep resonance between them" as "both emphasise the centrality of humans in a godless cosmos and neither looks to any external being or power for their respective solutions to the problem of existence". In the latter half of the 19th century, Buddhism came to the attention of a wider Western public, such as through the writings of Lafcadio Hearn. The late 19th century also saw the first-known modern western conversions to Buddhism, including leading Theosophists Henry Steel Olcott and Helena Blavatsky in 1880 in Sri Lanka, "beachcombers" such as the Irish ex-hobo U Dhammaloka around 1884 and intellectuals such as Bhikkhu Asoka (H. Gordon Douglas), Ananda Metteyya and Nyanatiloka at the start of the 20th century. A hallway in California's Hsi Lai Temple. Immigrant monks soon began teaching to western audiences, as well. The first Buddhists to arrive in the United States were Chinese. Hired as cheap labor for the railroads and other expanding industries, they established temples in their settlements along the rail lines. At about the same time, immigrants from Japan began to arrive as laborers on Hawaiian plantations and central-California farms. In 1899, they established the Buddhist Missions of North America, later renamed the Buddhist Churches of America. In 1893 Soyen Shaku was one of four priests and two laymen, representing Rinzai Zen, Jodo Shinshu, Nichiren, Tendai, and Shingon, composing the Japanese delegation that participated in the World Parliament of Religions in Chicago organized by John Henry Barrows and Paul Carus. In 1897, D.T. 
Suzuki came to the USA to work and study with Paul Carus, professor of philosophy. D.T. Suzuki was the single most important person in popularizing Zen in the west. His thoughts and works were influenced by western occultism, such as Theosophy and Swedenborgianism. By his works Suzuki contributed to the emergence of Buddhist modernism, a syncretistic form of Buddhism which blends Asian Buddhism with western transcendentalism. The first Buddhist temple in Europe, named Das Buddhistische Haus, was founded by Paul Dahlke in 1924 in Berlin. Dahlke had studied Buddhism in Sri Lanka prior to World War I. The first English translation of the Tibetan Book of the Dead was published in 1927, and the reprint of 1935 carried a commentary from Carl Jung. The book is said to have attracted many westerners to Tibetan Buddhism. Also published in English in 1927, Alexandra David-Néel's "My Journey to Lhasa" helped popularize the modern perception of Tibet and Tibetan Buddhism at large. Western spiritual seekers were attracted to what they saw as the exotic and mystical tone of the Asian traditions, and created esoteric societies such as the Theosophical Society of H.P. Blavatsky. The Buddhist Society, London was founded by Theosophist Christmas Humphreys in 1924. At first Western Buddhology was hampered by poor translations (often translations of translations), but soon Western scholars such as Max Müller began to learn Asian languages and translate Asian texts. During the 20th century the German writer Hermann Hesse showed great interest in Eastern religions, writing a book entitled Siddhartha. American beat generation writer Jack Kerouac became a well-known literary Buddhist, for his roman à clef The Dharma Bums and other works. Also influential was Alan Watts, who wrote several books on Zen and Buddhism.
The steady influx of refugees from Tibet in the 1960s and from Vietnam, Laos, and Cambodia in the 1970s led to renewed interest in Buddhism, and the countercultural movements of the 1960s proved fertile ground for its Westward diffusion. Buddhism supposedly promised a more methodical path to happiness than Christianity and a way out of the perceived spiritual bankruptcy and complexity of Western life. After the Second World War, a mainstream western Buddhism emerged. In 1959, a Japanese teacher, Shunryu Suzuki, arrived in San Francisco. At the time of Suzuki's arrival, Zen had become a hot topic amongst some groups in the United States, especially beatniks. Suzuki-roshi's classes were filled with those wanting to learn more about Buddhism, and the presence of a Zen master inspired the students. In 1965, Philip Kapleau traveled to Rochester, New York with the permission of his teacher, Haku'un Yasutani, to form the Rochester Zen Center. At this time, there were few if any American citizens who had trained in Japan with ordained Buddhist teachers. Kapleau had spent 13 years (1952–1965) in Japan and sat over 20 sesshin before being allowed to come back and open his own center. During his time in Japan after World War II, Kapleau wrote his seminal work The Three Pillars of Zen. In 1965, monks from Sri Lanka established the Washington Buddhist Vihara in Washington, D.C., the first Theravada monastic community in the United States. The Vihara was quite accessible to English-speakers, and Vipassana meditation was part of its activities. However, the direct influence of the Vipassana movement would not reach the U.S. until a group of Americans returned there in the early 1970s after studying with Vipassana masters in Asia. In the 1970s, interest in Tibetan Buddhism grew dramatically. This was fuelled in part by the 'Shangri-La' view of that country and also because Western media agencies were largely sympathetic to the 'Tibetan Cause'.
All four of the main Tibetan Buddhist schools became well known. Tibetan lamas such as the Karmapa (Rangjung Rigpe Dorje), Chögyam Trungpa Rinpoche, Geshe Wangyal, Geshe Lhundub Sopa, Dezhung Rinpoche, Sermey Khensur Lobsang Tharchin, Tarthang Tulku, Lama Yeshe and Thubten Zopa Rinpoche all established teaching centers in the West from the 1970s. In 1982 Thích Nhất Hạnh founded Plum Village in Dordogne, France, which, along with his hundreds of publications, has helped spread interest in Engaged Buddhism and Vietnamese Thiền (Zen). Perhaps the most widely visible Buddhist teacher in the west is the much-travelled Tenzin Gyatso, the current Dalai Lama, who first visited the United States in 1979. As the exiled political leader of Tibet, he is now a popular cause célèbre in the west. His early life was depicted in glowing terms in Hollywood films such as Kundun and Seven Years in Tibet. He has attracted celebrity religious followers such as Richard Gere and Adam Yauch. In addition, a number of Americans who had served in the Korean or Vietnam Wars stayed on in Asia for a period, seeking to understand both the horror they had witnessed and its context. A few of these were eventually ordained as monks in both the Mahayana and Theravada traditions, and upon returning home became influential meditation teachers (Bill Porter is one example), establishing centres such as the Insight Meditation Society in America. Another contributing factor in the flowering of Buddhist thought in the West was the popularity of Zen amongst the counter-culture poets and activists of the 1960s, due to the writings of Alan Watts, D.T. Suzuki and Philip Kapleau. Today, Buddhism is practiced by increasing numbers of people in the Americas, Europe and Oceania. Buddhism has become the fastest growing philosophical religion in Australia and some other Western nations.
There is a general distinction between Buddhism brought to the West by Asian immigrants, which may be Mahayana, Theravada or a traditional East Asian mix, and Buddhism as practiced by converts, which is often Zen, Pure Land, Vipassana or Tibetan Buddhism. Some Western Buddhists are actually non-denominational and accept teachings from a variety of different sects, which is far less frequent in Asia. Tibetan Buddhism in the West has remained largely traditional, keeping all the doctrine, ritual, faith, devotion, etc. An example of a large Buddhist group established in the West is the Foundation for the Preservation of the Mahayana Tradition (FPMT). FPMT is a network of Buddhist centers focusing on the Geluk lineage of Tibetan Buddhism. Founded in 1975 by Lamas Thubten Yeshe and Thubten Zopa Rinpoche, who began teaching Buddhism to Western students in Nepal, the FPMT has grown to encompass more than 142 teaching centers in 32 countries. Like many Tibetan Buddhist groups, the FPMT does not have "members" per se, or elections, but is managed by a self-perpetuating board of trustees chosen by its spiritual director (head lama), Lama Zopa Rinpoche. A feature of Buddhism in the West today is the emergence of other groups which, even though they draw on traditional Buddhism, are in fact an attempt at creating a new style of Buddhist practice. Buddhist imagery is increasingly appropriated by modern pop culture and also for commercial use. For example, the Dalai Lama's image was used in a campaign celebrating leadership by Apple Computer. Similarly, Tibetan monasteries have been used as backdrops to perfume advertisements in magazines. Hollywood movies such as Kundun, Little Buddha and Seven Years in Tibet have had considerable commercial success. 
The largest Buddhist temple in the Southern Hemisphere is the Nan Tien Temple (translated as "Southern Paradise Temple"), situated at Wollongong, Australia, while the largest Buddhist temple in the Western Hemisphere is the Hsi Lai Temple (translated as "Coming West Temple"), in Hacienda Heights, California, USA. Both are operated by the Fo Guang Shan Order, founded in Taiwan, and around 2003 the Grand Master, Venerable Hsing Yun, asked for Nan Tien Temple and Buddhist practice there to be operated by native Australian citizens within about thirty years. The largest monastery in the USA is the City of 10,000 Buddhas near Ukiah, California. This monastery was founded by Ven. Hsuan Hua who purchased the property. "Dharma Realm Buddhist Association purchased the City of Ten Thousand Buddhas in 1974 and established its headquarters there. The City currently comprises approximately 700 acres of land." ↑ See Urs App, "Schopenhauers Begegnung mit dem Buddhismus." Schopenhauer-Jahrbuch 79 (1998):35-58. The same author provides an overview of Schopenhauer's discovery of Buddhism in Arthur Schopenhauer and China. Sino-Platonic Papers Nr. 200 (April 2010) whose appendix contains transcriptions and English translations of Schopenhauer's early notes about Asian religions including Buddhism. ↑ David R. Loy, "Review of Nietzsche and Buddhism: A Study in Nihilism and Ironic Affinities by R.G. Morrison". ↑ Fields 1992, p. 124. ↑ 5.0 5.1 5.2 McMahan 2008. ↑ "80th anniversary of Das Buddhistische Haus in Berlin – Frohnau, Germany". Daily News (Sri Lanka). April 24, 2004. Retrieved November 20, 2014. ↑ Thévoz, Samuel (2016-07-21). "On the Threshold of the "Land of Marvels:" Alexandra David-Neel in Sikkim and the Making of Global Buddhism". Transcultural Studies. 0 (1): 149–186. ISSN 2191-6411. ↑ Woodhead, Linda; Partridge, Christopher; Kawanami, Hiroko (2016). Religions in the Modern World: Traditions and Transformations 3rd Edition. ↑ E.L. 
Mullen, "Orientalist commercializations: Tibetan Buddhism in American popular film Archived 2007-01-26 at the Wayback Machine." Halkias, G. T. "The Self-immolation of Kalanos and other Luminous Encounters Among Greeks and Indian Buddhists in the Hellenistic World." JOCBS, 2015 (8), pp. 163–186. Halkias, Georgios. "When the Greeks Converted the Buddha: Asymmetrical Transfers of Knowledge in Indo-Greek Cultures." In Religions and Trade: Religious Formation, Transformation and Cross-Cultural Exchange between East and West, ed. Volker Rabens. Leiden: Brill, 2013: 65-115. This article includes content from Buddhism in the West on Wikipedia. License under CC BY-SA 3.0.
Who has access to perform this task: Owner, Manager, Base user. All three user types can create a new Area through the Add New Item Process. Below are steps for the Owner and Manager to add a New Area without having to add a new item. The Manage Areas page will load. Click on the button. The New Area page will load. Enter the Area in the text box. Click the button. This new Area will be available when creating a new item or editing an existing item.
- /// <seealso cref="Analyzer"/> for German language. + /// <see cref="Analyzer"/> for German language. /// <li> As of 3.6, GermanLightStemFilter is used for less aggressive stemming. - /// Contains the stopwords used with the <seealso cref="StopFilter"/>. + /// Contains the stopwords used with the <see cref="StopFilter"/>. + /// used to tokenize all the text in the provided <see cref="Reader"/>. - /// Factory for <seealso cref="GermanLightStemFilter"/>. + /// Factory for <see cref="GermanLightStemFilter"/>. - /// Factory for <seealso cref="GermanMinimalStemFilter"/>. + /// Factory for <see cref="GermanMinimalStemFilter"/>. - /// Factory for <seealso cref="GermanNormalizationFilter"/>. + /// Factory for <see cref="GermanNormalizationFilter"/>. - /// A <seealso cref="TokenFilter"/> that stems German words. + /// A <see cref="TokenFilter"/> that stems German words. - /// filter object is created (as long as it is a <seealso cref="GermanStemmer"/>). + /// filter object is created (as long as it is a <see cref="GermanStemmer"/>). - /// Set a alternative/custom <seealso cref="GermanStemmer"/> for this filter. + /// Set a alternative/custom <see cref="GermanStemmer"/> for this filter. - /// Factory for <seealso cref="GermanStemFilter"/>. + /// Factory for <see cref="GermanStemFilter"/>. - /// <seealso cref="Analyzer"/> for the Greek language. + /// <see cref="Analyzer"/> for the Greek language. /// that will not be indexed at all). /// <li> As of 3.1, StandardFilter and GreekStemmer are used by default. - /// <seealso cref="GreekLowerCaseFilter"/> for best results. + /// <see cref="GreekLowerCaseFilter"/> for best results. /// and standardizes final sigma to sigma. /// <li> As of 3.1, supplementary characters are properly lowercased. - /// Factory for <seealso cref="GreekLowerCaseFilter"/>. + /// Factory for <see cref="GreekLowerCaseFilter"/>. - /// either <seealso cref="GreekLowerCaseFilter"/> or ICUFoldingFilter before GreekStemFilter. 
+ /// either <see cref="GreekLowerCaseFilter"/> or ICUFoldingFilter before GreekStemFilter. - /// Factory for <seealso cref="GreekStemFilter"/>. + /// Factory for <see cref="GreekStemFilter"/>. - /// either <seealso cref="GreekLowerCaseFilter"/> or ICUFoldingFilter. + /// either <see cref="GreekLowerCaseFilter"/> or ICUFoldingFilter. - /// <seealso cref="Analyzer"/> for English. + /// <see cref="Analyzer"/> for English. - /// Builds an analyzer with the default stop words: <seealso cref="#getDefaultStopSet"/>. + /// Builds an analyzer with the default stop words: <see cref="#getDefaultStopSet"/>. + /// which tokenizes all the text in the provided <see cref="Reader"/>. - /// Factory for <seealso cref="EnglishMinimalStemFilter"/>. + /// Factory for <see cref="EnglishMinimalStemFilter"/>. /// TokenFilter that removes possessives (trailing 's) from words. - /// @deprecated Use <seealso cref="#EnglishPossessiveFilter(Version, TokenStream)"/> instead. + /// @deprecated Use <see cref="#EnglishPossessiveFilter(Version, TokenStream)"/> instead. - /// Factory for <seealso cref="EnglishPossessiveFilter"/>. + /// Factory for <see cref="EnglishPossessiveFilter"/>. /// All terms must already be lowercased for this filter to work correctly. - /// in a previous <seealso cref="TokenStream"/>. + /// in a previous <see cref="TokenStream"/>. - /// Factory for <seealso cref="KStemFilter"/>. + /// Factory for <see cref="KStemFilter"/>. - /// in a previous <seealso cref="TokenStream"/>. + /// in a previous <see cref="TokenStream"/>. - /// Factory for <seealso cref="PorterStemFilter"/>. + /// Factory for <see cref="PorterStemFilter"/>. - /// <seealso cref="Analyzer"/> for Spanish. + /// <see cref="Analyzer"/> for Spanish. /// <li> As of 3.6, SpanishLightStemFilter is used for less aggressive stemming. + /// Builds an analyzer with the default stop words: <see cref="#DEFAULT_STOPWORD_FILE"/>. - /// Factory for <seealso cref="SpanishLightStemFilter"/>. 
+ /// Factory for <see cref="SpanishLightStemFilter"/>. - /// <seealso cref="Analyzer"/> for Basque. + /// <see cref="Analyzer"/> for Basque. - /// <seealso cref="Analyzer"/> for Persian. + /// <see cref="Analyzer"/> for Persian. /// yeh and keheh) are standardized. "Stemming" is accomplished via stopwords. - /// Factory for <seealso cref="PersianCharFilter"/>. + /// Factory for <see cref="PersianCharFilter"/>. - /// Factory for <seealso cref="PersianNormalizationFilter"/>. + /// Factory for <see cref="PersianNormalizationFilter"/>. - /// <seealso cref="Analyzer"/> for Finnish. + /// <see cref="Analyzer"/> for Finnish. - /// Factory for <seealso cref="FinnishLightStemFilter"/>. + /// Factory for <see cref="FinnishLightStemFilter"/>. - /// <seealso cref="Analyzer"/> for French language. + /// <see cref="Analyzer"/> for French language. /// <li> As of 3.6, FrenchLightStemFilter is used for less aggressive stemming. - /// Builds an analyzer with the default stop words (<seealso cref="#getDefaultStopSet"/>). + /// Builds an analyzer with the default stop words (<see cref="#getDefaultStopSet"/>). - /// Factory for <seealso cref="FrenchLightStemFilter"/>. + /// Factory for <see cref="FrenchLightStemFilter"/>. - /// Factory for <seealso cref="FrenchMinimalStemFilter"/>. + /// Factory for <see cref="FrenchMinimalStemFilter"/>. - /// A <seealso cref="TokenFilter"/> that stems french words. + /// A <see cref="TokenFilter"/> that stems french words. - /// filter object is created (as long as it is a <seealso cref="FrenchStemmer"/>). + /// filter object is created (as long as it is a <see cref="FrenchStemmer"/>). - /// Set a alternative/custom <seealso cref="FrenchStemmer"/> for this filter. + /// Set a alternative/custom <see cref="FrenchStemmer"/> for this filter.
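The substitutions in the diff above are purely mechanical, so a change like this would typically be scripted rather than hand-edited. A minimal sketch (Python; the helper name and sample line are illustrative, not part of the actual patch):

```python
# Illustrative only: rewrite <seealso cref="..."/> doc-comment tags to
# <see cref="..."/>, leaving the cref attribute and the rest of the
# line untouched.
def fix_doc_comment(line: str) -> str:
    return line.replace("<seealso cref=", "<see cref=")

before = '/// Factory for <seealso cref="GermanStemFilter"/>.'
after = fix_doc_comment(before)
print(after)  # /// Factory for <see cref="GermanStemFilter"/>.
```

Run over every `.cs` file, this reproduces the seealso-to-see hunks shown in the patch; a plain string replacement is safe here because the attribute spelling is uniform across the codebase.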
What is an isotope? An isotope is any of the different forms, or species, of atoms of a chemical element having the same atomic number and position in the Periodic Table, and virtually identical chemical behavior, but different physical properties and atomic masses. Nearly every known chemical element has at least one known isotope. Some isotopes of each element occur naturally, and these, along with their manufactured counterparts, have many chemical, medical, and scientific applications. Isotopic labeling is perhaps the most common of these applications, and involves the use of more unusual isotopes as markers in chemical reactions, allowing these reactions to be more easily recognized and distinguished using mass or infrared spectroscopy. A radiopharmaceutical is a radioactive compound that is used in radiotherapy or diagnosis. Many of these compounds are used in nuclear medicine as tracers in the diagnosis and treatment of many diseases; in addition, radiopharmaceuticals are also used to create images of the brain and other body organs, as well as of tumors and cancers. Some radiopharmaceuticals that are currently being used to treat cancer include: chromic phosphate P 32 for lung, ovarian, and prostate cancers; sodium iodide I 131 for certain types of thyroid cancer; strontium chloride Sr 89 for treatment of cancerous bone tissue; and sodium phosphate P 32 for the treatment of cancerous bone tissue and other types of cancers as well.
Does ruby have real multithreading? I know about the "cooperative" threading of ruby using green threads. How can I create real "OS-level" threads in my application in order to make use of multiple cpu cores for processing? You seem to be confusing two very different things here: the Ruby Programming Language and the specific threading model of one specific implementation of the Ruby Programming Language. There are currently around 11 different implementations of the Ruby Programming Language, with very different and unique threading models. (Unfortunately, only two of those 11 implementations are actually ready for production use, but by the end of the year that number will probably go up to four or five.) (Update: it's now 5: MRI, JRuby, YARV (the interpreter for Ruby 1.9), Rubinius and IronRuby). The first implementation doesn't actually have a name, which makes it quite awkward to refer to it and is really annoying and confusing. It is most often referred to as "Ruby", which is even more annoying and confusing than having no name, because it leads to endless confusion between the features of the Ruby Programming Language and a particular Ruby Implementation. It is also sometimes called "MRI" (for "Matz's Ruby Implementation"), CRuby or MatzRuby. MRI implements Ruby Threads as Green Threads within its interpreter. Unfortunately, it doesn't allow those threads to be scheduled in parallel; only one thread can run at a time. However, any number of C Threads (POSIX Threads etc.) can run in parallel to the Ruby Thread, so external C Libraries, or MRI C Extensions that create threads of their own, can still run in parallel. The second implementation is YARV (short for "Yet Another Ruby VM"). YARV implements Ruby Threads as POSIX or Windows NT Threads; however, it uses a Global Interpreter Lock (GIL) to ensure that only one Ruby Thread can actually be scheduled at any one time. Like MRI, C Threads can actually run in parallel to Ruby Threads.
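The behaviour described above is easy to observe with the standard Thread API: the code below is the same whatever the implementation, but under MRI/YARV the CPU-bound threads take turns on one core, while under JRuby they can run on several cores at once. A minimal sketch (the workload is a stand-in, and timings will vary by implementation):

```ruby
# Spawn several threads doing CPU-bound work. Under MRI/YARV the GIL
# means they take turns; under JRuby they can run on separate cores.
def cpu_work(n)
  acc = 0
  n.times { |i| acc += i }
  acc
end

threads = 4.times.map do
  Thread.new { cpu_work(1_000_000) }
end

# Thread#value joins the thread and returns its block's result.
results = threads.map(&:value)
puts results.inject(:+)
```

Wrapping the spawn/join in `Process.clock_gettime` timings and comparing against a serial run is the quickest way to see whether your Ruby actually schedules these in parallel.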
In the future, it is possible that the GIL might get broken down into more fine-grained locks, thus allowing more and more code to actually run in parallel, but that's so far away it is not even planned yet. XRuby also implements Ruby Threads as JVM Threads. Update: XRuby is dead. IronRuby implements Ruby Threads as Native Threads, where "Native Threads" in case of the CLR obviously means "CLR Threads". IronRuby imposes no additional locking on them, so they should run in parallel, as long as your CLR supports that. Ruby.NET also implements Ruby Threads as CLR Threads. Update: Ruby.NET is dead. Rubinius implements Ruby Threads as Green Threads within its Virtual Machine. More precisely: the Rubinius VM exports a very lightweight, very flexible concurrency/parallelism/non-local control-flow construct, called a "Task", and all other concurrency constructs (Threads in this discussion, but also Continuations, Actors and other stuff) are implemented in pure Ruby, using Tasks. Update: The information about Rubinius in this answer is about the Shotgun VM, which doesn't exist anymore. The "new" C++ VM does not use green threads scheduled across multiple VMs (i.e. Erlang/BEAM style); it uses a more traditional single VM with multiple native OS threads model, just like the one employed by, say, the CLR, Mono, and pretty much every JVM. MacRuby started out as a port of YARV on top of the Objective-C Runtime and the CoreFoundation and Cocoa Frameworks. It has now significantly diverged from YARV, but AFAIK it currently still shares the same Threading Model with YARV. Update: MacRuby depends on Apple's garbage collector, which has been declared deprecated and will be removed in later versions of Mac OS X; MacRuby is undead. Cardinal is a Ruby Implementation for the Parrot Virtual Machine. It doesn't implement threads yet; however, when it does, it will probably implement them as Parrot Threads. Update: Cardinal seems very inactive/dead.
MagLev is a Ruby Implementation for the GemStone/S Smalltalk VM. I have no information on what threading model GemStone/S uses, what threading model MagLev uses, or even whether threads are implemented yet (probably not). Unfortunately, only two of these 11 Ruby Implementations are actually production-ready: MRI and JRuby. So, if you want true parallel threads, JRuby is currently your only choice – not that that's a bad one: JRuby is actually faster than MRI, and arguably more stable. Otherwise, the "classical" Ruby solution is to use processes instead of threads for parallelism. The Ruby Core Library contains the Process module with the Process.fork method, which makes it dead easy to fork off another Ruby process. Also, the Ruby Standard Library contains the Distributed Ruby (dRuby / dRb) library, which allows Ruby code to be trivially distributed across multiple processes, not only on the same machine but also across the network. Ruby 1.8 only has green threads; there is no way to create a real "OS-level" thread. But ruby 1.9 will have a new feature called fibers (note, though, that fibers are lightweight coroutines, not OS-level threads). Unfortunately, Ruby 1.9 is still in beta; it is scheduled to be stable in a couple of months. MRI doesn't have real OS-level threads; YARV is closer. Ruby has closures as Blocks, lambdas and Procs. To take full advantage of closures and multiple cores in JRuby, Java's executors come in handy; for MacRuby I like GCD's queues. Note that being able to create real "OS-level" threads doesn't imply that you can use multiple cpu cores for parallel processing. Look at the examples below. As you can see here, there are four OS threads, however only the one with state R is running. This is due to a limitation in how Ruby's threads are implemented. Same program, now with JRuby. You can see three threads with state R, which means they are running in parallel.
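The process-based alternative mentioned above needs nothing beyond the core library: each fork gets its own interpreter (and, on MRI, its own GIL), so the children genuinely run in parallel on a multi-core machine. A minimal sketch, with pipes for result collection and a stand-in workload (Process.fork is Unix-only):

```ruby
# Fork worker processes and collect their results over pipes.
# Each child is a full OS process, so MRI's green threads / GIL
# don't limit parallelism across them.
readers = 4.times.map do |i|
  reader, writer = IO.pipe
  fork do
    reader.close
    sum = (i * 1000...(i + 1) * 1000).inject(:+)  # stand-in workload
    writer.write(sum.to_s)
    writer.close
    exit!(0)                  # skip at_exit handlers in the child
  end
  writer.close                # parent keeps only the read end
  reader
end

totals = readers.map { |r| v = r.read.to_i; r.close; v }
Process.waitall               # reap the children
puts totals.inject(:+)
```

The trade-off versus threads is the cost of inter-process communication; for anything richer than a pipe of strings, that is where dRuby comes in.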
If you are interested in Ruby multi-threading, you might find my report Debugging parallel programs using fork handlers interesting. For a more general overview of the Ruby internals, Ruby Under a Microscope is a good read. Also, Ruby Threads and the Global Interpreter Lock in C on Omniref explains, in the source code, why Ruby threads don't run in parallel. How about using drb? It's not real multi-threading but communication between several processes, but you can use it now in 1.8 and it's fairly low friction. If you are using MRI, then you can write the threaded code in C, either as an extension or using the ruby-inline gem. If you really need parallelism in Ruby for a production-level system (where you cannot employ a beta), processes are probably a better alternative. But it is most definitely worth trying threads under JRuby first. Also, if you are interested in the future of threading under Ruby, you might find this article useful. Since I could not edit that answer, I am adding a new reply here. TruffleRuby is a high-performance implementation of the Ruby programming language. Built on the GraalVM by Oracle Labs, TruffleRuby is a fork of JRuby, combining it with code from the Rubinius project, and also containing code from the standard implementation of Ruby, MRI. It is still under active development and not production ready. This version of Ruby seems born for performance; I don't know whether it supports parallel threads, but I think it should.
war From the beginning of the 18th Dynasty, Egypt's relations with the Near East were dominated by military action. Egyptian kings tried to gain control over most of Syria-Palestine. The region was at this period divided into small city-states, none of which had the resources and power to resist Egypt. Farther north the Egyptians met two greater powers: the Mitanni empire (hinterland of modern Syria) and the Hittite empire (modern central to east Turkey). Both empires blocked Egyptian expansion. Several gods previously unknown in Egypt were now also worshipped in Egypt: Astarte, Reshpu, Baal, Qadshu. Many of the objects relating to these gods were found at Memphis, a key trading centre, where many foreigners must have lived. The cult of such deities might have been introduced to Egypt by foreigners, but many Egyptians also worshipped them in this and later periods. People of the Near East are often shown in New Kingdom art. In tomb scenes they appear as people bringing commodities. In temple scenes they are shown in the same role, and also in battle scenes celebrating the victories of Egyptian kings. They are recognisable by their long beards, paler skin and colourful clothes covering the whole body. diplomatic contacts Diplomatic contacts between Egypt and Asian countries are especially well attested for two periods: the Amarna period and the reign of Ramesses II. There is evidence for diplomatic contact in other periods of the New Kingdom, but it is not so abundant. Diplomatic contacts (via letters written in cuneiform) between states in the Near East are already well attested at the beginning of the second millennium BC. However, Egypt does not seem to have been part of this early international network. This seems to have changed at the beginning of the 18th Dynasty, when Egyptian kings started to have direct contact with rulers in Asia on a regular basis. Amarna period About 400 cuneiform tablets were found at Amarna.
The tablets contain the diplomatic correspondence of the Amarna period. They are of the utmost importance for studying the history of the Near East in the 14th century BC, and illustrate that Egyptian kings had diplomatic contact with a great number of Near Eastern states, including the major powers of the region (Babylon, Assyria, the Hittite empire, etc.). Ramesses II Many cuneiform tablets belonging to the diplomatic correspondence under Ramesses II were found in Hattusha, the capital of the Hittite empire. In addition to the kings themselves, their wives and high officials also exchanged letters. Hittite: war and peace The strong Hittite empire (with its centre in modern Turkey) was one of the main enemies of Egypt in the Near East. At Kadesh (a town in Syria) the armies of Ramesses II clashed with the Hittite armies, but neither side was able to secure a decisive victory. After the battle, hostilities continued until both empires arranged the first known peace treaty in world history. Ramesses II married a daughter of the Hittite king (Maathorneferure). The nature of Egyptian rule in Asia in the 18th Dynasty In the 18th Dynasty there is no evidence for a colonisation of Syria-Palestine to the extent attested in Nubia. The city-states of the region seem to have been in most cases loose vassals, who sometimes only had to pay some kind of tribute. In contrast to Nubia, there was no large-scale Egyptian-style administration installed in Asia. The nature of Egyptian rule in the 19th and 20th Dynasties There is some evidence for a stronger Egyptian presence in South Palestine. At several places Egyptian-style houses were found, possibly belonging to Egyptians (mayors?) living there. The burial customs in the region are very much influenced by Egypt: in particular, the use of pottery coffins in human shape seems to come directly from Egypt. Many Egyptian inscriptions and Egyptian objects were found in that area.
This direct control seems to have ended after Ramesses IV, the last New Kingdom ruler still well attested in the area. After his reign the Egyptians seem to have left the country.
A phrase is a group of words working together without a subject–verb pair (for example, "running down the street"). A clause is a group of words working together with a subject and a verb showing time, or tense (for example, "when she arrived"). The type of clause depends on its completeness and its function in the sentence.
This field contains data about the aviation turbine fuel used in HEP-owned vehicles. The value can be recorded to three decimal places. Fuel used in vehicles owned or leased by the HEP - aviation turbine fuel (litres).
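A field defined like this is typically validated and rounded on entry. A minimal sketch of that rule — the method name is illustrative only, not part of any HEP data specification:

```ruby
# Round a litres figure to the three decimal places the field allows.
# Rejects negative values; the method name is illustrative only.
def aviation_fuel_litres(value)
  raise ArgumentError, "litres must be non-negative" if value.negative?
  value.round(3)
end

puts aviation_fuel_litres(1234.56789)  # => 1234.568
```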
Pope Stephen II (Latin: Stephanus II (or III); 715 – 26 April 757) was Pope from 26 March 752 to his death in 757. He succeeded Pope Zachary following the death of Pope-elect Stephen (sometimes called Stephen II). Stephen II marks the historical delineation between the Byzantine Papacy and the Frankish Papacy. The Lombards to the north of Rome had captured Ravenna, capital of the Eastern Roman Empire Exarchate of Ravenna, in 751, and began to put pressure on the city of Rome. Relations were very strained in the mid-8th century between the papacy and the Eastern Roman emperors over the support of the Isaurian Dynasty for iconoclasm. Likewise, maintaining political control over Rome became untenable as the Eastern Roman Empire itself was beset by the Abbasid Caliphate to the south and Bulgars to the northwest. As a result, Rome was unable to secure military support from Constantinople to push back Lombard forces. Prior to Stephen's alliance with Pepin, Rome had constituted the central city of the Duchy of Rome, which composed one of two districts within the Exarchate of Ravenna, along with Ravenna itself. At Quiercy the Frankish nobles finally gave their consent to a campaign in Lombardy. Roman Catholic tradition asserts that then and there Pepin executed in writing a promise to give to the Church certain territories that were to be wrested from the Lombards, and which would be referred to later as the Papal States. Known as the Donation of Pepin, no actual document has been preserved, but later 8th century sources quote from it. Stephen now anointed Pepin at Saint-Denis in a memorable ceremony that was evoked in the coronation rites of French kings until the end of the ancien regime in 1789. 
In return, in 756, Pepin and his Frankish army forced the last Lombard king to surrender his conquests, and Pepin officially conferred upon the pope the territories belonging to Ravenna, even cities such as Forlì with their hinterlands, laying the Donation of Pepin upon the tomb of Saint Peter, according to traditional later accounts. The gift included Lombard conquests in the Romagna and in the duchies of Spoleto and Benevento, and the Pentapolis in the Marche (the "five cities" of Rimini, Pesaro, Fano, Senigallia and Ancona). For the first time, the Donation made the pope a temporal ruler over a strip of territory that extended diagonally across Italy from the Tyrrhenian to the Adriatic. Over these extensive and mountainous territories the medieval popes were unable to exercise effective sovereignty, given the pressures of the times, and the new Papal States preserved the old Lombard heritage of many small counties and marquisates, each centered upon a fortified rocca.
Why should you be serious about web performance testing? We need performance testing to make sure that our websites load as fast as possible. There are a few reasons why you need to test your website speed and make sure that your site is loading fast. Visitor retention and conversion rate: your website visitors (who are mostly potential clients) will not stay around waiting for a website to load. There are plenty of other fast-loading websites out there that serve up the same information, products and services. To make sure that your competition doesn't capture your customers, make sure you don't give them a reason to leave your website. Search engine optimization isn't just about optimizing everything on a page, such as meta tags and image alt tags. You should also pay attention to off-site elements such as site load time. Google does. Google actually factors site speed into its ranking equation and, all things being equal, if your competitor's site loads faster, they will probably rank higher than you in the search engines. If you are selling products online, the difference in making a sale on your website could simply be seconds of load time. An additional delay of a few seconds in page-load time can cause a 20% drop in traffic. Think about 20% fewer customers and 20% fewer sales. It makes a difference. Web load testing is the process of concurrent users accessing the application while measuring its response. Load testing is performed to determine an application's functional behavior under both normal and anticipated peak load conditions. It helps to identify the maximum number of concurrent users an application supports, as well as any bottlenecks, and to determine which feature is slow. Web application load testing is usually a type of non-functional testing, although it can be used as a functional test to validate application behavior.
For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing simulates actual use, as opposed to testing using theoretical or analytical modeling. Load testing lets you measure your website's Quality of Service (QoS) performance based on actual customer behavior. Nearly all load testing tools and frameworks follow the classical load testing paradigm: when customers visit your website, a script recorder records the communication and then creates related interaction scripts. A load generator tries to replay the recorded scripts, which may be modified with different test parameters before replay. During the replay procedure, both hardware and software statistics are monitored and collected by the tool. Finally, all these statistics are analyzed and a load testing report is generated. Load and performance testing analyzes software intended for a multi-user audience by subjecting the software to different numbers of virtual and live users while monitoring performance measurements under these different loads. Load and performance testing is usually conducted in a test environment identical to the production environment before the software system is permitted to go live. A test analyst can use various load testing tools to create these Virtual Users and their activities. Once the test has started and reached a steady state, the application is tested at the 100 Virtual User load as described above. The application's performance can then be monitored and captured. The specifics of a load test plan or script will generally vary across organizations. For example, in the bulleted list above, the first item could represent 25 Virtual Users browsing unique items, random items, or a selected set of items, depending upon the test plan or script developed.
However, all load test plans attempt to simulate system performance across a range of anticipated peak workflows and volumes. The criteria for passing or failing a load test are generally different across organizations as well. There are no standards specifying acceptable load testing performance metrics. A common misconception is that load testing software provides record and playback capabilities like regression testing tools. Load testing tools analyze the entire OSI protocol stack, whereas most regression testing tools focus on GUI performance. For example, a regression testing tool will record and play back a mouse click on a button in a web browser, but a load testing tool will send out the hypertext that the web browser sends after the user clicks the button. In a multiple-user environment, load testing tools can send out hypertext for multiple users, with each user having a unique login ID, password, etc. Load testing tools also provide insight into the causes of slow performance. There are numerous possible causes for slow system performance, including, but not limited to, the following:

- Application server(s) or software
- Database server(s)
- Network – latency, congestion, etc.
- Client-side processing
- Load balancing between multiple servers

Load testing is especially important if the application, system or service will be subject to a service level agreement (SLA). User experience under load test: in the example above, while the device under test is under production load – 100 Virtual Users – run the target application. The performance of the target application here would be the user experience under load. Apache JMeter is a protocol-level load testing tool. It can be used to test loading times for static and dynamic elements in a web application. A tester can simulate a heavy load on a server, group of servers, network or object to test their strength. It can be installed on any desktop with Windows, Mac or Linux.
It has a user-friendly interface and can also be used from a command-line interface. It has the ability to extract data from popular response formats like HTML, JSON, XML or any textual format. It will automatically put together plenty of performance-related statistics for you based on the test results. You can trace your performance history, and see how fast a website loads from various geographical locations. It saves each test for you so you can review it later and also see how things change over time. Testers can use CloudQA's Smart Recorder for generating JMX scripts for JMeter. We added this feature because JMeter is too difficult for beginners, and even with skills, updating the script is not easy. The tool supports large-scale performance testing with heavy user load and complex scenarios, and provides a clear analysis of the functionality and performance of the web application. This tool is generally best for large enterprises. LoadRunner is an HP product, available from its HP software division. It is useful in understanding and determining the performance and outcome of the system under an actual load. LoadRunner comprises different tools: Virtual User Generator, Controller, Load Generator and Analysis.
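The virtual-user model these tools implement can be sketched in a few lines. In the sketch below the "request" is a simulated workload (a short sleep) rather than a real HTTP call, and the user and request counts are illustrative; swapping the block for a `Net::HTTP` call would turn it into a toy HTTP load generator:

```ruby
# A toy load generator: N virtual users (threads) each issue R
# "requests" and we record per-request latency on a thread-safe queue.
def run_load(users:, requests:, &request)
  latencies = Queue.new
  threads = users.times.map do
    Thread.new do
      requests.times do
        t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        request.call
        latencies << Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
      end
    end
  end
  threads.each(&:join)

  samples = []
  samples << latencies.pop until latencies.empty?
  { count: samples.size,
    avg: samples.sum / samples.size,
    max: samples.max }
end

stats = run_load(users: 5, requests: 4) { sleep 0.01 }  # stand-in for an HTTP call
puts stats  # 20 samples with average and worst-case latency
```

Real tools add what this sketch omits: ramp-up schedules, think times, pass/fail thresholds and percentile reporting.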
Do you allow your children to do normal things, or do you always set limits? Never forbid children from doing normal things. You may be thinking that limits make a child feel safer and calmer. Your friends might also have suggested placing restrictions on everything. But certain restraints can do just the opposite, making your child feel insecure and slowing their development in life.
Are laws of creation different from and independent of laws of physics? In order to answer this question, we must first review the question of initial conditions at or before the Big Bang. In cosmology, there are two key issues – the amount of matter and its distribution in the universe. These two issues then give rise to many other issues – the geometry of the universe, the initial conditions, the evolution of the universe, etc. In the early 1930s, Fritz Zwicky pointed out that many galaxies were moving and rotating much faster than the amount of visible matter in them could hold together with the force of gravity. Forty years or so later, this missing mass phenomenon was rediscovered as a missing radiation problem; that is, there must be some dark matter in those galaxies. What is this dark matter? There are basically two types – baryonic and fictitious dark matter. Baryonic dark matter is made of particles we know of. Black holes, brown dwarf stars, intergalactic hydrogen clouds, blue (very old and distant) galaxies and neutrinos are baryonic dark matter. These types of baryonic dark matter do indeed exist. Although they are invisible themselves, they can be detected by other means, such as absorption spectra and gravitational lenses. With this baryonic dark matter, the missing mass issue for galaxies can be resolved. Then why should we contemplate any fictitious dark matter, such as squarks or sleptons? As I mentioned in previous TOE papers, such fictitious matter was predicted by supersymmetry, which was only a half-right idea. Although these fictitious particles have never been observed in laboratories around the world, many physicists and cosmologists believe in them because our universe sort of needs them. There are three possible geometries for the universe – open, flat or closed. These three possibilities are described with a fundamental cosmological number known as omega.
Omega is the ratio between the density of matter there really is in the universe and the amount it would take to slow the expansion forever but never stop it. If omega is smaller than one (1), the universe will expand forever and reach a heat death, which is a state of uniform temperature (that is, near absolute zero kelvin). If omega is larger than one (1), the universe will eventually stop expanding and begin to contract into a Big Crunch, the ultimate inferno. Either of these outcomes would eliminate the human race eventually. But if omega is equal to one (1) exactly, then the universe will expand forever, though at a smaller and smaller rate. In this case, there is a chance for civilizations to build bio-islands in that universe and thus to last eternally. Today, omega appears to be about 0.35, based on a rough estimate of the number of galaxies in the universe and the presumed weight of each, which includes all baryonic dark matter. Seemingly our fate is set; we will face the heat death on the judgement day. Not so! Most physicists and cosmologists believe that omega must be exactly equal to one, and that we simply haven't found the missing 65%. Their faith comes from two lines of reasoning – anthropic and teleological. The anthropic argument: if omega was less than one at early stages of the universe, it would be close to zero by now, after 15 billion years of evolution. If omega was larger than one at early stages of the universe, it would be close to infinity by now. In either case, we would not be here to discuss what value omega is. The fact that we are talking about it implies that omega must be exactly equal to one at all times. The teleological argument: the universe has some deep meaning, and part of that meaning is ourselves. In order to avoid the heat death or the inferno and to perpetuate these meanings, omega must be equal to one. At any rate, if omega is equal to one exactly, then there must be some non-baryonic dark matter.
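The ratio described above is, in standard notation, the cosmological density parameter; writing \(\rho_c\) for the critical density that separates eternal expansion from recollapse:

```latex
\Omega \;=\; \frac{\rho}{\rho_c},
\qquad
\rho_c \;=\; \frac{3 H_0^{2}}{8 \pi G},
```

where \(H_0\) is the Hubble constant and \(G\) is Newton's gravitational constant. \(\Omega < 1\) gives an open universe, \(\Omega = 1\) a flat one, and \(\Omega > 1\) a closed one.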
The fictitious matter predicted by supersymmetry thus became the foundation of a new cosmological theory – the CDM (Cold Dark Matter) model. Why cold? What does it mean? Cold is the opposite of hot. In cosmology, the neutrino is called hot dark matter. The temperature of dark matter is defined in terms of the free streaming scale. The free streaming scale is determined by the time it would take a blob of particles to collapse under gravity versus the time it takes the typical particle to traverse the blob. When baryons try to form a blob, there are two competing forces at work: gravity, which makes the blob want to collapse, and pressure, which resists collapse. For neutrinos, there is no such thing as pressure, since the particles don't even notice each other, let alone other matter or light. This tendency to boil off into space is the reason they are called hot. The free streaming scale for the neutrino is at least ten thousand times bigger than the size of an average galaxy. If neutrinos were the dark matter that brings omega to one (1), then the most fundamental mass concentrations in the universe should be superclusters instead of galaxies. It also means that a neutrino-dominated universe should form structure on the largest scales first, and on the scale of galaxies last. But there is strong evidence that the reverse order is true; that is, galaxies formed first, the superclusters last. Furthermore, because of its large free streaming scale (being too hot), the neutrino cannot be packed too tightly. This means that galaxies much smaller than the Milky Way shouldn't have significant dark halos at all, but observations of dwarf galaxies show that they do have massive dark halos. Now it is clear that the neutrino cannot be a major factor in bringing omega to one (1), if it plays any role at all. So cosmologists must find some colder dark matter if they insist that omega ought to be one (1).
They must be cold enough to allow galaxies to form first; that is, their free streaming scale must be equal to the size of a galaxy. A big zoo of such cold dark matter candidates was dreamed up by physicists. Since they are fictitious particles, their temperature (hot or cold) can be assigned arbitrarily, depending upon the needs of the physicists. No one can prove it one way or another anyway. Gravitinos are considered to be warm. The photino, the supersymmetric counterpart of the photon, is cold. The original cold dark matter (CDM) model consists of three components. First, CDM claims that omega must be equal to one (1) exactly. Since the observed omega is much less than one, there must be dark matter. Second, this dark matter must be cold, which is to say that its characteristic speed as it whips through the universe is much slower than the speed of light. Third, the universe went through a period of incomprehensibly rapid inflation before it was far into its first second of life; it increased in size by an astounding factor of 10 to the 30th power in a fraction of a trillionth of a second, before the explosive expansion of the Big Bang. Under this CDM model, the cosmos should have a hierarchy of structure, with galaxies huddled together into clusters of galaxies, and the clusters gathered into superclusters. For almost a decade, the universe looked exactly like the CDM model. But by 1986, two independent sky surveys had discovered Giant Bubbles and the Great Wall. In a universe dominated by CDM, those newly discovered cosmic objects are too big to exist comfortably. In order to overcome these new difficulties, cosmologists dreamed up a fourth component to add to the original CDM. It is called biasing; that is, galaxies formed not everywhere, but only in the most densely packed regions of dark matter. All four of these CDM statements are unproven. The first statement came from a religious craving, sort of, at least.
The second statement is, on the one hand, demanded by the known structure (galaxies first, superclusters last) of the cosmos; on the other hand, it claims that many fictitious particles (squarks, photinos, etc.) are realities. So these first two statements are sort of science fiction. On the contrary, the fourth statement is demanded by some new observations, but the most important of all is the third statement. If CDM is a genuine scientific theory at all, it is because it contains this third statement – the inflationary scenario. The inflationary scenario was proposed by Alan Guth in 1980. This new idea solved three long-standing cosmological issues – the horizon problem, the flatness problem and the large-scale structure problem – at least on the observational level, if not on the metaphysical level. Today, the prevailing view is that our universe is about 15 billion years old. When light from two opposite sides of the universe has each taken 15 billion years to reach us, it means that those regions could never have been in causal contact at any stage of their entire history. This is the horizon problem. According to the Big Bang theory, the universe should become more curved as time passes. But observation reveals that the spatial geometry of the part of the universe we can observe is extremely flat, although it may indeed be curved at some scale far beyond the horizon. This is the flatness problem. In order to understand the large-scale structure problem, we must first discuss the Copernican Cosmological Principle. In the 1510s, Nicolaus Copernicus formulated a cosmological principle which declares: the Earth does not occupy a special observation position. This principle was later expanded to contain two statements. First, the universe looks identical in whichever direction we look; that is, it is isotropic. Second, going one step further, the universe is isotropic as seen from any other point, which is known as homogeneity.
A homogeneous and isotropic universe has the greatest possible degree of spatial symmetry and the simplest spatial structure. With this Copernican Cosmological Principle (CCP), many predictions about the structure of the cosmos can be made, and many of them have been proved true. The first concerns Olbers' paradox in an infinite universe: why does it get dark at night? In 1823, Heinrich Olbers showed with a simple mathematical model that the night sky ought to be as bright as day if the universe is infinite in size and if the CCP (Copernican Cosmological Principle) is true. The fact that the night sky is dark means either that those assumptions are wrong or that some unknown factors are at work, such as the universe expanding or having had a finite beginning (not being infinite in size). In 1922, Alexander Friedmann combined the CCP with General Relativity and predicted that the universe cannot be static. His prediction was ignored by the world (including Einstein) but was confirmed seven years later, in 1929, by Edwin Hubble. Since the universe is expanding, it must have had a finite beginning. In 1948, George Gamow put forward the idea of the Big Bang theory, and Olbers' paradox was put to rest. The expansion of the universe will dim the light (through redshift) by a factor of about 2. The finite age of the universe (that is, the universe is still young) gives darkness to the night sky. Although on a small scale (such as the size of a galaxy or a cluster of galaxies) the mass distribution is not very uniform, the universe does seem to be roughly the same in every direction, provided one views it on a scale large compared to the distance between clusters of galaxies. Not only did the CCP survive, but it predicts that there shall be a primordial microwave background when it is combined with the Big Bang theory.
When the background temperature of the universe was above 6000 degrees Kelvin, photons couldn't travel very far before being absorbed, and photons and matter were tied together in a cycle of emission and reabsorption. This is called the coupling era, which lasted until 300,000 years after the Big Bang. When the temperature of the universe dropped below six thousand degrees, decoupling took place, and photons were finally able to shine freely. The photons that escaped at the time of decoupling, and which have been shining more or less unimpeded since then, should still be here with us now. Fifteen billion years after decoupling, those primordial photons ought to have cooled to about 3 degrees Kelvin (2.7 to be exact) and to have shifted in wavelength into the microwave region, according to the calculations of the Big Bang theory. This microwave background was accidentally discovered by two radio engineers (not physicists) in 1965, and they were awarded the Nobel Prize in physics in 1978. The discovery of this microwave background on the one hand reaffirms the validity of the CCP; on the other hand, it gives rise to the large-scale structure problem. After almost three decades of measurement, this microwave background is too smooth (being isotropic and homogeneous to an accuracy of at least one part in ten thousand) to be able to give rise to a cosmos like the one we know. Since there is indeed a hierarchy of structure of galaxies, there must be some fluctuation in the microwave background. In short, there is a seemingly irreconcilable difficulty between the smoothness of the microwave background and the actual large-scale structures of the universe. This is the large-scale structure problem. The situation got even worse in the late 1980s. In a deep sky survey, many giant voids (bubbles) and the Great Wall (a sheet of galaxies five hundred million light-years long, two hundred million wide and about fifteen million thick) were discovered.
Although the Copernican Cosmological Principle allows some anisotropy on a small scale (the size of a cluster of galaxies), these newly discovered giant bubbles and the Great Wall are much bigger than the traditional CCP allows. In short, the validity of the CCP is again in question. In 1989, the COBE (COsmic Background Explorer) satellite was sent into orbit around Earth to probe deeper than ever before into the microwave background. In 1992, the COBE team reported that they had seen God -- the fluctuation (about seventeen millionths of a degree Kelvin) from the inflation which preceded the Big Bang. All these theories (CDM; HDM, Hot Dark Matter), discoveries (giant bubbles, the Great Wall, the microwave background), the principle (CCP) and the claim (seeing God) boil down to a single issue -- the initial condition before or at the inflationary bang. What is the initial condition at and before the creation? On the one hand, the inflationary scenario seemingly provided an answer. It solves the horizon problem because the parts of the observed universe moved out of each other's horizons during this huge inflation. The flatness problem vanishes because the huge expansion blows the universe up so much that it appears flat. The large-scale structure problem is also solved because the sudden expansion would have locked in quantum fluctuations that could have seeded the formation of large-scale structures. On the other hand, this inflationary scenario gives rise to some new questions. Why would such a moment of inflation happen? What happened before inflation? In short, the inflationary scenario does not really solve the question of the initial condition but only pushes it further back in time. By its very name, the initial condition has at least two attributes. One, it cannot be the result of any known physical process (including the Big Bang), because it is the condition at or before the Big Bang. That is, it must be a self-existent and self-referencing entity.
Two, it cannot be annihilated by any process, such as the inflationary bang, a converging system which rapidly loses memory of its initial condition, or a diverging system which is ultimately chaotic. That is, it must still exist even today in some form. Since the microwave background is the fingerprint of the initial condition, the microwave background cannot be influenced by any process (such as galaxy formation) on any scale (the size of a galaxy, cluster or supercluster). The degree of fluctuation must have been just about the same for all scales -- tiny, large or extra-large. This is in fact the finding of COBE. The mathematical expression of this microwave background is described as the HZP (Harrison-Zel'dovich-Peebles) spectrum. The COBE data matches the HZP spectrum very well. Both of them hug the line of zero. In fact, the areas above and under the zero line very nearly cancel each other out. That is, the net fluctuation is zero, although the local fluctuations gave rise to the large-scale structure of the universe. Thus, we can very confidently state an initial condition hypothesis as follows: the initial condition at or before the Big Bang is that the net quantum fluctuation is ZERO. In order to maintain this initial condition, any localized positive fluctuation (which gave rise to galaxies) must be canceled out by a negative fluctuation (which gave rise to gravity), and this is the real-ghost creation process. Modern cosmology is based on Friedmann's model and Hubble's discovery. Friedmann's model is based on the CCP and General Relativity. General Relativity was intended to be a gravity theory, which was supposed to describe an attractive force. Then, for heaven's sake, why should the universe expand? There are only four types of force in nature. Two of them (the strong force and gravity) are attractive. The electromagnetic and weak forces can be either attractive or repulsive. The weak force operates over an extremely short distance.
It did influence the structure of the early universe. Today, it still contributes to many cosmic processes, such as supernovae, but it does not seem to be the force expanding the universe. The electromagnetic force is a long-range force, but most large objects are electrically neutral. Although the momenta of photons randomly impact objects in the universe, photon momenta cannot account for the expansion of the universe. In the whole world, only Einstein and Alan Guth felt the need to introduce a kind of antigravity force. Einstein came up with a cosmological constant, but it turned out to be equal to zero exactly; that is, there is no cosmological constant. Guth came up with the idea of a false vacuum, which acted as antigravity only during the inflationary period, which was before the Big Bang. Why should the universe continue to expand even today, after Guth's antigravity force (the false vacuum) vanished 15 billion years ago? What is the antigravity force now? Any cosmological model that does not include an antigravity force is doomed to failure, because Hubble's discovery is a fact of nature. The expansion of the universe is described by Hubble's law: galaxies recede from the Milky Way at speeds proportional to their distances. So, all galaxies seem to fly away from the Milky Way. If those galaxies are flying, they are powered by something. But what kind of motor do those galaxies run on -- a General Motors, a Ford or a Datsun? This is not a joke. On the other hand, those galaxies can be viewed as stationary, with the space between them expanding. Their recessional velocity is only an illusion caused by the expanding space. With a uniform expansion rate, Hubble's law remains valid -- twice as far away looks twice as fast. Why should, and how can, space expand? Space expansion is the essence of this new physics, described with Equation Zero. The bouncing between real and ghost time creates 64 subspaces.
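Hubble's law is simple enough to check numerically. The sketch below is my own illustration, not part of the essay; the value of the Hubble constant used, roughly 70 km/s per megaparsec, is an assumed modern figure:

```python
# Hubble's law: v = H0 * d.
# H0 ~ 70 km/s per megaparsec is an assumed illustrative value.
H0 = 70.0  # km/s per Mpc

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at distance_mpc megaparsecs."""
    return H0 * distance_mpc

# "Twice as far away looks twice as fast":
v_near = recession_velocity(100.0)  # 7000 km/s
v_far = recession_velocity(200.0)   # 14000 km/s
```

Note that the proportionality holds whether one pictures galaxies moving through space or the space between stationary galaxies stretching, which is why the two pictures are observationally equivalent at this level.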
A quarter of these 64 subspaces are true space, pure vacuum. Three quarters of these 64 subspaces are particles, baryonic matter. So, when time is bouncing forward, space is expanded and matter is created. In short, the Big Bang is not an isolated single event in the long-ago history of our universe but only the beginning of a continuous process. Although in essence the CDM (Cold Dark Matter model) is science fiction, it cannot be completely wrong, because its description of the universe is quite close to the real world. Although omega does not have to equal one (1) and there is no need for those fictitious cold dark matter particles (such as squarks, photinos, etc.), there ought to be some nonbaryonic cold dark matter. What is that nonbaryonic cold dark matter in this new physics? It is the unborn baryonic matter. The essence of this new physics is that the universe is interacting not only with the past (the primordial fluctuation, the primordial neutrinos, supernova remnants, etc.) but also with the future (the unborn baryonic matter). That is what life is all about. The universe is a conscious life. I will discuss this in much greater detail in a future TOE issue, The Conscious Universe. As I have shown before, gravity is the ghost partner of the created space and matter. Gravity acts as the banker: on the one hand it lends out energy for new spacetime; on the other hand it charges interest for the repayment. Thus, matter and anti-matter alternately appear in each Big Bang, and the size of the universe will increase by a factor of 2 during each cycle. See Figure 1. These oscillating universes keep the energy conservation law valid while new spacetime and matter are created constantly, and they also give meaning to anti-matter. One, the initial condition of the universe is that the net fluctuation must be ZERO. Two, if ZERO always remained ZERO, there would be no creation.
Three, any constant allows the rise of real-ghost symmetry, but ZERO is the simplest and most logical choice. Four, I have discussed four fundamental constants of nature in detail. The Planck constant and the speed of light must be true constants, so that they can give rise to a causal world, which is only a very small part of the universe. The major part of the universe is non-causal and non-local. Those non-causal and non-local parts of the universe are glued into a whole by the gravitational constant, which unifies zero (0) and infinity through its varying value. Omega is a measurement that describes the present structure of the universe. On the one hand, it only reflects an instant during the evolution of the cosmos; on the other hand, it determines the fate of the universe. Since the primordial fluctuation determined the large-scale structure of today and the fate of the universe in the future, omega must be the manifestation of that primordial fluctuation. Since the primordial fluctuation cannot be influenced by any cosmic process on any scale, omega likewise cannot be influenced by any cosmic process on any scale. That is to say, omega is not the result but the cause of the cosmic process. Since the cosmos is in fact evolving, it must be driven by a varying omega, from 0 to infinity. Five, I have mentioned in my books many times that the initial condition cannot be blown away by any physical process. So, that initial condition, ZERO (0), must be expressed in some form. Incidentally, there is a ZERO as the most fundamental constant of nature: the cosmological constant. In this new physics, every point in spacetime is the boundary of the rest of the universe. Thus, on the one hand, the universe has no perceivable boundary. On the other hand, every point in spacetime is a point of new creation.
Why are script tags placed at the bottom of the page in Prosilver? In Prosilver, script tags are placed at the bottom of the page. This allows the static content of the page to be shown a little bit faster to a user who doesn't yet have the necessary resources in his browser cache. On some sites, this may be beneficial -- for example, if the site has mostly random users who don't load several pages during a visit. But are there any other advantages? The major drawback is that the DOM cannot be modified until it has been loaded completely. If you need to modify the DOM but can't do it until the ONLOAD event, the page may flicker, element positions may change, etc. This is annoying and confusing to a user. A forum is usually the kind of website where most visitors are returning users who already have all the resources in their browser cache most of the time. Therefore, I'm suggesting that the scripts, especially jQuery, be loaded in the head tag. I don't understand this. So? Standard? Are you talking about jQuery in general or jQuery in phpBB? To my knowledge there are no requirements about where to put jQuery. I don't buy this. The only difference between script tags in <head> and <body> is the amount of markup above the script tag, right? Even if the script tag for jQuery is at the bottom of the page, there are still at least the </body> and </html> tags below it. You can start manipulating the unfinished DOM if the browser supports it, but the DOM is definitely not ready at that point. So, in any case (scripts in <head> or <body>), you should use listeners to test whether the DOM is ready. And once you start doing that, it doesn't matter where you put the script tag. I echo the comment already mentioned: loading jQuery near the bottom of pages is fairly standard practice and isn't anything new. I'm not sure if we have some kind of a language barrier here, but to me "standard" sounds like "this is how it should be done".
I agree that many sites put jQuery at the end, but I disagree that it should always be there. Even https://jquery.com/ loads jQuery in <head>! Ok, then let me go a different route you can maybe understand better: what exactly in phpBB flickers when loading that is caused by jQuery being near the bottom of the page? If, for example, I want to make an extension that automatically shows a mobile version of the page, or changes the width of the page, or something like that... The only way to do that without the user seeing unnecessary flickering and element resizing is to manipulate the DOM as early as possible. So you haven't actually run into this issue developing an extension or doing something with phpBB?
The concept for J. Mark's Restaurant began forming in the minds of brothers James ''J'' and Steven ''Mark'' Wilson several years ago. They desired to be a locally owned and operated neighborhood restaurant and bar. It was important to both of them that J. Mark's be a place where locals could come and dine in a comfortable and relaxing atmosphere. After five years of planning and hard work, J. Mark's first opened its doors in July of 2007. J. Mark's Restaurant has quickly become a favorite spot for locals: family, neighbors and friends. We strive to be a fun, friendly and relaxing environment for our guests. Owner Steven Mark Wilson and his entire staff can be seen stopping by tables to introduce themselves and greet guests personally. We are actively involved in the community and local commerce, and often participate in local charity functions and events. Our entire team has been hired because they are efficient, confident, and friendly. The staff is professional and eager to please, ensuring that guests have a pleasant dining experience on each and every visit to J. Mark's. At J. Mark's it is our goal to serve generous portions of delicious home-made quality food at reasonable prices. We use only the best and freshest ingredients for every product that we proudly serve. As you can see, our menu boasts many wonderful options for our guests to choose from. J. Mark's selections include made-to-share appetizers, crisp salads, hearty pastas, great sandwiches and hand-pattied burgers, as well as fresh seafood and Certified Angus Beef steaks cut by our in-house butcher. J. Mark's Restaurant offers a wide variety of bottled and draft beers. Our extensive and reasonably priced wine list has many choices for both by-the-glass and bottle selections. Additionally, we offer a selection of great martinis, margaritas, mojitos and other specialty drinks to choose from. When guests have an occasion to celebrate, they find J. Mark's is the perfect place.
Showers, birthdays, family gatherings and important business meetings can all be accommodated in our private Sun Room. To summarize, at J. Mark's Restaurant we believe in providing our guests with food that is unmatched in quality and value, friendly and knowledgeable service, and a positive lasting relationship for an exceptional dining experience on each and every visit. We strive for perfection, and you can expect a Commitment to Excellence from us.
How many branches of chemistry are there? There are numerous branches of chemistry. In fact, chemistry is everywhere: in our environment, in medicine, in food, etc. Scientists have divided chemistry into nine (9) major and important branches, which are discussed below with their definitions and some examples from our daily life. The branch of chemistry which deals with the study of carbon and its compounds (except CO2, CO, CO3, HCO3, CN, etc.) is known as organic chemistry. The branch of chemistry which deals with the study of the elements and their compounds (except hydrocarbons and their derivatives), generally obtained from non-living sources, i.e. from minerals, is known as inorganic chemistry. The branch of chemistry which deals with the laws and principles governing the combination of atoms and molecules in chemical reactions, and with the study of the physical properties of matter, is called physical chemistry. For example, H2 gas contains a covalent bond while NaCl contains an ionic bond. The branch of chemistry which deals with the study of the methods and techniques involved in determining the type, quality, and quantity of the various compounds (or elements) in a given substance is known as analytical chemistry. For example, analysis of water shows that H2O contains oxygen and hydrogen atoms in a ratio of 1:2, or hydrogen to oxygen in a ratio of 1:8 by mass. The branch of chemistry which deals with the study of the compounds and chemical reactions (metabolism) involved in living organisms, i.e. plant and animal cells, is known as biochemistry. For example, the digestion of food in living things, the action of medicines, etc. The branch of chemistry which deals with the study of changes occurring in the nuclei of atoms, accompanied by the emission of invisible radiations, is known as nuclear chemistry. For example, the emission of alpha (α), beta (β) and gamma (γ) radiation.
The branch of chemistry which deals with the study of those methods or techniques which are used to decrease or eliminate the pollution of our environment is known as environmental chemistry. For example, global warming can be reduced by planting more trees, etc. The branch of chemistry which deals with the study of the different chemical processes involved in the chemical industries for the manufacturing of synthetic products is known as industrial chemistry. For example, glass, cement, paper, soda ash, fertilizers, medicines, etc. The branch of chemistry which deals with the study of polymerization and the products obtained through the process of polymerization is known as polymer chemistry. For example, polyethylene (plastic bags), polyvinyl chloride (PVC), synthetic fibers, etc.
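The water analysis mentioned under analytical chemistry can be verified with a little arithmetic. This sketch is my own illustration using standard atomic masses, not part of the original article:

```python
# Atom and mass ratios of hydrogen to oxygen in water (H2O).
H_MASS = 1.008    # atomic mass of hydrogen, g/mol
O_MASS = 15.999   # atomic mass of oxygen, g/mol

atoms = {"H": 2, "O": 1}            # H2O: two hydrogen atoms per oxygen atom
mass_H = atoms["H"] * H_MASS        # ~2.016 g/mol of hydrogen per mole of water
mass_O = atoms["O"] * O_MASS        # ~15.999 g/mol of oxygen per mole of water
mass_ratio_O_to_H = mass_O / mass_H # ~7.94, i.e. H:O is roughly 1:8 by mass
```

So the 1:2 ratio counts atoms (one oxygen for every two hydrogens), while the 1:8 ratio compares masses (hydrogen to oxygen), and the two figures are consistent.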
Diagnostic Technology: Molecular or Not Molecular? Asking the above question is like asking whether the computer has changed the world, whether you love microbes, or whether Louis Pasteur had something to do with the glass of wine in your hand: there is a clear, right answer. Molecular technologies have not only revolutionized the field of microbiology, they have also transformed medicine, patient care, infection control and prevention, and overall hospital operation. There is no doubt that laboratories are adopting more and more molecular technologies to provide better quality of service, and, in particular, to diagnose emerging infectious diseases. Soon after the introduction of the first effective antimicrobials—the sulfonamides and penicillin, in the 1930s and 40s—scientists and clinicians realized that pathogens can develop tolerance or resistance to these therapeutic agents. The competition between humans (to discover new antimicrobial agents) and pathogens (to develop drug resistance) has persisted since then. In recent decades, pathogens have evolved into multidrug-resistant strains, and have become a global public health and economic burden. Just to name a few examples: methicillin resistance caused by mecA and mecC genes; vancomycin resistance caused by vanA and vanB genes; carbapenem resistance caused by carbapenemases KPC, NDM, OXA, IMP, VIM; emergence of highly resistant Candida auris and ciprofloxacin-resistant Shigella ... the list goes on and on. Multidrug-resistant organisms (MDROs) are often associated with increased lengths of stay, costs, and mortality. In addition to the direct issue of patient care, antibiotic-resistant infections confer a financial cost. The overall economic burden of antibiotic resistance was estimated to be at least €1.5 billion in 2007 in Europe and $55 billion in 2000 in the US, including patient and hospital costs. 
The development of multidrug resistance is mostly attributable to the widespread availability and overuse of antimicrobial agents among humans and animals. Imperfect diagnostic technology for new or emerging organisms and resistance mechanisms, as well as suboptimal infection control, contribute to the spread of MDROs. Furthermore, rapid globalization with international population mobility has directly contributed to the emergence and spread of diseases and MDROs. There is an urgent need to better detect and control the spread of MDROs. How Can Molecular Technologies Help? The advantages of molecular technologies are high sensitivity/specificity, fast turnaround time (TAT), and high throughput. Life-threatening conditions such as bloodstream infection (BSI) and meningitis can be diagnosed in a timely manner. Pathogens are identified from direct specimens or positive blood culture samples using the sample-to-result approach. To ensure prompt optimization of the antibiotic regimen and infection control measures, a technologist may report out a positive nucleic acid amplification test (NAAT, a molecular test) result as a critical value while awaiting the growth of the pathogen on culture media as confirmation. This is particularly important for infections caused by MDROs that necessitate contact/droplet precautions. Let’s take methicillin-resistant S. aureus (MRSA) as an example. The TATs for MRSA detection using the conventional workflow and a specific chromogenic medium are 48 hours and 24 hours, respectively, from a positive blood culture. With NAATs, the TAT is reduced to 1 – 2.5 hours, resulting in improved clinical outcomes and reduced hospital costs. NAATs can also be used to diagnose infections that are of epidemiological importance and that require isolation precautions.
A negative result of a NAAT targeting Mycobacterium tuberculosis from a sputum specimen is highly predictive of 3 negative acid-fast smear results, thus allowing earlier removal of patients from airborne isolation precautions. The high negative predictive value of NAAT for detection of toxigenic Clostridium difficile reduces the use of empiric antibiotics by 54%, and may allow removal of patients with diarrhea from isolation precautions. Importantly, these measures are most effective when accompanied by the implementation of an antibiotic stewardship program, as recently highlighted by the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America. Effective notification of the critical result to the care provider, as well as education on the utility and limitations of new technologies, has also been shown to be critical to ensure optimal use of the information by care providers. The partnership between the laboratory, infectious diseases specialists, pharmacists, and infection preventionists is essential to make the best use of molecular technologies to improve patient care. Bacterial infections aren’t the only infectious diseases to benefit from molecular testing. Conventional viral culture for detecting influenza virus and other respiratory viruses, such as respiratory syncytial virus (RSV), adenovirus, and parainfluenza virus, is laborious, slow, and insensitive. The cytopathic effect in conventional culture that indicates the presence of virus may take several days to appear. The long TAT of this methodology conflicts with the optimal administration window of antiviral drugs: oseltamivir should be taken within 48 hours of flu-like symptom onset. The patient may also be unnecessarily admitted to the hospital while awaiting the test result. Rapid influenza diagnostic tests (RIDTs) based on antigen detection, on the other hand, have a fast TAT, but their sensitivity is only about 50–70%.
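Such modest sensitivity translates into a modest negative predictive value via Bayes' rule. A minimal sketch (mine, not the author's; the ~98% specificity is an assumed illustrative value, while the sensitivity and prevalence figures follow the text):

```python
# Negative predictive value: P(no disease | negative test), via Bayes' rule.
# Specificity of 0.98 is an assumed illustrative value for an antigen test.

def npv(sensitivity, specificity, prevalence):
    """Fraction of negative results that are true negatives."""
    true_negatives = specificity * (1.0 - prevalence)
    false_negatives = (1.0 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# RIDT with 50-70% sensitivity during a season with 40% influenza prevalence:
npv_low = npv(0.50, 0.98, 0.40)   # ~0.75
npv_high = npv(0.70, 0.98, 0.40)  # ~0.83
```

Under these assumptions, roughly one in every four or five negative results is a missed infection, which is the quantitative content of the "70 – 80% NPV" caveat.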
When influenza is prevalent (40%), the negative predictive value (NPV) of these RIDTs is only about 70 – 80%. In other words, a negative test result does not necessarily rule out infection. NAATs have significantly shortened the time to diagnose respiratory viral diseases. The targets of such assays range from influenza virus alone to a panel of viruses that commonly cause respiratory illness. Some of the NAATs that can be performed as point-of-care tests have turnaround times as short as 30 minutes, with sensitivity and specificity outperforming those of conventional viral culture and rapid antigen assays. Early detection of influenza by NAAT is associated with significantly lower odds ratios for admission, length of stay, antimicrobial duration, and number of chest radiographs. In addition to seasonal influenza, the investigation of recent viral disease outbreaks has also relied upon the application of NAATs. Cross-reactivity between closely related virus species may hinder the use of enzyme immunoassay (EIA) to detect antibodies for disease diagnosis. NAATs are sensitive and specific for diagnosing infections by Ebola virus and Zika virus, especially soon after symptom onset when the viral load is high. For infectious gastroenteritis, certain NAATs are now able to detect astrovirus and sapovirus, two pathogens previously not commonly tested for. Early diagnosis of a viral etiology helps reduce the unnecessary use of antibiotics, and it is anticipated that more viral targets will be included in future molecular assays. While molecular technologies improve diagnostic performance, turnaround time, and patient outcomes, it is also important to understand their limitations. NAATs can only detect the targets designed as part of the assay – a new test would need to be developed to detect any new/rare strains or resistance markers.
For example, the outbreak of carbapenemase-producing Klebsiella pneumoniae associated with endoscopic retrograde cholangiopancreatography was due to OXA-232, a carbapenemase that was detected by whole genome sequencing but is not targeted by existing diagnostic NAATs. A positive result of a NAAT for toxigenic C. difficile might indicate colonization rather than infection; misinterpretation of the result could lead to unnecessary treatment. High-resolution molecular sequencing technology, if not used carefully, may create nomenclatural instability and confusion. Finally, the adoption of molecular technologies should not replace the conventional culture system, as the isolation of pathogens is still critical for conducting epidemiological studies and for optimizing antimicrobial therapy in patients. The above post reflects the thoughts of its author, Dr. Jacky Chow, and not the American Society for Microbiology. Siu-Kei (Jacky) Chow is the Technical Director of Infectious Diseases Diagnostics at MultiCare Health System in Washington.
What’s the alternative to a race to the bottom? A race to the top. And the problem with the race to the bottom is that you might win, or worse, come in second.
Ever wanted to grow a Venus flytrap? Venus flytraps are the most widely known and widely grown carnivorous plant. Flytraps and all other carnivorous plants are native to bogs and are the result of natural adaptations to a humid environment with poor soil conditions. Carnivorous plants couldn’t depend on the soil to provide the nutrients they need to thrive, so these bog plants developed other ways of getting the food and fertilization they require to live. Some carnivorous plants developed pitfall traps, where the leaves form deep pools that are coated and partially filled with digestive enzymes; insects slip down into the liquid-filled pitchers, where the enzymes work to break down and consume them in the same way that your stomach breaks down a meal. Other bog plants developed super-sticky leaves that will trap any insect that lands upon them; some made suction-cup leaves, or long, inescapable chambers with entrances that close up behind the prey that crawls or flies inside. The Venus flytrap, however, became equipped with what are known as snap traps. These hinged, sharp-toothed leaves feature tiny hairs that are triggered when prey lands inside the trap. When the hairs are touched, the doors snap shut around the prey, trapping the insect inside an airtight chamber and feeding on it while it is still alive. Aside from being incredibly odd, these swamp-dwelling, insect-devouring plants are surprisingly easy to grow if given the right environment. Despite their otherworldly namesake, Venus flytraps do not come from Venus, but rather from bogs found in a few small humid areas in North and South Carolina. The flytrap requires a moist, even slightly soggy, acidic soil, preferably a mix of equal parts peat moss and sand. The peat moss will help with water retention and the sand will encourage drainage, so they make the perfect pair to suit your needs.
The water that you use is also a key factor in your success when growing flytraps and other carnivorous plants. Tap water will not work in this case, as carnivorous plants are very sensitive to the minerals and other chemicals that tap water usually contains. Rainwater will work perfectly; otherwise use distilled water or reverse osmosis water to keep the unwanted nutrients out. Instead of watering from the top as you do with most garden plants, it's better to set your flytraps' pot in a dish of standing water. This matters because carnivorous plants rely on their sensory tissue to notice and attract living prey, which is hard to do when covered in drops of water: water moves and shifts as it trickles down the plant, which can cause the tiny trigger hairs to mistake the droplets for potential prey. The environment in which you are growing Venus flytraps needs to be very humid. If you have a humidifier set up, your growing room should be kept at 60% humidity, with daytime temperatures ranging between 70 and 75 degrees Fahrenheit. Flytraps will not survive cold temperatures, and at night they need temperatures of at least 55 F to survive. If you don't happen to live in an area that is hot, humid and perfectly suited to carnivorous plants, there is still an alternative: create your own terrarium. Repurpose an old aquarium, or find another large glass container that can be sealed airtight, and make your own mini bog-like environment. Line the inside of your terrarium with a mixture of two parts sphagnum moss and one part sand. Encourage humidity and moisture retention, and be sure to supply some living (preferably flying) food once your plants are established by releasing some insects inside the container. Place your terrarium in an east-facing window with high indirect lighting and adjust moisture levels as needed, never letting the soil become completely dry.
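As a quick sanity check, the target ranges above can be encoded in a small script. The function name and thresholds below simply restate the figures quoted in this article (60% humidity, 70-75 F days, 55 F nights) and are only an illustration:

```python
# Recommended Venus flytrap growing conditions, as quoted in this article.
DAY_TEMP_RANGE = (70, 75)   # degrees Fahrenheit
NIGHT_TEMP_MIN = 55         # degrees Fahrenheit
HUMIDITY_TARGET = 60        # percent

def conditions_problems(humidity_pct, day_temp_f, night_temp_f):
    """Return a list of problems with the current growing conditions."""
    problems = []
    if humidity_pct < HUMIDITY_TARGET:
        problems.append("humidity below 60%")
    if not (DAY_TEMP_RANGE[0] <= day_temp_f <= DAY_TEMP_RANGE[1]):
        problems.append("daytime temperature outside 70-75 F")
    if night_temp_f < NIGHT_TEMP_MIN:
        problems.append("nighttime temperature below 55 F")
    return problems

# Example: a terrarium that is warm enough but too dry.
print(conditions_problems(45, 72, 58))  # -> ['humidity below 60%']
```

An empty list means the terrarium matches all three of the article's recommendations.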
If you are new to growing carnivorous plants, we recommend that you do not attempt to grow them from seed, but instead order small, already established plants online or purchase them from your local nursery (if it has an oddities section). Growing flytraps from seed is not an impossible task, but it can be tedious and requires a tremendous amount of care for a very low success rate. If you decide to try your hand at growing Venus flytraps from seed, this guide should get you pointed in the right direction. Once you get your hands on some flytraps, plant them at least three inches apart to allow a little bit of space and room to expand. No fertilizers are needed for Venus flytraps, due to their sensitivity to nutrients and their ability to thrive in nutrient-poor soils. Water with distilled water or rainwater only, never tap water, and maintain a consistently humid, damp environment. Instead of fertilizer, flytraps require live insects in order to get the nutrients that they need to thrive, so ensure that they are in an area where they are exposed to insects, or provide them with insects yourself by releasing insects in their terrariums. If you cannot use living insects to feed your flytraps, you will need to trick the plant into consuming dead ones. Place the dead insect inside a trap and gently tickle the inside of the trap with a toothpick to get it to snap shut and begin breaking down your offering. Though somewhat difficult to start, seeds can be produced and harvested directly from the flowers of the plant. When your flytrap starts to bloom, you must make a choice. To cut or not to cut? We suggest cutting the blooms down more often than not to promote healthy plant growth and keep your flytraps strong, but the blooms can be an especially enjoyable part of growing flytraps.
The flowers that flytraps produce are quite odd and stunning, but they put a great deal of strain on the plants themselves, as making them requires a lot of energy that could otherwise be focused on capturing and consuming insects. If you are satisfied with the number of flytraps that you have and your plants are not yet overgrowing their containers, it is probably best to remove the bloom as soon as it starts to form. However, if you are ready to propagate and want to try your hand at growing from seed, allowing the blooms to unfold is the only way to do it. When the flowers pollinate, they create seeds. If you are harvesting the seeds, allow them four to six weeks to mature and become black and pear-shaped before harvesting. Refrigerate them inside a paper towel in a plastic container to begin the germination process. The easiest way to reproduce more Venus flytraps, however, is through division. Flytraps will reproduce asexually if they are not allowed to flower and pollinate. They do this by extending their roots and growing a bulb root, from which a new plant will grow. To divide your flytraps when they reproduce, gently remove the plant from the soil, loosely and patiently brushing away the soil until the roots are fully exposed. Then, with a clean pair of garden shears, carefully separate the new plants by cutting the connecting roots and separating the new, smaller plants from the parent plant. Now you are ready to replant. Venus flytraps and other carnivorous plants may eat insects, but they still have to deal with certain garden pests that they don't consider food. Aphids, mealybugs, snails, slugs and caterpillars can sometimes plague carnivorous plants. However, sprays that are recommended for plant use won't hurt your flytraps any more than other plants, and are probably the best solution to pest problems.
Because of the humidity level needed to grow carnivorous plants, you will also need to keep a close eye out for any signs of fungus growing in your terrariums or in your flytrap habitats. Botrytis is a fluffy grey mold which can sometimes infect Sarracenias and Venus flytraps around the spring or autumn seasons. Try a fungicide if you think you have caught it early enough, but most likely you will have to cut away and discard all infected plant material to save the plants if Botrytis occurs. As a preventative measure, keep a close eye on the drainage and ensure that the humidity does not cause a buildup of stagnant water, which can lead to unwanted fungus and mold growth.
How do you use a K-Cup reusable coffee filter? I love my Keurig single-cup coffee brewer, but it is demonstrably more economical and tastier to use the My K-Cup filter adapter to brew my own fresh-ground coffee instead of using K-Cups constantly. Plus there is less waste. The key issue with this strategy is that it is hard to get a consistent cup of coffee. Sometimes it is too weak, sometimes it is too strong. Occasionally it creates enough pressure that the Keurig thinks it needs to be descaled even when it doesn't, and ends up brewing a rather strong cup that isn't full. I have been experimenting with it a bit, and I think I've found a pretty good technique that is mostly consistent, doesn't cause the pressure problem, and gives about the right strength. You can of course still choose your cup size based on whether you prefer it to end up a little weaker or stronger. I start with a medium grind. You don't want it too coarse (as in the French press range) or too powdery. Somewhere in the middle is pretty good; mine leans more toward coarse than fine. It helps (as it does with any coffee brewing method) to have a good grinder that gives a fairly consistent particle size, but you are okay even if you have an inexpensive blade grinder; you just won't get ideal results. The key is to settle the coffee down a bit (not excessively) by tapping the filter on the countertop, while still leaving room at the top of the filter. If you don't settle it down at all, water will flow right through and you will end up with a weak cup. If you pack it down but pack it too full, you will have the pressure problem: the Keurig will believe it must be descaled, and it will not brew a full cup due to the pressure. So I've found that filling it roughly to the lip, and then tapping it on the countertop a few times, gives me a good result. I end up with about an eighth of an inch of visible mesh in the filter above the coffee grounds, which gives the coffee a little room to spread out so I don't have the pressure issue, but keeps it compacted enough that it extracts fairly well.
The SETI Institute's Allen Telescope Array in Hat Creek, California is now searching 20,000 red dwarf stars for signs of intelligent life. The billions of stars in the night sky can give rise to the question, are we alone in the universe? The Search for Extraterrestrial Intelligence (SETI) seeks to answer that question by hunting for signs of advanced civilizations in the cosmos. The term SETI can be applied in two ways. The first characterizes the quest itself, the search for other advanced lifeforms undertaken by people around the world. The SETI Institute, the second application, leads the charge in the pursuit of broadcasts from life beyond Earth. The largest player in the hunt for advanced life beyond the solar system, the SETI Institute is made up of scientists, engineers, technicians, teachers, and other support staff. In 1988, NASA began funding a strategy to sweep all directions of the sky in the hunt for life. Observations began in 1992, on the 500th anniversary of Christopher Columbus' arrival in the New World. However, within a year, Congress terminated funding. The SETI Institute then sought private funding to continue the hunt for advanced life in the universe. Donations from the enthusiastic public have helped continue the hunt for signals from other worlds. According to its website, the Institute has over 100 active projects, spanning astronomy and planetary sciences, chemical evolution, the origin of life and climate change. Project Phoenix continued the targeted search initially instituted by NASA. The program carefully examined regions around a thousand nearby sun-like stars with the world's largest antennae. In a joint project with the University of California, Berkeley, the Institute built 42 individual telescopes that function as a single massive instrument. The Allen Telescope Array, named for benefactor Paul Allen (co-founder of Microsoft), began observations in 2007. 
According to the SETI Institute, the array should allow scientists to examine as many as 1 million nearby stars in the next two decades. Extraterrestrial life can be roughly grouped into two categories. The first is the broad classification of life itself, a process that includes microbial and other simple forms. Without civilization and technology, life cannot produce the advanced signals that travel across the galaxy. However, many scientists continue to investigate atmospheres and other characteristics of worlds both in and out of the solar system as part of the search for life beyond our planet. The search for extraterrestrial intelligence looks beyond this broad category in an effort to find advanced civilizations. Most SETI searches focus on the hunt for radio or optical signals that can signify highly evolved alien life. Because life on Earth arose within 100 million years after the planet was habitable, many scientists think that life should evolve on planets with the right characteristics. With billions of stars in the galaxy, each thought to host at least one planet, there are numerous opportunities for life to evolve. The wealth of planets revealed by NASA's Kepler space telescope has produced a slew of potentially habitable worlds for SETI scientists to target. According to SETI Institute astronomer Seth Shostak, there are three ways to find life on other worlds. The first is to go and look, a process only feasible within the solar system. The second is by studying light from the planet to investigate its atmosphere, currently under way with instruments like NASA's Hubble Space Telescope. The third is to search for signals that could indicate intelligence. "That's what SETI does," Shostak said in a broadcast. Most SETI searches focus on radio signals, and most of these hunt for narrow-band signals, radio emissions that cover only a small portion of the radio spectrum.
Natural objects blanket the spectrum with signals, so finding a signal that dominates only a small region would be suggestive of an artificial source. Scientists also focus on optical searches for advanced civilizations. These hunts involve looking for very brief flashes of light that last only nanoseconds. Messages from other worlds could be deliberately beamed or they could be accidental. Earth has been broadcasting signals since World War II, when radio communications became more common. SETI searches also look for intentional messages transmitted into space. More recently, the SETI hunt has begun to look for communications between two worlds along Earth's line of sight; messages beamed toward a planet or moon in the system could continue on toward Earth. Whether or not humans would be capable of understanding the message is another story. If a civilization is deliberately beaming a message into space, it may seek to distill it to its simplest form. However, if the message is accidentally broadcast or is a message for another world, it is possible that scientists will never be able to decode it. According to the SETI Institute, the signal will reveal a few things about the civilization producing it. Scientists will be able to pinpoint its origin, and changes can help determine how the planet is rotating and moving. "But even though this information is limited, the detection of an alien intelligence will be an enormously big story," the SETI Institute said on its website. "We'll be aware that we're neither alone nor the smartest thing in the universe." Having found the signal, the institute envisions that enthusiasm on Earth will spur humans to build larger dishes more capable of receiving weak signals. It is unlikely that Earth and an advanced civilization far from the sun will engage in much communication. That's because it can take years for a signal to travel from one planet to the other. The closest star, Alpha Centauri, is only 4.3 light-years away.
If an advanced civilization exists on a (yet-unseen) planet around the star, it would take over eight years for a signal to travel from Earth to that world and back. In addition to accidental broadcasts, Earth has sent a handful of messages into space. In 1974, a simple message was transmitted from Arecibo Observatory in Puerto Rico. Both NASA and Russia have since sent a handful of brief, deliberate signals into space, according to the SETI Institute. When an interesting signal is detected, scientists must first verify it came from beyond Earth. By confirming observations with another radio telescope, they can make sure they have not picked up a human-created signal. Even if the original detector determines that the source didn't come from Earth, additional instruments provide "duplication," an important part of the scientific process. "Once an artificial signal is confirmed as being of extraterrestrial intelligent origin, the discovery will be announced as quickly and as widely as possible," said the SETI Institute. "There will be no secrecy, and indeed getting the word out quickly is important as there would be an urgent need to have astronomers world-wide monitor any detected signal 24 hours a day." While the SETI Institute is easily the most well-known seeker of signs of advanced civilizations, it is not the only one. The University of California, Berkeley has several SETI programs under way, including one using the Arecibo Observatory. Italy's University of Bologna also has a radio SETI search in progress. Both Berkeley and Harvard University in Cambridge, Massachusetts have optical SETI searches in progress. In a 2014 presentation to Congress, Shostak predicted that life would be found on worlds other than Earth in the near future. "It's unproven whether there is any life beyond Earth," Shostak said. "I think that situation is going to change within everyone's lifetime in this room."
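The travel times quoted here follow directly from distance in light-years: a radio signal moves at the speed of light, so one light-year of distance means one year of one-way travel, and a round trip is simply twice the distance. A minimal sketch:

```python
def round_trip_years(distance_light_years):
    """Round-trip time, in years, for a radio signal to a star and back.

    Radio waves travel at the speed of light, so a one-way trip takes
    exactly as many years as the distance in light-years.
    """
    return 2 * distance_light_years

# Alpha Centauri, the closest star system, at 4.3 light-years:
print(round_trip_years(4.3))  # -> 8.6, i.e. "over eight years"
```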
Even if no sign of an advanced civilization is found, the SETI Institute remains optimistic. According to their website, "We are just scratching the surface of what a modern search can do. Failure to find a signal wouldn’t prove that we’re the only thinking beings in the Galaxy. After all, absence of evidence is not evidence of absence." "The SETI Institute intends to press the search. Needless to say, the march of technology and new scientific discoveries will influence future SETI strategies. But giving up is not in the cards. Christopher Columbus did not turn around simply because he failed to find any new lands during his first few days at sea." Editor's note: This article was corrected to reflect the accurate distance to Alpha Centauri.
What research should I do before contacting a leasing broker? With so many options on the market for every category of vehicle, from SUVs to small town cars, how do you choose what is right for you? This comes in at number one in the things you need to research before contacting a leasing broker. I recommend making a list of all the essential features the vehicle needs to have to meet your requirements. Consider your daily vehicle usage and the type of roads you usually drive on. Think about the space you need in the boot and cabin. You may want to consider fuel efficiency if you're likely to do a lot of mileage. Then, list the desirable features of your ideal car. Perhaps you'd like to have a DAB radio, Bluetooth integration or TVs for the kids installed. Now that you have a pretty good picture of the type of car you need and the features you want, you can begin the next process.
I was walking the pooch in my beautiful neighborhood recently, and I began to notice a trend. More and more houses had plants growing on top of them. Not only did it improve the aesthetics of their homes, but practically, it makes great sense. Naturally, someone thought to give this type of garden an official name: "Green Roofs." Green roofs offer a number of benefits: • Increased savings on heating and cooling energy costs. A 6-inch green roof reduces heat gains by 95% and heat losses by 26% compared to a conventional roof. • Green roofs reduce the "urban heat island effect" -- the phenomenon of metropolitan areas being significantly warmer than surrounding rural areas, due to the heat-absorbing nature of concrete and other man-made materials and the release of heat from air-conditioning systems and machinery. • Some wildlife can be sustained by green roofs. In densely populated areas, beneficial insects, birds, bees and butterflies can be attracted to green roofs. • Rooftop agriculture can help mitigate the negative impacts of urban sprawl, ensure heightened food security, and engage communities in the food production process. • Rooftop community gardens can help meet nutritional requirements and reduce household expenditures on food, while creating accessible meeting places and activity areas that can increase social interaction and community cohesion.
Is nodding syndrome in northern Uganda linked to consumption of mycotoxin-contaminated food grains? Nodding syndrome (NS) is a type of epilepsy characterized by repeated head-nodding seizures that appear in previously healthy children between 3 and 18 years of age. In 2012, during a WHO International Meeting on NS in Kampala, Uganda, it was recommended that fungal contamination of foods should be investigated as a possible cause of the disease. We therefore aimed to assess whether consumption of fungal mycotoxins contributes to NS development. We detected similarly high levels of total aflatoxin and ochratoxin in mostly millet, sorghum, maize and groundnuts in both households with and without children with NS. Furthermore, there was no significant association between concentrations of total aflatoxin, ochratoxin and deoxynivalenol and the presence of children with NS in households. In conclusion, our results show no supporting evidence for the association of NS with consumption of mycotoxins in contaminated foods. Nodding syndrome (NS) is an epileptic disorder occurring among certain rural African populations in onchocerciasis-endemic regions. It is characterized by repeated head-nodding seizures, developmental retardation and growth faltering. The first nodding seizures occur in previously healthy children between 3 and 18 years of age [1–3]. The disorder has been observed in onchocerciasis-endemic regions in Uganda [3, 4], South Sudan and Tanzania [5, 6], but children with similar symptoms have also been described in other onchocerciasis-endemic regions in Liberia and Cameroon. In northern Uganda (Acholiland), the first cases of NS were diagnosed retrospectively in Kitgum district around 1997 to 1998 and were later reported from the neighbouring districts of Lamwo and Pader. NS remains a public health problem in Uganda, where it is associated with high morbidity and mortality, severe socio-economic consequences, and social exclusion [1, 10].
Currently, the natural history, etiological agent and pathogenesis of NS remain unknown, and there is no specific treatment for those affected. Despite this, many children receive antiepileptic therapy, which improves outcome. However, for effective management and control of NS to take place, it is important that the causative agent of the disease is identified. The hypothesized causes so far have included infections with micro-organisms such as trypanosomes, malaria, measles, cysts or viruses, genetic epilepsy disorders [3, 11], chemical warfare, infectious agents [3, 12], infection with the filarial parasite Onchocerca volvulus [2, 3, 12, 13], and toxins and nutritional deficiencies [3, 12, 14]. However, the only associated risk factor observed in epidemiological studies in all regions where NS or NS-like symptoms have been reported is an O. volvulus infection. There is currently increasing evidence to support the hypothesis that NS is triggered by infection with O. volvulus, and that NS is only one in a spectrum of clinical presentations of onchocerciasis-associated epilepsy. However, in 2012, during a WHO International Meeting on NS in Kampala, it was stated that fungal contamination of foods with mycotoxins required further study as a possible cause of NS. So far little attention has been paid to the potential role of mycotoxins consumed in staple foods in causing neurological diseases, despite their known neurotoxic effects in humans. Mycotoxins are secondary metabolites produced by toxigenic fungi that infect food crops both in fields and in stored foods. Recent studies have demonstrated that the Fusarium mycotoxin fumonisin B1, often present in maize, interacts with neuroblastoma cells, leading to mitochondrial membrane potential depolarization and calcium deregulation. Further evidence shows that fumonisin B1 makes neurons more vulnerable to epileptiform conditions.
The ribotoxin deoxynivalenol (DON) has been reported to interfere with protein biosynthesis through binding of ribosomal subunits. This affects brain homeostasis and possibly participates in the etiology of neurological diseases in which alteration of the glia is involved. We investigated whether mycotoxins could be a co-factor in developing NS and determined the concentrations of mycotoxins in grain-based staple foods consumed in northern Uganda in households with and without children with NS. The study was conducted in the districts of Kitgum (3°17′20.0″N, 32°52′40.0″E) and Lamwo (3°32′0″N, 32°48′0″E) in northern Uganda, bordering South Sudan. The total land area of the two districts is 9556 km², characterized by woody savannah vegetation, with a population of 338,427 [20–22]. These two districts were affected by civil war between the Lord's Resistance Army (LRA) and the Uganda People's Defence Force, which disrupted social service delivery between the mid-1980s and 2006. It also resulted in the creation of many internally displaced person (IDP) camps. The majority of the population rely on small-scale agriculture as a primary source of income. Ninety percent of farmers are engaged in crop production, while a small percentage rear livestock, including Ankole and Zebu cattle, in the Mid North [22, 23]. The northern districts receive around 750–1500 mm of annual rainfall. The dry season, which lasts from November until March, can be severe. Drought-tolerant crops are therefore cultivated and include finger millet, sesame, cassava and sorghum. A total of 38 households with 62 children with NS, and 46 households with children without NS, were recruited for the study (Additional file 1: Table S1). The diagnosis of NS was made by an experienced pediatrician (JMK) through medical history taking, clinical examination and, if available, review of the medical records. NS was defined according to the WHO case definition of NS.
Collections of grain-based food samples from households with and without NS cases were made in seven villages (Additional file 2: Table S2) between November 2014 and July 2015. The samples were picked from storage bags in randomly selected households with or without NS (Additional file 1: Table S1). The samples included sorghum, maize, millet, and sesame. We collected 500 g of cereal grain per household and stored each sample separately in a polythene bag labeled with a unique identification number. In total, 105 cereal grain samples were collected from the two districts (Additional file 2: Table S2). These samples were transported to the Gulu University Bioscience Research Laboratory, where mycotoxins were extracted as described below and stored at 4 °C until analysed. We carried out quantitative analyses of total aflatoxin, ochratoxin and DON on the grain-based food samples collected from all households. Briefly, 20 g of each grain sample was thoroughly ground in a blender (IKA, Model M20, Germany), and aflatoxin and ochratoxin were extracted using methanol, whereas distilled water was used for DON extraction (as per the Romer Labs kit procedures). Each sample was then filtered through fluted filter paper (Whatman 1; Whatman International Ltd, England) and the eluate was used for analysis. The assays for total aflatoxin, ochratoxin and deoxynivalenol were performed on all samples by direct competitive enzyme-linked immunosorbent assay (ELISA) using AgraQuant assay kits (Romer Labs Singapore Pte Ltd). Results were measured optically using an ELISA reader (MULTISKAN FC, model 357, China) with an absorbance filter of 450 nm (OD450) and a differential filter of 630 nm. Standard curves for the mycotoxins were generated from kit sample standards and inbuilt log-logit regression models using Microsoft Excel 2013.
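The paper's standard curves were generated in Excel with a log-logit model. As a sketch of what that transformation involves, the following fits logit-transformed normalized responses against log concentration by ordinary least squares; the function names and the synthetic dilution series are invented for illustration, not taken from the study:

```python
import math

def fit_log_logit(concentrations, responses):
    """Fit a log-logit standard curve: logit(B/B0) = a + b * log10(conc).

    `responses` are normalized absorbances B/B0, strictly between 0 and 1.
    Returns (a, b) from a least-squares line fit on the transformed data,
    much as an Excel trendline would.
    """
    xs = [math.log10(c) for c in concentrations]
    ys = [math.log(r / (1.0 - r)) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def estimate_concentration(a, b, response):
    """Invert the fitted curve to read a sample concentration off it."""
    y = math.log(response / (1.0 - response))
    return 10 ** ((y - a) / b)

# Synthetic standards: a known curve (a=1, b=-2) over a 1-16 ppb series.
standards = [1, 2, 4, 8, 16]
absorbances = [1 / (1 + math.exp(-(1.0 - 2.0 * math.log10(c))))
               for c in standards]
a, b = fit_log_logit(standards, absorbances)
print(round(estimate_concentration(a, b, absorbances[2]), 3))  # -> 4.0
```

With noise-free synthetic data the fit recovers the generating curve exactly; real kit standards would scatter around the line.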
We examined differences in concentrations of aflatoxin, ochratoxin and DON in sampled grains between households in each study group using the Mann–Whitney U test. A Pearson correlation analysis was used to establish the relationship between concentrations of aflatoxin, ochratoxin and DON in sampled grains and the presence of children with NS in households. Statistical analyses were run with alpha set at 0.005. All analyses were conducted using IBM SPSS version 24 (Armonk, NY, USA). This study was approved by the institutional review board of St. Mary's Hospital Lacor (LHIREC 028/05/14). Formal approval to conduct the study was granted by the Uganda National Council for Science and Technology and the Office of the Ugandan President (HS1824). Heads of households provided written informed consent. The concentrations of total aflatoxin, ochratoxin and DON did not differ significantly between food grains sampled from households with and without children with NS (Table 1). There were no significant correlations between concentrations of total aflatoxin, total ochratoxin and DON in food grains sampled and the presence of NS in households in the two districts (Table 2). We assessed the association between concentrations of mycotoxins in grain-based staple foods consumed in northern Uganda in households with and without children with NS, with the aim of detecting a potential role in disease development. The concentrations of total aflatoxin, total ochratoxin and DON were high in households both with and without children with NS, without there being a statistical difference between the two groups. There appeared to be a stronger association between aflatoxin consumption and NS than for either ochratoxin or DON, but this was not statistically significant (Table 2). This could suggest that aflatoxin can be a risk factor in hastening the development of NS, considering its neurotoxic effect and association with neural tube defects. Our study has one major limitation.
Most children involved in this study developed NS many years before the study, when living in the IDP camps. The food children received in these camps was different from the food families were eating at the time of the study, and therefore we cannot exclude the possibility that in these camps children who later developed NS were more exposed to contaminated food compared to controls. High levels of aflatoxin were detected in foods of households with and without children with NS. Daily consumption of small doses of aflatoxin can lead to its accumulation, which could potentially be neurotoxic. Children are more sensitive to mycotoxins due to their high metabolic rates, low body mass, and underdeveloped organ functions and detoxification mechanisms. It is also possible that mycotoxin toxicity can act as a cofactor in NS disease development in certain individuals with genetic or immunological complications, as has been suggested by Dowell and Idro. An association between stunted growth in children and high intake of aflatoxin was reported from Benin and Togo [27, 28]. Mycotoxin contamination is known to have profound health risks in certain populations, and high mortality rates have been recorded in countries including Kenya, India and Ethiopia [29–31]. There is a greater risk of toxicity caused by human consumption of mycotoxins in northern Uganda due to chronic dietary exposure to contaminated food grains. Dietary aflatoxin exposure has been associated with human hepatocellular carcinomas, particularly in areas of high hepatitis B prevalence such as northern Uganda. Most African countries remain at high risk of mycotoxin contamination, while children are much more susceptible to the effects of toxicity, as has been shown in studies in Benin and Togo [27, 28]. Our results show no supporting evidence for the association of NS with consumption of mycotoxins in contaminated foods.
RE and HE contributed to field collections, performed laboratory work, analyzed the data, and drafted an initial version of the manuscript. JMK contributed to field collection and diagnosis of NS cases and helped to design the study. AH and GMM performed field sample collections and data analysis. GH, RC and EO conceived and designed the study, coordinated fieldwork and provided guidance. All authors read and approved the final manuscript. We are grateful to the communities in Lamwo and Kitgum districts in northern Uganda for participating in this research. The authors declare that all the data supporting the findings of this study are available within the article (and its Additional files). This work was supported by the Belgian Government through a VLIR South Initiative Grant awarded to Prof. Geert Haesaert entitled "Unknown neurotropic virus and mycotoxins: an exploratory study to unravel the cause of nodding syndrome". 13104_2018_3774_MOESM1_ESM.docx Additional file 1: Table S1. Concentration of mycotoxins in grain samples from Lamwo and Kitgum in households with NS and without NS. 13104_2018_3774_MOESM2_ESM.docx Additional file 2: Table S2. Sampling sites for cereal grains. Uganda Bureau of Statistics (UBOS). Uganda National Household Survey, 2012/2013. Kampala: UBOS; 2012. https://www.ubos.org/onlinefiles/uploads/ubos/UNHS_12_13/2012_13%20UNHS%20Final%20Report.pdf. Accessed 22 Feb 2018. Centers for Disease Control and Prevention (CDC). Outbreak of aflatoxin poisoning—eastern and central provinces, Kenya, January–July 2004. MMWR Morb Mortal Wkly Rep. 2004;53. https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5334a4.htm. Accessed 6 Mar 2018.
A pale recession is a phrase used in May 2008 by former Federal Reserve Board Chairman Alan Greenspan to describe an economic environment in which recession has not yet hit all areas of the economy. In particular, Greenspan was speaking of the U.S. employment numbers at the time, which had not yet seen as significant a decline as would be expected in a full recessionary environment, which is generally marked by a broad decline in economic activity across the economy. Greenspan used the term "pale recession" in a television interview with Bloomberg on May 4, 2008. When asked whether the U.S. was in a recession he responded, "We're in a recession ... but this is an awfully pale recession at the moment. The declines in employment have not been as big as you'd expect to see." Alan Greenspan served as Federal Reserve Chairman for 19 years under four Presidents. In this role, he was responsible for leading the Federal Reserve Board in what amounts to an executive chairman position. His main responsibility was to carry out the mandate of the Federal Reserve, which is charged with maintaining stable prices, maximum employment and moderate long-term interest rates in the U.S. economy. As the public face of the Fed, Greenspan was well known by legislators and the general public, and his testimonies and speeches were highly anticipated and followed. He became known for somewhat rambling oratories laden with terms such as "crony capitalism" and "irrational exuberance," which in turn promptly popped up in headlines and conversations across the nation. "Pale recession" is an example of such a term. Fed speak is a phrase used to describe former Federal Reserve Board Chairman Alan Greenspan's tendency to make wordy statements with little substance. The Philadelphia Fed Survey tracks regional manufacturing conditions in the Northeastern United States.
Identify thiols (mercaptans) by the presence of an SH group. The mild oxidation of thiols gives disulfides. Because sulfur is in the same group (6A) of the periodic table as oxygen, the two elements have some similar properties. We might expect sulfur to form organic compounds related to those of oxygen, and indeed it does. Thiols (also called mercaptans), which are sulfur analogs of alcohols, have the general formula RSH. Methanethiol (also called methyl mercaptan) has the formula CH3SH. Ethanethiol (ethyl mercaptan) is the most common odorant for liquid propane (LP) gas. The mild oxidation of thiols gives compounds called disulfides. The amino acids cysteine [HSCH2CH(NH2)COOH] and methionine [CH3SCH2CH2CH(NH2)COOH] contain sulfur atoms, as do all proteins that contain these amino acids. Disulfide linkages (–S–S–) between protein chains are extremely important in protein structure. Thioethers, which are sulfur analogs of ethers, have the general formula RSR′. An example is dimethyl sulfide (CH3SCH3), which is responsible for the sometimes unpleasant odor of cooking cabbage and related vegetables. Note that methionine has a thioether functional group. Paramedics are highly trained experts at providing emergency medical treatment. Their critical duties often include rescue work and emergency medical procedures in a wide variety of settings, sometimes under extremely harsh and difficult conditions. Like other science-based professions, their work requires knowledge, ingenuity, and complex thinking, as well as a great deal of technical skill. The recommended courses for preparation in this field include anatomy, physiology, medical terminology, and—not surprisingly—chemistry. An understanding of basic principles of organic chemistry, for example, is useful when paramedics have to deal with such traumas as burns from fuel (hydrocarbons) or solvent (alcohols, ethers, esters, and so on) fires and alcohol and drug overdoses.
To become a paramedic requires 2–4 years of training and usually includes a stint as an emergency medical technician (EMT). An EMT provides basic care, can administer certain medications and treatments, such as oxygen for respiratory problems and epinephrine (adrenalin) for allergic reactions, and has some knowledge of common medical conditions. A paramedic, in contrast, must have extensive knowledge of common medical problems and be trained to administer a wide variety of emergency drugs. Paramedics usually work under the direction of a medical doctor with a title such as “medical director.” Some paramedics are employed by fire departments and may work from a fire engine that carries medical equipment as well as fire-fighting gear. Some work from hospital-sponsored ambulances and continue to care for their patients after reaching the hospital emergency room. Still other paramedics work for a government department responsible for emergency health care in a specific geographical area. Finally, some work for private companies that contract to provide service for a government body. Thiols, thioethers, and disulfides are common in biological compounds. What is the functional group of a thiol? Write the condensed structural formula for ethanethiol (ethyl mercaptan). What is the functional group of a disulfide? Write the condensed structural formula for dipropyl disulfide. A common natural gas odorant is tert-butyl mercaptan. What is its condensed structural formula? Write the equation for the oxidation of ethanethiol to diethyl disulfide.
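As a worked illustration of the mild oxidation described above (shown here for ethanethiol; the oxidizing agent is written generically as [O], since the text does not name a specific one):

```latex
2\,\mathrm{CH_3CH_2SH} \;\xrightarrow{[\mathrm{O}]}\; \mathrm{CH_3CH_2{-}S{-}S{-}CH_2CH_3} \;+\; \mathrm{H_2O}
```

Two S–H hydrogens are removed and the sulfur atoms couple to form the disulfide linkage.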
The RB Company is one of the pioneering companies in making electronic boards. This company has recently faced a difficult problem to solve in designing its special power boards. Each power board is a flat plastic plate with special red and/or blue colored plugs on it. The blue plugs are recognized as null poles, and the red ones are phase poles. This company's special design requires that all the blue plugs should be inter-connected with straight lines to make a simple blue polygon. All vertices of the resulting polygon should be blue plugs, and any blue plug should be a vertex of this polygon. With similar conditions, all the red plugs should make a red polygon. You may assume that no three plugs of the same color are co-linear, i.e. lie on one line. The design problem is that safety precautions require that there should be no red and blue polygon intersections; otherwise a disastrous explosion would be inevitable. This happens when the two polygons have non-empty intersection. The RB engineers have realized that some configurations of red and blue plugs make it impossible to have non-intersecting red and blue polygons. They consider such configurations disastrous. Your task is to write a program to help the RB engineers recognize and reject the disastrous configurations. The first line of the input contains a single integer t (1 <= t <= 5), the number of test cases, followed by the input data for each test case. The first line of each test case contains two integers b and r (3 <= b, r < 10), the number of blue and red plugs respectively, followed by b lines, each containing two integers x and y representing the coordinates of a blue plug followed by r lines, each containing two integers x and y representing the coordinates of a red plug. Note that all coordinates are pairwise distinct and are in range 0 to 100,000 inclusive. There should be one line per test case containing YES if there exist non-intersecting polygons or NO otherwise.
The output is considered to be case-sensitive.
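Since b, r < 10, one brute-force approach is to test whether some straight line through two of the given points separates the blue set from the red set. This sketch assumes that non-intersecting polygons exist exactly when the two point sets are linearly separable; that equivalence is an assumption for illustration, not something the problem statement establishes.

```python
from itertools import combinations

def side(p, q, r):
    # Sign of the cross product (q - p) x (r - p):
    # positive if r is left of line pq, negative if right, zero if on it.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def separable(blue, red):
    """Check whether a line through two of the given points weakly
    separates the blue points from the red points. If any separating
    line exists for two finite point sets, one can always be rotated
    and translated to pass through two points of the union."""
    pts = blue + red
    for p, q in combinations(pts, 2):
        sb = [side(p, q, b) for b in blue]
        sr = [side(p, q, r) for r in red]
        if max(sb) <= 0 <= min(sr) or max(sr) <= 0 <= min(sb):
            return True
    return False
```

With at most 18 points this is far below any time limit; a full solution would print YES or NO per test case based on this check.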
The floating_to_decimal functions convert the floating-point value at *px into a decimal record at *pd, observing the modes specified in *pm and setting exceptions in *ps. If there are no IEEE exceptions, *ps will be zero. is a correctly rounded approximation to *px, where sig is +1 or −1, depending upon whether pd→sign is 0 or −1. pd→ds has at least one and no more than DECIMAL_STRING_LENGTH−1 significant digits because one character is used to terminate the string with a null. pd→ds is correctly rounded according to the IEEE rounding modes in pm→rd. *ps has fp_inexact set if the result was inexact, and has fp_overflow set if the string result does not fit in pd→ds because of the limitation DECIMAL_STRING_LENGTH. If pm→df == floating_form, then pd→ds always contains pm→ndigits significant digits. Thus if *px == 12.34 and pm→ndigits == 8, then pd→ds will contain 12340000 and pd→exponent will contain −6. If pm→df == fixed_form and pm→ndigits >= 0, then the decimal value is rounded at pm→ndigits digits to the right of the decimal point. For example, if *px == 12.34 and pm→ndigits == 1, then pd→ds will contain 123 and pd→exponent will be set to −1. If pm→df == fixed_form and pm→ndigits < 0, then the decimal value is rounded at −pm→ndigits digits to the left of the decimal point, and pd→ds is padded with trailing zeros up to the decimal point. For example, if *px == 12.34 and pm→ndigits == −1, then pd→ds will contain 10 and pd→exponent will be set to 0. When pm→df == fixed_form and the value to be converted is large enough that the resulting string would contain more than DECIMAL_STRING_LENGTH−1 digits, then the string placed in pd→ds is limited to exactly DECIMAL_STRING_LENGTH−1 digits (by moving the place at which the value is rounded further left if need be), pd→exponent is adjusted accordingly and the overflow flag is set in *ps.
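The floating_form and fixed_form examples above can be emulated with Python's decimal module. This is a rough sketch of the digit-string/exponent semantics only, not the C library itself; ROUND_HALF_EVEN stands in for the IEEE rounding modes carried in pm→rd, and no exception flags are modeled.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def floating_record(x, ndigits):
    # floating_form: keep exactly `ndigits` significant digits.
    d = Decimal(repr(x))
    shift = ndigits - (d.adjusted() + 1)        # digits to shift left
    q = d.scaleb(shift).to_integral_value(rounding=ROUND_HALF_EVEN)
    sign, digits, _ = q.as_tuple()
    ds = "".join(map(str, digits)).ljust(ndigits, "0")
    return (-1 if sign else 0), ds, -shift

def fixed_record(x, ndigits):
    # fixed_form: round at `ndigits` places right of the decimal point;
    # negative ndigits rounds left of it, padding with trailing zeros.
    q = Decimal(repr(x)).quantize(Decimal(1).scaleb(-ndigits),
                                  rounding=ROUND_HALF_EVEN)
    sign, digits, exp = q.as_tuple()
    ds = "".join(map(str, digits))
    if exp > 0:                     # pad up to the decimal point
        ds += "0" * exp
        exp = 0
    return (-1 if sign else 0), ds, exp
```

Each record is a (sign, digit string, exponent) triple, so the value it encodes is sign-adjusted ds × 10^exponent, matching the worked examples in the text.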
Book restaurants with the Google Assistant's human-like voice AI. Google Duplex—the Google Assistant's restaurant-booking phone call bot—is finally getting a wider rollout. The tool was previously only available on Google's Pixel phones, but now you can send out a robocall from most smartphones. An updated Duplex support page (which was first spotted by XDA Developers) now shows support for "iPhones with the Google Assistant installed" and "Android devices running version 5.0 or newer." Google Duplex is one of the more impressive products Google has shown off in recent years. Just ask the Google Assistant to make a restaurant reservation at a certain time, and it will do it. By "do it," I mean it will make a phone call to a business, speak to the business on your behalf with one of the most human-sounding computer-generated voices ever made, negotiate a reservation time, and get back to you. Last summer I was able to take restaurant reservations from Duplex—Google had a few journalists pack into a New York City Thai restaurant and field phone calls from its voice AI. Over the low-quality codec of a voice call, Google's voice technology sounds almost indistinguishable from a human, complete with emulated human flaws like pauses in speech and disfluencies like "um" and "uh" in the middle of a sentence. There isn't just one Duplex voice, either. Google's voice technology was able to generate voices from a range of artificial people, with different personalities and styles of speech. Duplex also does an incredible job of understanding the real human on the other end of the line, and it almost seems like a full generation ahead of the voice technology currently in the Google Assistant or Siri. If something does go wrong, though, Google has a call center of actual humans standing by.
Using what is probably the best voice AI on Earth to book restaurants, which you can already do over the Internet without speaking to anyone at all, seems like a waste, but Google is being very conservative with its new voice technology. For now, it's trained to book reservations and that's it. But even simply using Duplex up until now has been difficult. You needed the right phone—a Google Pixel—and you needed to be in the right location—at first four cities, now 47 states. With the wider device and location rollout, Duplex should soon be usable by most people in the US.
Police asked to search car: do I have to let them? The police can search the car or pat down the driver if they have probable cause. It's also important to note that in some cases the right to search may be expanded based on the circumstances of the arrest; for example, if a police officer has reason to suspect that there are weapons or drugs in the car (for a drug arrest), or if the officer is outnumbered and believes there is a risk to safety, a broader search may be justified. So what do you do after the police stop your car? The most important way to protect yourself during a police stop is to understand your rights. First, you do not have to consent to a search just because an officer asks you to. Consider: if you consent to the search, all evidence that is found can generally be used against you. Next, understand that anything illegal which the officer finds in plain sight can give the officer probable cause to conduct a more invasive search. Additionally, officers can also search your car if they have probable cause. As mentioned above, probable cause can be established if they see something which they believe may be illegal, but it can also include an admission of criminal activity on your part. Probable cause for a search, however, is generally not established by a driving infraction, such as speeding or a broken tail light; but, as mentioned above, if during the traffic stop the officer sees something in your car, smells marijuana, or hears you say something incriminating, he may have probable cause for a search. Bottom line: So what do you do if you are stopped? Stay calm and be polite. Although you do not have to allow a search, there is never any reason to raise your voice or be disrespectful. Just remember: you have the legal right to refuse a request to search your car. A refusal is in no way an admission of guilt; it is simply an assertion of your rights as outlined in the 4th Amendment of the U.S. Constitution.
The email message below with the subject "England Rugby World Cup 2015," which claims the recipients are lucky beneficiaries of two hundred and fifty thousand Rand (R250,000) for the ongoing 2015 England Rugby World Cup, is a scam. The email message should not be responded to, especially with personal or financial information. Every month, thousands of these email messages are sent out by scammers to trick potential victims into giving up personal information and/or sending money for fake lottery draws or sweepstakes prizes. Subject: England Rugby World Cup 2015!!! We are pleased to inform you that your E-mail address is one of the lucky beneficiaries selected for the ongoing 2015 England Rugby World Cup. Your E-mail address is one of the 50 email addresses randomly selected through a computer ballot system. You are therefore been approved for a total payout of Two Hundred & Fifty Thousand Rand (R250, 000 .00). YOU ARE REQUIRED TO FILL THE FORM BELOW AND SEND IT TO US BY MAIL's OR VIA FAX AS SOON AS POSSIBLE FOR THE IMMEDIATE RELEASE OF YOUR PRIZE.
Elon Musk's renewed feud with the Securities and Exchange Commission after another errant tweet raises concerns about the Tesla board's ability to control the electric car manufacturer's headstrong and unpredictable CEO. A tweet on Feb. 19 put Musk in the SEC's crosshairs again and prompted the agency to push for contempt charges. He has until March 11 to explain to a New York federal judge why he should not be held in contempt for tweeting production numbers for 2019 which were inaccurate and which Musk corrected later that day. The SEC said that Musk sent the tweets without submitting them for review or obtaining company approval. Under his 2018 settlement with the SEC, Tesla was also obliged to add two independent directors to the board. Software mogul Larry Ellison, chairman and chief technology officer of Oracle, and Kathleen Wilson-Thompson, global head of human resources at the pharmacy chain Walgreens Boots Alliance, joined the board in December. Tesla also elevated Australian telecom executive Robyn Denholm, who joined the Tesla board in 2014, to replace Musk as chair in early November. But it is unclear how much power she actually has over Musk, Kelley Blue Book editor Matt DeLorenzo said Tuesday on CNBC's "Power Lunch". "There was this agreement to involve Robyn Denholm as chairman, and the real question here is: Is it window dressing? I mean, the question is: Who is really Elon's boss, and how should this question be addressed?" DeLorenzo said. After the SEC filed its latest complaint, Musk went to Twitter again to criticize the agency, saying "Something's broken with SEC oversight." Even with the new additions, Musk seems to control the board, including who is appointed and removed, Elson said. Any attempt to rein him in could potentially result in directors' replacement. "And they appreciate that I'm more on the table than acting like normal board members would in such a situation," Elson said. "In most companies, they would be gone."
In its complaint, the SEC quoted an interview Musk gave to CBS's "60 Minutes" in December, in which he said the company did not have to review his tweets. Asked how the company would know if he was planning to send a potentially market-moving tweet without being able to review it, Musk told the news show: "Well, I think we can make mistakes. Who knows?" Turnover in Tesla's management team has also raised red flags, especially the sudden departure of former general counsel Dane Butswinkas, who left after just two months on the job. Tesla has lost over 40 executives since 2016. "It should be a signal of something," Elson said. Former SEC Chairman Harvey Pitt told CNBC that Musk should be held in contempt for his comments. Although Pitt said he did not know why Butswinkas left, he believes that many recently departed executives may feel that Musk is not listening to them. "For anyone who is the high-powered, highly skilled lawyer he was and is, it has to be incredibly frustrating," Pitt said Tuesday on "Squawk Box". "I think the same problem exists in many areas, and unless the board takes its oversight role seriously, we will continue to see this huge turnover."
You desire a luxury property / luxury real estate on the Costa del Sol? That’s understandable, as Spain’s southern coast more than earns its name “Sun Coast” with more than 3,000 hours of sun every year. Kilometers of wonderful, sandy beaches just outside the door of your new luxury property on the Costa del Sol and the pleasantly refreshing Mediterranean Sea invite you to relax. Other good reasons for a new luxury real estate on the Costa del Sol are an extensive sport and recreational offering, excellent cuisine, and an exquisite ambience. Our luxury properties on the Costa del Sol are located in Marbella and the surrounding area, including the famous “Golden Mile”, the Sierra Blanca mountains, and the “Club de Campo La Zagaleta”, one of the classiest places in the world. As an upscale, luxury residential area, this place is something extraordinary. A luxury property on the Costa del Sol offers you everything that can lend your life the highest standard of pleasure and comfort, e.g. two golf courses, clubs, restaurants, and numerous shopping options. Marbella, one of the most exclusive and beautiful places in the world, is also ideal for your new luxury property on the Costa del Sol. Find everything for your enjoyable, stylish lifestyle, including an ideal luxury property on the Costa del Sol, which leaves no wish unfulfilled.
What special knowledge, training or qualifications do baristas receive that make them more expert on coffee than any other front line food service worker or interested home coffee consumer? I don't mean simply operating the machine, but the why's and science that contribute to better coffee. How do they get this knowledge? Is there an industry standard certification or qualification that demonstrates them? Bottom line: I would like to know how the knowledge base of baristas is deeper than simply operating the machine, if in fact it is. What you're asking is no different than: what special knowledge or certification/license do bakers have other than operating the oven? Many people know how to work the oven, but can they bake an artisan loaf? Do they even know the different kinds of bread? The same answer applies to baristas. Some chains like *bucks have their own training program. Specialty shops chase baristas like the Neapolitan pizza parlors chase pizzaioli. The special skill? Besides the latte art, which seems more of a North American thing, it is very difficult to pull a great espresso shot and choose good beans. I personally have been pulling shots on a pro machine daily for 7 years and still take my hat off when I meet a good barista. What you get from one of those Nespresso or other superautomatics is the equivalent to microwave dinner lasagna vs Mario Batali's. The job scope of a barista can depend on the country that they are in. In Italy, a barista is someone that not only can make great coffee and lattes, but can usually also tend a full bar. In North America it's a bit of a different story. The barista's skill set will depend on the type of cafe they work in. Certain companies may require them to pass a small course that is unique to their company. This is true of large cafe chains such as Starbucks and Second Cup. These courses teach a lot of theory; thus most barista skills are learned practically, on the job.
For smaller chains or independent cafes, there is usually an absence of training course materials and all skills will be learned practically, on the job. Although there is no officially defined industry standard, a barista must be able to do certain things to be considered competent or qualified to work in the industry:
- Grind size and how it relates to factors such as humidity, temperature, etc.
- Ability to operate an espresso machine and monitor boiler and dispensing pressures.
- Toppings such as whipped cream, syrups, etc.
- Ability to recognize differences in aroma, body, flavour, etc. in different coffees.
- Understanding the coffee production process, from growth all the way to the cup you serve to your customer.
- Knowledge of Fair Trade, fairly traded, and Rainforest Alliance coffee.
- Understanding customers and personality types, and learning how to provide your customer with the product that they will like.
- Understanding all the factors at every point in the coffee-making process and how they will affect the final product, which is the beverage that is served.
The level to which you may need to know these things can depend on the type of cafe you work in, and where you work in the world. So in short, yes, there are many, many skills that a barista must know beyond those of a regular food service worker. It is never as simple as knowing how to operate a machine. Some of the skills that I listed can take over a year to get good at. The attitude of most cafe managers I have met is the following: "I can train anyone to make a great cup of coffee if they don't have barista skills, but I can't train people to be good with customers if they don't have interpersonal skills." I am a barista in a cafe in Eastern Canada. Is it first thing in the morning? Espresso machines take a little while to warm up and are slower first thing. Are the grounds weighing the correct amount? A shot has an acceptable weight of grounds.
Any more and your coffee/water ratio will be off and it will taste horrid. Grind times need adjusting. Think of it as the difference between water running through gravel and sand. Are the grounds properly tamped (pressed down)? This also affects the way the water runs through them. Is the espresso shot pulling through in the correct amount of time? A few seconds out and your coffee will taste like sludge. Have the grounds been allowed to sit under the portafilter? This will burn them. Has the milk wand been cleaned properly between uses? Is it made with flat or cappuccino (frothy) milk? Has the milk been taken to the correct temperature? If it's soy milk, has it been taken to the correct (lower) temperature, or allowed to curdle or burn? How good is the barista's knowledge of the coffee they serve? Can they tell you where the beans in the blend come from? Is it Rainforest Alliance certified? Fairtrade? Organic? Arabica or Robusta beans? How long has the espresso shot sat there for? How clean are the milk jugs and milk thermometers? Are the beans from a smaller roastery? These can be greener and less roasted, as opposed to those of most of the major chains, which roast more in order to ensure uniformity of flavour. Now bear in mind: as a trained barista, you're thinking about all of this with every coffee you make in order to ensure the best possible experience for the person drinking it. Can you honestly tell me that most people who go to a coffee shop think about any of this? This is science, an artisan skill, and frankly it's insulting to suggest that just anyone could walk into a coffee shop and make a fantastic espresso.
Evaluating queries over probabilistic databases needs to calculate the "reliability" of a query and is #P-hard in general. We propose a new efficient approach for approximate inference which (i) is always in PTIME for every query and data instance (and even expressible in relational algebra), (ii) always results in a unique and well-defined score in [0,1], (iii) is an upper bound to query reliability and both are identical for safe queries, and (iv) is inspired by existing widely deployed propagation approaches for ranking in networks. SSLH is a family of linear inference algorithms that generalize existing graph-based label propagation algorithms by allowing them to propagate generalized assumptions about ``attraction'' or ``compatibility'' between classes of neighboring nodes (in particular those that involve heterophily between nodes where ``opposites attract''). Importantly, this framework allows us to reduce the problem of estimating the relative compatibility between nodes from a partially labeled graph to a simple optimization problem. The result is a very fast algorithm that -- despite its simplicity -- is surprisingly effective. We are developing a novel Quiz Item Management System (QIMS) that engages students throughout the semester in a sequence of learning activities. Each of these learning activities fosters critical thinking, and together they lead to knowledge artifacts in the form of quiz items. These artifacts go through cycles of creation, improvements, selections and deduplications and result in re-usable learning artifacts with known provenance and item response characteristics. Combined, the approach thus addresses 5 important challenges for technology-enhanced learning (e.g., MOOCs): 1. Continuous practice and assessment 2. Practice of "generative" skills, 3. "Auto-calibrated" peer evaluation, 4. Re-usable learning artifacts, and 5. Optimal use of instructors' time.
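The propagation idea behind SSLH above can be sketched as a linear update with an explicit class-compatibility matrix H. This is a hypothetical minimal version for illustration; the published SSLH algorithms, their normalization, and their compatibility estimation are more involved.

```python
import numpy as np

def propagate(A, priors, H, iters=100, eps=0.1):
    """Iteratively spread seed beliefs over a graph, modulating each hop
    by the class-compatibility matrix H. Off-diagonal mass in H models
    heterophily ("opposites attract"); H = identity recovers ordinary
    homophily-style label propagation.

    A: (n, n) adjacency matrix; priors: (n, k) seed beliefs, with zero
    rows for unlabeled nodes; eps damps the propagation so it converges.
    """
    X = priors.copy()
    for _ in range(iters):
        X = priors + eps * A @ X @ H
    return X
```

On a two-node graph with a purely off-diagonal H, labeling one node with class 0 pushes its neighbor toward class 1, which is exactly the heterophily behavior described above.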
When queries return unexpected results, users are interested in explanations for what they see. Recently, the database community has proposed various notions of lineage of query results, such as why or where provenance, and very recently, explanations for non-answers. We propose to unify and extend existing approaches for provenance and missing query result explanations ("positive and negative provenance") into one single framework, namely that of understanding causal relationships in databases. Re-use of existing SQL queries is difficult since queries are complex structures and not easily understood. QueryViz is a method for translating SQL into a visual formalism that helps users intuitively understand their meaning. It is inspired by the First-Order logic representation of a query and combines succinctness features of both tuple and domain relational calculus, thereby providing a minimal yet expressive visual vocabulary. The project page has a link to the yet incomplete online demo. Data management is becoming increasingly social. Questions arise about how to best model inconsistent and changing opinions in a community of users inside a DBMS. First, we propose an annotation semantics based on modal logic, which allows users to engage in a structured discussion. Second, we propose a principled solution to the automated conflict resolution problem. While based on the certain tuples of all stable models of a logic program, our algorithm is still in PTIME. Web tables contain a vast amount of semantically explicit information, which makes them a worthwhile target for automatic information extraction and knowledge acquisition from the Web. However, extracting and interpreting web tables is difficult, because of the variety in which tables can be encoded in HTML and style sheets. Our approach VENTex can extract arbitrary web tables by focusing on table representations in the "Visual Web" instead of the "Syntactic Web" as used by previous approaches. 
Often in life, 20% of effort can achieve roughly 80% of the desired effects. Interestingly, this does not hold in the context of web information extraction. We develop an analytic model for information acquisition from redundant data (in the limit of infinitely large data), and use it to derive a new 40-20 rule (crawling 20% of the Web will help us learn less than 40% of its content). We further describe a new family of power law functions that remains invariant under sampling, and use its properties to give a second rule of thumb. While modern database management systems (DBMSs) provide sophisticated tools for managing data, tools for managing queries over these data have remained relatively primitive. As analysts today share and explore increasingly large volumes of data, they need assistance for repeatedly issuing their queries. This project develops the essential techniques for a Collaborative Query Management System (CQMS) which provides new capabilities ranging from query browsing to automatic query recommendations. We revisit commonly accepted assumptions about the economics of deregulated electricity markets. First, we disprove that, in theory and under the condition of perfect information, decentralized and centralized unit commitment would lead to the same traded power quantities and the same optimal social welfare. Second, we show that a generator owner's optimum bid sequence for an auction market is generally above marginal cost, even where absolutely no abuse of market power is involved. The Graz Cycle is a thermodynamic combustion cycle that allows CO2 emissions stemming from combustion processes to be retained and captured. It burns fossil fuels with pure oxygen, which enables the cost-effective separation of the combustion CO2 by condensation. The effort of oxygen supply in an air separation plant is partly compensated by cycle efficiencies far higher than 65%.
The combined efficiency is equal in thermodynamic performance to any other proposal in the field of Carbon Capture and Storage (CCS).
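The flavor of the 40-20 rule above, diminishing returns when crawling redundant data, can be reproduced with a toy simulation. All names and parameters here are invented for illustration; this is not the paper's analytic model.

```python
import random

def coverage(frac, n_facts=10_000, n_mentions=200_000, s=1.0, seed=42):
    """Fraction of distinct facts seen after crawling `frac` of a corpus
    whose fact frequencies follow a Zipf-like power law with exponent s.
    Popular facts are mentioned on many pages, so extra crawling mostly
    re-encounters facts already learned."""
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) ** s for rank in range(n_facts)]
    mentions = rng.choices(range(n_facts), weights=weights, k=n_mentions)
    crawled = mentions[: int(frac * n_mentions)]
    return len(set(crawled)) / n_facts
```

Because the same seed produces the same mention stream, coverage(0.2) is a strict prefix of coverage(1.0), and the concavity of the curve (far more than 20% of full coverage from a 20% crawl) is visible directly.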
The Great Trek (Afrikaans: Die Groot Trek) was an eastward and north-eastward migration away from British control in the Cape Colony during the 1830s and 1840s by Boers (Dutch/Afrikaans for "farmers"). The migrants were descended from settlers from western mainland Europe, most notably from the Netherlands, northwest Germany and French Huguenots. The Great Trek itself led to the founding of numerous Boer republics, the Natalia Republic, the Orange Free State Republic and the Transvaal being the most notable. The Voortrekkers comprised two groups from the eastern frontier region of the Cape Colony: semi-nomadic pastoralists known as Trekboers, and established farmers and artisans known as Grensboere, or Border Farmers. Together these groups were later called Voortrekkers (Pioneers). While most settlers who lived in the western Cape (later known as the Cape Dutch) did not trek eastward, a small number did. The first colonists, who arrived in 1652 to set up a depot for the provision of ships under the auspices of the Dutch East India Company, were of Dutch stock. Many later settlers were of German origin and, after the revocation of the Edict of Nantes in 1685, French Huguenot refugees. By 1800, white colonists numbered rather fewer than 40,000, and were so interconnected by marriage that they represented a giant family rather than a new polyglot community. The community was also governed by the Council of Seventeen in Amsterdam, who governed the far-reaching empire of the Dutch East India Company. During the Napoleonic Wars the colony passed into the control of the United Kingdom. This was formally ratified in 1815 by the Congress of Vienna. Historians have identified various factors that contributed to the migration of an estimated 12,000 Voortrekkers to the future Natal, Orange Free State and Transvaal regions.
The primary motivations included discontent with British rule: its Anglicisation policies, restrictive laws on slavery and its eventual abolition, arrangements to compensate former slave owners, and the perceived indifference of British authorities to border conflicts along the Cape Colony's eastern frontier. Many contemporary sources argue that Ordinance 50 (1828), which guaranteed equal legal rights to all free persons of colour, and prohibitions on inhumane treatment of workers, spurred the Boer migrations. However, some scholars argue that most Trekboers did not own slaves, unlike the more affluent Cape Dutch who did not migrate from the western Cape. The three republics subsequently founded by the Voortrekkers prohibited slavery, but enshrined racial separatism in their constitutions. Most versions agree on what happened next: Dingane's authority extended over some of the land in which the Boers wanted to settle. As a prerequisite to granting the Voortrekker request, Dingane demanded that the Voortrekkers return some cattle stolen by Sekonyela, a rival chief. After the Boers retrieved the cattle, Dingane invited Retief to his residence at Umgungundlovu to finalise the treaty, having either planned the massacre in advance or decided on it after Retief and his men arrived. Perhaps an earlier display of arms from horseback by Retief's men provoked the massacre. Dingane's reputed instruction to his warriors, "Bulalani abathakathi!" (Zulu for "kill the wizards"), suggests that he may have considered the Boers to wield evil supernatural powers. After murdering Piet Retief's delegation, the Zulu impis (battalions) immediately attacked Boer encampments in the Drakensberg foothills at what later was called Blaauwkrans and Weenen. In contrast to earlier conflicts with the Xhosa on the eastern Cape frontier, the Zulu killed the women and children along with the men, wiping out half of the Natal contingent of Voortrekkers.
On 6 April 1838 the Voortrekkers retaliated with a 347-strong punitive raid against the Zulu (later known as the Flight Commando), supported by new arrivals from the Orange Free State. They were roundly defeated by about 7,000 warriors at Ithaleni, southwest of uMgungundlovu. The well-known reluctance of Afrikaner leaders to submit to one another's leadership, which later so hindered sustained success in the Anglo-Boer wars, was largely to blame. On 16 December 1838 a 470-strong force under Andries Pretorius confronted about 12,000 Zulu at prepared positions. The Boers reputedly suffered only three injuries and no fatalities, while the blood of 3,000 slain Zulu turned the river red, so that the conflict afterwards became known as the Battle of Blood River. The Boers' guns offered them an obvious technological advantage over the Zulu's traditional weaponry of short stabbing spears, fighting sticks, and cattle-hide shields. The Boers attributed their victory to a vow they had made to God before the battle: if victorious, they and future generations would commemorate the day as a Sabbath. Thus 16 December was celebrated by Boers as a public holiday, first called "Dingane's Day," later changed to the Day of the Vow. It is still a public holiday, but the name was changed to the Day of Reconciliation by the post-apartheid ANC government, in order to foster reconciliation between all South Africans. However, the Day of the Vow is still celebrated by Boers today. After the defeat of the Zulu forces and the recovery of the treaty between Dingane and Retief from the latter's skeleton, the Voortrekkers proclaimed the Natalia Republic. This Boer state was annexed by British forces in 1843. With the return of British rule, emphasis moved from occupying lands in Natal, east of the Drakensberg mountains, to the west of them and onto the high veld of the Transvaal and Orange Free State, which were unoccupied due to the devastation of the Mfecane.
Andries Wilhelmus Jacobus Pretorius (27 November 1798 – 23 July 1853) was a leader of the Boers who was instrumental in the creation of the Transvaal Republic, as well as the earlier but short-lived Natalia Republic, in present-day South Africa. Pretorius received his education at home; although a school education was not a priority on the eastern frontier of the Cape Colony, he was schooled enough to read the Bible and put his thoughts down on paper. Andries Pretorius was the oldest of five children of Marthinus Wessel Pretorius and his wife Susanna Elizabeth Viljoen. Pretorius descended from the line of the earliest Dutch settlers in the Cape Colony. In September 1836, after the trek party of Gerrit Maritz left Graaff-Reinet to go northwards, those who stayed behind, including Pretorius, began to strongly consider leaving the Cape Colony. He left his home in October 1837 on a scouting expedition to visit the Trekkers. Eventually Pretorius would leave the Cape Colony permanently. He abandoned his trek toward the Modderrivier and made haste to the Klein-Tugela river in Natal when he was summoned to lead the Voortrekkers there, who were leaderless: Gerrit Maritz had died of illness and Andries Potgieter had left Natal, moving deeper inland. Piet Retief and Piet Uys were murdered in February 1838 along with their men on the orders of the Zulu king Dingane. They had been invited under false pretenses, during a negotiations visit, along with 70 men (with boys among them) and 30 servants, to enter the Zulu kraal Mgungundlovu unarmed. Pretorius arrived at the desperate Trekkers' main camp on 22 November 1838. Pretorius' diligence and thorough action immediately instilled confidence, and he was appointed chief commander of the punitive commando against Dingane. Pretorius led 470 men with 64 wagons into Dingane's territory, and at dawn on 16 December 1838, next to the Ncome river, they achieved victory over an attacking army of 10,000 to 15,000 Zulu warriors.
The Voortrekkers fought with muzzle-loading rifles and made use of two small cannons. The Zulus sustained losses of an estimated 3,000 warriors in what became known as the Battle of Blood River. The Boers sustained no fatalities; three men were injured, including Andries Pretorius, who was wounded in the hand by an assegai. After the battle Pretorius made an agreement with Dingane's brother Mpande which forced Dingane and those loyal to him into exile. The Boers believed that God had granted them victory and thus promised that they and their descendants would commemorate the day of the battle as a day of rest. Boers memorialized it as "Dingane's Day" until 1910. It was renamed "Day of the Vow", later "Day of the Covenant", and made a public holiday by the first South African government. After the fall of apartheid in 1994, the new government kept the day as a public holiday as an act of conciliation to Boers, but renamed it "Day of Reconciliation". In January 1840, Pretorius, with a commando of 400 burghers, helped Mpande in his revolt against his half-brother Dingane. He was also the leader of the Natal Boers in their opposition to the British. In 1842, Pretorius besieged the small British garrison at Durban, but retreated to Pietermaritzburg on the arrival of reinforcements under Colonel Josias Cloete. Afterward, he exerted his influence with the Boers to reach a peaceful solution with the British, who annexed Natalia. With a considerable following, he was preparing to cross the Drakensberg when Sir Harry Smith, newly appointed governor of the Cape, reached the emigrants' camp on the Tugela River in January 1848. Smith promised the farmers protection from the natives and persuaded many of the party to remain. Pretorius departed, and, on the proclamation of British sovereignty up to the Vaal River, fixed his residence in the Magaliesberg, north of that river. He was chosen by the burghers living on both banks of the Vaal as their commandant-general.
At the request of the Boers at Winburg, Pretorius crossed the Vaal in July and led the anti-British party in their "war of freedom", occupying Bloemfontein on 20 July. In August, he was defeated at Boomplaats by Smith and retreated to the north of the Vaal. He became leader of one of the largest of the parties into which the Transvaal Boers were divided, and commandant-general of Potchefstroom and Rustenburg, his principal rival being Commandant-General A. H. Potgieter. In 1851, Boer malcontents in the Orange River Sovereignty and the Basotho chief Moshoeshoe I asked Pretorius to come to their aid. He announced his intention of crossing the Vaal to "restore order" in the Sovereignty. His goal was to obtain an acknowledgment of the independence of the Transvaal Boers from the British. Having decided on a policy of abandonment, the British cabinet entertained his proposal. The government withdrew its reward of £2,000, which had been offered for his capture after the Boomplaats battle. Pretorius met the British commissioners near the Sand River. On 17 January 1852 they concluded the convention by which the independence of the Transvaal Boers was recognized by Britain. Pretorius recrossed the Vaal River, and on 16 March he reconciled with Potgieter at Rustenburg. The followers of both leaders approved the convention, although the Potgieter party was not represented. In the same year, Pretorius paid a visit to Durban with the object of opening up trade between Natal and the new republic. In 1852, he also attempted to close the road to the interior through Bechuanaland and sent a commando to the western border against Sechele. Pretorius died at his home at Magaliesberg in July 1853. He is described by Theal as "the ablest leader and most perfect representative of the Emigrant Farmers."
In 1855, a new district and a new town were formed out of the Potchefstroom and Rustenburg districts by his son, Marthinus Wessel Pretorius, who named them Pretoria in honour of the late commandant-general. Marthinus Wessel Pretorius was the first president of the Transvaal Republic.
0.986892
Pat Tiberi (TEE'-behr-ee) was born to Italian immigrants in Columbus, Ohio, where he still lives. He earned a bachelor's degree in journalism in 1985 from Ohio State University, where he played trumpet in the marching band. He worked in real estate and, for eight years, as an aide to then-Rep. John Kasich, who represented Ohio's 12th Congressional District. Tiberi was elected to the Ohio House in 1992, becoming majority floor leader (the No. 3 GOP leadership position) in 1998. He was elected to replace Kasich in the U.S. House in 2000. Tiberi and his wife, Denice, have four daughters. Pat Tiberi, representing Ohio's 12th Congressional District, joined other House Republicans in chipping away at the 2010 health care reform bill and supporting a limit on tax breaks for insurance policies that cover abortions. "What we are trying to do is codify longstanding policy that federal dollars should not be used for abortion," Tiberi said in March 2011 before the House passed the measure. Supporters said existing law doesn't go far enough in ensuring that no tax money is used to subsidize abortions. The measure had little chance of surviving the Democratic-controlled Senate or a possible presidential veto. Tiberi supports a full repeal of the 2010 health care reform law. He said people should not be denied coverage because of pre-existing conditions and residents should have access to affordable health care, but that decisions should be made by patients and doctors, not the government. "I support health care reform, but I don't support the Democrats' trillion dollar health reform law," he said. In 2012, after the Supreme Court upheld the 2010 health care reform bill, Tiberi said that although the court found the legislation constitutional by labeling it a tax, it did not mean that it was good policy. Tiberi, who replaced former Rep. John Kasich in Congress in 2000, has been a consistent vote for GOP-sponsored legislation.
He voted in 2010 against repealing the "don't ask, don't tell" policy banning gays from serving openly in the military. He was one of eight Ohio Republicans who voted against the 2009 approximately $800 billion economic stimulus package backed by President Barack Obama, calling it a temporary solution to a large problem. Tiberi, whose district experienced high rates of home foreclosures, was among Republicans who initially opposed a housing rescue plan in 2008. But he voted for the plan after President George W. Bush dropped his threat to veto the bill. Tiberi said he voted for it because "the impact on the markets if we did nothing would be a far greater risk than the bad policies." He joined Republicans in pushing for both of Bush's tax cuts. He also worked on the No Child Left Behind education bill, signed in 2002. Tiberi spoke out that year about GOP-backed campaign finance legislation that made it illegal for children to donate money to political campaigns. "Banning honest-to-goodness 17-year-olds from giving a few bucks to candidates they support is overkill at its worst," Tiberi said. In 2006, Bush signed the Tiberi-written Older American Act Amendments that reauthorized and amended programs for senior citizens. Tiberi's former boss Kasich was instrumental in helping him win a seat in the Ohio Legislature. When Kasich retired, Tiberi got the support of the future governor and the state GOP. "I was preparing to go into real estate because I didn't think John Kasich was ever going to leave Congress," Tiberi said. "Then, it was like 'boom,' the opportunity of a lifetime. The right place, right time. I had to run."
0.999995
Translate the executable statements of the following Pascal program into quadruples. Assume that integer and real values require four words each.

repeat
    flag[i] := true;
    while turn != i do
        begin
            while flag[j] do skip;
            turn := i;
        end;
    (* critical section *)
    flag[i] := false;
until false

Program Test;
var i: integer ... [1..10] of real;
begin
    i := 0;
    while i <= 10 do
        begin
            a[i] := 0;
            i := i + 1
        end;
end.
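As a hedged sketch of an answer for the array-initialisation loop (quadruple notation varies by textbook, so this is one plausible translation, not the only correct one), the quadruples below use a byte/word offset t1 := i * 4 for the stated four words per real, and a tiny interpreter checks that they behave like the source loop:

```python
# One plausible quadruple translation of:
#   i := 0;  while i <= 10 do begin a[i] := 0; i := i + 1 end
# Quadruple format: (op, arg1, arg2, result); jump targets are quad indices.
quads = [
    (":=",   0,    None, "i"),   # 0: i := 0
    (">",    "i",  10,   7),     # 1: if i > 10 goto 7 (loop exit)
    ("*",    "i",  4,    "t1"),  # 2: t1 := i * 4   (offset: four words per real)
    ("[]=",  "a",  "t1", 0),     # 3: a[t1] := 0
    ("+",    "i",  1,    "t2"),  # 4: t2 := i + 1
    (":=",   "t2", None, "i"),   # 5: i := t2
    ("goto", None, None, 1),     # 6: goto 1
]                                #  7: (exit)

def run(quads):
    """Tiny interpreter for the quadruples above (for checking only)."""
    env, a, pc = {}, {}, 0
    def val(x):
        return env[x] if isinstance(x, str) else x
    while pc < len(quads):
        op, a1, a2, res = quads[pc]
        if op == ":=":
            env[res] = val(a1)
        elif op == ">":
            if val(a1) > val(a2):
                pc = res
                continue
        elif op == "*":
            env[res] = val(a1) * val(a2)
        elif op == "+":
            env[res] = val(a1) + val(a2)
        elif op == "[]=":
            a[val(a2)] = val(res)  # store value at offset arg2 of array a
        elif op == "goto":
            pc = res
            continue
        pc += 1
    return env, a

env, a = run(quads)
print(env["i"], sorted(a))  # i ends at 11; offsets 0, 4, ..., 40 are set to 0
```

Note that the program as given iterates i = 0..10 over an array declared [1..10]; the quadruples follow the source faithfully rather than fixing that off-by-one.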
0.955604
Archaeological studies support a human presence in the Chek Lap Kok area of Hong Kong (where the new airport has been built) from 35,000 to 39,000 years ago and in the Sai Kung Peninsula (in the New Territories) from 6,000 years ago. In 214 BC, the first emperor of China conquered the Baiyue tribes in Jiaozhi and incorporated the territory into imperial China for the first time. The area was consolidated under the kingdom of Nanyue, founded by general Zhao Tuo in 204 BC after the Qin Dynasty collapsed. When the kingdom was conquered by Emperor Wu of Han in 111 BC, the land was re-assigned in the Han Dynasty. Archaeological evidence indicates the population increased and early salt production flourished in this time period. During the Tang Dynasty period, the Guangdong region flourished as a regional trading center. In 736, Emperor Xuanzong of Tang established a military town in Tuen Mun to defend the coastal area in the region. The earliest recorded European visitor was Jorge Álvares, a Portuguese explorer who arrived in 1513. After establishing settlements in the region, Portuguese merchants began trading in southern China. At the same time, they invaded and built up military fortifications in the Tuen Mun district of Hong Kong. Military clashes between China and Portugal led to the expulsion of the Portuguese. In the mid-16th century, the Haijin order banned maritime activities and prevented contact with foreigners; it also restricted local sea activity. In 1661–69, the territory was affected by the Great Clearance ordered by Kangxi Emperor, which required the evacuation of the coastal areas of Guangdong. It is recorded that about 16,000 persons from Xin'an County were driven inland, and 1,648 of those who left are said to have returned when the evacuation was rescinded in 1669. What is now the territory of Hong Kong became largely wasteland during the ban. 
In 1685, Kangxi became the first emperor to open limited trading with foreigners, starting with the Canton territory. He also imposed strict terms of trade, such as requiring foreign traders to live in restricted areas, staying only for the trading seasons, banning firearms, and trading with silver only. The East India Company made its first sea venture to China in 1699, and the region's trade with British merchants developed rapidly soon after. In 1711, the company established its first trading post in Canton. By 1773, the British reached a landmark 1,000 chests of opium in Canton, with China consuming 2,000 chests annually by 1799. Hong Kong became a colony of the British Empire after the First Opium War (1839–42), when British forces invaded following the ruling Qing Dynasty's refusal to allow opium to be imported into China. Slowly but surely the British way of life was introduced to the colony, which quickly became one of the main industrial centers of the Far East. Originally confined to Hong Kong Island, the colony's boundaries were extended in stages to the Kowloon Peninsula in 1860 and then the New Territories in 1898. It was occupied by Japan during the Pacific War, after which the British resumed control until 1997, when China resumed sovereignty. The region espoused minimum government intervention under the ethos of positive non-interventionism during the colonial era. The time period greatly influenced the current culture of Hong Kong, often described as "East meets West", and the educational system, which used to loosely follow the system in England until reforms implemented in 2009. However, after years of negotiation and conflict, an agreement between the UK government and Chinese authorities saw sovereignty transferred back to China in 1997. Under the complicated "one country, two systems" arrangement, Hong Kong retains its own laws, police force and monetary system.
0.999708
Basslines in "Good Vibrations" and "I'll Be Back" Paul McCartney has cited The Beach Boys' bass lines as an influence in his own bass playing. They freed him up to play more melodic patterns. "[I]t was good not always to have to play the root notes" he said in the Beatles' Anthology (p. 80). Though in different keys, "Good Vibrations" (in E-flat) and "I'll Be Back" (in A) feature the same chord progression: i - bVII - bVI - V. They therefore serve as the perfect comparison, showing what Paul played and what the Beach Boys played given the exact same chord progression. In "I'll Be Back", Paul plays the root (scale degree 1) of each chord almost exclusively (occasionally he'll play the fifth instead). The Beach Boys play a more melodic pattern using scale degrees 1, 2, 3, and 5. McCartney also plays in very bottom of the bass's range, whereas the Beach Boys played in the upper register, giving it a very different timbre. Now here are a few MIDI excerpts, both transposed to the same key (C), for side-by-side comparison: first is the opening phrase of "Good Vibrations"; next the opening phrase of "I'll Be Back". And since the tunes use identical chord progressions, we can substitute one bass line for the other to further compare them: here is "Good Vibrations" with Paul's bass line from "I'll Be Back"; and here is "I'll Be Back" with the Beach Boys' bass line from "Good Vibrations". By comparing these 4 excerpts side by side, we can easily hear how much more melodic and sophisticated the Beach Boys' bass line is than the Beatles'. Paul, of course, could hear it, too - and he responded by making his own bass playing more melodic. Interesting take on things, and always fun to read an intelligent analysis of good music. However, despite both songs sharing that one particular structure its a rather unfair example when put into context. Note: I'm not debating the influences the bands mutually had on each other. That's indisputable! 
"I'll Be Back" was recorded during mid-1964, which is still relatively early in The Beatles' musical development and evolution. "Good Vibrations" was recorded in 1966, by which time The Beatles were musically and stylistically in an entirely new era. If I weren't so lazy I'm sure it would be fairly easy to find an earlier Beach Boys tune that shared yet another progression with The Beatles. But instead here's my hypothetical: if I were to compare McCartney's playing on "Something" to a hypothetical song recorded by The Beach Boys in 1964 sharing the same chordal structure, unquestionably the bass line from "Something" would have reigned supreme. It's just misleading to compare the groups' music based on songs recorded in drastically different eras (as quickly as music was evolving in the '60s, six months, much less a discrepancy of two years, makes a huge, huge difference stylistically when it comes to The Beatles and The Beach Boys). Regardless, I enjoyed reading and listening. It was fun hearing the two songs' bass lines being swapped. Thanks for sharing! Nonetheless, no one can argue the influence Wilson had on The Beatles in many ways and varying aspects, and vice-versa.
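For readers who want to experiment with the shared progression discussed above, here is a small illustrative script. It is not a transcription of either record: the enharmonic spelling (D# for E-flat) and the simplification of every chord to major quality for the 1-2-3-5 pattern are assumptions made for brevity.

```python
# Illustrative sketch: roots of the shared i - bVII - bVI - V progression
# in both keys, plus a 1-2-3-5 scale-degree pattern on a chord root.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Chord roots as semitone offsets above the tonic: i=0, bVII=10, bVI=8, V=7
PROGRESSION = [0, 10, 8, 7]

def chord_roots(tonic):
    """Root notes of i - bVII - bVI - V in the given key."""
    start = NOTES.index(tonic)
    return [NOTES[(start + step) % 12] for step in PROGRESSION]

def melodic_pattern(root):
    """Scale degrees 1, 2, 3, 5 above a chord root (major quality assumed)."""
    start = NOTES.index(root)
    return [NOTES[(start + s) % 12] for s in (0, 2, 4, 7)]

print(chord_roots("D#"))  # E-flat minor, spelled enharmonically
print(chord_roots("A"))   # A minor: A, G, F, E
```

Swapping `chord_roots` output between the two keys is the programmatic analogue of swapping the MIDI bass lines in the post above: the progression's shape is identical, only the transposition differs.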
0.995589
Are these blue and white lidded vases Kangxi? I have attached pictures of a pair of lidded jars that I own. They appear not to have been made by the same person, which I determined from the color intensity of the blue as well as the difference in the signatures. They were originally sold to me as a pair of blue and white lidded jars bearing the mark of Kang Hsi (1662-1722), and they stand 34cm tall. The big question is: what are they? Interestingly enough, your vases have the mark "Guangxu nian zhi", which refers to the Qing dynasty, Guangxu (1875-1908) period. When the mark matches the period in which they were made, this might sometimes mean that they could be of Imperial make. In this case I am reluctant to think so, since they are nice but more of Trade quality than "Imperial". As a pretty surefire sign, we see that the first two characters of an Imperial mark, the Da Qing referring to the "Great Qing dynasty", have been omitted. Their shape and the domed lids with their unusual finials would to me indicate a slightly later date than that, even though many would accept them as being exactly of the period of the mark. I can't tell for sure, but personally I think these vases are of the period of the mark. This is in that case called mark and period, or in short M+P in the auction catalogs. This is interesting for collectors and it helps with the price. However, even if not of the Kangxi period, they are quite nice anyway.
0.951395
German Chancellor Angela Merkel with Foreign Minister Guido Westerwelle in the German parliament on Friday. (CNN) -- A task force of European Union finance ministers was meeting in Brussels Friday to discuss proposed tougher measures intended to prevent another regional crisis on the scale of Greece's economic meltdown. The meeting comes as Germany's parliament voted in favor of a near-trillion dollar eurozone rescue package after German Chancellor Angela Merkel warned that the future of the common currency was in danger. "If the euro fails, then Europe too will fail. But if we manage to avert the danger, the euro and Europe will emerge stronger than before," Merkel told lawmakers earlier this week to justify the €750 billion rescue package. Both houses of the German parliament approved the package on Friday, although its passage through the lower house was backed by just 319 out of 622 lawmakers, with 195 abstaining and 73 voting against a measure seen by some as a bailout for other countries at Germany's expense. On Thursday French President Nicolas Sarkozy denied reports of a rift between Paris and Berlin after Germany's unilateral ban imposed earlier this week on the naked short selling of eurozone sovereign debt instruments. Q&A: What is 'naked short-selling'? Following talks with Merkel and new British Prime Minister David Cameron in Paris, Sarkozy said the eurozone's two biggest economies were doing everything possible to work in harmony. "I told Angela Merkel ... that we cannot have disagreements between Germany and France about subjects of this importance," Sarkozy said. "We do everything so that we don't have disagreements together. That's why we talk together." Fears over Europe's economic stability have triggered market selloffs in recent days. New York's Dow Jones industrial average (INDU) fell 376 points on Thursday in its biggest one-day point loss since February 10, 2009.
Thursday's point loss was equivalent to 3.6%, the biggest one-day percentage loss since March 5 of 2009. New York markets were mostly flat on Friday with the Dow marginally down. Asian markets followed Wall Street's lead earlier Friday with Tokyo's Nikkei down almost 250 points, or 2.45 percent, at 9,785. European stocks, already down heavily for the week, sank lower in afternoon trading with London's FTSE 100 falling below 5,000 points, before recovering a little, down 0.6 percent at 5,041. Friday's meeting in Brussels has been called by EU President Herman Van Rompuy with the goal of delivering "economic governance" reform across the 27-state economic and political bloc by October. "The recent crises and the risk for the stability of the euro area have underlined the interdependence among EU economies and exposed the vulnerability of Member States, in particular inside the euro area," the European Commission said in a memo on Thursday. "Fiscal discipline, competitiveness gaps and private sector imbalances are also a matter for the EU as a whole. This is why there is a need for economic policy coordination across the EU."
0.997856
Make an ant farm and give your child a no-cost educational pet. It's a fun way to teach entomology and how an ecosystem works. It can also be addictive as you watch the ants going about the business of building a new home. Place the smaller glass container that you have chosen inside the larger container. The purpose of the smaller container is purely to take up space and to encourage the ants to build their tunnels against the outside glass for easy viewing. Locate an ant colony in your yard and dig carefully in the area where you see the most ants. Transfer some soft soil, with the ants, into a bucket. Try to find some larger ants or a queen ant with wings, along with eggs and larvae. Using a paper cone or funnel, gently add soil and the smaller worker ants to the space between the two containers. Add the queen, eggs and larvae last, sliding them gently down the funnel to rest on the soil. The worker ants will quickly begin to relocate their queen and her offspring in their new home. CAUTION: Some ants bite, so keep your child away from exposure to the ants while you work. Ants will climb even glass walls, so you'll need to securely cap your container. Punch air holes in the lid of the larger container, but make the hole openings too small to allow ants to escape. Once you have the ants in place, put the lid on the container. Make a paper sleeve, covering the container from the bottom to the top of the soil. This darkens the ant farm and recreates an underground environment. Your ants will begin working immediately. Ants appreciate a drop of honey, some sugar, or bread dipped in sugar water, and tiny bits of fruit or vegetables. Very, very small amounts will do; you don't want the food going mouldy in the bottle. Ants get water mainly from their food; however, every couple of days you can add a cottonball soaked in water to supplement the supply.
Be careful not to knock the bottle over or shake it up; this will destroy the new ant farm. To view your ant farm, remove the paper sleeve. Make notes about the ants' progress each day. This would make a neat science project if your child is studying entomology, nature, or ecosystems.
0.999999
Which of the following is characteristic of a healthy weight loss diet? A. Eating fewer than 1200 calories daily. B. Losing 5 lbs a week. C. Promoting physical activity. D. Limiting milk and other animal products. Promoting physical activity is a characteristic of a healthy weight loss diet.
0.999995
'I want to go out and win; I race everyone hard,' he said. Pranging your boyfriend's car is always going to cause a bit of tension in a relationship. The subject will be hard for the two to avoid as they compete against each other this season for rookie of the year honors in NASCAR's top Sprint Cup Series. He began his sprint car racing career in 360 cubic inch winged sprint cars. Stenhouse won the National Sprint Car Hall of Fame Driver Poll in 2003, began racing in the USAC sprint car series in 2004, and won dual Rookie of the Year honors in United States Auto Club sprint car racing in 2007. The couple waited until the end of Charlotte Motor Speedway's weeklong annual media tour to go public with their relationship, which started as a friendship as they raced each other the last two seasons in the Nationwide Series. 'It was out of respect to NASCAR, to all the manufacturers, the new cars, the teams, the sponsors, just to allow the news of the day to be about racing and not let anything interfere with that.' He led the Rebels onto the field before the annual Egg Bowl game versus in-state rival Mississippi State. Patrick's milestones include: first woman to lead the Indianapolis 500; first woman to win an IndyCar Series race; highest finish by a woman in a race in NASCAR's top three series (4th place); 2013 Daytona 500 pole winner; first woman to win a Monster Energy NASCAR Cup Series pole; first woman to lead the Daytona 500; first woman to lead the Coca-Cola 600; first woman to lead a Cup Series race under green; most top tens by a woman in the Cup Series (7); most laps led by a woman in the Cup Series (64). Born March 25, 1982, she is an American professional stock car racing driver, model, and advertising spokeswoman. After making several starts in the Barber Dodge Pro Series, she moved to the Toyota Atlantic Championship for 2003.
'I think I am just finally excited to tell someone about this,' Patrick laughed, sounding almost giddy as she said the two-time Nationwide champion's middle name is Lynn and he prefers she use his first name. Patrick added of the couple's reasons for keeping their relationship a secret: 'Yes, we are dating.'
0.994623
For some women the experience of menopause creates a set of challenges that has a negative impact on their self-esteem - both physically and psychologically. For many women, the fact that they are no longer fertile strikes a blow to their sense of womanhood and sensuality. Many associate menopause with middle age and lack of vitality and youth, making another dent in their self-image. Menopause symptoms such as weight gain, hair loss, and dry skin perpetuate this downward spiral and pack a powerful punch against a woman's sense of worth and self-acceptance. In order to get through menopause with one's self-esteem and sense of self-value intact, there are a number of things women can do as they approach menopause. Eating Healthfully: Eating a healthy diet can help keep women's weight in check, thus preventing weight gain commonly associated with menopause and low self-image. Proper nutrition can also offset other troubling symptoms of menopause such as mood swings and insomnia. Experts recommend eating a low-fat diet with plenty of whole grains, fruits, and vegetables. Positive Outlook: The way in which women approach menopause psychologically can make all the difference in the world. Maintaining a positive outlook towards this natural stage of life and womanhood is one of the keys to emerging with one's self-esteem intact. Try to view menopause as an exciting opportunity to enhance or create new possibilities in your life, a time to reevaluate what is important to you, or simply a time to reflect upon your accomplishments until this point. Seek Social Support: Talking to other women who are going through menopause is a terrific source of empowerment. Sharing your experiences with other women who can identify with and relate to you can do wonders for a woman's self-esteem, and knowing that 'you are not alone' is greatly comforting. Exercise: Numerous studies have shown that physical exercise enhances self-confidence and self-image in menopausal women.
Furthermore, exercise will improve the functioning of your internal organs and keep your outer body looking great! Self-Care: It is okay to pamper yourself during this emotionally and physically challenging time of life! If you are stressed, uncomfortable, or depressed about menopause, schedule in time for some daily TLC: Read a good book, have a bubble bath, talk to a good friend, see a movie, practice yoga and deep breathing - simply do things that are non-stressful and that you enjoy. All of these things can help women keep their self-esteem up when menopause is causing their mood and self-esteem to swing down.
0.986426
how the universe was created. Hello! I'm kind of new here, I registered like 5 seconds ago, and I was wondering how the universe was created. Many religions and many people and cultures have their own ideology and perspective, but what is the real scientific explanation as to how our universe was created? Perhaps we are inside a microscopic little atom that exists on another little microscopic atom, just like how everything can't cease to exist or will just keep on getting smaller and smaller. Or maybe we are in a simulation produced by other living things for their own purpose? Maybe every day for them is every 5 centuries for us? Maybe how they perceive time is different? Or maybe even time for them is irrelevant? Maybe we are purposeless beings that are incapable of gaining the true knowledge about how the whole world or universe was created? Maybe we are in a computer code accidentally programmed by another life form? Maybe we are created by other beings as a source of entertainment for them? Or maybe we are created just to give energy or power to something else by breathing and eating other living organisms? Maybe we are the little blood and the other life forms are the leeches? Discuss your opinions in the comments below. 1. Welcome to the forums! The Big Bang is the current accepted theory, so I go by that. The big bang theory doesn't make sense to me honestly. There has to be something that triggered that big bang, just think about it, think about it all before, before the dinosaurs, before the planets, before everything else, before atoms and molecules existed, before the big bang. and then suddenly boom, we're here. There has to be something that flicked a lever to make that big bang occur right? there can't just be nothing, and then something? ¯\_(ツ)_/¯ the universe works in strange ways. As far as we know, that's the only theory that's been accepted by scientists. Also! 
The way the Big Bang works is that all matter was condensed into a single point, and then it exploded! So it's not really that nothing became something. It's more like there was always something but it needed to be pushed. What caused the push and what came before then, I'm not sure we will ever know. I think there's no such thing as "before" the Big Bang, and time started with it. Not sure tho. Yeah, I think that's it. The big bang created time, as it took time for things to exist. Metal Slug 4 was my personal favourite! To add my two cents: maybe it's the human condition to assume that time is a constant, and that time itself having a beginning is something we'll need to accept, if not understand. Or maybe time goes back a distance that's not any more comprehensible. Maybe it just loops around. At this point in time, any theory could work. Maybe we are in a computer code accidentally programmed by another life form? Maybe we are created by other beings as a source of entertainment for them? ... Except these two. What sort of entities would spend the time to make, or even support making, something like this? Must be a bunch of idiots. Yes, but, from a more objective viewpoint, who in their right mind would contribute to such a massive simulation for entertainment alone? I wouldn't be surprised if they had their own community forum, with one of them interrupting a discussion on their universe's origin to make a "wise"crack on their own lives' work. Considering we are on a forum revolving around a game that will simulate the universe, is this not going to be non-profit entertainment? Sure it won't be nearly as high tech, or complex, but it is still a massive effort for entertainment alone.
0.999998
My friend, together with her siblings, are playing in the backyard. The grammar rule that is tested here is subject and verb agreement. Although there are multiple people in the sentence, the subject is "my friend," which is singular. The first underlined word is the verb "are," which is plural. You should be able to take out all the words between the two commas and still have a coherent sentence because this information is only of secondary importance to whoever wrote this sentence. Mark this error and look down at your answer choices. A. This answer choice matches the original sentence, which has an error. Eliminate it. B. This answer choice contains the singular "is." Keep this choice. C. This answer choice still uses a plural verb, even though it has changed the tense of the verb. Eliminate it. D. This answer choice changes the meaning of the sentence by changing the tense of the verb. Eliminate it. E. This answer choice may seem to be in a similar tense to the original sentence, but it completely changes the meaning of the sentence. The original sentence explains that the friend is playing in the backyard right now. This answer choice could simply mean that the friend sometimes, often, or even rarely plays in the backyard. Eliminate it.
0.961069
Should India play both wristspinners and both allrounders? Who are the back-ups for Bumrah and Bhuvneshwar? Fitness and form permitting, Jasprit Bumrah and Bhuvneshwar Kumar will be the frontline fast bowlers in the Indian XI at the World Cup. Ideally, they would also want a third seamer. Hardik Pandya held that job until he injured his back in the Asia Cup in September and the team management is likely to be careful about overworking him. With India set to include four fast bowlers in the 15-man World Cup squad, Mohammed Shami and Khaleel Ahmed are currently first in line as back-ups to Bumrah and Bhuvneshwar. Khaleel made an immediate impression with his ability to swing the ball during his debut, but he has largely only bowled in the subcontinent so far. And in the three matches that he did play outside of Asia - the T20Is in Australia - he couldn't get sideways movement and his speeds kept dropping to the low to mid 130kph. He'll want to rectify that. Shami, meanwhile, has the experience, the high speeds, and, more importantly, good form since the England tour last year. Additionally - even though the white ball barely swings - he has the knack of producing reverse which can come in handy during the middle overs. In case any of the frontline quicks are sidelined, the inclusion of Mohammed Siraj for the Australia and New Zealand series suggests he will be the back-up for the back-ups. Siraj can swing the ball both ways at high speeds, which has probably helped him leapfrog Umesh Yadav in the pecking order. Kuldeep Yadav and Yuzvendra Chahal are match-winners for Virat Kohli, so expect them to feature in the XI consistently. However, playing both of them means India's tail gets stretched. This could be sorted if they play both their allrounders - Pandya and Kedar Jadhav - with Bhuvneshwar coming in at No. 8.
However, with Joe Root's men unravelling the mystery of Kuldeep in England last year, India might be tempted to field just one wristspinner and bring in a fingerspinner as support. Ravindra Jadeja, who is part of the squads for the limited-overs legs of the Australia and New Zealand series, is the best fielder in the squad and can be a handful on slow pitches. He also provides India the option of batting till No. 8. This, though, would mean the selectors having to drop KL Rahul (the third opening option) or carry just three fast bowlers. How to fit in Dinesh Karthik? He might be the second wicketkeeper in the squad, but Dinesh Karthik is also a back-up for Ambati Rayudu at No. 4. Virat Kohli is convinced Rayudu provides the right balance in the middle order and has the temperament to boot. But Karthik has the power-hitting ability to change the nature of a game - and very quickly at that. If he can quiet his mind, he has the ability to marshal the lower order in the event of an early collapse and light up the final 15 overs of the innings with fireworks in the company of the other allrounders. Besides, India have to give Karthik some game time so that he is ready to replace MS Dhoni, should the need arise.
0.947226
Do you know that most people, whether they care to admit it to themselves or not, carry within them a needy inner child that is responsible for feelings of insecurity, low self confidence, a fear of being alone, the need to be loved and taken care of, impulsive decisions, controlling and manipulative behaviours, all types of addictive behaviours, co-dependent relationship patterns, low self esteem and self worth, lack of self care, and ultimately for sabotaging one's chances of fulfillment and success in life? Do you know many believe that this "inner child" is a) an important part of them and makes up who they are and b) must try to embrace, love or soothe it in order to make themselves feel whole, mature, and capable adults? The concept of the "inner child" came into vogue in the 1970s when it was noticed that individuals' behaviours, feelings and personality characteristics would "switch" in certain situations much like they were assuming another "role". In the well-known book "Games People Play" by Eric Berne, M.D., he delineates how individuals without realizing it assume many roles with others, some of which include "needy child" roles. A decade ago while researching the nature of negative memories it was learned that these so-called "roles" were generated around thematically related negative memories (of say being abused, rejected, unloved, neglected, unwanted, abandoned, humiliated, bullied, etc.) The "roles" themselves in effect are buried deep in the subconscious mind/body and tend to get re-triggered automatically and much beyond the person's control whenever some event in their current life resembles the old original traumatic/negative event(s). Sadly this effectively sends the person into a hypnotic trance-like state where they find themselves feeling and behaving, say, like a scared, weak, helpless, vulnerable, insecure, needy child who then tends to dominate the person's conscious mind and freewill.
As I am sure you can see, feeling and behaving like a child is not attractive to others when the person is in fact physically an adult with adult obligations. This lack of "self control" only adds to lower feelings of self esteem, a sense of uncertainty about one's self, a fear of taking on new responsibilities, and essentially makes one feel defective, inadequate and like withdrawing from life. Can this be remedied, you ask? Absolutely! What was also discovered a decade ago and which goes against all traditional views on how to address these inner "roles" is that a) they are actually foreign intruders b) have no right living/residing inside of you and c) can be completely and permanently deleted/purged with Master’s Solution Series: Emotional Child. Purging these intruders is accomplished simply by erasing the negative emotions from memories associated with them. This feels empowering and freeing and is akin to banishing squatters who have been living in your house while you were away on vacation! It also helps restore one's sense of adequacy, self esteem, self worth, self confidence, self assuredness, self trust, self image, and inner peace, peace of mind, clarity, courage, strength, resilience and joy for life. What's more it is also what it means to become fully conscious and enlightened! So if you are one of those people who has felt hijacked by internal forces that feel destructive and demoralizing and you would like to reclaim control and ownership of your mind, body and life, kindly get the series of Master’s Solutions: Emotional Child. Thank you for Master’s Solutions. I feel alive now. "wow this really resonated with me, thanks, needed to hear this."
0.999982
Shall the Town negotiate a lease agreement with the MBTA, including language indemnifying the Town from environmental liability associated with trail construction, for a rail-to-trails conversion of the railroad tracks between Harbor Village and Depot Street in Townsend center for recreational purposes? This question is non-binding, and any decision to authorize construction would require an additional town vote. This question asks whether the town should take the next step in the process of pursuing a rail trail. The MBTA is offering to lease their land to Townsend for 85 years for one dollar. There is concern about environmental liability, so the question specifically states that the town would not sign the lease unless that issue was dealt with. This question does not make any final decision about whether to build a rail trail, but this question's passage is critical if the project is to continue. A "Yes" vote supports pursuing a rail trail in Townsend, and a "No" vote rejects the trail.
0.987833
Fantasy has taken Peter Jackson from cult fame to Hollywood force. His handling of Tolkien's best-known tale has put Frodo Baggins alongside James Bond and Luke Skywalker as one of the most successful ongoing stories in movie history, and with good reason. The Fellowship of the Ring is the pot of gold that fans of the fantasy genre have long hoped for: a thing of vision and magic, and respect.Ian Prior: Peter Jackson: From prince of splatter to lord of the rings. Sir Peter Jackson was born in Wellington, New Zealand, on Halloween in 1961. He grew up in Pukerua Bay, near Wellington. He began making movies at an early age using his parents' Super 8 camera. At 17, he left school and, after purchasing a 16mm camera, began shooting a science fiction comedy short, which 3 years later had grown into a 75-minute feature called Bad Taste. In 1986, Sir Peter quit his job as an apprentice photo engraver after he received his first grant from the New Zealand Film Commission of $5,000. Bad Taste, his first feature film, was released two years later. He wrote, photographed, edited, did make-up and special effects and starred in the film. Bad Taste has become an indie movie classic. The collaborative creative partnership between Sir Peter and his wife Fran Walsh has been at the centre of Sir Peter's career in film. Fran and Sir Peter co-wrote Meet the Feebles in 1990. This was one of three movies created with support from producer Jim Booth, who also assisted the production of Braindead and Heavenly Creatures. Braindead won 16 international science fiction awards, including a Saturn Award. Sir Peter continues to work closely with Fran Walsh, with whom he shares writing and producing credits for many of his films. Sir Peter received widespread acclaim for Heavenly Creatures, which received an Academy Award nomination for Best Screenplay in 1994. Sir Peter's first box-office hit was The Frighteners, starring Michael J Fox, which was released in 1996.
Sir Peter took on the role of producer with the television documentary Forgotten Silver, which he also co-directed. The film featured in the film festival circuit. In 2009, Sir Peter produced the Academy Award nominated science fiction drama District 9, directed by Neill Blomkamp. He is a producer on Steven Spielberg's Tintin and he will also direct the second of the three planned films. Sir Peter made history with The Lord of the Rings trilogy by becoming the first person to direct three major feature films simultaneously. The Fellowship of the Ring, The Two Towers and The Return of the King were nominated for and collected a slew of film awards from around the globe, including 17 Academy Awards, 12 British Academy of Film and Television Awards and four Golden Globes. It was for The Return of the King that Sir Peter received his most impressive collection of awards. This included three Academy Awards (Best Adapted Screenplay, Best Director and Best Picture), two Golden Globes (Best Director and Best Motion Picture - Drama), three BAFTAs (Best Adapted Screenplay, Best Film and Audience), a Directors Guild Award, a Producers Guild Award and a New York Film Critics Circle Award. After completing The Lord of the Rings trilogy, Sir Peter directed, wrote and produced King Kong for Universal Pictures. The film won three Oscars. In the 2010 Queen's New Year's Honours List he was made a Knight Companion of the Order of New Zealand for his services to the film industry. He received an Arts Foundation Icon Award in 2011. The Icon Award honours extraordinary New Zealand artists who have made a significant impact on their chosen art form. Sir Peter was recognised for his contribution as a leader in New Zealand film. The Award is considered the Arts Foundation's highest honour and is limited to a living circle of twenty artists. In 2012 Sir Peter received an ONZ (Additional Member of the said Order) in the Queen's Birthday Honours list. This is New Zealand's highest honour.
Jackson directed The Hobbit Trilogy: An Unexpected Journey (2012), The Desolation of Smaug (2013), and The Battle of the Five Armies (2014), which was nominated for and won multiple awards. Sir Peter has a special interest in WWI memorabilia and is the proud owner of a Sopwith Camel biplane. Sir Peter lives in Wellington with his partner Fran Walsh and their two children. The Hobbit - View the four trailers for Sir Peter Jackson's 'The Hobbit'. Peter Jackson (Film maker) featured on TVNZ 7's broadcast of 'The Artists', produced in partnership with the Arts Foundation following the 2011 Arts Foundation Icon Awards. Sir Peter Jackson directs the film adaptation of the novel The Lovely Bones, written by Alice Sebold in 2002. District 9 - produced by Sir Peter Jackson. An extraterrestrial race forced to live in slum-like conditions on Earth suddenly find a kindred spirit in a government agent that is exposed to their biotechnology. In 2005, Sir Peter Jackson directed the remake of the 1933 film King Kong. View the trailers for Peter Jackson's Lord of the Rings trilogy, The Fellowship of the Ring, The Two Towers and Return of the King. Sir Peter Jackson's first box-office hit, featuring Michael J Fox. Sir Peter Jackson co-wrote and directed 'Heavenly Creatures', based on the 1954 murder of Honora Parker by her daughter Pauline Parker and friend Juliet Hulme. Sir Peter Jackson does adults only puppet comedy. The Adventures of Tintin wins Best Animated Film award at Golden Globes; Awarded ONZ (Order of New Zealand) - New Zealand's highest honour - for services to New Zealand. Jackson directs the first of The Hobbit Trilogy, An Unexpected Journey.
The second in The Hobbit series, The Desolation of Smaug, is released. The final in The Hobbit series, The Battle of the Five Armies, is released.
0.924334
reclaiming the restoration: THE COMPONENTS OF FAITH - and on having faith "not to be healed" THE COMPONENTS OF FAITH - and on having faith "not to be healed" As I'm rereading the Book of Mormon from a different angle, I rediscover a great deal of verses. They seem to have become more powerful and more interesting, since the last, or first time I noticed them. Below, I've parted the concepts, color coded them and also put in a bonus verse from 1 Ne. 2. as well in times of old as in the time that he should manifest himself unto the children of men. For he is the same yesterday, today, and forever; and the way is prepared for all men from the foundation of the world, if it so be that they repent and come unto him. Faith is not a result, but the intent of your heart and mind, for if you, with hope, focus your mind and heart on Jesus, the veracity of Him will be shown you. Faith is the combination of your belief, hope/desire and action. You believe that you will reap as a consequence of sowing, with the hope of reaping a benefit from sacrificing seeds for sowing (seeds which you could have consumed) you perform the actual sowing and then look forward to the reward with the eye of faith (that is, the combined effect of your belief, hope and action). Remove any of the ingredients of faith, and it's barren and unfruitful. Therefore, it's not weird that James will show his faith through his works (James 2:18). He understood that without works, there is no faith, but mere belief. For faith to be fruitful and yield dividend, it has to be focused on something that is true. Faith can be had and expressed in relation to agricultural practices (evidence of this is abundant), or in summoning evil spirits (ample evidence of this could be given from users of ouija boards), or in God and Jesus Christ (until you experience God yourself, even though there's a lot of it, it's anecdotal).
Nephi believed the Lord was able to make it all known unto him, he desired it (that is, he hoped that he would be the beneficiary of the Lord using this power of revealing the mysteries of God) and he did the work (in this case, pondering; I guess Nephi previously had filled his mind with things to ponder about). How can faith become more of a principle of power in my life? What traditions and unbelief must I shed before this can happen? Lord, have mercy and grant me real faith! A side note: David Bednar taught that we should have "faith not to be healed". I guess that is an idea he got from D&C 42: "And again, it shall come to pass that he that hath faith in me to be healed, and is not appointed unto death, shall be healed." This talks about dying, and that when God has decreed that someone's time is up, it's up! This doesn't talk about being healed when you're sick, it talks about trying to postpone your "expiration date". God won't allow that! But, this doesn't say that God won't let you be healed or that he has a purpose with you being sick. The biggest purpose of giving us sickness has to be to grow our faith, and what would grow it more than being healed by it? Clinging to this erroneous idea and also believing that God won't heal you even though you have true faith in him can easily become an excuse and a way to justify and explain away your lack of faith and unbelief. We give ourselves the opportunity to say that God wants us to learn something that we haven't learned and we will never have to own up to how far away we are from having real faith. I'm far from having this kind of faith, but the ability to see a want is a good beginning (sometimes, it's more soothing and comforting to get a correct diagnosis than to be actually healed!). That was by far the best of your writings yet. I can feel your sincerity and love for the savior. The light is shining bright. Thank you for your eloquently written insights. I have been edified.
Amazing how early in the Book of Mormon we learn that anyone can have the same experience Nephi had. It's not enough for him to share his experience, but he makes sure to teach why he got it and emphasizes that God is the same in his day as in any day, whenever Nephi's words are read.
0.999764
I am not an Ontario high school student. Where can I find information about scholarships and bursaries available to me? Many universities offer scholarships for students applying from outside Ontario and outside Canada, as well as upper-year and mature students. To find more information about this financial aid, contact the universities directly.
0.978046
It’s been several months since the Russian aggression against Georgia. Though the media has entirely abandoned this story, some of us continue to think about and discuss the implications of the situation, which as far as I know remains fairly tense and problematic. A friend argued that in invading Georgia, Russia is only doing the same thing the US has done any number of times for oppressed countries. The rebels of South Ossetia are like the 13 colonies of America in the Revolutionary War. – Russia chose this summer to invade Georgia, though South Ossetia has had its share of rebels since the Soviet Union fell. This summer was a time when world attention was on other things. The invasion happened just before the start of the Olympic Games. Economic times were hard and more pressing to most of the world than foreign affairs. America was and continues to be engaged in a close and important election, while its sitting government has proved impotent. – Only after Georgia sought to join the NATO alliance did Russia act against them. Russia is less interested in revolutionaries than it is in bullying smaller nations out of alliances with the democratic West. Russia is engaged in a new Cold War with the West, though the West seems unaware of this development. Russia is testing the strength of the NATO nations’ friendship with Georgia, much as Hitler did by stepping into Austria, the Sudetenland, and Czechoslovakia before the free world decided with Poland that enough was enough and Europe was in danger. – Russia has economic/oil interests in disabling Georgia or in annexing the small country. Georgia has the only oil pipeline to northeastern European countries that is outside of Russian control. Russia wishes to control those NE countries, many of which were formerly part of the Soviet Empire. Controlling the supply of such an essential resource essentially holds hostage any dependent nations. – Russia is busy forming an alliance with Iran and the Islamic states.
Georgia is in the way. – The revolutionaries in South Ossetia are Islamic troublemakers, not interested in freedom. If they wanted to be free, they would want to be independent, not to join Russia. Like Iran supplying insurgents in Iraq with weapons and training, so has Russia been backing these rebels for over a decade. – South Ossetia says that Georgia’s rule was oppressive. There are three possible explanations for this: 1) Georgia is abusing its power and depriving South Ossetians of their rights based on ethnicity. If that is the case, the best first move is a demonstration of these “atrocities” to the world. America did this with its Declaration of Independence. 2) Georgia is engaged in a military conflict begun by the rebels themselves. A sovereign nation has the right and responsibility to quell insubordination within its borders. 3) South Ossetians are lying in order to justify their rebellion. – Georgia is a small country still wobbling towards maturity as a democratic republic. In the interest of discouraging the return of Communism or totalitarianism, the US is justified in making alliances with this nation. It was proposed as part of a potential NATO treaty that Georgia allow the US to post military technology on their land, directed at the aggressively posturing Russian nation. Many young nations with democratic ideals look to the US (successful in these very pursuits) for help and example in establishing their governments. – If the US or any other nation has a defense treaty with Georgia, it must be honored lest the validity of any treaty made by said nations be weakened and doubted. A treaty is like a contract, each nation receiving a needed good or service. One party cannot withdraw from its agreement. – The free world must take a strong stand against Russia lest they, growing confident, invade more countries in Europe and Asia. – If the US has unjustly invaded other countries, this is no argument for Russia to do the same.
However, in many cases the US has invaded countries in order to honor treaties it has with threatened nations. In other cases, the US has engaged in preemptive or retributive strikes against countries whose military/weapon technology has threatened us directly. – Whether the US should militarily support Georgia is dependent on at least two things: Have we made any official promise to Georgia to do so? and Are we nationally threatened by this move Russia is making? In conclusion, I believe that Russia’s motives are suspect in a large way, its methods are inappropriately aggressive, and its response to world denouncements chillingly indifferent or dishonest. Georgia is a little former Soviet ‘republic’ with ethnic tensions, economic precariousness, and threatening neighbors. Whether right or wrong in its treatment of the northern province, the country ought to be esteemed as a sovereign nation, not as a child-state of Russia. As such it has the right to international relations and to addressing its own civil order. The US needs to pay more attention to world events, especially Russia. Russia is quietly rebuilding its empire, reducing the freedoms within its boundaries. It is also allying itself, including through the sale of weapons, with professed enemies of the United States. Watching is not enough; the US needs to take a stand. In this age of global technology, we must be very careful lest those who wish to destroy us get the weapons capabilities of doing so. We are engaged in a global war on terror, declared first by the terrorists on us. Failure to engage our enemies means defeat. We as Christians need to give careful thought to prophecy and the roles of countries such as Russia, Iran, and Iraq. It is written in the Bible that they who bless Abraham and his heirs will be blessed. Essential for our preservation in the world is that we side with Israel, not only in word, but in diplomacy and force. 
Also important at this time is evangelism: in America, in the closing country of Russia, and in the Middle East. I believe biblical prophecy predicts that a revival is at hand.
0.937666
Day trading is speculation in securities, specifically buying and selling financial instruments within the same trading day, such that all positions are closed before the market closes for the trading day. Traders who trade in this capacity with the motive of profit are therefore speculators. The methods of quick trading contrast with the long-term trades underlying buy and hold and value investing strategies. Day traders exit positions before the market closes to avoid unmanageable risks and negative price gaps between one day's close and the next day's price at the open. Day traders generally use margin leverage; in the United States, Regulation T permits an initial maximum leverage of 2:1, but many brokers will permit 4:1 leverage as long as the leverage is reduced to 2:1 or less by the end of the trading day. In the United States, people who make four or more day trades within five business days are termed pattern day traders and are required to maintain $25,000 in equity in their accounts. Since margin interest is typically only charged on overnight balances, the trader may pay no interest fees for the margin benefit, though still running the risk of a margin call. Margin interest rates are usually based on the broker's call. Some of the more commonly day-traded financial instruments are stocks, options, currencies, and a host of futures contracts such as equity index futures, interest rate futures, currency futures and commodity futures. Day trading was once an activity that was exclusive to financial firms and professional speculators. Many day traders are bank or investment firm employees working as specialists in equity investment and fund management. Day trading gained popularity after the deregulation of commissions in the United States in 1975, the advent of electronic trading platforms in the 1990s, and with the stock price volatility during the dot-com bubble.
Some day traders use an intra-day technique known as scalping that usually has the trader holding a position for a few minutes or even seconds. Because of the nature of financial leverage and the rapid returns that are possible, day trading results can range from extremely profitable to extremely unprofitable, and high-risk profile traders can generate either huge percentage returns or huge percentage losses. Because of the high profits (and losses) that day trading makes possible, these traders are sometimes portrayed as "bandits" or "gamblers" by other investors. The common use of buying on margin (using borrowed funds) amplifies gains and losses, such that substantial losses or gains can occur in a very short period of time. In addition, brokers usually allow bigger margin for day traders. In the United States for example, while the initial margin required to hold a stock position overnight is 50% of the stock's value due to Regulation T, many brokers allow pattern day trader accounts to use levels as low as 25% for intraday purchases. This means a day trader with the legal minimum $25,000 in his account can buy $100,000 (4x leverage) worth of stock during the day, as long as half of those positions are exited before the market close. Because of the high risk of margin use, and of other day trading practices, a day trader will often have to exit a losing position very quickly, in order to prevent a greater, unacceptable loss, or even a disastrous loss, much larger than his original investment, or even larger than his total assets. Originally, the most important U.S. stocks were traded on the New York Stock Exchange. A trader would contact a stockbroker, who would relay the order to a specialist on the floor of the NYSE. These specialists would each make markets in only a handful of stocks.
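The margin arithmetic described above (Regulation T's 50% overnight initial margin, the 25% intraday level extended to pattern-day-trader accounts, and the $25,000 minimum) reduces to a one-line calculation. The following is only an illustrative sketch; the function name is mine, not any broker's or regulator's API:

```python
def buying_power(equity, margin_requirement):
    """Maximum position value supportable at a given margin requirement.

    margin_requirement is the fraction of a position's value that must be
    covered by the trader's own equity (0.50 under Regulation T overnight,
    as low as 0.25 intraday for pattern-day-trader accounts).
    """
    return equity / margin_requirement

# The legal minimum $25,000 pattern-day-trader account supports
# $100,000 of stock intraday -- the 4x leverage quoted in the text:
assert buying_power(25_000, 0.25) == 100_000

# Overnight, Regulation T's 50% initial margin halves that:
assert buying_power(25_000, 0.50) == 50_000
```

This is also why, as the passage notes, half of a fully leveraged intraday position must be exited before the close: only then does the remaining position fit within the overnight requirement.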
The specialist would match the purchaser with another broker's seller; write up physical tickets that, once processed, would effectively transfer the stock; and relay the information back to both brokers. Before 1975, brokerage commissions were fixed at 1% of the amount of the trade, i.e. purchasing $10,000 worth of stock cost the buyer $100 in commissions, and selling cost the same 1%, meaning a trade had to move more than 2% before it produced any real gain. Financial settlement periods used to be much longer: before the early 1990s at the London Stock Exchange, for example, stock could be paid for up to 10 working days after it was bought, allowing traders to buy (or sell) shares at the beginning of a settlement period only to sell (or buy) them before the end of the period, hoping for a rise in price. This activity was identical to modern day trading, but for the longer duration of the settlement period. But today, to reduce market risk, the settlement period is typically two working days. Reducing the settlement period reduces the likelihood of default, but this was impossible before the advent of electronic ownership transfer. The systems by which stocks are traded have also evolved, the second half of the twentieth century having seen the advent of electronic communication networks (ECNs). These are essentially large proprietary computer networks on which brokers can list a certain amount of securities to sell at a certain price (the asking price or "ask") or offer to buy a certain amount of securities at a certain price (the "bid"). ECNs and exchanges are usually known to traders by three- or four-letter designators, which identify the ECN or exchange on Level II stock screens. The first of these was Instinet (or "inet"), which was founded in 1969 as a way for major institutions to bypass the increasingly cumbersome and expensive NYSE, and to allow them to trade during hours when the exchanges were closed.
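The pre-1975 commission arithmetic above is worth making explicit: with 1% charged on each leg of a round trip, roughly 2% of the position goes to commissions, so a trade had to move more than 2% to net anything. A small illustrative check (the function is hypothetical):

```python
def round_trip_profit(buy_value: float, pct_gain: float,
                      commission_rate: float = 0.01) -> float:
    """Net profit on a buy-then-sell round trip, paying a fixed-rate
    commission on each leg (1% per leg, as before the 1975 deregulation)."""
    sell_value = buy_value * (1 + pct_gain)
    commissions = commission_rate * (buy_value + sell_value)
    return sell_value - buy_value - commissions

# A $10,000 purchase pays ~$100 to buy and ~$102 to sell after a 2% rise:
print(round_trip_profit(10_000, 0.02))  # ≈ -2: a 2% move still loses money
print(round_trip_profit(10_000, 0.03))  # ≈ 97: a real gain needs more than 2%
```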
Early ECNs such as Instinet were very unfriendly to small investors, because they tended to give large institutions better prices than were available to the public. This resulted in a fragmented and sometimes illiquid market. The next important step in facilitating day trading was the founding in 1971 of NASDAQ—a virtual stock exchange on which orders were transmitted electronically. The move from paper share certificates and written share registers to "dematerialized" shares and computerized trading and registration required not only extensive changes to legislation but also the development of the necessary technology: online and real-time systems rather than batch; electronic communications rather than the postal service, telex or the physical shipment of computer tapes; and the development of secure cryptographic algorithms. These developments heralded the appearance of "market makers": the NASDAQ equivalent of a NYSE specialist. A market maker has an inventory of stocks to buy and sell, and simultaneously offers to buy and sell the same stock. Obviously, it will offer to sell stock at a higher price than the price at which it offers to buy. This difference is known as the "spread". The market maker is indifferent as to whether the stock goes up or down; it simply tries to constantly buy for less than it sells. A persistent trend in one direction will result in a loss for the market maker, but the strategy is overall positive (otherwise they would exit the business). Today there are about 500 firms that participate as market makers on ECNs, each generally making a market in four to forty different stocks. Without any legal obligations, market makers were free to offer smaller spreads on electronic communication networks than on the NASDAQ. A small investor might have to pay a $0.25 spread (e.g.
he might have to pay $10.50 to buy a share of stock but could only get $10.25 for selling it), while an institution would only pay a $0.05 spread (buying at $10.40 and selling at $10.35). Following the 1987 stock market crash, the SEC adopted "Order Handling Rules" which required market makers to publish their best bid and ask on the NASDAQ. Another reform was the "Small Order Execution System", or "SOES", which required market makers to buy or sell, immediately, small orders (up to 1,000 shares) at the market maker's listed bid or ask. The design of the system gave rise to arbitrage by a small group of traders known as the "SOES bandits", who made sizable profits buying and selling small orders to market makers by anticipating price moves before they were reflected in the published inside bid/ask prices. The SOES system ultimately led to trading facilitated by software instead of market makers via ECNs. In the late 1990s, existing ECNs began to offer their services to small investors. New ECNs arose, most importantly Archipelago (NYSE Arca), Instinet, SuperDot, and Island ECN. Archipelago eventually became a stock exchange and in 2005 was purchased by the NYSE. Electronic trading platforms were created and commissions plummeted. An online trader in 2005 might have bought $300,000 worth of stock at a commission of less than $10, compared to the $3,000 commission the trader would have paid in 1974. Moreover, the trader was able in 2005 to buy the stock almost instantly and got it at a cheaper price. This combination of factors has made day trading in stocks and stock derivatives (such as ETFs) possible. The low commission rates allow an individual or small firm to make a large number of trades during a single day. The liquidity and small spreads provided by ECNs allow an individual to make near-instantaneous trades and to get favorable pricing.
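Using the prices quoted in the example above, the cost of the spread to each class of trader can be sketched in a few lines (a hypothetical helper, for illustration only):

```python
def round_trip_spread_cost(ask: float, bid: float, shares: int) -> float:
    """Loss from buying at the ask and immediately selling at the bid."""
    return round((ask - bid) * shares, 2)

# Figures from the example above, for a 100-share round trip:
print(round_trip_spread_cost(10.50, 10.25, 100))  # small investor: 25.0
print(round_trip_spread_cost(10.40, 10.35, 100))  # institution: 5.0
```

For the same trade, the small investor gives up $25 to the spread where the institution gives up $5, which is why narrower ECN spreads mattered so much to retail day traders.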
In March 2000, the dot-com bubble burst, and a large number of less-experienced day traders began to lose money as fast, or faster, than they had made during the buying frenzy. The NASDAQ crashed from 5000 back to 1200; many of the less-experienced traders went broke, although obviously it was possible to have made a fortune during that time by short selling or playing on volatility. In parallel to stock trading, starting at the end of the 1990s, several new market maker firms provided foreign exchange and derivative day trading through electronic trading platforms. These allowed day traders to have instant access to decentralised markets such as forex and global markets through derivatives such as contracts for difference. Most of these firms were based in the UK and later in less restrictive jurisdictions; this was in part due to the regulations in the US prohibiting this type of over-the-counter trading. These firms typically provide trading on margin, allowing day traders to take large positions with relatively small capital, but with the associated increase in risk. Retail foreign exchange trading became popular for day trading due to its liquidity and the 24-hour nature of the market. The following are several basic strategies by which day traders attempt to make profits. In addition, some day traders also use contrarian investing strategies (more commonly seen in algorithmic trading) to trade specifically against irrational behavior from day traders using the approaches below. It is important for a trader to remain flexible and adjust techniques to match changing market conditions. Some of these approaches require short selling stocks; the trader borrows stock from his broker and sells the borrowed stock, hoping that the price will fall and he will be able to purchase the shares at a lower price.
There are several technical problems with short sales - the broker may not have shares to lend in a specific issue, the broker can call for the return of its shares at any time, and some restrictions are imposed in America by the U.S. Securities and Exchange Commission on short-selling (see uptick rule for details). Some of these restrictions (in particular the uptick rule) don't apply to trades of stocks that are actually shares of an exchange-traded fund (ETF). Trend following, a strategy used in all trading time-frames, assumes that financial instruments which have been rising steadily will continue to rise, and vice versa with falling. The trend follower buys an instrument which has been rising, or short sells a falling one, in the expectation that the trend will continue. Contrarian investing is a market timing strategy used in all trading time-frames. It assumes that financial instruments that have been rising steadily will reverse and start to fall, and vice versa. The contrarian trader buys an instrument which has been falling, or short-sells a rising one, in the expectation that the trend will change. Range trading, or range-bound trading, is a trading style in which stocks are watched that have either been rising off a support price or falling off a resistance price. That is, every time the stock hits a high, it falls back to the low, and vice versa. Such a stock is said to be "trading in a range", which is the opposite of trending. The range trader therefore buys the stock at or near the low price, and sells (and possibly short sells) at the high. A related approach to range trading is looking for moves outside of an established range, called a breakout (price moves up) or a breakdown (price moves down), assuming that once the range has been broken prices will continue in that direction for some time. Scalping was originally referred to as spread trading.
Scalping is a trading style where small price gaps created by the bid–ask spread are exploited by the speculator. It normally involves establishing and liquidating a position quickly, usually within minutes or even seconds. Scalping highly liquid instruments for off-the-floor day traders involves taking quick profits while minimizing risk (loss exposure). It applies technical analysis concepts such as over/under-bought conditions, support and resistance zones, as well as trendlines and trading channels, to enter the market at key points and take quick profits from small moves. The basic idea of scalping is to exploit the inefficiency of the market when volatility increases and the trading range expands. Scalpers also use the "fade" technique: when stock values suddenly rise, they short sell securities that seem overvalued. Rebate trading is an equity trading style that uses ECN rebates as a primary source of profit and revenue. Most ECNs charge commissions to customers who want to have their orders filled immediately at the best prices available, but the ECNs pay commissions to buyers or sellers who "add liquidity" by placing limit orders that create "market-making" in a security. Rebate traders seek to make money from these rebates and will usually maximize their returns by trading low-priced, high-volume stocks. This enables them to trade more shares and contribute more liquidity with a set amount of capital, while limiting the risk that they will not be able to exit a position in the stock. The basic strategy of news playing is to buy a stock which has just announced good news, or short sell on bad news. Such events provide enormous volatility in a stock and therefore the greatest chance for quick profits (or losses). Whether news is "good" or "bad" must be determined by the price action of the stock, because the market reaction may not match the tone of the news itself.
This is because rumors or estimates of the event (like those issued by market and industry analysts) will already have been circulated before the official release, causing prices to move in anticipation. The price movement caused by the official news will therefore be determined by how good the news is relative to the market's expectations, not how good it is in absolute terms. Price action trading relies on technical analysis but does not rely on conventional indicators. These traders rely on a combination of price movement, chart patterns, volume, and other raw market data to gauge whether or not they should take a trade. This is seen as a "simplistic" and "minimalist" approach to trading but is not by any means easier than any other trading methodology. It requires a solid background in understanding how markets work and the core principles within a market, but the good thing about this type of methodology is that it will work in virtually any market that exists (stocks, foreign exchange, futures, gold, oil, etc.). It is estimated that more than 75% of stock trades in the United States are generated by algorithmic trading or high-frequency trading. The increased use of algorithms and quantitative techniques has led to more competition and smaller profits. Algorithmic trading is used by banks and hedge funds as well as retail traders. Retail traders can choose to buy a commercially available automated trading system or to develop their own automated trading software. Commissions for direct-access brokers are calculated based on volume. The more shares traded, the cheaper the commission. The average commission per trade is roughly $5 per round trip (getting in and out of a position). While a retail broker might charge $7 or more per trade regardless of the trade size, a typical direct-access broker may charge anywhere from $0.01 to $0.0002 per share traded (from $10 down to $0.20 per 1,000 shares), or $0.25 per futures contract.
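The per-share pricing described above scales linearly with volume, which is why direct access suits high-volume traders. A hypothetical illustration, using rates from the range quoted in the text:

```python
def per_share_commission(shares: int, rate: float) -> float:
    """Commission under simple per-share pricing (the rates used below are
    taken from the $0.01 to $0.0002 per-share range mentioned in the text)."""
    return shares * rate

print(per_share_commission(1_000, 0.01))    # ≈ $10 per 1,000 shares
print(per_share_commission(1_000, 0.0002))  # ≈ $0.20 per 1,000 shares
```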
A scalper can cover such costs with even a minimal gain. The numerical difference between the bid and ask prices is referred to as the bid–ask spread. Most worldwide markets operate on a bid-ask-based system. The ask prices are immediate execution (market) prices for quick buyers (ask takers) while bid prices are for quick sellers (bid takers). If a trade is executed at quoted prices, closing the trade immediately without queuing would always cause a loss because the bid price is always less than the ask price at any point in time. The bid–ask spread is two sides of the same coin. The spread can be viewed as trading bonuses or costs according to different parties and different strategies. On one hand, traders who do NOT wish to queue their order, instead paying the market price, pay the spreads (costs). On the other hand, traders who wish to queue and wait for execution receive the spreads (bonuses). Some day trading strategies attempt to capture the spread as additional, or even the only, profits for successful trades. Market data is necessary for day traders to be competitive. A real-time data feed requires paying fees to the respective stock exchanges, usually combined with the broker's charges; these fees are usually very low compared to the other costs of trading. The fees may be waived for promotional purposes or for customers meeting a minimum monthly volume of trades. Even a moderately active day trader can expect to meet these requirements, making the basic data feed essentially "free". In addition to the raw market data, some traders purchase more advanced data feeds that include historical data and features such as scanning large numbers of stocks in the live market for unusual activity. Complicated analysis and charting software are other popular additions. These types of systems can cost from tens to hundreds of dollars per month to access. 
In addition, in the United States, the Financial Industry Regulatory Authority and SEC further restrict the entry by means of "pattern day trader" amendments. Pattern day trader is a term defined by the SEC to describe any trader who buys and sells a particular security in the same trading day (day trades), and does this four or more times in any five consecutive business day period. A pattern day trader is subject to special rules, the main rule being that in order to engage in pattern day trading in a margin account, the trader must maintain an equity balance of at least $25,000. It is important to note that this requirement is only for day traders using a margin account.
I always thought that if you happened to be a person who was famous, the 1980s was the perfect decade to be a celebrity in - I have always felt, growing up, that in society - mostly in television, sport, politics etc - there really was a lot of "community spirit" that celebrities lived under - one happened to be famous for doing what they were good at, and because of being famous, they got paid a lucrative amount - not bad for an era where people were successful at making lots of money from the outset. A lot of famous people had their own niches - Steve Davis - snooker; Russell Grant - astrology; Ian Botham - cricket; Shakin' Stevens - pop music and wearing jeans; Claire Rayner - agony aunt; Paul Daniels - magic; David Bellamy - botany; Margaret Thatcher - politics; Jimmy Cricket - wearing his wellies the wrong way round; Brian Clough - Nottingham Forest, etc. You knew someone was a 1980s celebrity if at least one of the following applied: 1) They were portrayed as a Spitting Image puppet. 2) They were the star guest on This is Your Life (mostly hosted by Eamonn Andrews reading the Big Red Book), or they had featured as a guest on someone else's edition. 3) They sat on the TV-am sofa at least once. 4) They were a guest on Wogan at least once. 5) They were seen at least once in the audience of an edition of "An Audience With", unless they were the star guest on stage. 6) They were on the panel on editions of Blankety Blank or Punchlines - mostly the first one of those two shows. 7) They were on the panel of the Thames "What's My Line" (if they were not the star guest during the "blindfold" round). 8) They supported Margaret Thatcher at the 1983 and 1987 General Elections, and by implication they were Conservative Party supporters. 9) They were impersonated by Mike Yarwood (Rory Bremner didn't find his own niche properly until the early 1990s). 10) They were part of an "all-star cast" in a comedy drama or film. 11) They were featured in at least one outtake seen on It'll be Alright on the Night.
12) They lost most of their earnings in the 1992 recession, and became a "where are they now?" person. 13) They allowed Loyd Grossman to look around their house for Through the Keyhole. 14) They were continuity announcers for a whole month on Children's ITV, if they had presented or starred in a children's TV series on ITV at the time. 15) They had a novelty hit in the charts, mostly for charity (cf Russ Abbot; neil [sic] from The Young Ones; Keith Harris and Orville, etc). 16) They appeared in that year's Royal Variety Performance. 17) They performed as contestants in celebrity versions of game shows at Christmas. 18) They had a well-known catchphrase (which was probably repeated in the school playground). 19) They probably ended up doing local radio or regional news programmes by the late 1990s, Alan Partridge style. 20) They still perform ironically at Pontins or Butlins, birthday parties, and also at university dos. 21) They haven't been seen on TV since 1991, but they still perform in pantomimes, summer seasons and sea cruises every year. 22) They were famous for being in character as someone. 23) They had a prop sidekick (emu, ventriloquist's dummy, soft toy, dustbin, etc). 24) They appeared in TV commercials either as themselves or in character. 25) They are now in their 60s and 70s if they are still around these days. 26) They were implicated by Operation Yewtree nearly 30 years later (but enough said about that). 27) They had their own show on BBC Radio 2. 28) They supported Manchester United (says a Nottingham Forest supporter). 30) That's it. (cf Private Eye). So many famous people fall into one of those categories, no doubt - if I was looking back at the 2010s, no doubt that taking part in reality TV programmes such as Strictly Come Dancing, or I'm a Celebrity... Get Me Out of Here! would feature prominently in the above list.
As I grew up in the 1980s as a child, the familiar names and faces that we saw on TV, be it as actors, game show hosts, newsreaders, darts players, etc, felt like a community that nearly everyone in Britain was familiar with, and it made the decade so special and magical, especially the Christmases where we had Christmas specials that they appeared in, mostly for charity. It was so great to hear or see so many 1980s celebrities in "scaled down" roles in the late 1990s and 2000s, which were so welcome, such as Sarah Kennedy from Game for a Laugh doing the early show on Radio 2, or Gordon Burns of The Krypton Factor seen reading the BBC North West news. I would even include people like Margaret Thatcher, Ronald Reagan and Robert Runcie within that scope as well - they were the "straight" people to most of the others. The TV-am sofa and Wogan were such a huge showcase for a lot of these stars, and I suppose that the occupational hazard was the fact that many of them got a bit of over-exposure at the time as a result. I might be looking through my Dollond & Aitchison rose-tinted spectacles at my childhood from over 30 years ago, but the famous people who we saw on TV were amazing, and to think that if they had not become famous in the first place, we would have missed a good chunk of all this - and many of them weren't even discovered on talent shows in the first place. I bet that the 1980s was the best decade to be famous in - or was it? I guess everyone knew ALL the stars of the time; you couldn't have mentioned a celeb in those days without the other person knowing who you meant. I guess we had a much narrower realm of popular entertainment then: 4 TV channels (well, 3 for most of it) which seemed to programme their shows around each other rather than compete, so everyone watched the same stuff pretty much. Whereas now there are so many channels and forms of entertainment that there is no way everyone can keep up with everyone on them.
Celebs were more than celebs back then, they were "household names" as we called them, and as you said, it did seem to bring people closer together rather than fragmented as we are today. Was a great time to be alive, I'm so glad I was part of it. Appear on a kids' show (the Saturday morning ones mostly, or the likes of Crackerjack) & have the mickey taken out of them by the hosts & pretend they were in on the joke. Appear on Pebble Mill At One & badly sing an easy listening version of a current hit. If they were a sports personality they had a summer filler series giving tips on how to perform better in their particular sport. Become famous on a BBC show, then defect to ITV & pretend it's not for a bigger pay cheque. Have Harry Enfield base a character around them in the early 1990s. Appear on a chat show & baffle an American fellow guest who has no idea who they are, while acting like a big name, leading to an awkward moment for the host. In the late 1980s, jump on the green issues bandwagon by getting their car converted to unleaded & making sure a news crew is recording it. In the mid 1990s, appear on Fantasy Football League to boast about being the sole celebrity fan of their home club, but can't stand being sent up by the hosts. Be a winner of the Golden Egg Award on The Late, Late Breakfast Show for an on-screen ****-up on a BBC show. Become a Radio 1 DJ & an early victim of the Bannister axe, only to end up on Virgin Radio or a bigger local station a few months later. Get in a court battle with a tabloid paper & lose. Get a brief chat show with guests even lower down the fame ladder & a baffling premise that turns the viewers away. Have a short-lived ITV sitcom where they play themselves & the plot lines & jokes are wobblier than the sets.
Unfortunately you don't have to do an awful lot, or even have to be good at whatever you do, to be labelled a sleb these days; if you are happy to forego your dignity in the name of entertainment (aka Love Island, I'm a Celebrity, Big Brother etc) then you too can have your fifteen minutes. Getting a boost of publicity during a TV strike when the usual staff wouldn't cross a picket line. Being a "professional" person from where they were born, ie a professional Northerner, Cockney etc. Coasting on the success of a more talented parent / sibling / friend. That's right - I preferred those sort of presenters who probably did jobs such as teaching prior to entering into television - the "tweed jacket with leather patches" people, if you like. David Bellamy and Johnny Ball are other examples. This is how Blue Peter has changed since the Christopher Trace / Singleton / Noakes era of the 1960s - from the Mark Curry era onwards, I doubt that many of them would fit into that category. Although they are not presenters, many of the actors who play Grange Hill teachers in character could even fit into that category. It's another reason why I enjoy the Royal Institution Christmas Lectures between Christmas and New Year - the lecturers often resemble that sort of person, although sadly, even that seems to lack a little bit in recent years. It disgusts me that it isn't popular enough for BBC 2 anymore. Great list, Richard! Especially the one referring to leaving the country if Labour won in 1997 - Jim Davidson, Frank Bruno, Andrew Lloyd Webber and Paul Daniels were four celebrities who mentioned that they were going to do just that, I believe. I think that as the list was from a 1980s perspective, 1997 was two General Elections away into the future to consider those things! In addition to being a guest on Wogan as I said in my opening gambit, they could have also hosted it for a week when Tel was on one of his dozen annual holidays!
And they could have been the star castaway on Desert Island Discs, which would have been presented by Michael Parkinson after Roy Plomley's death - which reminds me of all of those guests on Una and Lionel's team on Give Us a Clue. I have always assumed that all celebrities (B and C-list at least) all know each other, and they often professionally "bump into each other" many times as guests on different shows as well as working together - it's just the ordinary members of the public who seem to be anonymous to them - a bit like the "teacher and pupil" relationship, in a way.
Get the most from your body. No matter how old, young, healthy or active we are, caring for our bodies and getting the nutrients we need is essential. Despite our best intentions, our daily diet may not contain the right levels of every nutrient. This is where dietary supplements can help, as they’re a great way to ensure we’re fueling our bodies right. Vitamins, minerals and other supplements help support overall health and vitality, and are key for the body’s normal growth and development.
"to get the heads out on" I would like to know if there is anyone who can help me with this phrase: "to get the heads out on". I will provide you with the context. Two scientists are examining a tortoise. One of the says: "It is a shy little thing, isn't it?" The other one replies:"Yeah, very hard to get the heads out on these species."
More healthy reasons to put cabbage in your diet. 1. It’s hardy. Originating along the Mediterranean Sea, its thick and succulent leaves protected it from the sun and salt. But cabbage grows equally well in the cold, thriving at temperatures barely above freezing. 2. When chopped, it yields a pile of slaw-ready shreds loaded with vitamins C and K, fiber and detoxifying sulfur compounds. Red cabbage also boasts anthocyanins, an antioxidant thought to keep your heart healthy and brain sharp. 4. Young scientists can use red cabbage juice in their kitchen “labs” to gauge a pH level: add vinegar (an acid) and it will turn red, add baking soda (a base) and it will turn blue. 5. It’s dirt cheap. At 62¢ per pound, 1/2 cup shredded cabbage costs about 6¢, so you don’t have to fork over more than a quarter to make a salad for four.
For those of us who own Wiis, what games do we recommend for new Wii owners (that are actually gamers)? I'm not sure I'd recommend Fire Emblem: Radiant Dawn though, unless you're a Fire Emblem fan already. Radiant Dawn is pretty difficult even for people used to the series. Or at least I found it so, having to set the difficulty to normal even though I beat the GameCube Fire Emblem on hard. Are there any must-play action RPGs? Does Super Paper Mario count? Because I enjoyed that game greatly. There's also Mario Kart Wii (not sure how active online is anymore though) and if you like dungeon crawlers, I really enjoy Final Fantasy Fables: Chocobo's Dungeon. Probably one of the best JRPGs to come out this generation. Neo-Geo, SNES, NES, TG16, MSX, NGPC, and Genesis libraries through emulation. -New Super Mario Bros Wii. Nothing like answering a question 5 years later ... Retro - the best ARPG on the Wii without Zelda in the title is Monster Hunter Tri. It's a sub-$10.00 game and it's really good. Come to think of it, there's a lot of great Wii games that are fairly reasonable, which is kinda nice since most of the Mario, Zelda, Metroid, etc. Wii games are so expensive. It balances out well. YouTube on the Wii?
Lt. Weber: Where the hell's my man? Keith Ripley: Lieutenant... your personal life, none of my business. Oh dear, indeed. What could have been one of the best heist movies in recent history turned out to be (in my humble opinion) a serious let-down. Morgan Freeman and Antonio Banderas, a potential chemistry that on paper seemed a sure-fire hit, but, and there's always a but, it just didn't materialize; whether it was the shortcomings of the dialogue or the direction I can't say, but it just didn't sizzle. When you think of heist movie partnerships, you have to be very special to match up against De Niro and Norton as seen in 'The Score'. Freeman and Banderas - don't... sorry. For me this was the most overwhelming shortfall of the film; the two main protagonists just don't bounce off each other, and as a consequence the characters fail to draw you in. As for the plot: Freeman - going for the last big score before retirement; Banderas - hoping to improve his standing in the criminal fraternity whilst making a lot of money in the process. Formulaic, with a twist-at-the-end that was disappointingly obvious because the writers thought it had to have a twist-at-the-end. When will they ever learn that you can tell a story without having the ubiquitous twist - the bigger surprise would have been - oh, wait for it - no surprise. Unfortunately, this movie will go down as another example of Hollywood doing what Hollywood does: very little thought and effort put into an idea that could have been so much greater than the sum of its parts. I gave this movie a 5 for the attempt at entertaining us; it would have been a lot higher if only it had lived up to its potential.
DOES THE ECONOMIC LOSS RULE BAR AN INSURED'S SUIT AGAINST AN INSURANCE BROKER WHERE THE PARTIES ARE IN CONTRACTUAL PRIVITY WITH ONE ANOTHER AND THE DAMAGES SOUGHT ARE SOLELY FOR ECONOMIC LOSSES? The crux of the underlying lawsuit was essentially a claim by Tiara Condominium Association, Inc. (Tiara), sounding in tort and contract, against its insurance broker, Marsh & McLennan Companies, Inc. (Marsh), for failing to provide adequate professional advice: Marsh had advised Tiara that its loss limits coverage was per occurrence, and relying upon that advice, Tiara proceeded with expensive remediation efforts. Ultimately, it turned out that the insurer, Citizens Property Insurance Corporation (Citizens), claimed that the loss limit was not as advised by Marsh, and that it was $50 million in the aggregate, not per occurrence. While a settlement between Tiara and Citizens ensued in an approximate amount of $89 million, the amount was less than the $100 million plus expended by Tiara. Tiara filed suit against Marsh, alleging (1) breach of contract, (2) negligent misrepresentation, (3) breach of the implied covenant of good faith and fair dealing, (4) negligence, and (5) breach of fiduciary duty. Summary judgment was entered as to all counts except negligence and breach of fiduciary duty. It was as to these two claims that the appeals court certified a question to the Florida Supreme Court to determine whether the economic loss rule prohibited recovery, or whether an insurance broker falls within the professional services exception that would allow Tiara to proceed with the claims. 
In its analysis, the Florida Supreme Court discussed, at length, the origins and development of the Economic Loss Rule, the pertinent aspects of which are summarized as follows: (1) The rule appeared initially in both state and federal courts in products liability cases, and historically the doctrine was introduced to address attempts to apply tort remedies to traditional contract law damages. (2) The rule was recognized as the fundamental boundary between contract law, which was designed to enforce the expectancy interests of the parties, and tort law, which imposed a duty of reasonable care, thereby encouraging citizens to avoid causing physical harm to others. (3) The contractual privity rule provided that, generally, a tort action is barred where a defendant has not committed a breach of duty apart from a breach of contract. (4) However, an exception to the above rule was also recognized, allowing torts committed independently of a contractual breach, such as fraud in the inducement. (5) Another situation where the economic loss rule was limited was in the case of negligence in providing professional services. The Florida Supreme Court subsequently engaged in a discussion of the roots of the doctrine in the products liability context, where the focus of the rule was directed to damages resulting from defects in the product itself. The Court went on to state that, for some time, it had been concerned with what it perceived as an “over-expansion of the economic loss rule,” noting the expression of this concern in various cases. …[W]e now take this final step and hold that the economic loss rule applies only in the products liability context. We thus recede from our prior rulings to the extent that they have applied the economic loss rule to cases other than products liability. 
“The Court will depart from precedent as it does here ‘when such departure is necessary to vindicate other principles of law or to remedy continued injustice.’”… Stare decisis will also yield when an established rule has proven unacceptable or unworkable in practice… Our experience with the economic loss rule over time, which led to the creation of the exceptions to the rule, now demonstrates that expansion of the rule beyond its origins was unwise and unworkable in practice. Thus, today we return the economic loss rule to its origin in products liability. This is certainly a landmark decision, and it will be interesting to see the effects of this case unfolding, and its impact upon Florida tort litigation.
Perception of auditory time intervals is critical for accurate comprehension of natural sounds like speech and music. However, the neural substrates and mechanisms underlying the representation of time intervals in working memory are poorly understood. In this study, we investigate the brain bases of working memory for time intervals in rhythmic sequences using functional magnetic resonance imaging. We used a novel behavioral paradigm to investigate time-interval representation in working memory as a function of the temporal jitter and memory load of the sequences containing those time intervals. Human participants were presented with a sequence of intervals and required to reproduce the duration of a particular probed interval. We found that activity in perceptual timing areas, including the cerebellum and the striatum, increased as a function of increasing and decreasing jitter of the intervals held in working memory, respectively, whilst activity in the inferior parietal cortex was modulated as a function of memory load. Additionally, we analyzed structural correlations between gray and white matter density and behavior and found significant correlations in the cerebellum and the striatum, mirroring the functional results. Our data demonstrate the neural substrates of working memory for time intervals and suggest that the cerebellum and the striatum are core areas for representing temporal information in working memory. Every day we are required to assess sequences of variable time intervals that occur in sounds like speech, music and environmental sounds, a process that requires us to hold multiple time intervals in memory. This work examines the neural bases for holding time intervals in working memory and the effect of changing the amount of information in these sequences, determined by the temporal variability and number of intervals. The nature of working memory in general is under debate (Ma et al., 2014). 
Classical visual models assume a limited working memory capacity (Miller, 1956; Cowan, 2001) where information is stored in a fixed number of discrete slots (Luck and Vogel, 1997). However, recent visual and auditory studies support a resource allocation model based on a limited working memory resource that is dynamically distributed between multiple items in natural scenes, without a slot limit (Bays and Husain, 2008; Gorgoraptis et al., 2011; van den Berg et al., 2012; Kumar et al., 2013; Ma et al., 2014; Teki and Griffiths, 2014; Joseph et al., 2015a,b, 2016). Neither of these models, however, has considered the question of how time intervals are held in working memory. We designed a novel paradigm to assess working memory for sequences of intervals that systematically changed the information held in working memory and examined working-memory fidelity (Teki and Griffiths, 2014). Listeners were presented with two types of sequences: (1) sequences with a fixed number of intervals with different levels of temporal regularity, and, (2) sequences with a varying number of intervals with a fixed temporal regularity. The task did not involve a binary response (e.g., shorter/longer or same/different judgment) about a change in the probed interval, as in previous studies, but instead required the participant to reproduce the duration of a single interval that was probed after the sequence. This allowed us to examine the effects of the variability and number of intervals in the sequence on the precision (reciprocal of standard deviation) of probed interval reproduction (Teki and Griffiths, 2014). The results are consistent with a working memory model based on a fixed resource for storing time intervals so that a greater number of intervals can be stored at the expense of fidelity (Bays and Husain, 2008). 
The present study sought to address the neural bases for the core working memory resource, determined by both temporal variability and number of intervals. Previous work on memory for time was either based on retention of a single interval in memory for subsequent comparison or involved multiple presentations of a standard interval that formed an isochronous sequence (Keele et al., 1989; Ivry and Hazeltine, 1995; Merchant et al., 2008). Other studies used induction sequences to study the effect of rate of presentation of those sequences (Barnes and Jones, 2000) or the temporal structure of the sequence (McAuley and Jones, 2003; Teki et al., 2011) on judgments of the duration of subsequent intervals. However, as these studies involved repetition of standard intervals, the effective memory load was limited to the interval used as the basis for the induction sequence. Previous imaging work has shown that the putamen and caudate nucleus encode the duration of single time intervals (Rao et al., 2001; Coull et al., 2008) while recent work suggests that areas for the analysis of single intervals alter with the sequence context (Merchant et al., 2013). Timing in regular sequences relies more on a striato-thalamo-cortical network whilst timing in irregular sequences depends more on the cerebellum (Grahn and Rowe, 2009; Teki et al., 2011, 2012; Kung et al., 2013; Allman et al., 2014). The present study addresses the brain bases for storing time intervals in memory as is required for natural acoustic stimuli, for which we hypothesized a striatal and cerebellar substrate. Another motivation of the study was to examine contextual factors: the effect of task context on stimuli with the same variability and number of intervals. Previous work was mostly based on single intervals and thus could not address this crucial question. 
Recent reviews emphasize task-dependent activation of brain areas associated with temporal processing (Wiener et al., 2010a; Merchant et al., 2013), but there are no data suggesting that the activity of brain areas underlying memory for time intervals may also be modulated by task context. We used functional magnetic resonance imaging to uncover the neural substrates that represent sequences of intervals in working memory. Our results highlight activity in core perceptual timing areas including the cerebellum and the striatum that varies with the amount of information in a sequence, determined by temporal regularity and number of intervals. Holding and manipulating the same interval in working memory depended on the context (the number of intervals in the sequence) in the caudate nucleus and the inferior parietal cortex. Our data support the flexible representation of time intervals in working memory where the cerebellum and caudate provide the core resource. Nineteen listeners (12 females; mean age: 27.4 ± 2.3 years) with normal hearing and no history of audiological or neurological disorders provided informed written consent and participated in the experiment. A female listener was excluded from the analysis due to excessive movement in the scanner. Two listeners could not complete the number-of-interval blocks. Thus, 18 listeners provided datasets for the jitter condition whilst 16 listeners' datasets were analyzed for the number-of-intervals condition. All but four listeners had musical experience but none of them were currently practicing music. Experimental procedures were approved by the research ethics committee of University College London. The stimulus (Figure 1) consisted of a sequence of clicks of 0.5 ms duration and identical loudness. The inter-onset interval (IOI) was selected from a normal distribution that ranged from 500 to 600 ms. For the Jitter blocks, the stimulus comprised four time intervals. 
By jitter, we refer to variability in the length of a time interval around a mean value of inter-onset interval. For instance, introducing a 10% jitter for a 100 ms interval would yield an interval whose duration may vary from 90 to 110 ms. Four different levels of temporal jitter were incorporated: (i) 5–10%, (ii) 20–25%, (iii) 35–40%, and, (iv) 50–55%. Higher jitter values enhance the difference in duration between the various intervals and make each interval more unique, thereby increasing the memory load. The exact jitter values were randomly drawn from a normal distribution centered on the mean of each of the four ranges of jitter. Each sequence block was jittered by only one of the above ranges of jitter. Figure 1. Behavioral results. (A) Task and sparse imaging paradigm. Listeners are presented with a sequence of time intervals (4 intervals in the jitter condition and 1–4 intervals in the number-of-intervals condition) separated by clicks (indicated by the gray bars). At the end of the sequence, a probe is presented during the delay period (2 s) indicating the interval to be reproduced. Another click is played after the delay indicating the start of the reproduction interval which listeners are required to terminate at a point in time that corresponds to their memory of the probed interval. Feedback, equal to the difference between the reproduced interval and the actual duration of the probed interval is presented for 500 ms at the end of each trial. The task structure and timing is shown between two successive volumes in the sparse imaging design. (B) Performance on jitter blocks. Listeners' performance (n = 18) was calculated as the precision of the timing error distribution. The mean precision (± SEM) is plotted as a function of temporal regularity, varying from 5–10% jitter to 50–55% jitter as indicated on the x-axis. A significant effect of jitter (p = 0.02) on precision was observed (see results). (C) Performance on number of intervals blocks. 
Listeners' (n = 16) mean precision (± SEM) is plotted against the number of intervals on the abscissa. No significant effect of memory load (p = 0.36) was found (but see results). The stimuli for the Number-of-intervals blocks consisted of sequences with different numbers of time intervals, from 1 to 4, and a fixed jitter of 20–25%. The stimulus for the reaction time task consisted of a single click only. Stimuli were created digitally using MATLAB 2012 (MathWorks Inc.) at a sampling rate of 44.1 kHz and resolution of 16 bits. Sounds were delivered diotically through MRI-compatible insert earphones (Sensimetrics Corp.) and presented at a comfortable listening level between 80 and 90 dB SPL that was adjusted by each listener. The stimulus presentation was controlled using Cogent (http://www.vislab.ucl.ac.uk/cogent.php). The task was designed to assess listeners' memory for time intervals embedded in sequences in which the temporal jitter and number of intervals were varied parametrically (Teki and Griffiths, 2014). Listeners were instructed to attend to the sequence of clicks and reproduce the duration of the interval that was probed after the sequence (via text displayed on the screen—e.g., “Match time interval: 1”). The probed interval number was displayed during the entire delay after the sequence period lasting 1.5 s. A click was played after the delay period and indicated the start of the interval to be reproduced. The listeners' task was to press a button at a point in time (after this click) that corresponded to their memory of the duration of the probed interval. Responses made within a window of 2 s were considered valid responses while responses longer than 2 s were treated as “missed” responses. Feedback, equal to the difference between the duration of the probed interval and the listeners' response (adjusted for reaction times), was presented for 500 ms after each trial (e.g., “Shorter by 53.2 ms” or “Longer by 107.4 ms”). 
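As a concrete illustration of the stimulus construction described above, a minimal sketch follows. The original stimuli were generated in MATLAB; this Python re-implementation is an assumption-laden simplification (it draws the mean IOI and jitter level uniformly, whereas the paper drew exact jitter values from normal distributions centered on each range):

```python
import random

def make_sequence(n_intervals, jitter_range, base_ioi=(0.5, 0.6)):
    """Return click onset times (s) for one trial -- illustrative sketch.

    n_intervals  : number of inter-onset intervals (1-4 in the study)
    jitter_range : (lo, hi) fractional jitter, e.g. (0.20, 0.25)
    base_ioi     : range (s) from which the mean inter-onset interval is drawn
    """
    mean_ioi = random.uniform(*base_ioi)    # mean IOI between 500 and 600 ms
    jitter = random.uniform(*jitter_range)  # one jitter level per sequence
    onsets = [0.0]
    for _ in range(n_intervals):
        # each interval deviates from the mean IOI by at most +/- jitter,
        # as in the 10%-jitter example (100 ms interval -> 90-110 ms)
        ioi = mean_ioi * random.uniform(1 - jitter, 1 + jitter)
        onsets.append(onsets[-1] + ioi)
    return onsets
```

For example, `make_sequence(4, (0.05, 0.10))` yields five click onsets whose four intervals each stay within 10% of the sequence's mean IOI, matching the least-jittered condition.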
A control task was used prior to each timing block to calculate listeners' response times to a single click. The reaction times were used to regress out variance due to the motor response from the time matching responses in the experimental blocks. Listeners received instructions about the task and practiced a reaction time block of 15 trials and a jitter block of 24 trials. Training was repeated until performance improved, as assessed by precision values. However, participants did not receive any explicit information or training for the number-of-interval blocks. In order to investigate context-sensitive responses, listeners only received training on the jitter blocks and not the (later) number-of-interval blocks. It was important to not counterbalance the order of the jitter and number-of-interval blocks to ensure that listeners held only one task context in mind during the jitter block and then switched to a different task context provided by the number-of-interval blocks. Listeners received brief training on the number-of-intervals condition in the scanner after the jitter blocks were completed. This enabled us to compare brain activations for trials that were identical in structure (32 trials with 20–25% jitter and 4 intervals in a sequence) across the two task conditions. The task of the listeners was to reproduce the duration of the cued interval from memory by pressing a button on a keypad. Responses were always made with the index finger and the use of right and left hand was counterbalanced across participants. Prior to each timing block, listeners completed a reaction time block comprising 30 trials where they pressed a button in response to a single click. Listeners were instructed to respond at a comfortable rate and maintain the same pace for both the reaction time and timing trials throughout the experiment. 
The imaging experiment lasted ~1 h and consisted of two jitter blocks (varying jitter and fixed number of intervals) followed by two number-of-intervals blocks (varying number of intervals and fixed jitter) where each block consisted of 64 trials. Field maps were acquired after the first two blocks and listeners were instructed about the change in stimulus structure and received limited training on the number-of-intervals condition whilst in the scanner. Each block lasted ~15 min and short breaks were allowed between successive blocks. Listeners were instructed to keep their eyes open as the probed interval was indicated visually on the screen. At the end of each block, listeners received feedback specifying the number of trials on which their timing error was less than 100 ms, between 100 and 200 ms, or greater than 200 ms. A structural scan was acquired at the end of the functional imaging experiment for each participant. The median of the reaction times for the final 24/30 trials was computed for each reaction time block. For the timing blocks, the error response was calculated as the difference between the time matching response and the actual duration of the cued interval. The median reaction time from the preceding control (reaction time) block was subtracted from this value. This allowed us to obtain a cleaner measure of the time matching response that was not confounded with the time taken for button press (see Teki and Griffiths, 2014). This analysis was repeated for each timing block. The absolute value of the error responses was used to calculate precision, by computing the inverse of the standard deviation of the error responses. Precision was measured as a function of jitter and as a function of number of intervals for the corresponding blocks. Precision was used as the primary measure of interest as it captures the true variability in memory performance. 
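The error and precision computation just described can be sketched in a few lines. This is a hedged Python re-implementation (the original analysis was done in MATLAB, and the function and variable names here are illustrative, not the authors' code):

```python
import statistics

def reproduction_precision(responses, targets, median_rt):
    """Precision = inverse standard deviation of the absolute timing errors.

    responses : reproduced durations (s), measured from the go click
    targets   : true durations (s) of the probed intervals
    median_rt : median simple reaction time from the preceding control block
    """
    # subtract the motor component so the error reflects timing memory,
    # not the time taken for the button press
    errors = [abs(r - median_rt - t) for r, t in zip(responses, targets)]
    return 1.0 / statistics.stdev(errors)
```

A tighter error distribution yields a higher precision value, which is why precision rather than mean absolute error serves as the primary measure of memory performance.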
This is useful to interpret variability in performance with increasing number of items and examine whether performance is fixed up to a certain number of slots (according to slot models) or scales flexibly according to the total amount of information to be remembered (according to shared resource models). The slot model would predict that the precision would be at ceiling for a set number of items such as four (see Cowan, 2001) until capacity is exceeded and would drop to floor for a set size that exceeds the working memory capacity. The shared resource model, however, predicts that precision is highest for a set size of one and decays as a function of the number of items to be remembered (Ma et al., 2014). Crucially, the precision for memory loads greater than four is predicted to be higher than that obtained by chance. Absolute error or accuracy measures do not capture such variability and are thus not ideal for comparing the two models. Gradient-weighted whole-brain echo planar images were acquired on a 3T Siemens Allegra system using a sparse imaging design: repetition time (TR): 14.76 s; echo time (TE): 30 ms; volume acquisition time (TA): 3.36 s (70 ms per slice × 48 slices); matrix size: 64 × 72; slice thickness: 2 mm with 1 mm gap between slices; and, in-plane resolution: 3.0 × 3.0 mm². The slices were tilted by −7° (transverse > coronal) to obtain full coverage of the cerebellum. This orientation was used successfully to uncover perceptual timing responses in the inferior olive and the cerebellum in our previous fMRI timing study (Teki et al., 2011). Field maps were acquired to compensate for geometric distortions due to magnetic field inhomogeneity (Hutton et al., 2002) using a double-echo gradient echo field map sequence (TE1 = 10.00 ms and TE2 = 12.46 ms). A T1-weighted structural scan was acquired after the functional scans (Deichmann et al., 2004). 
A sparse sampling design (Figure 1) was used to obtain clean auditory activations unaffected by the scanner noise (Belin et al., 1999). The total duration of the stimulus ranged from 0.5 to 2.6 s depending on the number of intervals (1–4) in the sequence. A variable silence period preceded the onset of the stimulus such that the combined duration of silence and stimulus was fixed at 7.4 s. A delay period of 1.5 s, response window of 2 s and a feedback period of 0.5 s, in that order, completed each trial with a fixed duration of 11.4 s. The latency between trial offset and scanner onset was fixed at 4 s so that the acquisition of each scan was time-locked to the onset of the delay period. This latency of 4 s was based on our previous study where we used a similar sparse imaging protocol to isolate timing responses in the cerebellum and the striatum (Teki et al., 2011). The fixed latency helped ensure that the peak of the BOLD signal captured brain activity corresponding to the manipulation and retrieval of the cued interval from memory rather than earlier stimulus-evoked or subsequent motor activity, with minimal overlap in their hemodynamic response functions (HRFs). Given the poor temporal resolution of fMRI, one cannot be completely confident about the extent to which the scan acquired at the end of each trial was contaminated by effects not related to memory processes during the delay period. However, the manipulation of keeping a fixed latency from the onset of the delay period to the onset of the acquisition of the scan is motivated by the characteristic latency of BOLD responses to sounds in sparse imaging protocols (~4 s, Belin et al., 1999; Hall et al., 1999) and is a reliable method to obtain pseudo time-locked responses using sparse fMRI (Teki et al., 2011; Talavage et al., 2014). The analysis of brain imaging data was performed using SPM12 (Wellcome Trust Centre for Neuroimaging, Ashburner, 2012). 
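As a sanity check, the sparse-design timing figures quoted above are internally consistent; the short calculation below (values copied directly from the Methods) confirms how the trial components add up to the reported TR:

```python
# Sparse-design trial timing (all values in seconds, taken from the Methods)
stim_plus_silence = 7.4   # variable silence pads the 0.5-2.6 s stimulus
delay             = 1.5   # probe displayed during this period
response_window   = 2.0
feedback          = 0.5

trial = stim_plus_silence + delay + response_window + feedback
assert abs(trial - 11.4) < 1e-9   # fixed trial duration reported in the text

# the 4 s latency from delay onset to scan onset equals delay + response + feedback,
# so each volume acquisition begins exactly at trial offset
assert abs((delay + response_window + feedback) - 4.0) < 1e-9

ta = 48 * 0.070           # 48 slices x 70 ms per slice = 3.36 s per volume
tr = trial + ta
assert abs(tr - 14.76) < 1e-9     # matches the reported TR of 14.76 s
```

This also makes explicit why the scan is time-locked to the delay period: the delay, response window, and feedback together occupy exactly the 4 s latency before each acquisition.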
Each block comprised 66 volume acquisitions, of which the first two volumes were rejected to control for saturation effects. The remaining 64 volumes were realigned to the first volume and unwarped using field map parameters. The structural image was segmented to obtain a bias-corrected structural image with more uniform intensities within six different tissue classes including gray matter (GM) and white matter (WM). The resulting image was co-registered with the mean functional image obtained after realignment. DARTEL was used to create a series of templates using the GM and WM images (Ashburner, 2007). The final template from this step was affine-registered with tissue probability maps (available in SPM12) to obtain spatially normalized images in MNI space (Friston et al., 1995a). The normalized images were smoothed with an isotropic Gaussian kernel of 5 mm full-width at half-maximum (FWHM). Statistical analysis of the images was performed using a general linear model (Friston et al., 1995b). Data from the jitter and number-of-interval blocks were analyzed separately using a parametric contrast to examine brain activity that increased as a function of jitter and number-of-intervals, respectively. All trials were convolved with an HRF boxcar function and missed trials were modeled as conditions of no interest (separately for each condition) to remove unwanted variance. The data were not high-pass filtered as a sparse design ensures minimal low-frequency variations in the BOLD signal. A whole-brain random-effects model was used to account for within-subject variance (Penny and Holmes, 2004). 
Each subject's first-level contrast images were subjected to second-level t-tests for the primary contrasts of interest: “parametric effect of jitter” and “parametric effect of number of intervals.” To examine context-dependent memory encoding for trials that were identical in the two conditions, a separate design based on difference in activations between the jitter versus number-of-interval blocks (and vice-versa) was used. Functional data were visualized on the group-averaged T1-weighted structural scan and activations specific to the cerebellum were overlaid on the high-resolution, spatially unbiased infra-tentorial template (SUIT) atlas of the human cerebellum (Diedrichsen, 2006; Diedrichsen et al., 2009). Structural brain images were analyzed using voxel-based morphometry (VBM; Ashburner and Friston, 2000). The segmented GM and WM images were imported into DARTEL and a series of template images were created by iteratively matching images to align them with the average-shaped template. The final template obtained in this procedure was normalized to MNI space through an affine registration of the template with tissue probability maps. The resultant images were smoothed with an isotropic Gaussian kernel of 8 mm FWHM. The smoothed images for each individual were entered into a second-level ANOVA to examine brain areas in which GM and WM volume varied as a function of jitter and number of intervals respectively. Participants' performance in the scanner was measured by calculating precision, the inverse of the variance of the timing error distribution for both blocks. Precision provides a continuous measure of memory performance and has been used previously in studies of working memory based on the shared resource model (Bays and Husain, 2008; Bays et al., 2009; Kumar et al., 2013; Ma et al., 2014; Teki and Griffiths, 2014; Joseph et al., 2015a,b, 2016). 
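The contrast between the slot and resource accounts that motivates the use of precision can be caricatured in a few lines. This toy sketch is purely illustrative (the 1/n decay is one simple parameterization of a shared resource, not a fit to the data, and the function name and parameters are invented for this example):

```python
def predicted_precision(n_items, model, capacity=4, p_single=10.0):
    """Toy precision-vs-set-size predictions for the two working memory models.

    'slot'     : precision at ceiling up to `capacity` items, then at floor
    'resource' : a fixed resource divided among items, so precision falls
                 continuously with set size (here proportional to 1/n)
    """
    if model == "slot":
        return p_single if n_items <= capacity else 0.0
    if model == "resource":
        return p_single / n_items
    raise ValueError(f"unknown model: {model}")
```

The key divergent prediction is at loads above capacity: the resource model predicts above-chance precision for more than four items, whereas the slot model predicts a drop to floor.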
ANOVA revealed a main effect of jitter (p = 0.02, F = 3.40, η2 = 0.14) but a non-significant effect of number of intervals [p = 0.36, F(3, 63) = 1.10, η2 = 0.05] as shown in Figures 1B,C respectively. Post-hoc analysis revealed a significant difference between the precision for the least and most irregular conditions in the jitter experiment: p = 0.048, t = 2.05; and a marginal but not significant difference between the precision for the trials with lowest and highest number of intervals: p = 0.10, t = 1.69. Secondary analysis of precision as a function of serial position did not reveal a significant effect for either condition: p = 0.10, F = 2.14, η2 = 0.09 (jitter block), p = 0.38, F = 1.05, η2 = 0.05 (number of intervals block). Although a significant effect of number of intervals was not observed during performance in the scanner, our previous psychophysical work did demonstrate a significant effect: [p = 0.01, F(3, 28) = 4.27, η2 = 0.31, n = 8; Teki and Griffiths, 2014]. The absence of a behavioral effect in the scanner could be due to a number of reasons: (i) listeners did not receive explicit and adequate training about the number-of-intervals blocks before the experiment, (ii) the number-of-interval blocks were always run after the jitter blocks and could be associated with increased fatigue, (iii) reduced number of trials in the scanner: 2 blocks of 64 trials compared to 4–5 blocks of 96 trials in the psychophysics study, (iv) limited response time and a noisier task environment in the scanner. Further investigation of individual behavioral scores in the number-of-interval blocks revealed the opposite trend in 4 subjects who showed no significant effect: [F(3, 15) = 0.66, p = 0.59, η2 = 0.14]. A similar ANOVA on the scores of the remaining 12/16 subjects revealed a significant effect of number of intervals: [F(3, 47) = 2.84, p = 0.04, η2 = 0.16]. 
We analyzed BOLD responses to examine brain areas that: (i) encode memory for time as a function of increasing and decreasing jitter, (ii) are activated as a function of increasing and decreasing numbers of intervals, and (iii) are modulated by task context in response to identical trials across the two conditions. A priori, we predicted that the cerebellum and the striatum would show opposite effects, such that the cerebellum would be more strongly activated for encoding temporal memory in irregular sequences and the striatum would show elevated activity for regular sequences (Grahn and Brett, 2007; Teki et al., 2011, 2012; Grahn, 2012; Merchant et al., 2013). Secondly, based on previous fMRI work on temporal memory encoding (Rao et al., 2001; Coull et al., 2008), we hypothesized that the striatum would be involved in encoding memory for time as a function of increasing numbers of intervals. Thirdly, we expected that task context would modulate brain activity such that areas that represent the structure of sequences of intervals would show differential responses for trials that were identical in structure during the jitter and number-of-intervals conditions. To answer the first question, data from the blocks with different levels of jitter were analyzed. A parametric contrast was used to examine areas that showed an increase in response as a function of increasing jitter. Results revealed significant clusters in the left cerebellum (lobules I-IV, V) including the vermis as shown in Figure 2A. The striatum was also significantly modulated, with clusters in the putamen and pallidum. Other brain areas whose activity was significantly modulated by increasing levels of jitter included the precuneus, the parahippocampal gyrus and the middle temporal gyrus (see Table 1A). Figure 2. Functional imaging results: effect of jitter. 
(A) Brain areas that encode temporal memory in the context of irregular sequences. BOLD activations are shown for the vermis and cerebellum (overlaid on the SUIT template of the human cerebellum, Diedrichsen, 2006, Diedrichsen et al., 2009); left putamen and parahippocampal gyrus (overlaid on a coronal section of the average normalized structural scan and zoomed to 80 × 80 mm) at a threshold of p < 0.001 (uncorrected, for each figure). Other activations in the precuneus, MTG, and pallidum are listed in Table 1A. The strength of activations (t-value) is graded according to the adjacent color scheme on the right (for each figure). (B) Brain areas that encode temporal memory in the context of regular sequences. BOLD activations in the striatum including the caudate and putamen as well as the cerebellum are shown. Other activations in the thalamus, temporal pole, and frontal cortex are listed in Table 1B. The significant clusters are displayed according to the same scheme as in Figure 1A. Table 1A. Brain areas whose activity increased as a function of jitter. Examination of parametric responses in the opposite direction (as a function of decreasing jitter) showed maximum activation in the striatum including the caudate and putamen (Figure 2B). We also observed activity in the cerebellum (right posterior lobe); however, the strength of the activation in the cerebellum was weaker than the striatal response (see Table 1B). The frontal cortex, temporal pole and thalamus also showed significant activations with decreasing levels of jitter. Table 1B. Brain areas whose activity decreased as a function of jitter. The second question focused on parametric brain responses as a function of increasing numbers of intervals. Results across all subjects revealed significant activations in the bilateral inferior parietal cortex (abutting supramarginal gyrus) and the left caudate nucleus (Figure 3A; Table 2A). 
In the 12/16 subjects who showed a significant behavioral effect of number of intervals, similar activations in the inferior parietal cortex were observed as well (x = 33, y = −37, z = 39; t = 4.11, and x = −28, y = −52, z = 39, t = 3.97, respectively). As the number of intervals decreased, activity in the superior cerebellum increased, as shown in Figure 3B. Other areas encoding memory for time with a decreasing number of intervals included the inferior orbitofrontal cortex and the insula (also see Table 2B). Figure 3. Functional imaging results: effect of number of intervals. (A) Brain areas that encode temporal memory as a function of increasing number of intervals. The activity in the caudate and the inferior parietal cortex was found to increase parametrically with the number of intervals. The MNI coordinates of these areas are listed in Table 2A. (B) Brain areas that encode temporal memory as a function of decreasing number of intervals. BOLD responses in the cerebellum were found to vary as a function of decreasing number of intervals. The MNI coordinates are provided in Table 2B. Table 2A. Brain areas whose activity increased as a function of memory load. Table 2B. Brain areas whose activity decreased as a function of memory load. One of the key motivations of the study was to examine whether encoding of time into memory depends on contextual factors like the temporal structure and number of intervals in the sequences. The experiment used an orthogonal design with 32 identical trials in each of the jitter and number-of-intervals blocks, with a jitter of 20–25% and 4 intervals in each sequence. A subtraction analysis between the jitter and number-of-intervals blocks revealed enhanced activity in the right anterior cerebellar lobe and the striatum (including left caudate and bilateral putamen and pallidum) as shown in Figure 4A.
Other areas included the thalamus, Heschl's gyrus, precuneus, hippocampus, orbitofrontal cortex and the amygdala (see Table 3A). The reverse contrast (number of intervals vs. jitter) showed differential activation in the right cerebellar lobule VI (see Figure 4B; Table 3B). Figure 4. Functional imaging results: effect of task context. BOLD activations are shown for brain areas whose activity was found to be differentially modulated for trials with identical task structure (4 intervals with 20–25% jitter) but different context provided by the variable temporal structure in the jitter condition and the variable memory load in the number-of-intervals condition. All activations (except cerebellar activations on the SUIT template) are displayed on the average normalized structural scan across all participants at a threshold of p < 0.001 (uncorrected). MNI coordinates and t-values are listed in Tables 3A,B respectively. (A) Brain areas with greater response for the jitter vs. number-of-intervals condition. The BOLD response in the cerebellum, caudate and putamen was found to be significantly modulated and higher during the jitter compared to the number-of-intervals condition for identical trials. (B) Brain areas with greater response for the number-of-intervals vs. jitter condition. The BOLD response in the cerebellum only was found to be higher for the identical trials in the number-of-intervals compared to the jitter condition. Table 3A. Brain areas activated for jitter vs. number-of-intervals condition. Table 3B. Brain areas activated for number-of-intervals vs. jitter condition. Structural imaging data were analyzed using VBM to investigate correlations between gray and white matter volume (GM; WM) and task performance.
Specifically, we wanted to assess whether the key timing areas revealed by previous work (e.g., Grahn and Brett, 2007; Wiener et al., 2010b; Teki et al., 2011) and in the present study, i.e., the cerebellum and the striatum, also showed structural correlations with behavior. The correlations were performed between GM and WM density and precision (for all levels of the factor of interest, i.e., jitter and number of intervals). We found a significant correlation between precision on trials with increasing jitter and GM volume in the cerebellum (see Figure 5A), in a similar region of the cerebellar cortex as implicated in the functional data (Table S1A). In contrast, a correlation between precision on trials with decreasing jitter and GM volume was found in sensory cortical areas including Heschl's gyrus and the superior temporal gyrus (see Figure 5B; Table S1B). Figure 5. Structural imaging results: correlation between GM and WM volume and behavior. All activations are reported at a threshold of p < 0.001 (uncorrected) and scaled according to the t-value maps on the right. (A) Correlation between GM volume and performance on irregular sequences. The GM volume in the cerebellum increased with precision on irregular trials as shown here. Similar effects were observed in the orbitofrontal cortex and inferior temporal gyrus (Table S1A). (B) Correlation between GM volume and performance on regular sequences. The gray matter volume of sensory areas in Heschl's gyrus increased as a function of precision on regular sequences as shown here. Similar effects were also observed in the STG, insula and middle cingulate gyrus (Table S1B). (C) Correlation between GM volume and performance on sequences with high memory load. The gray matter volume of the caudate increased with performance on sequences with greater memory load. Correlations were also observed in the insula, thalamus and Heschl's gyrus (as listed in Table S2A).
(D) Correlation between GM volume and performance on sequences with low memory load. The cerebellum showed higher GM volume as a function of precision on sequences with low memory load (see Table S2B). (E) Correlation between WM volume and performance on irregular sequences. The pallidum expressed higher WM volume that correlated with listeners' precision as a function of increasing irregularity of the sequences (see Table S3A). (F) Correlation between WM volume and performance on sequences with high memory load. The pallidum showed higher WM volume that correlated with performance as a function of increasing memory load associated with the sequences (see Table S3B). A similar analysis between precision on trials with an increasing number of intervals and GM volume revealed significant clusters in the caudate (also activated in the functional data), as shown in Figure 5C (also see Table S2A). The GM volume of the cerebellum was correlated with precision on trials with decreasing load (Figure 5D; Table S2B). Correlation analysis of WM volume as a function of increasing jitter revealed bilateral clusters in the pallidum (Figure 5E; Table S3A), whilst no areas were found to be significant in the reverse contrast. The WM volume was also found to be higher in the pallidum as a function of an increasing number of intervals (Figure 5F; Table S3B). The precuneus was the only area found to show a significant effect in the opposite direction (Table S3C). We investigated the neural bases of working memory for time intervals in the context of a shared resource model of working memory, where the resource is flexibly distributed according to the amount of information to be encoded. We manipulated the information content of the sequences by varying the temporal regularity and number of intervals, which we hypothesized would affect the working memory load.
We examined, from first principles, whether there are core brain areas that are activated through these two manipulations of the resource, even though the magnitude of the effect of temporal regularity and number of intervals may be different. Behaviorally, listeners' performance decreased with greater information in the sequence, achieved by manipulating temporal jitter and the number of intervals. The fMRI activations revealed the striatum and cerebellum as core areas for encoding temporal memory as a function of increasing jitter and number of intervals. Additionally, the inferior parietal cortex was strongly involved in representing time intervals as a function of load. We also analyzed structural correlations between gray and white matter volume and behavior, which revealed correlations in the striatum and cerebellum, in line with the functional results. Furthermore, the analysis of context-specific responses for identical trials across the two conditions also revealed activations in the striatum and the cerebellum, suggesting, on the whole, a critical role for these two subcortical motor areas in representing time intervals in working memory. Behavioral performance showed significant sensitivity to the temporal structure of the sequences (Figure 1B). The analyses of the underlying brain responses revealed activation of core timing areas in the cerebellum and the striatum (Buhusi and Meck, 2005; Ivry and Schlerf, 2008; Teki et al., 2011). The temporal context of the sequences of intervals provides a basis to distinguish the timing functions of the cerebellum and the striatum: whilst the cerebellum is associated with absolute, duration-based timing of intervals in irregular sequences, the striatum, in coordination with fronto-striatal loops, mediates relative, beat-based timing (Teki et al., 2012; Allman et al., 2014).
This dissociation is supported by several lines of evidence: behavioral work (Monahan and Hirsh, 1990; Yee et al., 1994; Pashler, 2001; McAuley and Jones, 2003), neuropsychological assessment of patients (Grube et al., 2010; Cope et al., 2014a,b), motor timing studies (Schlerf et al., 2007; Spencer et al., 2007), and neuroimaging studies (Grahn and Brett, 2007; Teki et al., 2011; Grahn and Rowe, 2013). We have previously suggested a synergistic relationship between the striatum and the cerebellum whereby the striatum serves as a default clock and the cerebellum serves to encode the error in the timing activity of the striatal clock (Teki et al., 2012; Allman et al., 2014). Other timing models, like the Striatal Beat Frequency model (SBF; Matell and Meck, 2004; Buhusi and Meck, 2005; Meck et al., 2008) based on coincident activity of medium spiny neurons in the striatum, do not address timing in sequences containing several intervals or the effect of temporal jitter. The present data suggest that in addition to perception of time, the cerebellum and striatum also represent memory for time, with the level of activation depending on the temporal context of the sequences. The cerebellum and vermis (see Table 1A for precise locations within the cerebellum) were more strongly activated as a function of increasing jitter compared to the putamen and pallidum, whilst the caudate and putamen were more active relative to the cerebellum as a function of decreasing jitter. Other memory-related areas that were activated as a function of increasing jitter included the precuneus (the posteromedial portion of the parietal lobe) and the parahippocampal cortex. These two areas are involved in encoding and retrieval of episodic memory but have not been specifically implicated in temporal processing before. The activation of these areas suggests a link between subcortical timing areas and higher-order memory-related areas in the medial temporal lobe that remains to be investigated.
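The parametric contrasts referred to throughout these results can be illustrated with a minimal sketch. The Python fragment below (assuming NumPy/SciPy; the onsets, jitter values, TR, and HRF shape are all illustrative, not the authors' SPM pipeline) builds a mean-centred parametric-modulator regressor of the kind such analyses test against the BOLD signal:

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma canonical HRF (SPM-like shape), sampled every `tr` seconds."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak ~5 s, late undershoot
    return hrf / hrf.sum()

def parametric_regressor(onsets, modulator, n_scans, tr):
    """Stick functions at trial onsets, scaled by the mean-centred modulator,
    convolved with the canonical HRF."""
    sticks = np.zeros(n_scans)
    centred = np.asarray(modulator, dtype=float)
    centred -= centred.mean()  # mean-centre so the modulator is orthogonal to the mean response
    for onset, value in zip(onsets, centred):
        sticks[int(round(onset / tr))] += value
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

# Hypothetical run: five trials whose jitter level (0-40%) modulates the response
reg = parametric_regressor(onsets=[10, 40, 70, 100, 130],
                           modulator=[0, 10, 20, 30, 40],
                           n_scans=100, tr=2.0)
```

Fitting such a regressor alongside a constant trial regressor in a GLM identifies voxels whose response scales with the modulator, which is the logic behind the jitter and number-of-intervals contrasts reported here.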
It is important to note that sound-evoked activity is also observed in the cerebellum (e.g., Wolfe, 1972; Jastreboff and Tarnecki, 1975) and the basal ganglia (Hikosaka et al., 1989). Although it can be argued that the observed BOLD activations might capture sound-evoked responses, it is unlikely that such responses would scale as a function of jitter or number of intervals. Thus, the parametric analysis reported in the present study can be assumed to primarily reflect temporal processing activity. We also varied the amount of information in the sequences by manipulating the number of intervals. Although the task was based on the recall and reproduction of a single interval, the number-of-intervals condition required representation of multiple intervals in working memory. Activity in the caudate nucleus and the inferior parietal cortex systematically increased with increasing number of intervals in the sequence, consistent with previous event-related fMRI studies on memory for a single time interval (Rao et al., 2001; Coull et al., 2008). The striatum is widely acknowledged to contribute to working memory (Postle and D'Esposito, 1999; Lewis et al., 2004; McNab and Klingberg, 2008; Darki and Klingberg, 2015) via dopaminergic interactions with frontal cortex (Goldman-Rakic, 1996; Frank et al., 2001). Consistent with this, disorders affecting the basal ganglia, including Parkinson's disease, Huntington's disease and multiple system atrophy, are associated with impairment on a range of working memory tasks (Robbins et al., 1992; Grahn et al., 2006; Dumas et al., 2013). The role of the striatum and frontal cortex in controlling access to working memory storage (McNab and Klingberg, 2008) is particularly significant in light of the SBF model that emphasizes the role of fronto-striatal dopaminergic loops in interval timing.
The SBF model posits that striatal medium spiny neurons perform coincidence detection of cortical oscillatory activity, triggered by nigrostriatal dopaminergic signals. These theoretical considerations suggest a close relationship between perception and memory for time in fronto-striatal pathways (Darki and Klingberg, 2015). The parietal cortex is also implicated in storage of information in working memory (McNab and Klingberg, 2008; Darki and Klingberg, 2015) and shows robust load-sensitive activity in visual working memory tasks (Todd and Marois, 2004; Vogel and Machizawa, 2004; Vogel et al., 2005; Ma et al., 2014). The parametric increase in the activity of the parietal cortex suggests a common framework for working memory processing in the brain that not only applies to storage of sensory information but also to temporal information. Timing activity in the parietal cortex has been demonstrated in nonhuman primates (Leon and Shadlen, 2003; Schneider and Ghose, 2012) as well as humans (Wiener et al., 2010a, 2012; Hayashi et al., 2015). Furthermore, the parietal cortex has also been shown to encode magnitude in general, and process time, space, and number (Walsh, 2003; Bueti and Walsh, 2009). The current data provide converging evidence from the temporal domain that parietal cortex may encode “temporal” magnitude and represent multiple time intervals in working memory. The activity of the cerebellum (lobule V) was modulated as a function of decreasing load. This is consistent with cerebellar specialization for encoding the absolute duration of single intervals (Grube et al., 2010). Behaviorally, there was no difference in precision between the trials that were identical in the jitter and number-of-intervals blocks (32 trials with 25% jitter and 4 intervals): p = 0.64, t = 0.47. However, there was a significant difference in BOLD responses between the two conditions. For a contrast of jitter vs. 
number-of-intervals, putamen, caudate, and cerebellum (lobule V) showed significant differential activity. The reverse contrast showed enhanced responses in the cerebellum (lobule VI) only. These data suggest that brain areas involved in holding and manipulating time intervals in memory are selectively activated by different task contexts: differential striatal and cerebellar activity for the jitter condition is consistent with previous work on rhythm and time perception (Grahn, 2012; Teki et al., 2012). The activation of cerebellar lobule VI is consistent with the specific role of this cerebellar sub-region in verbal working memory (Koziol et al., 2014), which may be attributed to its role in temporal sequencing of internal motor traces representing inner speech (Marvel and Desmond, 2010). VBM correlation analysis was performed to assess whether the gray and white matter volume of specific temporal processing regions correlated with behavioral performance in the jitter and number-of-intervals conditions. In the absence of previous work on correlates between brain structure and timing behavior, we did not have strong, well-defined anatomical hypotheses and, therefore, examined correspondence between the functional and structural brain data. GM volume in the cerebellum (lobule V) correlated with behavior as the jitter increased, consistent with the greater functional response in the same cerebellar sub-region. On the other hand, the GM volume of Heschl's gyrus correlated with listeners' performance on regular trials. As the sequences became more regular, stronger phase-locking to the clicks at low rates (2 Hz) may explain the correlation observed in the auditory cortex. For the memory task, the GM volume of the caudate correlated with behavioral performance as the load increased. The reverse correlation was found in the cerebellum as a function of decreasing load.
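Conceptually, the VBM correlation analysis asks, at every voxel, whether tissue volume covaries with behavioral precision across subjects. A toy sketch of that voxelwise test (Python with NumPy/SciPy and made-up data; the real analysis used smoothed, modulated segmentations and an SPM GLM, not raw Pearson correlations):

```python
import numpy as np
from scipy import stats

def voxelwise_correlation(gm, behaviour):
    """Pearson correlation between tissue volume and behaviour at each voxel.
    gm: (n_subjects, n_voxels) array of gray-matter values; behaviour: (n_subjects,)."""
    n_vox = gm.shape[1]
    r = np.empty(n_vox)
    p = np.empty(n_vox)
    for v in range(n_vox):
        r[v], p[v] = stats.pearsonr(gm[:, v], behaviour)
    return r, p

# Toy data: 16 subjects, 3 "voxels"; voxel 0 tracks precision, the rest are noise
rng = np.random.default_rng(0)
precision = np.linspace(0.1, 1.0, 16)
gm = rng.normal(size=(16, 3))
gm[:, 0] = precision + rng.normal(scale=0.05, size=16)
r, p = voxelwise_correlation(gm, precision)
```

Thresholding the resulting p-map (with a correction for the number of voxels) yields the kind of clusters reported in Figure 5 and Tables S1-S3.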
Correlation between WM volume and behavior showed effects in the pallidum as a function of both increasing jitter and load. This result is consistent with recent evidence from a longitudinal study that revealed a correlation between working memory capacity and the fractional anisotropy (FA) and WM volume of fronto-striatal tracts (Darki and Klingberg, 2015). More specifically, they found that FA in white matter tracts and activity in the caudate predict future working memory capacity. Overall, the VBM results show strong correspondence with the functional data and highlight the importance of the cerebellum and the striatum in the representation of temporal memory. Using fMRI, and by manipulating the information content of sequences through their regularity and number of intervals, we have demonstrated that working memory for time intervals is implemented in a core resource in the striatum and the cerebellum. These results are supported by concordant structural correlations with behavior in the same areas. Our results highlight functional and structural correlates of a flexible working memory resource for time intervals in rhythmic sequences and provide a strong basis to examine the underlying neural correlates of context-dependent memory for time, e.g., beta-band oscillations in the auditory-motor pathways (Iversen et al., 2009; Fujioka et al., 2012; Teki, 2014; Bartolo and Merchant, 2015), using techniques with higher temporal resolution than fMRI. ST designed the study; ST collected and analyzed the data; ST and TG wrote the manuscript. This work was supported by the Wellcome Trust (WT091681MA awarded to TG). ST is supported by the Wellcome Trust (WT106084/Z/14/Z). We thank the Physics and Radiology group at the Wellcome Trust Centre for Neuroimaging for technical support. Hall, D. A., Haggard, M. P., Akeroyd, M. A., Palmer, A. R., Summerfield, A. Q., Elliott, M. R., et al. (1999). “Sparse” temporal sampling in auditory fMRI. Hum. Brain Mapp. 7, 213–223.
Hikosaka, O., Sakamoto, M., and Usui, S. (1989). Functional properties of monkey caudate neurons. II. Visual and auditory responses. J. Neurophysiol. 61, 799–813. Jastreboff, P. J., and Tarnecki, R. (1975). Response of cat cerebellar vermis induced by sound. I. Influence of drugs on responses of single units. Acta Neurobiol. Exp. 35, 209–216. Penny, W., and Holmes, A. P. (2004). “Random-effects analysis,” in Human Brain Function, eds R. S. Frackowiak, K. J. Friston, C. D. Frith, R. J. Dolan and C. J. Price (San Diego, CA: Academic), 843–850. Postle, B. R., and D'Esposito, M. (1999). Dissociation of human caudate nucleus activity in spatial and nonspatial working memory: an event-related fMRI study. Brain Res. Cogn. Brain Res. 8, 107–115. Copyright © 2016 Teki and Griffiths. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Leonardo Bruni (or Leonardo Aretino; c. 1370 – March 9, 1444) was an Italian humanist, historian and statesman, often recognized as the most important humanist historian of the early Renaissance. He has been called the first modern historian. He was the earliest person to write using the three-period view of history: Antiquity, Middle Ages, and Modern. The dates Bruni used to define the periods are not exactly what modern historians use today, but he laid the conceptual groundwork for a tripartite division of history. Leonardo Bruni was born in Arezzo, Tuscany, circa 1370. Bruni was the pupil of the political and cultural leader Coluccio Salutati, whom he succeeded as Chancellor of Florence, and under whose tutelage he developed his conception of civic humanism. He also served as apostolic secretary to four popes (1405–1414). Bruni's years as chancellor, 1410 to 1411 and again from 1427 to his death in 1444, were plagued by warfare. Though he occupied one of the highest political offices, Bruni was relatively powerless compared to the Albizzi and Medici families. Historian Arthur Field has identified Bruni as an apparent plotter against Cosimo de' Medici in 1437 (see below). Bruni died in 1444 in Florence and was succeeded in office by Carlo Marsuppini. Bruni's most notable work is Historiarum Florentini populi libri XII (History of the Florentine People, 12 Books), which has been called the first modern history book. While it probably was not Bruni's intention to secularize history, the three-period view of history is unquestionably secular, and so Bruni has been called the first modern historian. The foundation of Bruni's conception can be found with Petrarch, who distinguished the classical period from later cultural decline, or tenebrae (literally "darkness"). Bruni argued that Italy had revived in recent centuries and could therefore be described as entering a new age. One of Bruni's most famous works is New Cicero, a biography of the Roman statesman Cicero.
He was also the author of biographies in Italian of Dante and Petrarch. It was Bruni who used the phrase studia humanitatis, meaning the study of human endeavors, as distinct from those of theology and metaphysics; this is the source of the term humanists. As a humanist, Bruni was essential in translating into Latin many works of Greek philosophy and history, such as Aristotle and Procopius. Bruni's translations of Aristotle's Politics and Nicomachean Ethics, as well as the pseudo-Aristotelian Economics, were widely distributed in manuscript and in print. His use of Aelius Aristides' Panathenicus (Panegyric to Athens) to buttress his republican theses in the Panegyric to the City of Florence (c. 1401) was instrumental in bringing the Greek orator to the attention of Renaissance political philosophers (see Hans Baron's The Crisis of the Early Italian Renaissance for details). He also wrote a short treatise in Greek on the Florentine constitution. Bruni died in Florence in 1444, and is buried in a wall tomb by Bernardo Rossellino in the Basilica of Santa Croce, Florence. Bruni, Leonardo (2001). History of the Florentine People, Vol. 1, ed. and trans. James Hankins. Harvard University Press. ISBN 0-674-00506-6. Bruni, Leonardo (2004). History of the Florentine People, Vol. 2, ed. and trans. James Hankins. Harvard University Press. ISBN 0-674-01066-3. Ianziti, Gary (2012). Writing History in Renaissance Italy: Leonardo Bruni and the Uses of the Past. Harvard University Press. ISBN 978-0674061521. Bruni, Leonardo; Hankins, James (2010). History of the Florentine People, Vol. 1. Boston: Harvard University Press. Field, Arthur (1998). "Leonardo Bruni, Florentine traitor? Bruni, the Medici, and an Aretine conspiracy of 1437". Renaissance Quarterly 51: 1109–50. "Leonardo Bruni". In Encyclopædia Britannica Online. Reeser, Todd W.
Chapter 2 in Setting Plato Straight: Translating Ancient Sexuality in the Renaissance (Chicago: University of Chicago Press, 2016). Bruni, Leonardo (1610). Historiarum Florentinarum libri XII: quibus accesserunt quorundam suo tempore in Italia gestorum & de rebus Græcis commentarii (in Latin). Strassburg: Lazarus Zetzner. OCLC 288009927. Digitized from a copy at the John Adams Library. De studijs et litteris ad illustem dominam baptistam de malatesta tractatulus. Leipzig 1496. Epistola ad Baptistam de Malatestis. Theodorus de Bry was an engraver, goldsmith and publisher, famous for his depictions of early European expeditions to the Americas. The Spanish Inquisition forced de Bry, a Protestant, to flee, and he moved around Europe, starting from the city of Liège in the Prince-Bishopric of Liège, to Strasbourg, Antwerp and Frankfurt, where he settled. De Bry created a number of engraved illustrations for his books. Most of his books were based on the first-hand observations of explorers; De Bry himself acted as a recorder of the information rather than as an eyewitness. To modern eyes, many of the illustrations seem formal but detailed. De Bry trained under his grandfather, Thiry de Bry the Elder, and under his father, Thiry de Bry the Younger, who were jewellers and engravers working in copper plates. The art of copper engraving was the technology required at that time for printing images. In 1524 Thiry de Bry the Younger married Catherine le Blavier, daughter of Conrad le Blavier de Jemeppe, and their son, Theodore de Bry, became a jeweller and engraver. Theodore de Bry became a Protestant, and in 1570 was sentenced to perpetual banishment; he moved to Strasbourg, on the west bank of the Rhine. In 1588, Theodorus and his family moved permanently to Frankfurt am Main. Of his publications, the most famous is known as Les Grands Voyages, i.e.
The Great Travels, or The Discovery of America. He also published the largely identical India Orientalis series, as well as many other illustrated works on a wide range of subjects. His books were published in Latin and were translated into German and English. His illustrations were based on the paintings of the colonist John White. The book sold well, and the next year de Bry published a new one about the first French attempts to colonize Florida, at Fort Caroline, founded by Jean Ribault. It featured 43 illustrations based on the paintings of Jacques Le Moyne de Morgues. Le Moyne had planned to publish an account of his expeditions but died in 1587. According to de Bry's account, he had bought Le Moyne's paintings from his widow in London. The Latin and German editions varied markedly, in accordance with the differences in their estimated readership. Modern history (the modern period or the modern era) is the historiographical approach to the timeframe after post-classical history. It took all of history up to 1804 for the world's population to reach 1 billion. Contemporary history is the span of historic events from approximately 1945 that are relevant to the present time. Some events, while not without precedent, show a new way of perceiving the world; the concept of modernity interprets the general meaning of these events and seeks explanations for major developments. A fundamental difficulty of studying modern history is the fact that a plethora of it has been documented up to the present day, and it is imperative to consider the reliability of the information obtained from these records. In the pre-modern era, many people's sense of self and purpose was expressed via a faith in some form of deity. Pre-modern cultures are not thought to have created a sense of distinct individuality; religious officials, who often held positions of power, were the spiritual intermediaries to the common person.
It was only through these intermediaries that the general masses had access to the divine. Tradition was sacred to ancient cultures: it was unchanging, and it set the order of ceremony. The term modern was coined in the 16th century to indicate present or recent times. New information about the world was discovered via empirical observation, versus the use of reason. The term Early Modern was introduced in the English language in the 1930s to distinguish the time between what we call the Middle Ages and the time of the late Enlightenment. These terms stem from European history. In the contemporary era, there were various socio-technological trends. Regarding the 21st century and the modern world, the Information Age and computers were at the forefront of use, though not completely ubiquitous. The development of Eastern powers, such as China, was of note; in the Eurasian theater, the European Union and the Russian Federation were two recently developed forces. A concern for the Western world, if not the whole world, was the late modern form of terrorism. James Hankins is an intellectual historian specializing in the Italian Renaissance. He is the General Editor of the I Tatti Renaissance Library and a professor in the History Department of Harvard University. In 2012 he was honored with the Paul Oskar Kristeller Lifetime Achievement Award of the Renaissance Society of America. Hankins was born in Philadelphia, Pennsylvania. He took an A.B. in Classics from Duke University and an M.A. and M.Phil. at Columbia, where he worked with Eugene F. Rice and the historian of philosophy Paul Oskar Kristeller, serving as the latter's research assistant for six years. In 1985 he joined the faculty at Harvard University. Hankins's monographic work centers on the history of philosophy and literature; under his editorship the series has published over fifty volumes between 2001 and 2012 and sold close to 80,000 volumes. He is the author or editor of twenty volumes and more than eighty articles and essays.
Many of his writings are accessible online via Digital Access to Scholarship at Harvard. The Medici family originated in the Mugello region of the Tuscan countryside, gradually rising until they were able to fund the Medici Bank. The bank was the largest in Europe during the 15th century. The Medici produced three Popes of the Catholic Church (Pope Leo X, Pope Clement VII, and Pope Leo XI) and two regent queens of France (Catherine de Medici and Marie de Medici). In 1531, the family became hereditary Dukes of Florence; in 1569, the duchy was elevated to a grand duchy after territorial expansion. They ruled the Grand Duchy of Tuscany from its inception until 1737. The grand duchy witnessed degrees of economic growth under the earlier grand dukes, but by the time of Cosimo III de Medici, Tuscany was fiscally bankrupt. Their wealth and influence initially derived from the textile trade guided by the guild of the Arte della Lana. They, along with other families of Italy, such as the Visconti and Sforza of Milan and the Este of Ferrara, fostered and inspired the birth of the Italian Renaissance. The Medici Bank was one of the most prosperous and most respected institutions in Europe, and there are some estimates that the Medici family was, for a time, the wealthiest family in Europe. From this base, they acquired political power, initially in Florence and later in wider Italy. A notable contribution to the profession of accounting was the improvement of the general ledger system through the development of the double-entry bookkeeping system for tracking credits and debits. The Medici family was among the earliest businesses to use the system. The family came from the agricultural Mugello region, north of Florence, and is mentioned for the first time in a document of 1230. The origin of the name is uncertain. Medici is the plural of medico ("medical doctor"), also written del medico or delmedigo. It has been suggested that the name derived from one Medico di Potrone, a castellan of Potrone in the late 11th century.
The dynasty began with the founding of the Medici Bank. Until the late 14th century, prior to the rise of the Medici, the leading family of Florence was the House of Albizzi. In 1293 the Ordinances of Justice were enacted, which became the constitution of the Republic of Florence throughout the Italian Renaissance. Image captions: Roman pottery sherd from Arezzo, Latium, found at Arikamedu in India (1st century AD), evidence of the role of the city in Roman trade with India through Persia during the Augustan period (Musée Guimet); the late medieval mark of the Medici Bank (Banco Medici), used for the authentication of documents (Florence, Biblioteca Nazionale Centrale, Ms. Panciatichi 71, fol. 1r); Cosimo goes into exile (Palazzo Vecchio); Rossellino's tomb of the Cardinal of Portugal, Florence; Monument to Giannozzo Pandolfini, Badia Fiorentina, Florence; Tomb of Carlo Marsuppini in the Basilica Santa Croce in Florence.
0.999857
Back in November, my friends and I had a pre-Thanksgiving dinner. And by "pre-Thanksgiving" I mean basically an entire Thanksgiving dinner. It was a great time, but considering all the dishes that were made, there was a massive amount of preparation and then a ton of clean-up. I would love to do it again, but with a simpler meal. Could you help me come up with some ideas for meals that are simple, not too expensive, and could feed anywhere from 7-12 people? This is such a good question that I have turned it into a mini-series. Today's final installment: Comfort Foods. In recent editions, I took you through menu plans that would use a minimum of dishes. This one may require a bit more clean-up, but the recipes are all simple and delicious - and they're easy to double, triple, or quadruple to feed a crowd. Here we go! Prep time: 15 minutes. Cook time: 25-30 minutes. What you'll need: A cutting board and a sharp knife; a can opener; an oven-safe baking dish; something to stir with; your clenched fist pounding the crackers to smithereens OR a food processor; a lemon juicer OR your bare hands juicing a lemon. This dip has been a family tradition at our house since, oh, the 1940's, when somebody read the recipe on the back of a Ritz crackers box and made a few Italian suggestions. The consistency is more like bread pudding than what you're probably thinking of. It is deeply satisfying, and a snap to make. Prep time: 15 minutes. Cook time: 15 minutes. What you'll need: A cutting board and a sharp knife; garlic press; can opener; wooden spoon; cheese grater. You'll be chopping vegetables, opening cans, and grating some cheese while you let things simmer. It's a simple recipe and a great way to fill people up if you've got some big eaters coming. You can always make this in bulk and freeze what you don't end up eating. What you'll need: A cutting board and a sharp knife; pasta pot; colander.
This pasta dish is more elegant than opening up a jar of sauce, but almost as simple! Your guests will love it. Don't feel like springing for expensive parmesan? Grated mozzarella makes a great substitution. Prep time: about 30 minutes to make the apple tartlet filling; another 30 minutes to bake the tartlets. You can use those same 30 minutes to let the cider steep and mix up some cinnamon butter. The filling will take a little bit of effort, but assembling the tarts could not be easier. They will look bangin' coming out of the oven, and the hot buttered rum cider will knock people out!
0.996531
Why join someone else's sailing team when you can create your own? To win an around-the-world sailing race, it takes more than a fast boat, a skilled crew and a nine-month supply of Dramamine. It takes a dream. Oh yeah, and about $21 million. At least that's how much Mark Towill, a local kid from Kaneohe, and his college sailing buddy Charlie Enright scraped up for their bid to win the Volvo Ocean Race 2014-2015, which starts in October. Instead of trying out for an existing team, the young entrepreneurs built their own team from scratch (with substantial backing from Turkish medical device manufacturer Alvimedica). Towill and Enright have dreamed of racing around the world since they met in 2006 while making the Disney documentary Morning Light, which follows a team of young sailors racing from Los Angeles to Honolulu. You can read the whole story, "Ivy League Sailors Go From Scratch to Race Billionaires", at Bloomberg News.
0.999996
Heavy-metal band Metallica sued the Napster MP3-trading software company and a trio of universities today, charging that together they were responsible for massive violations of the band's copyrights. Is this issue a simple matter of copyright violation, or is there something more profound happening here? Napster is not software designed to violate copyright, but software which allows individual people to share files with each other, extending the basic facilities of the Internet without the arbitrary distinction separating higher-powered servers from the user's desktop. Attempting to shut down the Napster service is not going to achieve Metallica's goals, but will instead alienate their previous fans who are interested in moving to a more modern music distribution system. Metallica's lawyers claim that "Napster has built a business based on large-scale piracy". This is about as out-of-touch with reality as previous claims that the Internet was purely a tool that facilitated copying of copyright-restricted materials. The use of the word "piracy" itself is an invalid comparison, since it suggests that illegal copying is ethically equivalent to attacking ships on the high seas and kidnapping and murdering the people on them. Limiting the right to copy information via copyright is generally promoted as the only method to ensure that musicians get paid for their craft. In fact it is being used as a tool to centralize all aspects of human communication and remove basic freedoms of choice, and by extension it threatens freedom of speech. While Metallica might have been made rich by their music, many musicians are not able to make a living, and this is partly because of this centralization of the entertainment industry. When people are paying for the music of widely-promoted bands such as Metallica, this is money they do not have available to go towards other entertainment.
The widely-promoted bands have the huge overhead of that promotion, meaning that a smaller percentage of your entertainment dollar makes it to the musician with this entertainment form than with others. Services such as Napster provide valuable promotion for musicians for free, cutting out some of these expensive middle-men. There is a real threat to Metallica's bottom line here, but it is NOT from not getting paid for music shared over the Internet. It is from the loss of popularity they will suffer as a direct result of people having more freedom of choice in how they get entertained, as alternatives to the monopolizing music industry become more and more available to the population. Metallica is not protecting the ability of musicians to make a living at their craft; they are trying to impose one specific business model onto the entire music industry. I for one am starting early, and as a protest against Metallica's promotion of yesterday's centralized music industry, I am getting rid of all my Metallica CDs. I will be trying to find a home for them with someone else who might otherwise purchase them, reducing their sales in a minor way. I was once a fan and had a number of their CDs, but now have no interest in listening to this band, nor in supporting them financially in any way. This case is going to hurt the industry more than it will help, as more people are being forced to think about how the music they listen to gets to them. I know I will be moving my entertainment from these monopolists to independent artists and media. I was missing a few: Kill 'Em All, Ride the Lightning, Garage Inc., but given the way that Metallica wishes to do business and hold back the forward movement of the industry, I am not going to miss any of them. I have given away or sold the albums I owned in the hope that passing these copies onward will mean six fewer CDs bought by someone else and six fewer CDs' worth of support to this dinosaur band.
MP3.com - this is where I listen to most of my music these days. IceCast - open-source MP3 streaming. The various channels listed here and at ShoutCast are also of interest. Napster - I don't actually use Napster (I don't find their business model interesting), but I politically support their right to provide the tools they do. OpenNap: an open-source Napster server, which also contains references to other clients. The Free Music Philosophy extends this issue to the next level by questioning the restriction of any music sharing. EFF's Open Licenses, specifically the Open Audio Licence, is for audio/music what the GPL is for software. GNUtella is another distributed file-sharing system, more like FreeNet than the more centralized Napster systems. Courtney Love does the math - she does the math and comes to the same conclusions I have... except that she believes she agrees with Lars, when in fact I suspect she strongly disagrees. TVT sues Napster - another monopolist recording-industry player joins the list on the dark side of this debate. CBC seems to be doing a good job of covering this story and has many other story links at the bottom. A simple search on Linuxtoday will come up with many articles. Napster Shut Down July 27 - preliminary injunction granted against Napster. Will this help the cause of Metallica? It will make things worse: when people stop using Napster they will start to use more decentralized systems to share their files, systems that cannot be shut down because there is no centralized organization to attack through the courts. (Note: this injunction didn't actually happen in the final hour.)
0.940695
The latest Ken Burns documentary, The Vietnam War, should have been as widely watched as his Civil War documentary. Many people, however, including some of my friends who were in college then, and others, working class and patriotic, have found it too painful to watch. I am sorry that they missed this, because it was an extremely fair revisit that would have been impossible to make without the passage of 50 years.
0.942378
In 1997, the medical community was prescribing a new drug to treat type 2 diabetes. By March 2000, this drug was removed from the market because it was causing hepatitis and liver disease. Drugs in this family are still being prescribed to treat diabetes. Are there any risks? Troglitazone was allegedly a miracle drug. It decreased the incidence of type 2 diabetes by up to 75% compared with a control group. It helped relieve many complications that can come from insulin resistance, including certain ovarian diseases. It was prescribed for use with insulin, with other diabetes medications, and by itself as monotherapy. Only after 3 years did the FDA (Food and Drug Administration) realize that troglitazone caused severe liver damage. Troglitazone was available under the brand names Rezulin and Romozin. Troglitazone is in the thiazolidinedione family of diabetes medications. The thiazolidinedione family includes pioglitazone and rosiglitazone. Pioglitazone is marketed as Actos by Takeda Pharmaceuticals; rosiglitazone is marketed as Avandia by GlaxoSmithKline. Both of these medications are currently on the market. Neither Avandia nor Actos has been associated with an increase in liver disease. However, both of these medications may carry an increased risk of heart attack and stroke. It is important that you discuss any concerns that you have with your doctor before undertaking any treatment. Both Avandia and Actos can be used as monotherapies (by themselves) to help increase the body's sensitivity to insulin. They can also be used with insulin treatment for type 2 diabetics who are insulin dependent. Avandia and Actos can be used in combination with other diabetes medications, such as biguanides (such as metformin) and sulfonylureas. Avandia is available in pre-mixed combinations called Avandamet and Avandaryl.
In order to minimize your risk of side effects from diabetes medications in the thiazolidinedione family, such as Actos and Avandia, it is important to follow your doctor's directions. This means that you will have to follow your diet and exercise regimen. It also means that you will have to limit your alcohol intake. You will not be able to take a thiazolidinedione if you have a history of liver disease or a history of heart disease. Doctors typically monitor patients' livers when they are on Avandia, Actos, or another thiazolidinedione because of the previous scare with Rezulin. Your liver function can be monitored with regular blood tests, often each month or every other month. Be sure to visit your health care professional regularly for checkups. The basic element of diabetes management, no matter your treatment, is keeping a healthy diet and exercising. This can often prevent you from having to take medications to treat your diabetes, or it can help you minimize the amount of medication that you need. Prevention is often the best medicine of all. Vivian Brennan is the editor of The Guide To Diabetes. To learn more about diabetes medications, for both type 1 diabetes and type 2 diabetes, visit The Guide to Diabetes today.
0.999572
Is vindictiveness a good or bad trait? I've asked myself this question recently, as I quite simply believe I almost went beyond the point of no return. In the past I've had a couple of situations where I reacted in a way that I considered fair and justly deserved. After one such case, a person who heard about my reaction referred to me as a "vindictive little bugger". Now I didn't really see this as a bad thing, but rather saw it as me actively doing something rather than playing nice, sitting in the corner and keeping the peace. I never went overboard with anything and always played within the limits of the threefold rule, and generally used it as an authoritative guide. However, I was recently personally involved in a situation where I simply ignored all constraints and basically all common sense, and almost did something which I would probably have regretted later. It involved someone doing something rather underhanded to my family and the business they ran - warping the law, and also being an underhanded "snake" about the whole thing. Needless to say, I lost my cool, and I was almost prepared to attempt to end not just his existence but that of all those he holds dear, by any means necessary. It was only White Harmony who calmed me down, preventing me from basically shouting to the universe for all I was worth. Upon reflection, it's not a very nice picture. Looking back at myself, I cannot describe myself in any good light. I can only see myself at the time as a hate-filled little daemon who cared not one whit about karma or justice, only about retribution. It's rather disquieting when you realise this about yourself: you can kid yourself as much as you want that you are always developing your spirituality, and then you turn around and see in yourself exactly the same things you find most reprehensible in the world. Humans are vindictive... So, in essence, you are running about average. Don't worry. To err is human.
To err again is foolish, but is also human. To err a third time is stupidity, but is also human. The fact that you are vindictive is not the problem; the fact that you let it overpower you, and almost went and made a terrible mistake, is the error. We all make mistakes, some big and some small. But the true power of humanity is the idea that we can learn from these mistakes, and then do better the next time! Those who don't learn from their past are doomed to repeat it. The moral of this lesson? Don't be an idiot, loathing your past! Learn from it. Don't be sad because you are human, but smile that you know you can control yourself next time, and that he's going to get his... That's why the Law of Three is there in the first place. The Universe will balance itself out. Seriously, as long as you're being true to yourself and your inner voice isn't telling you it's wrong, you're probably not too far off track, if at all. Well, in a nutshell, vindictiveness is not nice. Sometimes you have to forgive people for their faults too. Like Runewulf said, be true to yourself, but I would add: don't reciprocate negativity, it's just not useful. Ah my dear, you are a good person. We all descend into shadow. You are not a bad person. It is wonderful that you had enough awareness to allow that side of you to emerge. It is a perfect time to heal it, now that it has surfaced just a short time ago. Personally, I have always followed the "forgive, but never forget" way of looking at it. This tends to make people think of me as vindictive, but I think of it as being aware of the fallacies of human behavior. I think that so long as you don't let it overrun your common sense, you're doing well. When it does, it's important to note it and make amends, in spirit if not in person.
0.991636
The description of Neighbor House Escape Game. The gameplay in Terrible Neighbor House is a kind of hide and seek. Your goal is to uncover all the secrets of your crazy neighbor without being seen by him - but it's no longer just fun. If you get caught, you are done! On a serious note, there is a dead body and a grave in his house as well. Try not to get caught by the neighbor's dog. Your strange, suspicious neighbor is absolutely crazy and scary. You decide to enter your neighbor's house; first you meet the neighbor's dog, so say hello to the neighbor's dog. You are cautious that your neighbor is up to something. There is a strange neighbor doing suspicious things in his house. He is absolutely scary. It is a grand survival game about a crazy scary neighbor's house. You have to explore the strange and suspicious things, find the strange scary neighbor's suspicious activities in his house, and make an escape plan before the strange, fat, scary neighbor catches you. You are assigned tasks to steal his favorite things, so sneak into the neighbor's house. You play as a normal man who lives in his own house, but keep in mind you should hide from the neighbor. This scary killer neighbor game gives you the lifetime experience of the scary neighbor who scares people with his evil powers. This solo neighbor game is full of ultimate destruction and maximum thrill with various tasks. You will definitely get addicted to this scary neighbor game in no time. Are you afraid of the dark and of haunted houses? Test yourself as if you were in a haunted house inhabited by scary neighbor ghosts. Scary sounds everywhere - be careful, everything is paranormal in this place. This scary neighbor survival game will give you the best virtual experience in a haunted scary town. It's the best survival simulator game among all haunted house games and scary survival escape adventure games. A truly horror-filled night right from the beginning, as these old mansions appear menacing.
Try your best to do the job! So good luck, and say hello to your scary neighbor!

Features:
• A scary neighbor who will not let you go! Mystery-based missions and challenging levels.
• High-quality 3D graphics! You will enjoy how your device shows this atmospheric game, and you will feel the scary neighbor's presence everywhere!
• Amazing sounds! A stunning soundtrack whose ominous melodies perfectly convey the atmosphere of the terrifying neighbor in this 3D thriller!
• Smooth and easy controls! A well-designed first-person camera system that allows you to move around your neighbor's house freely and look around without any delay!
• Amazing environments! Explore this mystical neighbor's house and try to learn all the crazy neighbor's secrets!

Download now FOR FREE, and say hello to your scary neighbor!
0.999999
Brazil is the most populous country in South America. What country is the SECOND most populous in South America? Argentina is the third most populous.
0.938117
Is Lucy Liu Lesbian? The Asian-American actress Lucy Liu is a multi-faceted personality who has showcased her talent in several fields. Her acting career began in 1991 with the television series "Beverly Hills, 90210", where she portrayed Courtney for an episode. After her debut, she was everywhere, as she was cast in many more movies and television series. The charming Liu is an artist as well, and showcases her paintings and photographs in art galleries. She has also done some directing in her career, having directed a movie and some episodes of television series. Lucy Liu was born to parents Cecilia and Tom Liu on December 2, 1968, in Jackson Heights, Queens, New York City, New York. She was raised alongside her two siblings, a sister named Jenny and a brother named Alex. Her father Tom was a trained civil engineer who sold digital clock pens, and her mother Cecilia was a biochemist. Her parents were middle-class people who worked many jobs to raise their children. They immigrated from Beijing and Shanghai. Liu grew up in a diverse neighborhood and spoke Mandarin and English from an early age. As for her schooling, she attended Joseph Pulitzer Middle School and later enrolled at Stuyvesant High School in 1986, from where she graduated. For her higher studies, she joined the University of Michigan in Ann Arbor, Michigan. During her time there she was a member of the Chi Omega sorority. Liu earned a bachelor's degree in Asian Languages and Cultures. Lucy Liu is professionally an actress, director, artist and voice actress who has involved herself in various fields. She debuted in the entertainment industry in 1991 with the television series Beverly Hills, 90210, where she portrayed Courtney for an episode. After her debut, she kept appearing in more television series. Liu's movie debut happened in 1992, when she was cast in a movie named Rhythm of Destiny.
Her notable movies include the action-comedy Charlie's Angels, in which she played the role of Alex Munday alongside two other lead actresses, Cameron Diaz and Drew Barrymore. Her recent projects include her 2018 movies Set It Up and Future World, where she was cast in the roles of Kirsten Stevens and The Queen respectively. Besides acting, she has also worked as a visual artist and showcases her own paintings, photography, and collages in art galleries. Her directing career started in 2014, when she directed a movie named Meena, the story of an eight-year-old Indian girl who was sold to a brothel. The movie was based on a true story and was screened in New York City. The American actress Lucy Liu is estimated to have a net worth of $16 million and takes home a salary of $130k per episode of the series she features in. For her role in the American procedural drama Elementary, she earned salaries of $125k and $130k per episode in 2012 and 2013 respectively. Liu put her Hollywood home back on the market in October 2018, reducing the price by $700,000 from the asking price of $4.199 million to a cheaper $3.5 million. Liu owns an LA apartment which she bought in 2004, and by renting it out she earns around $12k per month. Some of her movies were big hits at the box office. Cast members - Cameron Diaz, Drew Barrymore, and Bill Murray. Cast members - Jackie Chan and Owen Wilson. Cast members - Uma Thurman, Vivica A. Fox, and Daryl Hannah. The Asian-American actress Lucy Liu has done much charitable work in her lifetime. She raised funds for breast cancer research and education through the Lee National Denim Day fundraiser program. UNICEF appointed her as a U.S. Fund ambassador in 2004, and she has traveled to several places, such as Lesotho and Pakistan. For her work in raising funds, she was awarded the Asian Excellence Award for Visibility.
In 2011 she became a spokesperson for the Human Rights Campaign, as she is a major supporter of equality for lesbians and gays. Besides this, she actively takes part in spreading awareness about the global threat of iron-deficiency anemia and vitamin and mineral malnutrition in young infants and children. The charming actress Lucy Liu is not married as of yet, but has been involved in some relationships. She is a single mother to a son named Rockwell Lloyd Liu, who was born through a surrogate on August 27, 2015. Liu has been living the best single-mother life one could imagine. She chose that option since she was busy with work, and since his birth she has been the happiest person on the planet. She travels the world with her son. According to Page Six, Liu has been dating an Israeli-American man named Noam Gottesman for six years. He is a billionaire, and the couple have kept their relationship private from the media. She has been a member of the Chinese-American organization Committee of 100 since 2004 and is still involved with the committee. Liu underwent surgery for breast cancer in 1991 after doctors diagnosed her with the disease, and the lump was removed. She is a religious person and has studied various religions, including Buddhism, Taoism, and Jewish mysticism. Liu believes in spirituality and loves to involve herself in everything related to meditation. Liu took part in Tylenol's #HowWeFamily Mother's Day campaign. Comedian Awkwafina paid tribute to Lucy Liu while hosting the show Saturday Night Live. Here is a video of Lucy Liu featuring on the talk show Late Show with Stephen Colbert.
0.999979
The Daily Caller first broke a story yesterday afternoon reporting that an IT staffer for the Democratic National Committee (DNC) had been arrested at Dulles airport in the D.C. metropolitan area. The staffer, Imran Awan, was arrested by the FBI after wiring $283,000 to his native Pakistan. His wife and children had already left the country and were permitted to do so by airport authorities, who confiscated $12,000 found on the wife before letting them go. Debbie Wasserman Schultz, former DNC chief, only fired Awan when news of the arrest was made public. The story broke after 5 PM on Tuesday, but the liberal media only reported on it Wednesday afternoon. Does it take almost an entire day to vet this story, considering the details that the Daily Caller had already reported? ABC News reported on it after 6 PM, CBS News after 2 PM, the Washington Post published an article at 2:44 PM, and NBC News reported right before 2 PM.
0.999559
In the new film, Little Black Book, Brittany Murphy's character engages in some high-tech snooping on her new boyfriend. Based on this scenario, Date.com, an online dating service, asked its members: Have you ever snooped on your partner? Women were more likely than men to snoop, with 30 percent admitting to doing it "once or twice" compared to 25 percent of men. A bigger gap opened among those who answered "That's not my style": this was the choice of 34 percent of men compared to only 20 percent of women. The justification for women's snooping, however, could stem from the fact that 22 percent of them answered: "I have snooped, and found out information that ended the relationship." Only 14 percent of men chose the same answer. More men (21 percent) than women (16 percent) admitted to being tempted to snoop but not going through with it. The remaining 12 percent of women and 7 percent of men answered: "I regularly go undercover. You never know what you might find." "The Date.com results show that when women's intuition meets distrust, relationships can be put into jeopardy," said Brenda Ross, relationship advisor for Date.com. "We also have to realize that many people snoop when they already suspect their partners are up to no good," she added.
0.992518
The Guitar Column: Roger Mayer -- "He's an effects wizard, Harry!" Roger Mayer was probably the first of the custom effects pedal builders. In 1963, he began building fuzz boxes in his spare time while working for the British Navy's sound and vibration analysis division (read: submarine warfare science), and his pedals soon found themselves at the feet of Yardbirds guitarists Jimmy Page and Jeff Beck, who were, coincidentally, his childhood friends from the same neighbourhood. But it was his meeting Jimi Hendrix at London's Bag O' Nails club on 11th January 1967 that was to establish Mayer as the primo effects guru of his time. Hendrix was primarily a Dallas Arbiter Fuzz Face user, and Mayer introduced him to his newest creation, which he dubbed the Octavia. A pedal that produced a randomly generated higher octave depending on how hard the string was struck, the Octavia was deployed by Hendrix on Are You Experienced on the songs 'Purple Haze' and 'Fire' three weeks after their first meeting. It is interesting to note that the pedal used on this recording was a prototype (dubbed Evo 1) that did not incorporate a fuzz or drive circuit -- another custom unit provided the distorted signal for this purpose. Mayer also claims he consigned this prototype to the 'trash bin' after that historic recording! We should clarify that the Octavia produced by Tycobrahe Engineering in the '70s was not the same pedal invented by Mayer but was copied from a '69 variant of a Mayer Octavia owned by Keith Relf of the Yardbirds. It is also not clear why Mayer has not taken legal action over what he claims is a copy of his original Octavia concept and name. Joining Hendrix on his 1968 US tour, Mayer took care of Jimi's onstage sound, his effects and his guitars. According to Mayer, Hendrix's 'effects rig' for the tour consisted of a Cry Baby wah, an Arbiter Fuzz Face and/or a Mayer-designed fuzz, and an Octavia.
During Hendrix's short career, Mayer and Hendrix experimented with five or six different fuzz designs, with Mayer building numerous fuzz units and Octavias in the process, since pedals were always getting stolen -- sometimes taken directly off the stage by audience members, or sometimes vanishing into the overcoat pockets of stage hands, roadies and various hangers-on. On occasion they were given away as gifts by the guitarist. In 1968 Mayer began working for Olympic Recording Studios -- where Are You Experienced was recorded -- before venturing out on his own. In 1973 he established Roger Mayer Electronics to manufacture effects pedals and custom studio electronics.
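The octave-doubling behaviour described above can be illustrated with a toy signal-processing sketch. This models only the general principle behind octave-up effects (full-wave rectification), not Mayer's actual Octavia circuit:

```python
# Full-wave rectifying a tone folds its negative half-cycles upward,
# which doubles the fundamental frequency -- one octave up.
import math

sample_rate = 1000
freq = 10.0  # input tone, in Hz

signal = [math.sin(2 * math.pi * freq * n / sample_rate)
          for n in range(sample_rate)]  # one second of a pure tone
rectified = [abs(s) for s in signal]    # full-wave rectification

def count_peaks(samples):
    """Crude cycle counter: number of strict local maxima."""
    return sum(1 for i in range(1, len(samples) - 1)
               if samples[i - 1] < samples[i] > samples[i + 1])

print(count_peaks(signal))     # 10 peaks: the 10 Hz input
print(count_peaks(rectified))  # 20 peaks: an octave above
```

In the real pedal the added octave also depends on pick attack and interaction with the fuzz circuitry, which is why it sounds "randomly generated" rather than this clean.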
0.997856
JK Rowling has mocked Rupert Murdoch over his controversial tweets asserting that all Muslims (or, as he writes, "Moslems") must be held accountable for jihadi attacks. Murdoch, the News Corp boss, shared his views on Twitter on Saturday, after French police killed three Islamist hostage-takers at a Jewish supermarket and a printing warehouse. But Murdoch was not yet done, and he went on to condemn political correctness (because obviously that sort of ethical thinking that avoids marginalisation and discrimination always leads to bad news). Step forward JK Rowling, who said she would excommunicate herself if being born a Christian made Murdoch her responsibility. She then offered to assume responsibility for the Spanish Inquisition, Christian fundamentalist violence and Jim Bakker - an American televangelist who served five years in prison for fraud. In his New Year's message, Bakker blamed the legalisation of abortion and "religion being taken out of schools" as the reasons the US hasn't won a war recently. The author also shared a link to an online article stating that Al-Qaida kills eight times more Muslims than non-Muslims. Murdoch has not yet commented on Rowling's comments.