Every day, travel budgets are cut and organizations struggle to find ways to connect with one another to share and present information. Every day, workers are laid off or leave their organizations, which creates a recurring need to train new people. Every day, educators look for better ways to transfer knowledge so that it becomes more effective for their organizations or the workforce.

Virtual classrooms are becoming more popular because they address all of these challenges. What makes a virtual classroom effective in the first place is that the learner does not have to travel to a physical location to consume knowledge or materials. Learners and instructors can all connect to a virtual classroom at the same time through a computer, tablet, smartphone, or even a properly configured television. Combined with other technologies such as webcams and VoIP, learners can see and hear much as they would if they were physically in the same room. This allows for real-time presentations, questions and answers, and peer-to-peer collaboration. None of the traditional facilitation techniques are lost; in fact, with virtual classroom technology some of these techniques, like peer-to-peer interaction, are enhanced. The ability to come together in this virtual way saves organizations tens of thousands of dollars every year.

Virtual Classroom Benefits

In many cases, setting up a virtual learning space has become fast and easy, which allows organizations to be flexible in meeting the demand for training. Much of this demand is driven by workforce turnover and new hires. Being able to quickly set up and adjust virtual class sessions to address ever-changing training demand gets new hires trained faster and productive sooner.

The benefits do not stop at faster productivity; learning effectiveness improves as well. Virtual classrooms have become part of a blended learning approach. Research from the U.S. Department of Education in 2009 found that, on average, students in "online learning conditions performed better than those receiving face-to-face instruction." Because virtual classrooms can be part of a larger virtual community, other learning tools like on-demand courses, assignments, assessments, and prescriptive curricula all contribute to this blended learning approach.

At the end of the day, virtual classrooms can save your organization money, allow greater flexibility in the delivery of your training, and make your workforce more productive and effective.

Barbara Means et al., "Evaluation of Evidence-Based Practices in Online Learning" (U.S. Department of Education, 2009)

Photo credit: austinevan on flickr

Post contributed by Paul Ingallinera, Learning Solutions Consultant
Health care providers have historically been hesitant to embrace mobile technology when it comes to building apps that directly impact patient care. Until recent years, apps have been seen as a toy rather than a tool, and for some health care administrators that stereotype has been enough to deter them from pursuing apps.

In this week's reading, however, I came across a study being done by psychologist Kenneth R. Weingardt at Northwestern University's Feinberg School of Medicine. At Northwestern, they have been running clinical trials using responsive websites (websites that work well on many devices) and mobile apps. Two examples are the ThinkFeelDo website and a suite of apps called Intellicare. (Intellicare is currently available for Android devices.) The advantage of these technologies is that, by using content such as text, video, and animation, they can provide virtual coaching to help patients walk through the exercises in the app, or use the app in other ways to further their treatment. At this time, the apps Northwestern is studying are aimed at helping mental health patients with cognitive behavioral therapy. Thus far, the researchers have found that the apps do seem to reduce the duration of depressive episodes.

Looking to the future, there are some very real advantages to developing apps that will benefit the patients of our health care systems. Some of these benefits are:

Although the medical profession has been slow to adopt this technology, the simple truth is that giving patients tools to help them cope is better than dealing with their frustrations over wait times, appointments, and lack of attention. Alleviating even some of these stressors will help both patient and health care provider. Giving patients some avenue to actively engage in their own health care brings them into the process instead of making them feel like they are only the object of the process. Joseph Siemienczuk, MD, of Northwestern said, "We have to follow the communication preferences of the community and it is clear that their communication preferences have moved to mobile technology… As we pursue effectiveness, moving patient engagement activities to mobile technology is an imperative."

To learn more about how information technology like mobile apps can help you facilitate better health outcomes for your patients, contact Apex by phone at (800) 310-2739 or email firstname.lastname@example.org today. We would be happy to consult with you concerning your IT needs.
Coding and Typing

In the Austin American-Statesman over the weekend was an article covering the efforts of Austin's K-12 schools to produce students with coding skills. I'm a fan of teaching coding sooner. I think of it as a language skill, along with English, Spanish, Mandarin, Math, Music, and Design. And you can read about my thoughts on the efforts of Magellan International School to address learning as language acquisition to get a sense of how I feel about the subject. If we want to produce children prepared for the future, these language skills are crucial – and coding may be the most important of all!

But there are bumps along the road:

When leaders at the Austin Achieve charter school installed a new computer programming course this year, they knew they would hit a few snags along the way. They didn't realize that something as basic as typing would be one of them.

"For those (students) that had the technology, they spent more time on their phones than on a keyboard," said John Armbrust, the school's executive director.

So on a recent Tuesday morning, as the first section of ninth graders filed in for their coding class, they pulled out their Chromebooks and started off with a few minutes of typing exercises. Armbrust said the school will review its classes so students get more typing instruction in earlier grades. After all, he said, each one of those kids will eventually take the computer programming course; it's part of the charter school's core curriculum.

It turns out that typing is a foundational skill for communicating in the digital age – and in particular for coding. Typing with one finger, or with two thumbs on your phone, may suffice for impromptu texts, but it is no way to write and debug code.

One of the interesting parallels I see in the BPM space is this same layering of expertise. Everyone wants to know what skills they need to do BPM. The answer is that you need to develop the foundational skills and fluency – and then get practice applying those skills to BPM-level problems. This topic came up at bpmNEXT, in the context of how you find, recruit, and develop talent for BPM.

I could list out the technical skills you might need to be a BPM developer:
- BPM Engine / Suite of choice

And I could list out the business skills:
- voice of customer

And many more. But moreover, the foundational skills are critical thinking and an understanding of business fundamentals like ROI, customer experience, and customer acquisition costs. And then, if you're going to implement the BPMS as well, you need coding skills.

I love this quote from the article:

"Using technology and not being able to code is like knowing how to read and not knowing how to write," Emily Reid, director of education at Girls Who Code, said during a South by Southwest Interactive panel last month.

Let me translate for our BPM audience: knowing how to use the process without knowing how to model business processes is like knowing how to read and not knowing how to write.
How do tornadoes form? Here's what the NOAA National Severe Storms Laboratory says:

The truth is that we don't fully understand. The most destructive and deadly tornadoes occur from supercells, which are rotating thunderstorms with a well-defined radar circulation called a mesocyclone. (Supercells can also produce damaging hail, severe non-tornadic winds, unusually frequent lightning, and flash floods.) Tornado formation is believed to be dictated mainly by things which happen on the storm scale, in and around the mesocyclone. Recent theories and results from the VORTEX2 program suggest that once a mesocyclone is underway, tornado development is related to the temperature differences across the edge of downdraft air wrapping around the mesocyclone.

Supercells? Mesocyclones? VORTEX2? What is this, an L. Ron Hubbard novel? Clearly the NSSL should have stopped at "The truth is that we don't fully understand." But by doing a little online research and measuring scientific metrics such as total page views and "likes," we have narrowed down the true cause of tornadoes to a pair of possibilities. The case for each is presented in the videos below. Which makes more sense to you, or do both explanations pass the "reasonable" test?

This story, "Tornadoes: U.S. military weather control or God's punishment for gays? You decide!" was originally published by Fritterati.
The Fusion of Big Data and Little Babies

Using big data analytics to save the lives of newborns

There are a range of diseases and syndromes in newborns that, if undetected and untreated, can be fatal or debilitating. For example, phenylketonuria (PKU), an amino-acid metabolic disorder, can lead to severe developmental issues. PKU was one of the first diseases regularly tested for by physicians, and when it is detected early enough, children with PKU can manage their diet and lead normal lives (indeed, I heard of one example of a child who was at the top of the class throughout school and is now a physician).

Recently, I attended a function held by the Newborn Foundation to commemorate the work of newborn screening advocates. It was interesting to meet this group, all of them passionate about the welfare of newborns. I was also given an inside view of how hard it is to change health policy, even just to add one new screening protocol to the list (the ramifications are huge).

But the most interesting thing to me was how data-driven these folks were. They knew that they needed hard numbers to show why a new test was necessary. They used data to inform policy. For example, they needed to show the cost of testing versus the cost of not testing, such as emergency visits, reduced quality of life, and the potentially increased care of diseased children. One person was comparing the price of annual screening to the cost of providing Lipitor. Another said, "little test, big bargain."

Also attending the event was Dr. Carolyn McGregor, from the University of Ontario Institute of Technology. Dr. McGregor built a system at Toronto's SickKids hospital to monitor newborns and predict dangerous infections 24 hours earlier than traditional visual methods (watch this great video about her work). The Newborn Foundation recently added one more screening to the list, for Complex Congenital Heart Defects, and with data at the center of the Foundation's efforts, it should be no surprise that interest in Dr. McGregor's work was high. People asked her how her system could be used for screening of heart defects, using an algorithm to predict heart function or circulatory issues. They also asked how her system could perhaps be used in developing countries, where screening could have an even greater impact.

Use that data

Dr. McGregor said it best when she said that "the babies give this data to us freely," so we should use it to make their lives better. Which got me thinking: organizations that don't understand that the future of healthcare is data-driven will fail to provide the best care they can possibly give. We must do our best to make use of the valuable data these babies, and all our patients, so freely give us, so that we can make them healthier, happier, and productive members of society.

Do you feel that you're data-centric? Do you think healthcare will be more data-centric? How do you use data in your organization? Do you have stories of how better use of data led to better outcomes? Let us know!

View this short presentation for an overview of the University of Ontario case study:
After I described the actions of BBC Click's production team in broadcasting their botnet special as "irresponsible, unethical, and almost certainly illegal" (ComputerWeekly, 17 March 2009), I have heard more than a few questions. The number one question from people outside the world of information security was this: "Why does it matter?" Even if the BBC Click producers "technically" committed a crime, why should anyone care?

As a university lecturer in legal aspects of information security, I take this question seriously. Sometimes it is not enough for us to say that an action was technically a "crime." The law is supposed to reflect societal values. We expect our government to take varying actions against crime depending upon the seriousness of the criminal acts. Thankfully, not all criminal acts produce harm to people or property. A person who fires a rifle blindly into a crowded public square without hitting anyone has "technically" committed a crime. A person who drives an automobile at 75mph on a motorway without causing an accident has also "technically" violated the law. While both are crimes, we believe that one deserves harsh intervention by the police and courts while the other might reasonably be overlooked. We explain the different treatment by reference to the element of risk or negligence involved. We know that firing a weapon blindly in a city could very easily cause mayhem and death. As a society we are outraged that someone could treat other people in such a cavalier fashion, and we demand investigation and prosecution. For "minor" speeding offences, however, we take a more relaxed stance. We do not always demand strict compliance.

Although the producers of BBC Click took pains to "educate" us about how botnets are meant to work, they failed to discuss the potential risk of their actions to the 21,000 computers already infected by the botnet Trojan. Recall what we learned while watching the programme. Acting without permission, BBC Click producers instructed 21,000 computers around the world: to send spam; to launch a coordinated DDoS attack; to change the "wallpaper" of all 21,000 host machines; and finally to de-activate the Trojan infection on all 21,000 machines.

Anyone who works in a large corporate IT environment and has ever attempted to update, upgrade, modify, patch, or remove software from a large group of computers using remote access tools will be able to explain that things often go wrong in the process. There is a risk that the "target" machine whose contents are altered (for whatever reason) might fail. The failure could be minor or catastrophic. The chances of failure for each individual machine are relatively small, but consider for a moment that the BBC Click team was tinkering with more than 21,000 machines. These machines were almost certainly running outdated operating systems such as Windows 95, and it is unclear what level of technical sophistication the botnet developers used with regard to so-called "de-activation" instructions. Even if the chances of inconvenient or catastrophic failure are only 1 in 100, this suggests that 210 machines somewhere in the world "fell over" in the cause of well-intentioned (if cack-handed) journalism. We have no way of knowing what havoc this may have wreaked.

We don't know how many of these 21,000 machines are used in a hospital or a doctor's office; how many are used in safety-critical systems; how many represent the only online education tool for a rural school; how many are used by small businesses in remote parts of the world; and how many are the only point of access in a remote village to global information sources – like the BBC. I wonder whether the producers of BBC Click considered any of this before they fired 21,000 "bullets" around the world.

- Robert Carolina is a US lawyer and an English solicitor who specialises in the law of information technology. He is also a Senior Visiting Fellow with the Information Security Group, Royal Holloway University of London, where he teaches in the information security MSc programme. Opinions expressed are his alone.
As the exascale era looms, a number of research groups are pinpointing the bevy of barriers that the next generation of elite systems will bring. From his vantage point as Argonne National Lab's Associate Director for Computing, Environmental, and Life Sciences, Rick Stevens has identified the key challenges of the leap to billion-core systems. In a recent Department of Energy report on the coming challenges and benefits of exascale computing, he said that while the magnitude of the programming challenges ahead is daunting, power is also a major concern.

According to Stevens, a billion-processor computer, even relying on the most efficient technologies we have now, would gobble over a gigawatt of electricity. To put that in context, even the top-performing utility power plants in the United States generate only a few gigawatts, with most producing less than four. While he contends that GPU computing is one way to curb this incredible hunger for energy, as it stands now, a single exascale machine could require its own dedicated power plant.

Outside of programming and energy consumption, the other barrier to exascale systems is general reliability. He says that with the vast increase in core count comes a vast possibility of failures, noting, "If you just scale up from today's technology, an exascale computer wouldn't stay up for more than a few minutes at a time." That implies a failure rate of once per week or more, considering that Lawrence Livermore National Lab's IBM BlueGene/L drops off about once every two weeks.

With the massive boost in power requirements and reliability worries, the role of hyper-smart cluster management software will become more critical. This is a topic that Bill Nitzberg, the self-described "cynical engineer" who serves as CTO for Altair Engineering's PBS Works division, is quite passionate about, even though he's "heard it all before" in the pre-petascale days.

Even in high performance computing, where every element is being pushed to the limit, it can be a little tough to get excited about the middleware piece of the HPC race. Of course, without the behind-the-scenes scheduling and workload management, all the raw clustered compute power in the world is essentially useless. And when it comes to this scale of computing, where all the challenges that Rick Stevens alluded to can eventually be mitigated (at varying levels) via effective management, middleware might get more attention than it used to.

Nitzberg argues that the next generation of supercomputers will need to be brainier, not brawnier. He might be biased, coming as he does from the cluster management perspective, but the fact is that the two most problematic elements of exascale systems outside of programming (power and reliability) can have significant solutions at the management layer. He says that instead of focusing our attention on making the next generation of supercomputers simply use less power, there also needs to be a focus on making very wise use of whatever power is available. As Nitzberg said, "When I think about the future of computing, whether it's GPUs, clouds, whatever, I see a lot of trends—the issue of power is no trend, this is an ongoing problem we'll need to face. When I think of making the future generation of computers smarter, the computer scientist in me thinks about optimization and the environmental side of me thinks about power."

Nitzberg puts this idea of wise power management over mere reduction in context, noting that there needs to be a way for operators of ultra-scale machines to reconsider what workloads they choose to run and when they run them. This idea of picking jobs wisely to maximize power and cost efficiency might sound simple on the surface, but he argues that many systems need to be shown to funders as running at peak capacity. He sees this as a concept that might suit the funding powers that be in the short term, but over the long term the costs of operating such systems will spiral out of control. Running at 99 percent capacity isn't always necessary, and it sure isn't cheap.

Many HPC management software layers already provide energy-aware features. For example, Nitzberg described the "Green Provisioning" feature in the PBS Professional product. This, like Platform Computing's Dynamic Power Optimizer, uses sophisticated monitoring tools that shut down, restart, and reroute according to temperature and other factors in large data center environments. According to PBS Works, this solution was "validated by several large-scale customers and has lowered their energy use by up to 20 percent."

Louis Westby from Platform Computing told us, "There is already a lot out there to help users power up and down, but there are innovations missing in a lot of those solutions. Temperature level monitoring across an entire data center at that scale to ensure there is a steady influx of power and management of heat are, of course, very important when it's at [exascale]."

Platform Computing, Bright Computing, and PBS Works already have power management solutions available that power down systems according to fault detection, and they also monitor data center temperatures to reroute workloads according to those readings. Open source solutions are also trying to keep pace, but as Nitzberg told us, the open source tools available today cannot keep pace with the many demands that will come at exascale. Platform's Westby said that their power management solutions are very similar to those of PBS and, indeed, as Nitzberg noted, there are still innovations to be made before any workload suite would be ready to tackle the challenges of exascale. Westby noted that they have an eye on the future in terms of smart energy management. She says that one area that affects energy consumption is making sure that the system is able to intelligently handle temperature fluctuations and focus on fail-proof failover mechanisms.
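Neither vendor publishes the internals of these features, but the general shape of a temperature- and power-aware scheduling policy can be sketched in a few lines of Python. The node names, thresholds, and rules below are invented for illustration; they are not taken from PBS Professional or Dynamic Power Optimizer.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        inlet_temp_c: float   # current inlet temperature reading
        idle_minutes: int     # time since the node last ran a job

    MAX_TEMP_C = 30.0         # assumed threshold: route work away from hotter nodes
    IDLE_POWER_DOWN_MIN = 30  # assumed threshold: power down nodes idle this long

    def pick_node_for_job(nodes):
        """Place new work on the coolest node that is under the temperature cap."""
        eligible = [n for n in nodes if n.inlet_temp_c < MAX_TEMP_C]
        return min(eligible, key=lambda n: n.inlet_temp_c) if eligible else None

    def power_down_candidates(nodes):
        """Suggest idle nodes that could be switched off to save energy."""
        return [n.name for n in nodes if n.idle_minutes >= IDLE_POWER_DOWN_MIN]

    nodes = [Node("n001", 26.0, 45), Node("n002", 31.5, 5), Node("n003", 24.5, 0)]
    print(pick_node_for_job(nodes).name)   # n003: coolest node below the cap
    print(power_down_candidates(nodes))    # ['n001']: long-idle node to shut off

A production scheduler obviously balances many more constraints (job priorities, reservations, failover), but decision logic of this kind is what the "smarter, not just lower-power" argument above comes down to.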
London Area Code 226

From the 1960s up to 1990, the London area code was 01. This area code was used until its number inventory was exhausted. The supply of London phone numbers was supplemented in 1990 by splitting London into two parts: inner and outer. Inner London was given area code 071, while outer London was given 081.

The London area code was modified again on PhONEday in 1995. Both of London's area codes were given the format 01X1, where the X is 7 for inner London and 8 for outer London. Both area codes grew from 2 digits to 3 digits; the extra length came from inserting the number 1 into each code. London local numbers during this period were 7 digits long.

PhONEday was intended to produce a considerable supply of phone numbers. In the case of the London area code, however, it succeeded only in creating a significant pool of unused numbers that was used for other services. The failure of PhONEday led to another event intended to accomplish what it had failed to do: the Big Number Change of 2000. This change altered London's area code to 020. The new arrangement, which replaced the previous 4-figure area codes with the shorter 020 code, was able to augment the supply of phone numbers in the London area. The new area code is used by both inner and outer London, consolidating the different parts of London under area code 020. London local numbers were increased to 8 digits.

In June 2005, a new range of London local numbers was released. These local numbers start with the digit 3, so the full number format looks like this: 020 3XXX XXXX. This has resulted in the misconception of a "203 area code." The misinterpretation that 203 is an official London area code comes from callers' perception that London still uses 4-digit area codes.

All these changes in the London area code were driven by the ever-increasing demand for new phone numbers. Businesses and companies need multiple phone lines and networks for their telephones, faxes, and Internet access, and households employ the same communication gadgets. Another contributing factor to this considerable growth in the need for phone numbers is the proliferation of mobile phones; some individuals, particularly businesspeople, own several.

No one wants to change numbers if it can be avoided. Problems such as informing previous contacts about the new number are just one of the complications caused by switching. Based on what has happened with London's area codes in the past, the 020 3 number range will eventually be depleted as well. Possessing a business phone number that does not rely on area codes would therefore be very favorable, and you can get this type of phone system through RingCentral. This is just one of the many benefits you can derive from the RingCentral communications system, whose significant features include Virtual PBX, Call Transfer, Call Blocking, and Internet Fax.
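As a quick illustration of the current format described above, the short Python sketch below splits a modern London number into the 020 area code and its eight-digit local number. The sample numbers are invented, and the regular expression is only a rough format check, not a full UK numbering-plan validator.

    import re

    # Area code 020 followed by an 8-digit local number
    # (local numbers beginning with 3 were released in June 2005).
    LONDON_NUMBER = re.compile(r"^020\s?(\d{4})\s?(\d{4})$")

    def split_london(number: str):
        """Return (area_code, local_number), or None if the format doesn't match."""
        match = LONDON_NUMBER.match(number.strip())
        if not match:
            return None
        return "020", match.group(1) + match.group(2)

    print(split_london("020 3123 4567"))  # ('020', '31234567') - the area code is 020, not 0203
    print(split_london("020 7946 0018"))  # ('020', '79460018')
    print(split_london("0203 123 456"))   # None - too few digits for a real London number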
Flame retardants are additive materials used to inhibit flames and to stop the spread of burning. Flame retardant chemicals retard the effect of fire and include halogens, phosphorus, nitrogen-containing compounds, and metallic hydroxide compounds. The North America flame retardant market was valued at $1,139.34 million in 2011 and is projected to reach $1,628.75 million by 2017, at a CAGR of 6.2%. Brominated flame retardant is a major type of flame retardant and is in huge demand in North America. Flame retardants are widely used in the building & construction industry, which is growing in North America.

Flame retardants do not modify any property of the material to which they are added. However, flame retardant chemicals inhibit or delay the burning of materials, and thus they are indispensable for protecting electrical appliances, construction materials, and textiles. With increasing fire safety standards across the globe and the growing use of flammable materials, flame retardant chemicals are gaining more and more importance. Flame retardants are widely used in applications such as building and construction; electrical and electronics; wire and cable; automobile and transportation; and textiles.

The North American chemical industry is a significant part of the region's economy. The industry is divided into four segments: base chemicals, specialty chemicals, pharmaceuticals, and consumer chemicals. The U.S. is the largest chemical producer in North America, followed by Mexico and Canada. In the past, most of North America's chemical industry growth was driven by domestic sales, but these days growth depends on both the domestic and the export markets. The U.S. is also the major consumer of flame retardants in North America, accounting for 85% of total consumption, followed by Mexico and Canada.

The key countries covered in the North America flame retardant market are the U.S., Mexico, and Canada. The types studied include aluminium trihydrate, antimony oxides, brominated, chlorinated, organophosphorus, and others. Further, as part of its qualitative analysis, the North America flame retardant market research report provides a comprehensive review of the important drivers, restraints, opportunities, and issues in the flame retardant market. The report also provides an extensive competitive landscape of the companies operating in this market, including company profiles and the competitive strategies adopted by various market players, such as AkzoNobel N.V. (The Netherlands), Albemarle Corporation (U.S.), Arkema S.A. (France), and BASF S.E. (Germany).

Along with the market data, you can also customize the MMM assessments to meet your company's specific needs. Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:

- Market size and forecast (deep analysis and scope)
- Consumption pattern (in-depth trend analysis), by application (country-wise)
- Country-wise market trends in terms of both value and volume
- Competitive landscape with a detailed comparison of each company's portfolio, mapped at the regional and country level
- Production data with a wealth of information on flame retardant raw material suppliers as well as producers at the country level
- Comprehensive data showing flame retardant plant capacities, production, consumption, trade statistics, and price analysis
- Analysis of forward chain integration as well as backward chain integration to understand the business approach prevailing in the North America flame retardant market
- Detailed analysis of competitive strategies such as new product launches, expansions, and mergers & acquisitions adopted by various companies, and their impact on the North America flame retardant market
- Detailed analysis of various drivers and restraints and their impact on the North America flame retardant market
- Upcoming opportunities in the flame retardant market
- SWOT analysis for top companies in the flame retardant market
- Porter's five forces analysis for the flame retardant market
- PESTLE analysis for major countries in the flame retardant market

Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirement.
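As a rough sanity check on the headline figures quoted above (an illustrative calculation, not part of the report itself), the 2011 market value can be projected forward at the stated CAGR:

    # Project the 2011 value forward at the stated compound annual growth rate.
    value_2011 = 1139.34   # USD million
    cagr = 0.062           # 6.2% per year
    years = 6              # 2011 -> 2017

    value_2017 = value_2011 * (1 + cagr) ** years
    print(round(value_2017, 2))  # ~1634.6 USD million, broadly in line with the reported $1,628.75 million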
Not so long ago, the standard way of looking for a malware infection was to simply monitor web traffic. By looking, for example, for HTTP requests to google.com/webhp – a typical Internet connectivity check – we could easily pinpoint a ZeuS-infected machine. Problem solved.

Sadly, cybercriminals use increasingly sophisticated methods of communication, such as Domain Generation Algorithms (DGAs), designed to evade detection in the growing noise of web traffic and to prevent the takedown of a botnet. DGAs are algorithms used by malware to generate domain names, which then serve as rendezvous points with their controllers. They are used as a method to restore communication when a controller is offline. As cybercriminals change and improve their evasion techniques, monitoring capabilities also have to change and become more sophisticated.

The focus in monitoring has always been on analyzing successful connections, whether an HTTP connection or an email. Now, we need to mine DNS traffic data to detect threats and pinpoint their sources. DNS monitoring takes us much further, providing information on failing attempts – the red flags of suspicious activity. The good news is that since DNS is an essential component of the Internet, there is no way cybercriminals can get around it. Most activities they engage in online will create DNS traffic. Most importantly, since their uses of DNS are atypical, this becomes a weakness that can be used against them.

Capturing and creating usable blocks of data

DNS traffic is rich in information. When captured correctly, it tells us what domain a computer attempts to connect with. In a typical situation, someone requests a specific domain name and it translates to an IP address. A successful request will create HTTP traffic towards that domain. But if a domain is entered incorrectly, the request will fail, generating an NXDOMAIN response.

Malicious DNS traffic does not follow this typical sequence. A malware infection will generate hundreds of requests for domains at once, attempting to connect to its command and control (C&C) server by guessing which domain is controlled by the cybercriminals. This method essentially works through a predetermined list of controllers until it reaches the active one. The result is a load of noise, which is detectable. High volumes of NXDOMAIN responses are red flags for malware threats.

To avoid sending up these red flags, malicious software communicates with new domains intermittently to frustrate detection efforts. The random nature of this circumvents static timing analysis of traffic. This "agile" DNS method evades blacklists, the historical records of malicious domains that have been used in the past.

With every Internet transaction creating DNS traffic, monitoring is obviously not a small task. Normal DNS traffic typically generates about 12 NXDOMAINs per hour. At one client, we were able to detect and resolve an infection almost instantly when our DNS monitoring uncovered 400 NXDOMAINs per hour. It is essential to use a sophisticated and comprehensive system to collect the DNS traffic captured through monitoring sensors. PassiveDNS aggregates duplicate traffic, keeping the logs small without losing the volume information. Most importantly, it keeps track of requests and responses and splits the NXDOMAINs essential to DGA detection into a separate log. This dramatically reduces the amount of traffic to be analyzed, and allows focusing on the 10% of the traffic that fails.
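As a simple illustration of this volume-based red flag, the sketch below counts NXDOMAIN responses per internal host and flags anything far above a normal baseline. The log format and alert threshold are assumptions made for the example, not values from any particular monitoring product.

    from collections import Counter

    # A normal host generates on the order of a dozen NXDOMAINs per hour;
    # hundreds per hour suggests DGA-style command-and-control discovery.
    NXDOMAIN_ALERT_THRESHOLD = 100  # assumed per-hour threshold for this sketch

    def flag_noisy_hosts(nxdomain_sources):
        """nxdomain_sources: one source-IP string per NXDOMAIN response seen this hour."""
        counts = Counter(nxdomain_sources)
        return {ip: n for ip, n in counts.items() if n > NXDOMAIN_ALERT_THRESHOLD}

    events = ["10.0.0.5"] * 400 + ["10.0.0.7"] * 12 + ["10.0.0.9"] * 8
    print(flag_noisy_hosts(events))  # {'10.0.0.5': 400} - the host worth investigating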
Finding the source of malicious DNS traffic

While monitoring will detect a malware infection, analysis of the data will lead to the source, and finding the infected host is always our goal. There are various tools and methods used to analyze DNS traffic for DGA patterns and to search DNS logs for specific queries of known or suspected botnets. Proven analysis tools that focus only on failed DNS requests can quickly search for malicious domains and return only a low percentage of false positives. When using these tools to focus on a specific data set, the DGA domains stick out like a sore thumb.

Another method for analyzing NXDOMAIN logs is searching for long domains. Legitimate domains are typically less than 12 characters long, and usually as short as possible in order to be memorable. A cybercriminal may direct his bots to use longer, illegitimate domains for communication, making them obvious and easier to find. For example, a 14-character domain made up of only consonants will be automatically flagged as malicious by the detection system. The most well-known and widely spread malware is ZeuS; this malware family has infected millions of PCs. The typical ZeuS query is 33 characters long or more, and ends with a .ru, .com, .biz, .info, .org or .net domain extension.

In addition to analysis tools, there are specific methods that can be used to search through the NXDOMAIN logs. There are three domain characteristics that we look for:

- Domain length – broken into 6 different length categories.
- Character makeup – alphanumeric, characters only, and consonants only.
- Top Level Domains (TLD) – 272 variations.

This method constantly looks for any combination of these three characteristics – a bit like a slot machine rotating its reels waiting to hit the jackpot.

Random DGA domains:
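Such randomly generated names can be screened with the heuristics described above. The rough Python sketch below uses the length cut-offs mentioned in the article (14-character consonant-only names, 33-character-plus ZeuS-style names); the test strings are invented for illustration rather than taken from real botnet traffic.

    import re

    CONSONANTS_ONLY = re.compile(r"^[bcdfghjklmnpqrstvwxz]+$")
    ZEUS_TLDS = {"ru", "com", "biz", "info", "org", "net"}

    def looks_like_dga(domain: str) -> bool:
        """Very rough screen based on length and character makeup (illustrative only)."""
        name, _, tld = domain.rpartition(".")
        if len(name) >= 14 and CONSONANTS_ONLY.match(name):
            return True              # long, consonant-only labels stand out
        if len(name) >= 33 and tld in ZEUS_TLDS:
            return True              # very long ZeuS-style query names
        return False

    for d in ["mxkqzrtplwbndsg.biz",
              "helpnetsecurity.com",
              "a9f3k2c8q1z7x5v4b6n0m2l8p3r7t1y5w9d4.info"]:
        print(d, looks_like_dga(d))

A real detection system would combine checks like these with the length categories, TLD statistics, and volume thresholds discussed earlier, rather than relying on any single rule.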
News & Notes for Certified Professionals

Visually Imaging Virtual Threats

Images are often used to make an impression. In many high schools, pictures of blackened lungs or grisly car accidents are shown to warn students about the dangers of smoking or of not wearing a seatbelt. Whether or not this technique is a scientifically proven deterrent, it makes enough of an impact that educators keep using it year after year. All sorts of invisible threats have received artistic interpretations, whether it's seeing your brain on drugs or your arteries clogged after too many burgers.

Internet security research firm MessageLabs has done the same thing with IT security threats. As part of a new ad campaign, it provided Romanian visual artist Alex Dragulescu with the binary code of such threats as viruses, spam, phishing attacks, spyware, malicious links and Trojans so he could render visual interpretations of each.

Maksym Schipka, senior architect at MessageLabs, feels that end users are too often only told, not shown, how dangerous an online threat can be. "Basically, the big idea behind the visualization is to convey our message that it's a very scary world on the Internet, and it's difficult to actually identify threats and catch them," Schipka said. "We are interested in and committed to stopping these threats for our customers, so we wanted to visualize that instead of just relying on words."

In this project, each virus, spam, phishing attack, spyware, malicious link or Trojan has its own unique representation, looking not unlike a group of minuscule protozoa from the depths of the ocean. There have been earlier attempts to visualize cyberthreats, but according to Schipka, even though they were technically correct, they didn't capture the true essence of the threat. "The difference [between this and] the previous attempts was the involvement of an artist," he said. "You can get brilliant researchers and brilliant programmers who would do a precise job in visualizing a cyberthreat, yet its look wouldn't convey the message, which is the important point. [Dragulescu] was able to convey the whole passionate feeling about being able to stop these threats."

Dragulescu's new visualizations may actually be less technically accurate than previous attempts to visualize cyberthreats, but MessageLabs feels they make twice the impact because they are expressive images. MessageLabs spokesperson Marissa Vicario explained that the visual campaign came about because the firm was never really convinced by the threat visualizations it had seen before: "The thinking behind it initially was that when you look at pictures in magazines or Web sites and see them in print trying to represent a [cyberthreat], it's generally a very generic picture. We felt there [had been] a struggle to represent what [a cyberthreat] really looks like, and when we found Alex, we felt he could produce accurate representations because the pictures are based on actual virus and malware code."

While Dragulescu's art may have started as a tool to help people picture the threats they face online, it has made an impact on the technology side of MessageLabs just the same. Where previously visualized cyberthreats all looked the same, Dragulescu's variety of threats showcases the depth of what antivirus firms such as MessageLabs have to deal with every day. Through the drawings, "even similar threats can be represented quite differently. They are all very similar internally," Schipka said.

The renderings of cyberthreats serve to illuminate the fact that internally similar threats may still look and behave differently, and interrupting them requires casting a wide net. "That's one of the challenges the antivirus industry has: One of the same family of threats can actually be a lot different and require a different approach or tools to take it apart and figure it out," Schipka said. "So antivirus researchers end up being specialists in a wide variety of technologies, not just one, and I think the pictures quite well represent that."

Because of this, the functionality of these images is twofold for MessageLabs. On one hand, the provocative pictures get people's attention; on the other, they show people that the world of online threats isn't as simple as a virus here or a phishing scam there, but includes many versions of all different types of threats. "In the media, the old visualization attempts were perhaps a bit distant from the threat itself, and those attempts to represent the precise threat were not suitable for media," Schipka said. "Alex's work has satisfied both the technical people and the marketing people."

– Ben Warden, editor (at) certmag (dot) com
Jason needs to create a table with four columns and six rows. Which of the following attributes must he include in the <td> tag to allow the first cell to span across the entire width of the table?

User feedback can be evaluated in many ways. Which of the following methods is more quantitative than direct user feedback, and will provide indirect feedback from the majority of users who do not respond?

Tom is making changes to his company's Web site. Because he likes the way the markup is styled, he copies the following into another section of the page:

    <div id="subsection"> This section is under construction</div>

Why does the home page not validate properly?

Which of the following formats uses Extensible Markup Language (XML) to describe certain shapes and is best for working with two-dimensional line art and shapes?

Which of the following techniques helps stop a denial-of-service (DOS) attack in which an attacker has sent multiple ICMP or TCP packets to crash a Web server on the Internet?

George is developing an intranet site for his company. How can he establish consistency for the structure and layout of all the pages, but leave decisions about the content of each page to the individual departments?

A local philanthropic organization with a well-developed mission, vision and set of values has never had a Web site. The organization has hired Manuel to design a site. The president and directors of the organization want the mission, vision and values included in the site. This information is […]

Which of the following examples of Web site tone on the home page would be most appropriate for the audience indicated?

You are creating a Web site that uses a gradient fill for all page backgrounds on a site. To ensure that you can see the gradient through the image background, you should:
Google Takes Unconventional Route with Homegrown Machine Learning Chips

May 19, 2016 Stacey Higginbotham

At the tail end of Google's keynote speech at its developer conference Wednesday, Sundar Pichai, Google's CEO, mentioned that Google had built its own chip for machine learning jobs, which it calls a Tensor Processing Unit, or TPU. The boast was that the TPU offered "an order of magnitude" improvement in performance per watt for machine learning.

Any company building a custom chip for a dedicated workload is worth noting, because building a new processor is a multimillion-dollar effort once you consider hiring a design team, the cost of getting a chip to production, and building the hardware and software infrastructure for it. However, Google's achievement with the TPU may not be as earth-shattering or innovative as it might seem given the coverage in the press. To understand what Google has done, it's important to understand a bit about how machine learning works and the demands it makes on a processor.

Machine learning actually involves two different computing jobs: the learning, and the execution of that learning, which is called inference. Generally, for training, companies have turned to GPUs because of the parallelization they offer. For execution, companies are using a range of different architectures, but the big challenge is handling the limits of getting data from memory to the processor. An ideal processor for machine learning would offer both great parallelization and increased memory bandwidth. Outside of supercomputing, this is something the chip world hasn't focused on; the demand for such workloads hasn't been there. But with machine learning, that is changing.

So for the people eyeing innovations in machine learning chips, the question is whether Google has designed something new that can optimize for highly parallel workloads and execute quickly on those many small processing jobs without hitting a data bottleneck. Google isn't saying, but what it has shown off seems more like a refinement of existing architectures than something wholly new.

Norman P. Jouppi, a Distinguished Hardware Engineer at Google, declined to say whether Google is using TPUs for learning or for execution, but based on the use cases it cited, it is clearly using them to execute its machine learning algorithms. Jouppi says Google is using the TPUs for Street View and for Inbox Smart Reply, a feature that analyzes your email and offers three choices of response generated by Google's AI. The TPU was also used in the AlphaGo demonstration.

Most companies pursuing machine learning today have turned to massive parallelization to deliver the performance they need. For example, Facebook is using Nvidia GPUs in the specially designed servers it built just for implementing machine learning. IBM is testing a brain-inspired computing concept for eventual use, but in the meantime it is running its cognitive computing efforts on an implementation of its Power architecture, with CPUs and GPUs from Nvidia. Nervana Systems, a company building a cloud-based AI service, has adapted the firmware on Nvidia GPUs to deliver faster performance (its power consumption is unknown).

With its TPU, Google has seemingly focused on delivering the data really quickly by cutting down on precision. Specifically, it doesn't rely on floating point precision the way a GPU does. Jouppi says that the focus on less precision meant it wasn't using floating point math. Instead, the chip uses integer math, which Google's VP for Technical Infrastructure, Urs Hölzle, confirmed for reporters in a press conference. At the time, Hölzle noted the TPU used 8-bit integers. Essentially this means that instead of wasting processing cycles calculating things out to the umpteenth decimal point, the TPU can let a few slide, which means larger models can be used because of the lower resolution of the data.

This lack of precision is a common tactic for building out neural networks, where accepting probabilities over gigantic data sets tends to generate the right answer enough of the time. But it's also not incredibly complex from a design perspective. "Integer math isn't something new," says Kevin Krewell, an analyst with Tirias Research. He is also skeptical about the power savings claims when compared with today's graphics chips.

Jouppi said the TPUs have been in use for at least a year at Google, which means that these processors are best compared not to today's machine learning chips, but to those built a year ago. Google didn't disclose what manufacturing node the TPU is built at, but it's most likely a 28-nanometer node, which was the standard for a new GPU last year. The new Pascal chips from Nvidia are now manufactured using a FinFET process at 16 nanometers, which wasn't available a year ago.

Still, for a company like Google, the value of saving money for a year while running its massive machine learning operations may have outweighed the cost of designing its own chips. Jouppi says that these are not processors that Google expects to be obsolete in a year. He also added that the focus wasn't on the number of transistors, which suggests that moving down the process node to cram more transistors onto a chip isn't as important with this design.

As for the design, Jouppi explained that the decision to build an ASIC as opposed to a customizable FPGA was dictated by the economics. "We thought about doing an FPGA, but because they are programmable and not that power efficient–remember we are getting an order of magnitude more performance per watt — we decided it was not that big a step up to customization."

Krewell points out that designing a chip from scratch, even a simple one, can cost $100 million or more. So for Google the question is whether the time-to-market advantage of more efficient machine learning inference justifies, and will continue to justify, that cost. Without knowing what node Google is manufacturing at, the size of its operations (when asked what percentage of machine learning workloads were running on TPUs, Jouppi said, "I don't know"), or the details of the chip itself, it's hard to say. Our bet is that is exactly how Google wants it.

Remember this? The company has gained a considerable advantage by investing in its infrastructure, from building its own gear to building actual fiber connections. But with machine learning becoming the new bedrock for product innovation and delivering services, Google now has to adapt its infrastructure strategy to the new era. Unfortunately, its competitors have learned from Google's previous investments in infrastructure, so they are hot on its heels, seeking the same efficiencies. And since Google rarely shares anything it doesn't have to about its infrastructure until it has already squeezed the economic and technical advantage out of it, the TPU announcement feels a lot like marketing.
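To make the precision trade-off concrete, here is a minimal NumPy sketch of the general idea behind 8-bit integer inference: quantize weights and activations to int8, do the multiply-accumulate in integer arithmetic, and rescale the result. This is a generic illustration of reduced-precision math, not a description of the TPU's actual design.

    import numpy as np

    def quantize(x, num_bits=8):
        """Map a float array onto signed 8-bit integers plus a per-tensor scale factor."""
        scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)   # 127 for int8
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 8)).astype(np.float32)
    activations = rng.normal(size=(8,)).astype(np.float32)

    q_w, s_w = quantize(weights)
    q_a, s_a = quantize(activations)

    # Integer matrix-vector product with a widened accumulator, rescaled back to float.
    int8_result = (q_w.astype(np.int32) @ q_a.astype(np.int32)) * (s_w * s_a)
    float_result = weights @ activations

    print(np.max(np.abs(int8_result - float_result)))  # small error from much cheaper arithmetic

The point is simply that, for inference, a model can usually tolerate this rounding error while the hardware does far less work per operation.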
Jouppi says the company has no plans to open source its TPU design or license it, and he didn't say when the company might release more details, although it sounded like Google would eventually release them. Maybe it is waiting for the completion of a newer, better design.

Stacey Higginbotham has spent the last fifteen years covering technology and finance for a diverse range of publications, including Fortune, Gigaom, BusinessWeek, The Deal, and The Bond Buyer. She is currently the host of The Internet of Things Podcast every week and writes the Stacey Knows Things newsletter, all about the internet of things. In addition to covering momentum in the Internet of Things space, Stacey also focuses on semiconductors and artificial intelligence.
BGP Routing Tutorial Series, Part 3

Configuring Peering for Neighbor Autonomous Systems

So far in this series we've looked at a number of basic concepts about BGP, covering both who would want to use it and why. In particular, we've learned that speaking or advertising BGP to your service providers and/or peers lets you do two things:

- Make semi-intelligent routing decisions concerning the best path for a particular route to take outbound from your network (otherwise you would simply set a default route from your border routers into your service providers).
- Advertise your routes to those providers, for them to advertise in turn to others (for transit connectivity) or just use internally (in the case of peers).

We also pointed out some of the negative consequences that can result from careless BGP configuration. In this post, we'll delve deeper into the mechanics of BGP by looking at how you actually configure BGP on routers.

As discussed in Part 1, the term Autonomous System (AS) is a way of referring to a network such as a private enterprise network or a service provider network. Each AS is administered independently and may also be referred to as a domain. Each AS is assigned at least one Autonomous System Number (ASN), which identifies the network to the world. Most networks use (or at least show to the world) only one ASN. Each ASN is drawn from a 16-bit number field (allowing for 65,536 possible ASNs):

- ASNs 0 and 65,535 are reserved values.
- The block of ASNs from 64,512 through 65,534 is designated for private use.
- The remainder of possible ASN values available for Internet routing range from 1 through 64,511 (except 23,456).

One more clarification before we start configuring: BGP can be used internally (iBGP) within an AS to manage routes, and externally (eBGP) to route between ASes, which is what makes the Internet itself possible. In this article, when we say BGP we're talking about eBGP, not iBGP.

eBGP and iBGP share the same low-level protocol for exchanging routes, and also share some algorithms. But eBGP is used to exchange routes between different ASes, while iBGP is used to exchange routes within the same AS. In fact, iBGP is one of the "interior routing protocols" that you can use to do "active routing" inside your network/domain. The major difference between the two is that eBGP tries like crazy to advertise every BGP route it knows to everyone, and you have to put "filters" in place to stop it from doing so. iBGP, on the other hand, tries like crazy not to re-advertise routes. In fact, iBGP can actually be a challenge to get working, because to make it work you have to peer all of the iBGP speakers inside your network with all of the other iBGP speakers. This is called a "routing mesh" and, as you can imagine, it can get to be quite a mess when you have 20 routers that each have to peer with every other router. The solution to this is "BGP confederations," a topic we'll cover in a subsequent tutorial.

So now let's look at the actual configuration. BGP-speaking routers exchange routes with other BGP-speaking routers via peering sessions, using ASN identification. At a technical level, this is what it means for one network or organization to peer with another. Here's a simplified Cisco code snippet of a router BGP clause:

    router bgp 64512
     neighbor 22.214.171.124 remote-as 701

The clause starts out by saying "router bgp 64512." This means that what follows is a list of commands that describe how to speak BGP on behalf of ASN 64512.
(We're using 64512 in our examples because it's not a live ASN, so if anyone uses a configuration straight from this column and uses this made up ASN, automated route-examination programs will detect it.) All that's required to bring up a peering session is that one neighbor line under the router bgp clause. In this example, this line specifies 126.96.36.199 as the remote IP address (with respect to the customer's route) of a router in the AS with ASN 701. The purpose of neighbor commands is to initiate peering sessions with neighbors. It's possible to have BGP peering sessions that go over multiple hops, but eBGP multi-hop is a more advanced topic and has many potential pitfalls. So for now, let's assume that all neighbors must be on a LAN interface (Ethernet, Fast Ethernet, FDDI). In practice, you nearly always use more than one line to specify how to exchange routes with a given neighbor in a given peering session. So a typical neighbor command sequence would look more like this: router bgp 64512 neighbor 188.8.131.52 remote-as 4969 neighbor 184.108.40.206 next-hop-self neighbor 220.127.116.11 send-communities neighbor 18.104.22.168 route-map prepend-once out neighbor 22.214.171.124 filter-list 2 in Every time a neighbor session comes up, each router will evaluate every BGP route it has by running it through any filters you specify in the BGP neighbor command. Any routes that pass the filter are sent to the remote end. This filtering is a critical process. The most dangerous element of BGP is the risk that your filtering will go awry and you'll announce routes that you shouldn't to your upstream providers. While the session is up, BGP updates will be sent from one router to the other each time one of the routers knows about a new BGP route or needs to withdraw a previous route announcement. To see a list of all current peering sessions you can use the Cisco “sho ip bgp sum” command line: brain.companyx.com# sho ip bgp summ The command typically returns results like the following, which is a session summary from a core router at an ISP. The 6451x Autonomous Systems are BGP sessions to other routers at the same ISP whose ASNs are not shown to the world. The 126.96.36.199 session is a session that is down, and the sessions where the remote Autonomous Systems are 4231, 3564, and 6078 are external peering sessions with routers from another ISP. |BGP table version is 1159873, main routing table version 1159873| |44796 network entries (98292/144814 paths) using 9596344 bytes of memory| |16308 BGP path attribute entries using 2075736 bytes of memory| |12967 BGP route-map cache entries using 207472 bytes of memory| |16200 BGP filter-list cache entries using 259200 bytes of memory| Most of the above table is fairly self-explanatory: - The neighbor column gives the IP address of the neighbor with which the router is peered. - The V column is the BGP version number. If it is not 4, something is very wrong! BGP version 3 doesn't understand about Classless (CIDR) routing and is thus dangerous. - The AS column is the remote ASN. - InQ is the number of routes left to be sent to us. - OutQ is the number of routes left to be sent to the other side. - The Up/Down column is the time that the session has been up (if the State field is empty) or down (if State field is not empty). - Anything in a State field indicates that the session for that row is not up. In just one of the nomenclature flaws of BGP, a state of Active actually indicates that the session is inactive. 
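The tutorial's point about filtering deserves extra care in practice: a filter is only as good as the list of prefixes behind it. The short Python sketch below is not part of the original tutorial; it shows one way to sanity-check a set of candidate announcements against the address blocks actually allocated to your AS (the prefixes shown are documentation ranges, not real allocations) and to emit Cisco-style prefix-list lines for the ones that pass. Treat the generated syntax as illustrative and review it before applying anything to a router.

import ipaddress

# Blocks actually allocated to our AS (hypothetical documentation prefixes).
ALLOCATED = [ipaddress.ip_network(n) for n in ("198.51.100.0/22", "203.0.113.0/24")]

# Routes we are considering announcing to the neighbor; the last one is not ours.
CANDIDATES = ["198.51.100.0/23", "203.0.113.0/24", "192.0.2.0/24"]

def build_prefix_list(name, candidates):
    """Return Cisco-style prefix-list lines for candidates inside our allocations."""
    lines, seq = [], 5
    for cand in candidates:
        net = ipaddress.ip_network(cand)
        if any(net.subnet_of(alloc) for alloc in ALLOCATED):
            lines.append(f"ip prefix-list {name} seq {seq} permit {net}")
            seq += 5
        else:
            print(f"refusing to announce {net}: not within our allocated space")
    # Explicit trailing deny as a safety net against leaking anything else.
    lines.append(f"ip prefix-list {name} seq {seq} deny 0.0.0.0/0 le 32")
    return lines

if __name__ == "__main__":
    for line in build_prefix_list("ANNOUNCE-OUT", CANDIDATES):
        print(line)

Generating the outbound filter from an authoritative record of your own allocations, rather than maintaining it by hand, reduces the chance of re-announcing routes learned from one neighbor to another.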
In our next installment we'll be looking at what to keep in mind when configuring BGP, as well as topics such as route withdrawal, route flaps, route selection, load balancing, and BGP metrics.
<urn:uuid:c2b6f52b-b2e9-44bf-a98a-dc39ca459d0e>
CC-MAIN-2017-04
https://www.kentik.com/bgp-routing-tutorial-series-part-3/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00283-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902916
1,758
3
3
Internet devices use various forms of timers and timestamps to determine everything from when a given e-mail message arrives to the number of seconds since a particular device was rebooted. Most systems use the Network Time Protocol (NTP) to obtain the current time from a large network of Internet time servers. NTP will be the subject of a future article in this journal. This time we will focus our attention on the Leap Second, which is occasionally applied to Coordinated Universal Time (UTC) in order to keep its time of day close to the Mean Solar Time. Geoff Huston explains the mechanism and describes what happened to some Internet systems on July 1, 2012, as a result of a leap second addition. The Internet of Things (IoT) is a phrase used to describe networks where not only computers, smartphones, and tablets are Internet-aware, but also autonomous sensors, control systems, light switches, and thousands of other embedded devices. In our second article, David Lake, Ammar Rayes, and Monique Morrow give an overview of this emerging field which already has its own conferences and journals. The World Wide Web became a reality in the early 1990s, thanks mostly to the efforts of Tim Berners Lee and Robert Cailliau. The web has been a wonderful breeding ground for new protocols and technologies associated with access to and presentation of all kinds of media. The phrase Web 2.0, coined in 1999, has, per Wikipedia, "...been used to describe web sites that use technology beyond the static pages of earlier web sites." David Strom argues that the term is no longer appropriate and that we have moved on to a new phase of the web, dominated by mobile devices and Social Networking. The last few years have seen great advances in Internet-based collaboration tools. Sometimes referred to as Telepresence, these systems allow not only high-quality audio and videoconferencing, but also the use of shared whiteboards and other presentation material. In our final article, Pat Jensen describes one important component of such systems, namely the Binary Floor Control Protocol (BFCP), which the IETF's XCON Centralized Conferencing working group has developed. As always we welcome your feedback on anything you read in this journal. Contact us by e-mail at firstname.lastname@example.org —Ole J. Jacobsen, email@example.com
<urn:uuid:b52d476f-6b1b-4021-b462-4d08b4fabf30>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-57/153-editor.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921068
491
3.328125
3
Calculating Return on Investment
by BNET Editorial

Return on Investment is one of several profitability ratios, one of the four basic classes of financial ratios; the others are liquidity ratios, activity ratios and debt ratios. Return on Investment, often called a company's return on total assets, measures the overall profit made on an investment expressed as a percentage of the amount invested. Like return on assets, or return on equity, Return on Investment measures a company's profitability and its management's ability to generate profits from the funds investors have placed at its disposal. It is often said that if a company's operations cannot generate net profit as a percentage of the amount invested greater than the interest rate on financial markets, its future is grim.

What to Do

The basic Return on Investment can be found by dividing a company's net profit (also called net earnings) by the total investment (total debt plus total equity), and multiplying by 100 to arrive at a percentage:

(Net profit / total investment) × 100 = Return on Investment

So if net profit is $30 and the total invested is $250, the Return on Investment is:

30 / 250 = 0.12, and 0.12 × 100 = 12%

A more complex variation of Return on Investment is a formula known as the Du Pont formula, which allows a company to break down its Return on Investment into a profit-on-sales component and an asset-efficiency component:

Net profit after taxes / total assets = (net profit after taxes / sales) × (sales / total assets)

So if net profit after taxes is $30, total assets $250, and sales $500, then:

30 / 250 = (30 / 500) × (500 / 250) = 0.06 × 2 = 0.12, or 12%

This formula was developed by the Du Pont Company in the 1920s, and helps to reveal how a company has deployed its assets and controlled its costs, and how it can achieve the same percentage return in different ways.

For stockholders, the variation of the basic Return on Investment formula used by investors is:

(Net income + (current value - original value)) / original value × 100 = Return on Investment

So if somebody invests $5,000 in a company and a year later has earned $100 in dividends, while the value of the stock has risen to $5,200, the return on investment would be:

(100 + (5,200 - 5,000)) / 5,000 × 100 = (100 + 200) / 5,000 × 100 = 300 / 5,000 × 100 = 0.06 × 100 = 6% Return on Investment

What You Need to Know

Investors can use an alternative Return-on-Investment formula: net income divided by the sum of common stock and preference stock equity plus long-term debt. Meanwhile, it is vital to understand exactly what a return on investment measures, for example assets, equity, or sales. Without this understanding, comparisons may be misleading. A search for "return on investment" on the Internet, for example, harvests sites detailing staff training, e-commerce, advertising and promotions. Be sure to establish whether the net profit figure used is before or after provision for taxes. This is important for making accurate comparisons of Return on Investment.

Reprinted with permission from BNET.
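The arithmetic in these formulas is easy to check in a few lines of code. The following Python sketch is not part of the BNET article; it simply reuses the article's example figures to compute the basic ROI, the Du Pont decomposition, and the stockholder's return.

def basic_roi(net_profit, total_investment):
    """Net profit as a percentage of total investment (total debt plus equity)."""
    return net_profit / total_investment * 100

def dupont_roi(net_profit_after_taxes, sales, total_assets):
    """Return (profit margin %, asset turnover, ROI %) per the Du Pont formula."""
    margin = net_profit_after_taxes / sales * 100   # profit-on-sales component
    turnover = sales / total_assets                 # asset-efficiency component
    return margin, turnover, margin * turnover

def investor_roi(net_income, original_value, current_value):
    """Dividends plus capital gain, as a percentage of the original investment."""
    return (net_income + (current_value - original_value)) / original_value * 100

if __name__ == "__main__":
    print(basic_roi(30, 250))             # 12.0
    print(dupont_roi(30, 500, 250))       # (6.0, 2.0, 12.0)
    print(investor_roi(100, 5000, 5200))  # 6.0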
<urn:uuid:452d61fe-1045-4d03-983a-01c1af3642cb>
CC-MAIN-2017-04
http://www.dell.com/content/topics/global.aspx/bizportal/en/business_resources/articles/calculating?c=us&l=en&cs=19
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00422-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934166
680
3.1875
3
This holiday season, buyers everywhere will flock to the Internet to rack up savings on deals and avoid the hassles of shopping in malls and department stores. Unfortunately, shopping online without using caution can lead to great headaches due to the prevalence of criminal activity. One of the most devastating identity theft techniques comes in the form of email phishing. Phishing involves the use of phony links, emails and websites for the purpose of gaining access to sensitive consumer information — usually by installing malware on the target system. This data is then used to steal other identities, gain access to valuable assets and overload inboxes with email spam. In addition to affecting desktop computers, a mobile device does not mitigate phishing attempts. As with the SMS notifications, if you feel the email could be legitimate, log directly in to that account and do not click the link. Currently there exists a misconception amongst consumers that phishing is not something that could happen to the average user. However, it was recently reported in the APWG Phishing Activity Trends Report that as of June 2013, 38,110 websites were identified as hosted phishing domains. To make matters worse, as many as 425 brands were recently targeted by phishing attempts. The following tips can help you avoid the pitfalls of being targeted by phishing campaigns during the holidays: 1. Trust your spam filter Browsing through your junk email box is important as your spam filter might occasionally send important emails to the trash. However, more often than not an item is sent to the spam filter because it is dangerous and filled with malware. Trust your spam filter. If an important email winds up there, you can always ask a user to re-send the information. To protect your critical information, avoid clicking on ANY links from an email sent to the spam box. 2. Beware of misspellings in email subject lines When you get an email with incorrect or misspelled names, or the email is a grammatical disaster, there is strong likelihood that it could be a phishing attempt. These emails are not hard to identify. Chances are, if you get an email from an official company and it looks like an individual with a poor grasp of the intended language wrote the content, do not click or open it. 3. Look out for random or misspelled hyperlinks If you are presented with a link that is shortened and contains jumbled letters — or appears to take you to a nefarious website — these are common signs of phishing. Always examine the link before you click on it to avoid clicking on malware and infecting your computer. A helpful way of avoiding malevolent links is to investigate the website in question by safely performing a Google search.
<urn:uuid:53d651a5-0869-41b4-bdf5-5caae3d368ba>
CC-MAIN-2017-04
https://www.entrust.com/can-spot-phishing-email/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00148-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952074
547
2.734375
3
Digital preservation is the foundation of enterprise archiving. Electronic records are archived when they have long-term retention needs in order to fulfil legal, business and regulatory requirements. A digital archive is a repository that stores collections of digital objects with the intention of preserving and providing long-term access to the information. Authorized users of these records must be able to access these records seven, 20, 50 years later or even for perpetuity. Organizations must deal with the challenges of digital preservation: - File format/technology obsolescence - Technology fragility - Lack of understanding about digital preservation best practices - Authenticity and provenance of the records - Declining knowledge about the records in the enterprise, particularly with respect to structured information - Uncertainty about the best organizational infrastructures to achieve digital preservation Digital archiving and preservation are needed to ensure the authenticity, integrity, and protection of electronic records despite limited resources and a constant stream of new complex technologies. Good news! There exists tools useful during electronic records appraisal and preparation with file identification, file format migration, and metadata extraction. An organization does not need to use all of these tools - some overlap in features - so implementation depends upon an organization's archive capabilities and policies. Although there are open source digital archive storage solutions, such as Archivematica, this article will not cover open source storage software as they typically do not handle records retention and are not validated to federal and industry electronic records regulations such as Title 21 CFR Part 11 or GxP for the life sciences industry. Proprietary repository options that can handle these storage needs include EMC’s InfoArchive, HP’s Records Manager, IBM’s Optim, etc. The Matchbox tool is able to identify duplicate images. This is a powerful tool as it can even determine duplicate content where files are different, cropped, format, rotated, etc. Another open source tool that has similar capabilities is GNU Diffutils. GNU Diffutils can find the differences between two files. This can be useful to determine the different between an older draft and a final version or two files that were once identical but have been changed by two people. Besides archiving, Matchbox tool and GNU Diffutils can also be used for file share remediation by identifying duplicates to allow for defensible disposition of redundant data and transitory drafts. DROID (Digital Record Object Identification) DROID is a software tool created by the National Archives of the United Kingdom for the identification and standardization of file formats and metadata extraction. DROID is able to the exact file format version of digital objects. DROID links its identification of file formats to the authoritative internet-based file format registry PRONOM. DROID and PRONOM can be used as the basis for your enterprise archive’s file format policy to ensure that you only ingest - add to your digital archive - files in open, widely available formats for long term preservation. Xena stands for “XML Electronic Normalising for Archives” and was developed by the National Archives of Australia as part of their Digital Preservation Software Platform. 
Normalising is the conversion of digital files to a range of preservation formats that are open, well supported, universally available, and look to remain viable for a long period. Xena is similar to DROID in that it detects the file format of a digital object. However, Xena goes one step further and also transforms digital files into open formats for long term preservation. Xena is a good tool in the battle against file format obsolescence as its conversion ability mitigates the risk of not being able to access files years later, especially those in proprietary formats. ePADD was developed by Stanford University to support the appraisal, ingest, processing, discovery, and delivery processes of email archives. A unique feature of ePADD is that full texts of emails are only accessible from one site. This capability was created for historical archives to deal with copyright where the full text could only be read at the library or archives. However, companies can leverage this tool as well to stay compliant with regulations such as HIPPA and deal with issues of privacy and security. Archived emails could be viewed with sensitive information and PII (Personally Identifiable Information) redacted and the complete text only accessible by users from one location. Remote users can be granted full access if needed. ePADD not only works as a viewer for reading archived email messages, but also for email attachments that could be in a wide variety of file formats. Web Curator Tool The Web Curator Tool was developed for archiving websites in collaboration between the British Library and the National Library of New Zealand through the International Internet Preservation Consortium. The Web Curator Tools is a workflow management tool for collecting or “harvesting” websites for archiving. It can capture descriptive metadata and schedule when/how often a target website should be harvested. Many of the current digital preservation tools were developed for digital preservation of records stored in museums, historical and educational archives, and cultural institutions that similar to the private sector also face digital curation and preservation challenges. While developed for more traditional archives, these pieces of software can also provide value to an enterprise archive since they not only come at no cost, but were designed to for various digital preservation activities across the ISO Open Archival Information System (OAIS) Reference Model and can be altered to suit company or industry specific needs. Before using in your enterprise, be sure to check all licensing details for commercial use and generate a plan for checking not only updates to these details, but also to the upgrade and update paths for the tools themselves. If you choose to implement these tools, remember that while these tools can be modified, you must credit the origin of the software and mark your changes as a different version. Know Your Tools The Amendments to the U.S. Federal Rules of Civil Procedure (FRCP) took effect this month (December 2015) with their standard for preservation of Electronically Stored Information (ESI) being reasonable steps. Having a clearly documented, defined and consistently followed policy for retention and disposition, litigation holds, archiving, and retrieval of electronic records will ensure compliance to the new changes to FRCP. 
Keep in mind, digital preservation is constantly evolving with technology, so while perfection may not always be possible, reasonable steps must be taken for long term preservation of ESI. - Using software like these in conjunction with your records management and archives policies will allow your enterprise to know what you will ingest during the appraisal process. Therefore time and money won’t be spent on preserving duplicates or information past its retention period with no business value. - There should be a file format policy in place for your enterprise archive to only ingest formats that are based on open freely available standards, have current and widespread use, are robust enough to be used on multiple types of hardware/software/operating systems, and are not patented proprietary formats. - Your policies will change over time as digital preservation practices, standards, and technology continue to evolve. Building software such as these into your archival process following the OAIS Reference Model will lower the risk of retaining files that may not be accessible years down the road during litigation or an audit and assure compliance to legal requirements and regulations. Remember, these tools are only as strong as those who wield them. Therefore it is imperative to have the strategy, processes, including in house support processes for any tools you implement, policies, and archival storage solution for your enterprise archive in place first.
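As a small illustration of the duplicate-identification step discussed earlier with Matchbox and GNU Diffutils, the following Python sketch (not taken from any of the tools named in this article) groups byte-identical files under a directory tree by SHA-256 checksum. It only catches exact duplicates, so it complements rather than replaces tools such as Matchbox that detect near-duplicate images; the share path used here is a placeholder.

import hashlib
from pathlib import Path
from collections import defaultdict

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_exact_duplicates(root):
    """Map each SHA-256 digest to the list of files that share it."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups[sha256_of(path)].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # "/data/fileshare" is a hypothetical share root; substitute your own.
    for digest, paths in find_exact_duplicates("/data/fileshare").items():
        print(digest[:12], *paths, sep="\n  ")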
<urn:uuid:1c0454cb-3ced-4438-a449-85d7d3bac860>
CC-MAIN-2017-04
http://www.consultparagon.com/blog/digital-preservation-tools-enterprise-archiving
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927742
1,545
3.265625
3
Willis S.G.,Durham University | Foden W.,University of Witwatersrand | Baker D.J.,Durham University | Belle E.,UNEP WCMC | And 15 more authors. Biological Conservation | Year: 2015 To accommodate climate-driven changes in biological communities, conservation plans are increasingly making use of models to predict species' responses to climate change. To date, species distribution models have been the most commonly used approach for assessing species' vulnerability to climate change. Biological trait-based approaches, which have emerged recently, and which include consideration of species' sensitivity and adaptive capacity, provide alternative and potentially conflicting vulnerability assessments and present conservation practitioners and planners with difficult choices. Here we discuss the differing objectives and strengths of the approaches, and provide guidance to conservation practitioners for their application. We outline an integrative methodological framework for assessing climate change impacts on species that uses both traditional species distribution modelling approaches and biological trait-based assessments. We show how these models can be used conceptually as inputs to guide conservation monitoring and planning. © 2015. Source Gardner T.A.,University of Cambridge | Burgess N.D.,University of Cambridge | Burgess N.D.,Copenhagen University | Aguilar-Amuchastegui N.,WWF U.S. Conservation Science Program | And 19 more authors. Biological Conservation | Year: 2012 The UNFCCC mechanism for Reducing Emissions from Deforestation and Degradation in developing countries (REDD+) represents an unprecedented opportunity for the conservation of forest biodiversity. Nevertheless, there are widespread concerns surrounding the possibility of negative environmental outcomes if biodiversity is not given adequate consideration throughout the REDD+ process. We propose a general framework for incorporating biodiversity concerns into national REDD+ programmes based on well-established ecological principles and experiences. First, we identify how biodiversity distribution and threat data, together with data on biodiversity responses to forest change and management, can be readily incorporated into the strategic planning process for REDD+ in order to identify priority areas and activities for investment that will deliver returns for both carbon and biodiversity. Second, we propose that assessments of changes in biodiversity following REDD+ implementation could be greatly facilitated by paralleling, where possible, the existing IPCC architecture for assessing carbon emissions. A three-tiered approach is proposed for biodiversity assessment, where lower tiers can provide a realistic starting point for countries with fewer data and lower technical capacities. Planning and assessment of biodiversity safeguards for REDD+ need not overburden an already encumbered UNFCCC process. Immediate progress is already possible for a large number of developing countries, and a gradual, phased approach to implementation would minimise risks and facilitate the protection of additional biodiversity benefits from REDD+ activities. Greater levels of coordination between the UNFCCC and CBD, as well as other agencies and stakeholder groups interested in forest conservation are needed if biodiversity safeguards are to be fully adopted and implemented. © 2011 Elsevier Ltd. 
Source Ockendon N.,British Trust for Ornithology | Baker D.J.,British Trust for Ornithology | Baker D.J.,Durham University | Carr J.A.,IUCN Global Species Programme | And 14 more authors. Global Change Biology | Year: 2014 Shifts in species' distribution and abundance in response to climate change have been well documented, but the underpinning processes are still poorly understood. We present the results of a systematic literature review and meta-analysis investigating the frequency and importance of different mechanisms by which climate has impacted natural populations. Most studies were from temperate latitudes of North America and Europe; almost half investigated bird populations. We found significantly greater support for indirect, biotic mechanisms than direct, abiotic mechanisms as mediators of the impact of climate on populations. In addition, biotic effects tended to have greater support than abiotic factors in studies of species from higher trophic levels. For primary consumers, the impact of climate was equally mediated by biotic and abiotic mechanisms, whereas for higher level consumers the mechanisms were most frequently biotic, such as predation or food availability. Biotic mechanisms were more frequently supported in studies that reported a directional trend in climate than in studies with no such climatic change, although sample sizes for this comparison were small. We call for more mechanistic studies of climate change impacts on populations, particularly in tropical systems. © 2014 John Wiley Sons Ltd. Source Ficetola G.F.,University of Milan Bicocca | Rondinini C.,University of Rome La Sapienza | Bonardi A.,University of Milan Bicocca | Katariya V.,IUCN Global Species Programme | And 2 more authors. Journal of Biogeography | Year: 2014 Aim: Maps of species ranges are among the most frequently used distribution data in biodiversity studies. As with any biological data, range maps have some level of measurement error, but this error is rarely quantified. We assessed the error associated with amphibian range maps by comparing them with point locality data. Location: Global. Methods: The maps published by the Global Amphibian Assessment were assessed against two data sets of species point localities: the Global Biodiversity Information Facility (GBIF), and a refined data set including recently published, high-quality presence data from both GBIF and other sources. Range fit was measured as the proportion of presence records falling within the range polygon(s) for each species. Results: Using the high-quality point data provided better fit measures than using the raw GBIF data. Range fit was highly variable among continents, being highest for North American and European species (a fit of 84-94%), and lowest for Asian and South American species (a fit of 57-64%). At the global scale, 95% of amphibian point records were inside the ranges published in maps, or within 31 km of the range edge. However, differences among continents were striking, and more points were found far from range edges for South American and Asian species. Main conclusions: The Global Amphibian Assessment range maps represent the known distribution of most amphibians well; this study provides measures of accuracy that can be useful for future research using amphibian maps as baseline data. Nevertheless, there is a need for greater investment in the continuous updating and improvement of maps, particularly in the megadiverse areas of tropical Asia and South America. © 2013 John Wiley & Sons Ltd. Source
<urn:uuid:4b11e3e3-13d5-418a-b9c2-0a45caa4743c>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/global-species-programme-546957/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918787
1,331
2.6875
3
More on SIP

What is SIP?

SIP (Session Initiation Protocol) is the IETF signaling protocol for presence, messaging, VoIP, audio/video conferencing and event notification that is becoming for person-to-person IP communications what HTTP is for the Web. SIP initiates call setup, routing, authentication and other feature messages to endpoints within an IP domain. Most importantly, the SIP protocol allows users of different service providers to communicate with each other. Using SIP, IP telephony becomes as easy to use as any other Web application and integrates easily into other Internet services. SIP has been widely adopted by a number of the industry's leading providers, including Microsoft®, AOL, WorldCom, CommWorks, Cisco Systems and Yahoo!. Industry analysts predict that SIP will become the standard protocol no later than 2004.

SIP is coming, and bringing with it the ability for users of instant messaging, presence, conferencing, and other real-time communications functions to find and communicate with users of any SIP-based provider around the world.

"Session Initiation Protocol (SIP) management across firewalls integrates many forms of communications with data and processes safely across corporate and consumer boundaries. This opens a huge opportunity for integrated instant messaging and VOIP services to be brought to the mainstream [...]"
Research Director, Messaging and Collaboration Services, Aberdeen Group, Inc.
<urn:uuid:870993de-04cc-458e-99d5-85e20ba7ada7>
CC-MAIN-2017-04
http://www.ingate.com/Moreonsip.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00414-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900777
308
2.671875
3
In today's modern business, with its global schedule and corresponding extended work days, telecommuting can mean the difference between productivity and high productivity. Telecommuting reduces the time and money wasted on physical commuting. It also can help to reduce absenteeism, alleviate office-space problems and aid in work-life balance. But there are issues that prevent many companies from taking advantage of this alternative work method.

One of these is security: telecommuters might use laptops while they're on the go, but what if someone's laptop is stolen? Or even worse, what if someone working from home or out of the office exposes the company's IT infrastructure to a virus?

IT pros can avoid these types of incidents by putting all content on a server as opposed to a home system or laptop computer. The information also will be available to other portable devices such as BlackBerries, which means anyone in the office can use it. A virtual private network (VPN) can secure connectivity between telecommuter and company because it makes the system harder to hack. A VPN is a viable option only in a well-organized, well-equipped IT infrastructure, however. If an organization's infrastructure is not up to snuff, the system will get bogged down, and productivity will suffer as telecommuters twiddle their thumbs while they wait for programs to load.

Also, there should be a company policy that states the rules of telecommuting and firmly establishes what software will be used, as well as at what time. Without such a policy, security is the least of the issues that might result; telecommuters could overwrite files if they access the same device as an in-office colleague at the same time, which won't make them very happy.

The policy also might require that only company-issued laptops be used by telecommuters. This is advantageous for many reasons. For instance, people are less likely to surf the Web on a company computer, and surfing the Web brings with it a higher risk of picking up a virus or spyware. Further, telecommuters can be required to bring in their company-issued laptops regularly so they can be maintained and cleaned. This ensures licensed copies of anti-virus and anti-spyware are in place and fully functional. Regular laptop maintenance at the home office reduces the need for a telecommuter to have technical expertise and puts them firmly into the end-user role, which means they can focus on work.

A telecommuting policy also could determine where a company-issued laptop can go. For instance, no plugging the laptop into certain networks in which sensitive company information might be compromised (such as a competitor's office).

One of the most common drawbacks of telecommuting is the lack of face time or actual contact between telecommuter and home office. Web cams could be used to address the issue and facilitate off-site, real-time and asynchronous collaboration. Alternatively, if there's concern over who is and is not participating when they said they would, people could be required to log in to regular meetings, and the logs can be tracked to keep telecommuters honest.
<urn:uuid:4eccc525-404c-4817-9807-74bb4e81fac2>
CC-MAIN-2017-04
http://certmag.com/facilitating-telecommuting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946597
657
2.53125
3
Medicine and Health - Quiz Questions & Answers - What do you call the Chinese system of healing with insertions of needles into the body? - Which British biochemist first found the presence of vitamins in fresh food? - (a) What oil is sometimes applied to gums and teeth to relieve pain? (b) What is the name of the drug or preparation used to induce vomiting, especially in food-poisoning? - What is the difference between an artery and a vein? - Match the following with their particular fields: (a) Joseph Lister - Blood circulation (b) Alexander Fleming - Bacteriology (c) Christian Barnard - Antiseptic surgery (d) William Harvey - Penicillin (e) Louis Pasteur - Heart transplant - (a) Who is a Hypochondriac? (b) Give one word for the time of compulsory isolation to prevent the spread of infection or contagion. - (a) What do you suffer from if you have Halitosis? (b) What is the common name for Hypertension? - (a) What are the four major blood types? (b) In which part of the body are blood cells manufactured? - What do the following stand for? - (a) Who is regarded as the ‘Father of Plastic Surgery’? (b) What is Homoeopathy? - (a) Where would you find the medulla oblongata? (b) Which organ contains the Islets of Langerhans? - (a) What is the medical name for Lockjaw? (b) What is the medical name for Cancer of the Blood? - What do the following specialize in: - What are these (a) goose bumps (b) funny bone (c) writer’s finger - What is considered to be (a) the normal temperature of the human body (b) the pulse rate of a healthy adult - Which is (a) the largest bone in the human body? (b) the smallest bone in the human body? - What is the name given to the AIDS virus? - (a) To which bone is the tongue attached? (b) What substance must mix with food to give taste? (c) What are you if you are short-sighted? - For what is Sir Jonas Salk credited? - Curing kidney stones without surgery has become possible in recent times. What is the name of the new method which uses certain waves to smash kidney stones? Answers of Quiz Questions about Medicine & Health - Sir Frederick Gowland Hopkins - (a) Oil of cloves (b) An emetic - An artery carries blood from the heart, while a vein conveys blood back to the heart - (a) Joseph Lister - Antiseptic surgery (b) Alexander Fleming - Penicillin (c) Christian Barnard - Heart transplant (d) William Harvey - Blood circulation (e) Louis Pasteur - Bacteriology - (a) a person who continually imagines he is ill - (a) bad breath (b) high blood pressure - (a) A, B, AB, O (b) The bone-marrow - (a) Electro Cardiogram (b) Electro Convulsive Therapy - (a) Susruta, the ancient Indian man of medicine (b) Treatment of disease would produce symptoms of the disease. - (a) In the brain (b) The Pancreas - (a) Tetanus - (a) Study of skull features (b) diseases of old age (c) Care and treatment of infants and children (d) a specialist in the treatment of problems concerning the position of the teeth and jaws. (e) X-rays body parts and organs for diagnosis - (a) Tiny muscles under the skin’s surface which contract and make the hairs stand up, causing small bumps. (b) The spot on the back of the elbow where the ulna nerve rests against the humerus bone (c) A callus or hardening of the skin caused by constant pressure from holding a pen or pencil - (a) 98.4F (b) 70-80 beats - (a) the femur or thigh bone (b) the stirrup or stapes in the ear - HIV (Human Immunodeficiency Virus) - (a) The hyoid bone - For introducing a vaccine against Poliomyelitis in 1954. 
<urn:uuid:4e0219fc-82d6-49bb-b713-7dcf67beb4d0>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-732.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877093
1,078
2.609375
3
Molten carbonate fuel cells (MCFCs) operate at high temperature and use natural gas and biogas as fuel. These cells allow non-precious metals to be used as catalyst on both the anode and cathode leading to cost reduction. MCFCs are 50% to 60% efficient at converting fuel to electricity and it can be increased by capturing the waste heat which is also used to drive a turbine. The report “Molten Carbonate Fuel Cells (MCFCs) Market Forecast 2014-2019”, analyses the market in terms of geography. In terms of geography, this market is segmented in to North America, Europe, Asia, and Rest of the world. Fuel cells allow more KWH storage than batteries of same weight, that’s why it is preferred as a replacement of batteries. There is also rise in investment from Governments and companies towards this industry since last decade. Fuel cells are also flexible because it allows variety of fuels like natural gas and biogas, and sun and wind are as sources of energy. The above factors are driving this market. Molten carbonate fuel cells are not expensive and easy to operate. These cells don’t require any external reformer. It can internally convert hydrocarbons into hydrogen for power generation. The report provides an extensive competitive landscaping of key companies operating in this market. The key players of the market are (U.S.), CFC solutions (Germany), GenCell Corporation (U.S.) and others. Further, country wise market share, new product and services launches, M&A, product portfolio of key players is also covered in the report. Along with the market data, you can also customize MMM assessments that meets your company‘s specific needs. Customize to get comprehensive industry standard and deep-dive analysis of the following parameters. Product Benchmarking Outlook - MCFC product differentiation among competitors - Upcoming technology and research - Comparison among different fuel cells and the most efficient one Customer Segment Outlook - Importance of MCFCs in telecommunication. - Types of fuels used in fuel cells - On-going projects/research on fuel cells - Challenges in fuel cell industry 1.1 Analyst Insights 1.2 Market Definitions 1.3 Market Segmentation & Aspects Covered 1.4 Research Methodology 2 Executive Summary 3 Market Overview 4 By Geographies 4.2 North America 4.4 Rest of World 5 By Companies 5.1 AFC Fuel cell 5.2 Ballard Fuel Cell Products 5.3 Bloom Energy Corp. 5.4 Ceramic fuel cells Fuel cell products 5.5 Ceres Power Holdings Plc 5.6 Clearedge Power 5.7 Genport SRL 5.8 GS Caltex 5.9 Horizon Fuel Cell Technologies 5.10 Hydrogenics Power Systems 5.11 JX Nippon Oil & Gas 5.12 Nedstack Fuel Cell Technology B.V. 5.13 Nuvera Fuel Cells 5.14 Oorja Protonics 5.15 Panasonic Energy 5.16 Plug Power Inc. 5.17 Protonex Technology Corporation 5.18 Relion Inc. 5.19 SFC Energy Power Manager 5.20 Topsoe Fuel Cell 5.21 Toshiba others 5.22 Other Companies 5.23 Fuelcell Energy Please fill in the form below to receive a free copy of the Summary of this Report Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
<urn:uuid:8a4def4f-2b43-47a7-9908-7c7c2e0e49fc>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/molten-carbonate-fuel-cells-mcfc-reports-6506483954.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00166-ip-10-171-10-70.ec2.internal.warc.gz
en
0.869414
769
2.609375
3
Difference between Signs and Symptoms

Signs are what a doctor sees; symptoms are what you experience. Did you know that 90 - 95% of a doctor's diagnosis will come from what you say? Yep! If you do not give the doctor all of your symptoms and history, then you are asking him or her to diagnose you based on insufficient information.

Signs vs Symptoms

Whenever you go to the doctor, he or she will take a patient history using a mnemonic called "OPPQRST." Every doctor on the planet follows this mnemonic. It translates into: Onset, Palliative, Provocative, Quality, Radiating, Severity and Timing.
- ONSET: When did it start?
- PALLIATIVE: What relieves your symptoms?
- PROVOCATIVE: What provokes your symptoms?
- QUALITY: How would you describe the symptoms? Sharp? Stabbing? Sore? Uncomfortable? Nausea? Achy? Throbbing? Ripping? Tearing?
- RADIATING: Do the symptoms or pain radiate to another area of your body?
- SEVERITY: On a scale of 1 - 10, how would you rate your pain or discomfort?
- TIMING: How often do the symptoms occur?

Typically with chest pain, the additional LMN are added to the mnemonic, which means: Last, Movement and Notable Symptoms.
- LAST: When was the last episode?
- MOVEMENT: What activities could you do before you first felt chest pain, and what activities can you do now?
- NOTABLE SYMPTOMS: What other symptoms do you have with your chest pain?

Here are just a few things that will automatically pop into a doctor's head when you give the following symptoms. The doctor will then perform various orthopedic, laboratory or imaging tests on you to confirm or deny his or her suspicions. Please keep in mind there are many other conditions, diseases, syndromes and illnesses that your doctor may be thinking of, depending on what you stated in your patient history.
- ABDOMINAL PAIN - may be indicative of appendicitis, food allergies, food poisoning, gastro-intestinal disorders, hiatal hernia or pre-menstrual syndrome.
- ABNORMAL VAGINAL DISCHARGE - may be indicative of yeast infection (candidiasis), chlamydia, genital herpes, gonorrhea or trichomoniasis.
- BACKACHE - may be indicative of back strain, DJD (degenerative joint disease), lack of exercise, obesity, female disorders, spinal injury or pancreatic disorders.
- BLOOD IN THE URINE, STOOL, VOMIT, VAGINA OR PENIS - may be indicative of hemorrhoids, infections, polyps, bowel tumors, ulcers, or cancer of the kidneys, colon or bladder.
- DIFFICULTY SWALLOWING - may be indicative of emotional stress, hiatal hernia or cancer of the esophagus.
- EXCESSIVE SWEATING - may be indicative of thyroid disorder, menopause, stress, food allergies, fever, infection or Hodgkin's disease.
- FREQUENT URINATION - may be indicative of bladder infection, a diuretic effect, excessively taking in liquid, not emptying the bladder in a timely fashion or cancer.
- INDIGESTION - may be indicative of poor diet, lack of enzymes such as HCL (hydrochloric acid), gallbladder dysfunction, heart disease, acidosis, alkalosis, allergies, stress, adrenal, liver or pancreatic disorders.
- PERSISTENT COUGH - may be indicative of lung disorders, pneumonia, emphysema, bronchitis, influenza, food allergies or cancer.
- PERSISTENT FEVER - may be indicative of influenza, mononucleosis, rheumatic disorders, bronchitis, colds, meningitis, diabetes or chronic infection.
- PERSISTENT HEADACHE - may be indicative of migraines, eyestrain, need for glasses, allergies, asthma, drugs, glaucoma, high blood pressure, brain tumor, vitamin deficiencies, sinusitis or stress due to personal life experiences.
- RASH WITH BLISTERS - may be indicative of Herpes Zoster or Shingles.
- SUDDEN WEIGHT GAIN - may be indicative of over-eating, lack of exercise, thyroid condition (underactive) or edema.
- SUDDEN WEIGHT LOSS (UNEXPLAINED) - may be indicative of cancer, diabetes, thyroid condition (overactive), hepatitis, mononucleosis, parasites, infection or malabsorption syndrome.
- SWELLING IN THE APPENDAGES OR ABDOMEN - may be indicative of edema, heart condition, kidney dysfunction, medication, food allergies, oral contraceptives or steroids.
- SWOLLEN LYMPH NODES - may be indicative of chronic infection, lymphoma, various cancers, toxic metals, toxic build-up or Hodgkin's disease.
- THIRSTING EXCESSIVELY - may be indicative of diabetes, infection, excessive exercise or fever.

If you are experiencing any of the above symptoms or have concern for your health, please seek medical attention immediately.

Our goal is to provide you with information that may be useful in attaining optimal health. Nothing in it is meant as a prescription or as medical advice. You should check with your physician before implementing any changes in your exercise or lifestyle habits, especially if you have physical problems or are taking medications of any kind.
<urn:uuid:48f80f8c-8a32-4a30-9b8f-25f8891c0a63>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-363.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00248-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894719
1,407
3.125
3
[ NOTE: For more primers like this, check out my tutorial series. ] There are many classic tech debates, and the question of what to formally call web addresses is one of the most nuanced. The way this normally manifests is someone asks for the “URL” to put into his or her browser, and someone perks up with, Actually, that’s called a URI, not a URL… The response to this correction can range from quietly thinking this person needs to get out more, to agreeing indifferently via shoulder shrug, to removing the safety clasp on a Katana. This page hopes to serve as a simple, one page summary for navigating the subtleties of this debate. URI, URL, URN As the image above indicates, there are three distinct components at play here. It’s usually best to go to the source when discussing matters like these, so here’s an exerpt from Tim Berners-Lee, et. al. in RFC 3986: Uniform Resource Identifier (URI): Generic Syntax: A Uniform Resource Identifier (URI) is a compact sequence of characters that identifies an abstract or physical resource. A URI can be further classified as a locator, a name, or both. The term “Uniform Resource Locator” (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network “location”). Wikipedia captures this well with the following simplification: One can classify URIs as locators (URLs), or as names (URNs), or as both. A Uniform Resource Name (URN) functions like a person’s name, while a Uniform Resource Locator (URL) resembles that person’s street address. In other words: the URN defines an item’s identity, while the URL provides a method for finding it. So we get a few things from these descriptions: - First of all (as we see in the diagram as well) a URL is a type of URI. So if someone tells you that a URL is not a URI, he’s wrong. But that doesn’t mean all URIs are URLs. All butterflies fly, but not everything that flies is a butterfly. - The part that makes a URI a URL is the inclusion of the “access mechanism”, or “network location”, e.g. - The URN is the “globally unique” part of the identification; it’s a unique name. So let’s look at some examples of URIs—again from the RFC: ftp://ftp.is.co.za/rfc/rfc1808.txt(also a URL because of the protocol) http://www.ietf.org/rfc/rfc2396.txt(also a URL because of the protocol) ldap://[2001:db8::7]/c=GB?objectClass?one(also a URL because of the protocol) mailto:John.Doe@example.com(also a URL because of the protocol) news:comp.infosystems.www.servers.unix(also a URL because of the protocol) telnet://192.0.2.16:80/(also a URL because of the protocol) Those are all URIs, and some of them are URLs. Which are URLs? The ones that show you how to get to them. Again, the name vs. address analogy serves well. Which is more accurate? So this brings us to the question that brings many readers here: Which is the more proper term when referring to web addresses? Based on the dozen or so articles and RFCs I read while researching this article, I’d say that depends on a very simple thing: whether you give the full thing or just a piece. Well, because we often use URIs in forms that don’t technically qualify as a URL. For example, you might be told that a file you need is located at files.hp.com. That’s a URI, not a URL—and that system might very well respond to many protocols over many ports. If you go to http://files.hp.com you could conceivably get completely different content than if you go to ftp://files.hp.com. 
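One way to make that distinction concrete is to check whether a reference carries a scheme, i.e. an access method. The short Python sketch below is a rough illustration, not part of the original write-up; it runs the standard library's urlparse over the files.hp.com examples. The heuristic is deliberately simplistic and ignores edge cases such as relative references.

from urllib.parse import urlparse

def describe(reference):
    """Label a reference as a full URL (scheme plus location) or a bare identifier."""
    parts = urlparse(reference)
    if parts.scheme and (parts.netloc or parts.path):
        return f"URL: {parts.scheme} tells you how to reach {parts.netloc or parts.path}"
    return "URI without an access method: a name, not an address"

for ref in ("files.hp.com",
            "http://files.hp.com/driver.zip",
            "ftp://files.hp.com/driver.zip",
            "mailto:John.Doe@example.com"):
    print(ref, "->", describe(ref))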
And this type of thing is only getting more common. Think of all the different services that live on the various Google domains. So, if you use URI you’ll always be technically correct, and if you use URL you might not be. But if you definitely dealing with an actual full URL, then “URL” is most accurate because it’s most specific. Humans are technically African apes, and dogs are mammals, but we rightly call them humans and dogs, respectively. And if you’re an American from San Francisco, and you meet someone from Sydney while in Boston, you wouldn’t say you’re from Earth, or from the United States. You’d say California, or—even better—San Francisco. So until something changes, URI is best used when you’re referring to a resource just by its name or some other fragment. And when you’re giving both the name of a resource and the method of accessing it (like a full URL), it’s best to call that a URL. - URIs are identifiers, and that can mean name, location, or both - All URNs and URLs are URIs, but the opposite is not true - The part that makes something a URL is the combination of the name and an access method, such as - If you are discussing something that’s both a full URL and a URI (which all URLs are), it’s best to call it a “URL” because it’s most specific CREATED: JANUARY 2005 | UPDATED: DECEMBER 2015
<urn:uuid:83878918-98f9-4942-9de0-84a39cdee2c0>
CC-MAIN-2017-04
https://danielmiessler.com/study/url-uri/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00276-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877792
1,289
3.453125
3
During a recent assessment I noticed that I was getting back (or, not getting back, as it were) a filtered response to hping SYN scans. That’s normal enough for sites that drop incoming scan traffic, but the weird part was that if I used a standard connect scan, i.e. one that completes the three-way-handshake, I would get back a ton of open ports on the same host. So if I did a “regular” scan, I’d send a SYN, get back a SYN-ACK, and then respond with an ACK. Fair enough, but if I sent just the SYN from tcpdump, the host would not respond at all. Well, after a couple of minutes of head-scratching, logic revealed the path to the truth: The two SYN packets are different. If these two SYN packets weren’t different, then the target host would have no way of knowing that the SYN-scan’s SYN packet wasn’t legitimate, and as such would respond with a SYN-ACK as with the standard connect scan. In short, the only way for the host (or a filtering device in between) to react to one SYN differently than another is for the packet itself to be different. Anatomy of a SYN As it turns out, there’s a very tangible reason for the two packets being different. The SYN packets created by most port scanners out there are created via the raw socket interface, and they tend to have some fairly standard characteristics that stand out to both humans and computers (as we’ll see below). Legitimate SYN packets, however, are created by the OS’s connect() syscall. This is what happens when you want to use a regular application on your system, like a web browser or a mail client. This is the “regular” way of building a SYN packet, and as we’ll see in a moment, the packets made in this way are quite different than those made by scanner applications. The raw socket technique can be thought of as “building” packets; it’s a method for modifying actual packet headers before they leave the machine. Common applications of this include spoofing the source address, changing checksums, and lighting up odd TCP flag combinations. The connect() method, however, is a packaged deal. When you call connect(), you get pretty much the same kind of packet every time. You don’t get to mangle it, morph it, or corrupt it. What you get is what you get. Given these differences, a number of products over the years have been coded to look at incoming SYN packets for the attributes associated with security scanners. They know that pretty much the only applications making these kinds of packets are illegitimate, so when they see them they immediately drop them. The Differences Illustrated Let’s actually take a look at the actual unique qualities of raw socket/scanner SYN packets and those created by connect(). Below are three SYN packets from three different applications: An Nmap SYN (-sS) Scan 14:53:09.185860 IP (tos 0x0, ttl 45, id 61607, offset 0, flags [none], proto: TCP (6), length: 44) source.60058 > dest.22: S, cksum 0x885a (correct), 877120720:877120720(0) win 2048 0x0000: 4500 002c f0a7 0000 2d06 8121 8115 0c09 E..,....-..!.... 0x0010: 8115 0dd0 ea9a 0016 3447 ccd0 0000 0000 ........4G...... 0x0020: 6002 0800 885a 0000 0204 05b4 `....Z...... An Nmap Connect (-sT) Scan 14:51:42.706802 IP (tos 0x0, ttl 64, id 61772, offset 0, flags [DF], proto: TCP (6), length: 60) source.36982 > dest.22: S, cksum 0x6e57 (correct), 113706876:113706876(0) win 5264 0x0000: 4500 003c f14c 4000 4006 2d6c 8115 0c09 E..<.L@.@.-l.... 0x0010: 8115 0dd0 9076 0016 06c7 077c 0000 0000 .....v.....|.... 0x0020: a002 1490 6e57 0000 0204 0524 0402 080a ....nW.....$.... 
0x0030: 14aa f630 0000 0000 0103 0302 ...0........ A Legitimate SYN From Firefox 15:31:34.079416 IP (tos 0x0, ttl 64, id 20244, offset 0, flags [DF], proto: TCP (6), length: 60) source.35970 > dest.80: S, cksum 0x0ac1 (correct), 2647022145:2647022145(0) win 5840 0x0000: 4500 003c 4f14 4000 4006 7417 0afb 0257 E.. 0x0010: 4815 222a 8c82 0050 9dc6 5a41 0000 0000 H."*...P..ZA.... 0x0020: a002 16d0 0ac1 0000 0204 05b4 0402 080a ................ 0x0030: 14b4 1555 0000 0000 0103 0302 ...U........ Notice that the two latter SYN packets are very similar. They are the two created by the OS’s connect() syscall, while the first packet was created via a raw socket. Here are a few of the differences: - The size of the connect() packets is 60 bytes, and only 44 for the raw socket packet. - The TTL values for the connect() packets are 64, and 45 on the raw socket packet. - The “don’t fragment” bit is set in the “legitimate”, connect() packets, but not in the other. So the upshot is that you may actually get better scan results in many environments by doing “regular”, connect scans instead of SYN scans because of how the SYNs for each are constructed. The next thing on my agenda is to use nemesis to make a few custom SYN packets. I can build some that look just like the legitimate SYN packets — matching the size, TTL, and flag contents exactly. Then I can simply toggle each of them in sequence to figure out which value (or values) is considered illegitimate. I’ll do that soon and post the results for anyone interested. ::
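The plan above uses nemesis; as an alternative sketch, the Python/Scapy snippet below builds roughly the same series of test SYNs, assuming Scapy is installed and you have raw-socket privileges. The target address is a placeholder, the field values are copied from the packet dumps above, and the idea is to toggle one attribute per probe and watch with tcpdump to see which field the filtering device keys on. Run it only against hosts you are authorized to test.

#!/usr/bin/env python3
# Requires scapy and root privileges.
from scapy.all import IP, TCP, send

TARGET = "192.0.2.10"   # placeholder; substitute the host you are assessing
PORT = 22

def syn(ttl, df, window, options):
    """Build a SYN with the given TTL, DF bit, window size, and TCP options."""
    ip = IP(dst=TARGET, ttl=ttl, flags="DF" if df else 0)
    return ip / TCP(dport=PORT, flags="S", window=window, options=options)

# Option lists modeled on the dumps: scanner SYNs carry only an MSS,
# connect() SYNs add SACK-permitted, timestamps, and window scaling.
scanner_opts = [("MSS", 1460)]
connect_opts = [("MSS", 1460), ("SAckOK", b""), ("Timestamp", (1234, 0)),
                ("NOP", None), ("WScale", 2)]

probes = {
    "scanner_baseline":  syn(ttl=45, df=False, window=2048, options=scanner_opts),
    "ttl_like_connect":  syn(ttl=64, df=False, window=2048, options=scanner_opts),
    "df_bit_set":        syn(ttl=45, df=True,  window=2048, options=scanner_opts),
    "full_connect_look": syn(ttl=64, df=True,  window=5840, options=connect_opts),
}

if __name__ == "__main__":
    for name, pkt in probes.items():
        print("sending:", name)
        send(pkt, verbose=False)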
<urn:uuid:767bcd0c-c532-4493-a69d-544e902fab44>
CC-MAIN-2017-04
https://danielmiessler.com/study/synpackets/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz
en
0.873592
1,471
2.53125
3
For all the harping I do on this blog about IPv4 address depletion and the need to prepare yourselves for IPv6, there is another number resource that is also being quickly depleted, and that I haven't written about before: the 2-byte autonomous system numbers (ASNs). A 16-bit number space gives you 65,536 possible numbers (AS numbers 0 – 65535). Out of these, the IANA reserves 1,026 of them: 64512 – 65534 for private, reusable ASNs (similar to private RFC1918 IPv4 addresses) and a few others such as 0 and 65535 and one that is important to this article, 23456. Presently 49,150 ASNs have been allocated out of the public pool, so there are 15,360 available ASNs remaining: about 23.8 percent of the public pool. An analysis of the allocation rate of 2-byte ASNs shows that the available pool will run out in mid-2011. Eerily close to the date that we will run out of IPv4 addresses. Fortunately there is much less cause for concern about ASN depletion than about IPv4 depletion, for two reasons:
- Unlike IP addresses, which are necessary for anyone that wants to connect to an IP network, autonomous system numbers matter only to networks that are running BGP.
- Just as IPv6 was created to solve the IPv4 problem by offering an address size four times as large, 4-byte ASNs have been created to solve the 2-byte ASN depletion problem. But where transition to IPv6 can be complicated because of lack of interoperability between IPv4 and IPv6, the transition to 4-byte ASNs is far simpler.
This post describes the format of the 4-byte ASN, how it interoperates with 2-byte ASNs, and what you need to do (if anything) to prepare your network for them.

The 4-Byte ASN Format

4-byte ASNs provide 2^32 or 4,294,967,296 autonomous system numbers ranging from 0 to 4294967295. The first thing to notice about these numbers is that they include all of the older 2-byte ASNs, 0 through 65535. That greatly helps with interoperability between autonomous systems using 2-byte ASNs and those using 4-byte ASNs. (An oft-heard complaint about IPv6 is that interoperability with IPv4 could have been more easily supported if 4.3 billion IPv6 addresses had been reserved as representative of the existing IPv4 addresses, but that's another story.) A 4-byte ASN between 0 and 65535 is called a mappable ASN, because it can be represented in just 2 bytes; the first 16 bits are in every case all zeroes. Stemming from a concern that 32-bit ASNs might be difficult to write and manage, there are three ways of representing 4-byte ASNs:
· asplain is a simple decimal representation of the ASN, from 0 to 4294967295.
· asdot+ breaks the number up into low-order and high-order 16-bit values, separated by a dot. All of the older 2-byte ASNs can be represented in the low-order value, with the high-order value set to 0. So for example, 65535 is 0.65535. One more than that, 65536, is outside the value that can be represented in the low-order range alone, and is therefore represented as 1.0. 65537 would be 1.1, 65680 is 1.144, and so on. You can figure the low- and high-order values by subtracting multiples of 65,536 from the asplain representation of the ASN, with the high-order value representing the multiples of 65,536. The ASN 327700, then, is 5.20: five times 65,536 plus 20 more. The largest ASN, 4294967295, is 65535.65535: 65,535 times 65,536, plus 65,535 more.
· asdot is a mixture of asplain and asdot+.
Any ASN in the 2-byte range of 0 – 65535 is written in asplain (so 65535 is written "65535") and any ASN above that range is written in asdot+ (so 65536 is written "1.0"). Asplain is obviously the most straightforward method of understanding the new ASNs, although the larger numbers might become unwieldy to write and therefore prone to typographical mistakes in written documentation or router configurations. Asdot+ is much simpler to write, but harder to calculate from its simple decimal equivalent. If you work in this format regularly, it's probably worth your time to write a simple script that does the conversions for you to prevent calculation errors. Asdot might appear to have limited usefulness. After all, it's not any harder to write "0.3657" than to write "3657", and the need to do some calculations comes in when you go above 65535; asdot does nothing to help you there. There is, however, a subtlety to this. The regional number assignment authorities – the Regional Internet Registries, or RIRs – differentiate between a 16-bit number that is an older 2-byte ASN and a mappable 4-byte ASN (again, the set of 32-bit ASNs in which the first 16 bits are all 0). So "3657" is a 2-byte ASN, and "0.3657" is a 4-byte ASN. This, of course, leads us to look briefly at just what the RIRs' policies are for assigning 4-byte ASNs.

ASN Allocation Policies

All five of the RIRs (AfriNIC, APNIC, ARIN, LACNIC, and RIPE NCC) have the same assignment policies for 4-byte ASNs:
· 4-byte ASNs have been available since 1 January 2007. The default assignment, if you request an ASN, is to give you a 2-byte ASN and only assign a 4-byte ASN if you specifically request it.
· Beginning on 1 January 2009 (yes, about a month from now!) that policy reverses: a 4-byte ASN will be the default. You can still get a 2-byte ASN, but only if you specifically request it.
· A year later, on 1 January 2010, all ASN assignments will be 4-byte. The ASN you receive might be of the form 0.XX (where the high-order 16 bits are all 0 and the low-order 16 bits are not), but the RIRs will make no distinction between those numbers and any other 4-byte ASN. And although it won't affect your network in any way, the 16-bit ASN you've had maybe for years will, in the eyes of the RIRs, be a mappable 32-bit ASN. For instance, Level3 Communications' AS3356 becomes in the eyes of the RIRs, at the beginning of 2010, 0.3356.
These policies raise several questions:
· If you plan to request a new ASN assignment starting in 2009, what do you need to do to prepare for it?
· How do the new 4-byte ASNs interoperate with older autonomous systems using 2-byte ASNs?
· If you have an existing 2-byte ASN, does anything change?

The ASN's Role in BGP

A brief review of how BGP uses autonomous system numbers will help in understanding how the new format might impact BGP networks. Most of you already know the basics of BGP; if you do, feel free to skip ahead. The purpose of BGP, unlike any IGP (OSPF, IS-IS, EIGRP and RIP), is to route between domains under separate administrative control – that is, systems that are autonomous from each other. If you're going to route between (and among) these autonomous routing domains, you need some way of identifying individual ASs. That identifier is the autonomous system number. The ASN has two essential functions in BGP: First, it helps BGP determine the shortest path to a destination. When BGP advertises a route to a neighbor in an Update message, it attaches several path attributes to the route.
When a router learns more than one BGP route to the same destination, the BGP decision process evaluates the routes' path attributes in a prioritized order to decide which of the routes is most preferable. (BGP path attributes can be added, removed, or changed in all sorts of ways to influence the BGP decision process. This is the basis for BGP routing policies.) One of these attributes, attached to every BGP route, is called AS_PATH. When a router advertises a destination within its own AS to a neighbor in another AS, it puts its local ASN in the AS_PATH. As the route is advertised to subsequent autonomous systems, each AS border router adds its own ASN to the attribute. The AS_PATH, then, becomes a list of ASNs that describes the path back to the destination. A router can choose a shortest path by choosing the route with the fewest ASNs listed in its AS_PATH.

The second ASN function is a very simple loop avoidance mechanism. Because a router adds its ASN to the AS_PATH before advertising a route to a neighbor in another AS, a route that loops – that is, exits an AS and is subsequently advertised back to the same AS – is easily detected by examining the AS_PATH. If a router sees its own ASN listed in the AS_PATH of a route advertised to it by a neighbor, it drops the route.

The ASN also appears in a path attribute called AGGREGATOR. When a number of routes are summarized (aggregated), route details can be lost. The AGGREGATOR attribute can be added to an aggregate route to indicate the Router ID and ASN of the router performing the aggregation. This attribute does not influence the BGP decision process, but it can be useful for tracing back problems with aggregate routes.

A third attribute that uses the ASN is COMMUNITIES. This optional attribute helps you manage routing policies when they apply to a large number of routes; using a number of methods you can assign one or more COMMUNITIES attributes to prefixes, and then apply a routing policy to a community rather than individual routes. For example, you might define a COMMUNITIES attribute named Cust_Routes and then add that attribute to all routes advertised into your AS by all your customers. Then anywhere in your network that you need to apply a policy to all of your customer routes, you can apply the policy to routes having the Cust_Routes attribute rather than having to identify each prefix (and possibly change all your prefix lists any time a customer route is added or removed). The COMMUNITIES attribute is a 32-bit value in which the first 16 bits are an ASN and the last 16 bits are arbitrarily assigned by you to have whatever meaning you want. The important point here is not so much the functions of AGGREGATOR or COMMUNITIES, however, but that they, like AS_PATH, are formatted to carry 2-byte ASNs; the formats of these attributes must therefore be adapted to carry the larger 32-bit ASNs.

In addition to these three path attributes, the BGP Open message also references the ASN, in a 16-bit field called My Autonomous System. BGP runs on top of a TCP session between neighbors; after the TCP session is established, the neighbors use Open messages to negotiate the BGP session. Each neighbor indicates its Router ID, ASN, the BGP version it is running (always version 4 in modern networks), its hold time (the time it expects to wait for a Keepalive from the neighbor before closing the session) and possibly some optional parameters. There is a lot more to BGP than what has been described here.
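To make the size problem concrete before moving on: the classic community value is just two 16-bit fields packed into one 32-bit integer, so the ASN half simply cannot hold a 4-byte ASN. The following is only a rough illustration of that packing arithmetic, not code from the original article; the function names and sample values are arbitrary.

    # Classic community: 16-bit ASN and 16-bit local value packed into 32 bits (written ASN:value)
    def pack_community(asn, value):
        if asn > 0xFFFF or value > 0xFFFF:
            raise ValueError("classic communities only hold 16-bit fields")
        return (asn << 16) | value

    def unpack_community(community):
        return community >> 16, community & 0xFFFF

    print(unpack_community(pack_community(3356, 70)))   # -> (3356, 70), i.e. 3356:70
    try:
        pack_community(327700, 70)                       # a 4-byte ASN (asdot+ 5.20) does not fit
    except ValueError as exc:
        print(exc)

This is exactly why the attribute formats that embed an ASN need some kind of adaptation once ASNs grow beyond 16 bits.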
What is important for this discussion is that there are four BGP data entities that carry ASNs:
· The AS_PATH attribute;
· The AGGREGATOR attribute;
· The COMMUNITIES attribute; and
· The Open message
Consideration must be given to each of these entities not only in terms of adapting them to 4-byte ASNs but also making the adaptations interoperable with older BGP implementations that only understand 2-byte ASNs. For simplicity, we'll call BGP implementations supporting 4-byte ASNs New_BGP, and legacy BGP implementations that only support 2-byte ASNs Old_BGP.

The first requirement for a New_BGP implementation is to discover whether a neighbor is New_BGP or Old_BGP. It does this by using the BGP Capability Advertisement when opening a BGP session. In addition to advertising itself as New_BGP, it includes its 4-byte ASN in the Capability advertisement. If a neighbor responds that it also is a New_BGP speaker, the neighbor includes its 4-byte ASN in its own Capability advertisement. Thus two New_BGP neighbors can inform each other of their 4-byte ASNs without using the 2-byte My Autonomous System field in the Open message. (If the neighbors are New_BGP but have 2-byte ASNs or mappable 4-byte ASNs, they can still put the ASN in the My Autonomous System field in addition to the Capability advertisement.) If a neighbor is Old_BGP, it either responds that it does not support the 4-byte ASN capability or does not respond to the Capability advertisement at all. In this case, the New_BGP neighbor can still bring up a session with the Old_BGP neighbor, but cannot advertise its 4-byte ASN. The neighbor wouldn't understand it. Instead, New_BGP uses a reserved 2-byte ASN, 23456, called AS_TRANS (AS_TRANS is easily remembered because of its 2-3-4-5-6 sequence). This AS number is added to the My Autonomous System field of the Open message. Because AS_TRANS is reserved, no Old_BGP speaker can use it as its own ASN; only New_BGP speakers can use it. Interoperable peering, then, is achieved because the New_BGP speaker "knows" its neighbor is an Old_BGP speaker and adapts to it; the Old_BGP speaker simply continues using legacy BGP rules.

Path Attribute Interoperability

Because the New_BGP speaker knows whether its neighbor is New_BGP or Old_BGP, it knows what rules to follow when advertising routes to the neighbor.
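Finally, picking up the earlier suggestion that the asplain/asdot+ conversions are worth scripting: a minimal sketch follows. The function names and sample ASNs are just illustrations, not anything prescribed by the article.

    # Convert between asplain (e.g. 327700) and asdot+ (e.g. "5.20") notation
    def asplain_to_asdot_plus(asn):
        high, low = divmod(asn, 65536)          # high-order and low-order 16-bit halves
        return "%d.%d" % (high, low)

    def asdot_plus_to_asplain(asdot):
        high, low = (int(part) for part in asdot.split("."))
        return high * 65536 + low

    def to_asdot(asn):
        # asdot simply falls back to plain decimal for values below 65536
        return str(asn) if asn < 65536 else asplain_to_asdot_plus(asn)

    print(asplain_to_asdot_plus(327700))         # -> 5.20
    print(asdot_plus_to_asplain("65535.65535"))  # -> 4294967295, the largest ASN
    print(to_asdot(3657), to_asdot(327700))      # -> 3657 5.20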
<urn:uuid:4c654a13-c348-45ca-a7c5-666a978a792f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2233273/cisco-subnet/understanding-4-byte-autonomous-system-numbers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932431
3,144
2.578125
3
Around the world, IBM employees are celebrating a big birthday. Thursday, June 16, marks the company's centennial. IBM is 100 years old. Though much has changed over the years since IBM was founded in 1911 — as punch cards and cumbersome mainframes paved the way for PCs and smartphones — the public sector continued to rely on the company's technology as the Information Age became the digital age. What began as manual processes — IBM's tabulating machines sorted and counted the nation's first computerized Census data in the early 1900s — eventually evolved into the Watson supercomputer (named after IBM's first president, Thomas J. Watson Sr.) that won a game of Jeopardy earlier this year against human opponents. IBM is banking on data analytics as a big part of its future, including its worldwide "Smarter Cities" initiative. "A hundred years old is a pretty impressive age," said Dave McQueeney, vice president of software for IBM Research, "and it's something that I think is a great cause for celebration. To have that long of a life as a leader, as a company, is a tremendous testament to the ability to change, adapt and grow." IBM's 100 years have included several historical milestones, many a result of working hand-in-hand with government agencies. The Census was only the beginning. In 1935, after the Social Security Act was passed, the government required businesses to do accurate timekeeping so that workers would receive proper benefits based on how many years they worked. The use of IBM's more than 400 punch card tabulating machines supported employment records for 26 million Americans, which helped decide those benefits, McQueeney said. "If you go through the years and you look at Social Security, it's always been one of the federal agencies that has the largest systems, the largest stores of data — arguably the most critical data for all of us as citizens and taxpayers," McQueeney said. Moving forward a few decades, IBM took computing out of this world — literally — by assisting with NASA space exploration, including the historic Apollo 11 mission in 1969 that enabled astronaut Neil Armstrong to take mankind's first steps on the Moon. For the Apollo missions, IBM performed trajectory tracking as well as on-board computing. During Apollo 11, which launched in 1969 from the Kennedy Space Center in Florida, IBM's computer systems set the spacecraft on its trajectory to the Moon and back, said Jack Flora, IBM's client executive for NASA. "A lot of our early work took place around solving the problems for NASA that included real-time analysis that didn't exist before that," Flora said. "So it was really created for and with NASA." One year later, NASA launched Apollo 13 — its third lunar mission. The famous spacecraft, which also launched from the Kennedy Space Center, didn't complete its mission because an oxygen tank exploded, forcing the crew to return to Earth as soon as possible. The Apollo 13 mission failure left little time for the crew to decide the best course of action. McQueeney recalled a story from Homer Ahr, an IBM programmer who worked on the Apollo missions, who explained that during the crisis, IBM computing systems on the ground performed real-time simulations of "what if" scenarios. After trying hundreds of computer-simulated scenarios, a solution was found to bring the spacecraft back safely. McQueeney said that at the time, most computer simulations weren't being done in real time, but IBM was one of the first to use the technology.
Flora also attributed the success of the Apollo 13’s safe return to the IBM and NASA teams that prepared the mission in advance. The teams spent countless hours in advance of the mission preparing for possible outcomes. “[IBM and NASA] had as many variations and variables they were thinking about before the fact,” Flora said. “And I don’t think anyone thought of the actual one that happened on Apollo 13, but by going through all that preparation, they were enabled to help when the crisis did happen.” After years of working with NASA on space exploration missions, and with the Department of Defense during the Cold War, McQueeney said the importance of data became a key lesson for IBM. “Whether that’s data about a mission, or about personal information. Or if you’re in the defense agencies, maybe it’s data about a threat or what might indicate there’s a cyber-attack under way,” McQueeney said. The company is staking much of its future on providing analytics to states and municipalities. Data is a major component of IBM’s Smarter Cities initiative, a range of projects that are designing computer models of transportation patterns, precipitation and other items of interest to city governments. For example, the computer models can determine factors like how quickly water is being absorbed in soil and what neighborhoods should then be evacuated during a disaster, McQueeney said. Having predictive analysis of these factors can help improve public safety. Overall, McQueeney said the most important lessons have been “data, dependability of the systems that manipulate the data, and the energy you get from innovating with your client to solve problems together that neither one of you could have solved by yourself.”
<urn:uuid:fa309407-574d-4c48-ad2e-781f3b8801bf>
CC-MAIN-2017-04
http://www.govtech.com/technology/IBM-Turns-100.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00239-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964209
1,146
3.1875
3
The news from the US yesterday was that Google's Project Wing had been given permission to test its delivery drones on six different sites. Commercial drone use is not far off, and soon companies like Amazon and Google will be delivering shopping, clothes and a host of other items door-to-door, from drone-to-drone. But there was another announcement from the US yesterday. The White House announced its plan to encourage the use of drones throughout government departments in a move to bring GIS systems into the heart of government operations in the United States. Matt Jones, technical research manager at Esri UK, explains that "GIS [systems] can empower governments and businesses by capturing, storing, manipulating, analysing, managing and presenting all types of critical spatial or geographical data – in almost real time". Drones, in business and security, are becoming an increasingly used technology that saves time and improves the ability to monitor assets. "Any government or organisation can use a drone to gather data to drive results," according to Jones, which will "reduce costs and improve operational efficiencies by capturing professional 2D and 3D imagery in a matter of minutes". The availability of drones has given GIS systems a greater reach and enhanced their application. With the drone, GIS systems can enhance the capabilities of the organisation, and also provide specialised tools in order to gain access to sensitive information. Vast networks of information are derived from the interpretation of data. The drone – with its implementation of geospatial intelligence – has a vast host of applications, and it is easy to see why the US government believes the commercial drone industry will be worth $82 billion to the U.S. economy in ten years, and by 2025 support up to 100,000 jobs. Extensive and increasing use of drones and GIS systems, however, does have a dark side. Aside from the obvious possibility of breaches of individuals' privacy and security, geographical information systems can be used to shape and manipulate certain landscapes for military, political, or financial gains.
<urn:uuid:5c280678-7934-479a-a4f8-af19bc177aba>
CC-MAIN-2017-04
http://www.information-age.com/theyre-watching-you-drone-technology-establish-evolved-geographical-information-systems-123461772/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00175-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944919
430
2.546875
3
Armored fiber optic cables are often installed in a network for added mechanical protection, as they have extra reinforcing in the cable housing to prevent damage. Two types of armored fiber optic cables exist: interlocking and corrugated. Interlocking armor is an aluminum armor that is helically wrapped around the cable and found in indoor and indoor/outdoor cables. It offers ruggedness and superior crush resistance. Corrugated armor is a coated steel tape folded around the cable longitudinally. It is found in outdoor cables and offers extra mechanical and rodent protection. The Structure Of An Armored Fiber Optic Cable In basic armored fiber cable designs, the outer sleeve provides protection against wind, solvents, and abrasion. This outer sleeve is usually made of plastic such as polyethylene. The next layer between the sleeve and the inner jacket is an armoring layer of materials that are difficult to cut, chew, or burn, such as steel tape and aluminum foil. This armoring material also prevent the fiber from being stretched during cable installation. Ripcords are usually provided directly under the armoring and the inner sleeve to aid in stripping the layer for splicing the cable to connectors or terminators. The inner jacket is a protective and flame retardant material to support the inner fiber cable bundle. The inner fiber cable bundle includes strength members, fillers and other structures to support the fibers inside. There are usually a central strength member to support the whole fiber cable. There are several potential jacket materials are considered for armored indoor outdoor cable. The choice of jacket material depends on the required level of flame retardance in the final cable, including Polyvinyl Chloride (PVC) jacket, Halogen Free Polyolefins (HFPO) and Coated Steel Armor. Armored cable is also available with a double-armor protective jacket for added protection in harsh environments. The steel armor should always be properly grounded to an earth ground at all termination points, splice locations and all building entrances. Benefits Of Installing Armored Cable During some fiber optic installations, there is a need to provide extra protection for the cable due to the installation environment. That environment may be underground or in buildings with congested pathways. Installing an armored fiber-optic cable in these scenarios would provide extra protection for the optical fiber and added reliability for the network, lessening the risk of downtime and cable damage due to rodents, construction work, weight of other cables and other factors. But one inconvenience is the need to bond and ground the cable. This inconvenience can be eliminated by using a dielectric-armored cable. Dielectric-armored cable options exist that offer the required protection without the hassle of grounding and bonding the armor, or the extra steps of installing a conduit and cable when the cable is without any armored protection. Compared With Other Common Fiber Optic Cables These armored fiber optic cables are the same diameter with commonly seen 2mm O.D or 3mm O.D cables, and their optical performance is also same as the common fiber optic cables. The difference is armored fiber cables are with stainless steel armor inside the cable jacket and outside the optical fiber, this stainless steel armour are strong enough to make the cables anti-rodent and the whole cable can resist the steps by an adult people. 
Armored fiber optic patch cables are also available in single-mode and multimode types, with optional connectors including the commonly used LC, SC, ST, FC, E2000, MU and SMA. Cable structure can be simplex, duplex or multi-fiber. Armored fiber cables from FiberStore can be made with custom colors and cable lengths, and they are manufactured according to industrial and international standards.
<urn:uuid:c76f5952-d5fe-4f48-b4ae-af2406daf47c>
CC-MAIN-2017-04
http://www.fs.com/blog/common-armored-fiber-optic-cables.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916923
757
3.0625
3
An Introduction to Information Technology - 15th March 2016 - Posted by: Stacie Jansen van Vuren - Category: Technology The most commonly accepted definition of Information Technology is the use of computers and software to manage information. In recent years there has been a shift in focus from single computers to computer networks and the main credit for this must be given to the internet. The internet has become the number one choice for communication whether that choice is email, social media, VoIP calling or instant messaging services. However, to truly appreciate the advances in Information Technology we need to take a look at its history. The history of Information Technology can be divided into four main ages: The Premechanical Age: This is the earliest age of Information Technology and is defined as the period from 3000 BC to 1450 AD. This is considered to be the time when humans first began to communicate using simple language and basic drawings known as petroglyphs. Petroglyphs were generally carved into rock and these were the seed from which early alphabets grew. During this period number systems also began to emerge and with the creation of numbers came the need to process them. Thus the abacus was developed and this was the first information processor. The Mechanical Age: The mechanical age is the time between 1450 AD and 1840 AD and it was during this period that we begin to see the connections to our current information technology. Interest in information technology blossomed in this era and basic technologies were invented to satisfy the human thirst for calculations. It was also during this time that Charles Babbage was credited with the development of the first automated computing machine and Ada Lovelace’s work is considered to be the first example of computer programming. The machine was unfortunately never completed; however, it was the beginning of the journey that led to modern day computers. The Electromechanical Age: This age is when information technology began to take the form that we know today. Between 1840 and 1940 telecommunication began to emerge and one of the first creations was the telegraph in the early 1800’s. In 1835 Samuel Morse created the Morse Code and the telephone was brought to us by Alexander Graham Bell in 1876. 1894 saw Guglielmo Marconi develop the radio and all of these inventions led to massive advances in the field of information technology. To demonstrate this, it must be mentioned that in 1940, Harvard University created the Mark 1 which was the first large-scale automatic digital computer which was programmed using punch cards. This creation led to the exploration of developing smaller versions that could be used in businesses and, eventually, homes. The Electronic Age: This is the information technology age that we are fortunate enough to live in at this time. For detailed information on the inventions that have taken place during the Electronic Age of information technology, you can read all about it here. Now that computers form the main resource for information technology, it is important to ensure that we have qualified professionals who are able to manage this infrastructure. In order to do so it is necessary to gain the appropriate training and certifications that will enable Information Technology Professionals to perform their roles in an effective and efficient manner. 
The Value of Information Technology Certification As with any other career path, gaining the relevant certifications will enhance your Information Technology career and increase your employability and earning potential. Employers prefer to hire certified candidates as this proves to be less of a risk to the organisation. Certified Information Technology Professionals negate the need for full training which saves the company money. They are also validated in their ability to perform their role effectively and efficiently which reassures the employer that the candidate will be ready to hit the ground running. Certifications are beneficial to both the IT professional and the organisation that they work for. There is a wide range of IT certifications available and one of the most sought after when beginning the IT certification process is the CompTIA A+ qualification. IT Careers Start with CompTIA A+ Once you have decided to gain an IT certification, the next question is which one should I choose as my first IT certification? We recommend that you begin with the CompTIA A+ qualification as this is the best starting point in the journey to IT certification. CompTIA A+ is an IT certification awarded by CompTIA once a student has studied the relevant training material and passed the associated CompTIA A+ certification exam. The CompTIA A+ training course teaches the knowledge and skills that are required to install and maintain operating systems, hardware, mobile devices, laptops, printers and basic networking technologies. Upon passing the CompTIA A+ certification exam a student will be able to configure, upgrade and maintain Windows operating systems, computer work stations and small to medium networks. As you can see, the CompTIA A+ qualification will provide you with everything that you need in an IT certification which will enable you to become a productive IT Professional. If you are already working in an IT environment, the CompTIA A+ IT certification will validate your experience and prove to employers that you are capable and knowledgeable. CompTIA A+ Exam Description Now that you have completed your CompTIA A+ training course and practiced with the sample exams until you feel confident in knowing the material, it is time to take the CompTIA A+ certification exam. For this IT certification you will need to write an exam that is broken down into two parts: • CompTIA A+ 220-801 • CompTIA A+ 220-802 The CompTIA A+ 220-801 IT certification exam will test you on the fundamentals such as installation, configuration, mobile devices, networking, safety measures and prohibited content. This is the first of the two parts of the CompTIA A+ certification exam. The second part of the CompTIA A+ exam is the CompTIA A+ 220-802 which will test your ability to install and configure computer and mobile device operating systems and common functions in email, security and networking aspects of information technology. The CompTIA A+ certification exam consists of a maximum of 90 questions (across the two parts) which are comprised of both multiple-choice and performance-based questions. You will be given 90 minutes to complete both parts of the exam and you will be required to achieve a minimum of 675 on a scale of 900 for the first part (CompTIA A+ 220-801) and 700 on a scale of 900 for the second part (CompTIA A+ 220-802) in order to pass the exam. Upon passing both parts of the CompTIA A+ exam, you will earn your CompTIA A+ certification which is internationally recognised. 
The CompTIA A+ qualification will set you well on your way to becoming a respected Information Technology Professional.
<urn:uuid:115cc968-2033-41e6-ac91-4d4cbb570bf5>
CC-MAIN-2017-04
https://www.itonlinelearning.com/blog/an-introduction-to-information-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00387-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95396
1,396
3.09375
3
Welcome to the world of exotic flow control. With Python 2.2 (now in its third alpha release -- see Resources later in this article), programmers will get some new options for making programs tick that were not available -- or at least not as convenient -- in earlier Python versions. While what Python 2.2 gives us is not quite as mind-melting as the full continuations and microthreads that are possible in Stackless Python, generators and iterators do something a bit different from traditional functions and classes. Let's consider iterators first, since they are simpler to understand. Basically, an iterator is an object that has a .next() method. Well, that's not quite true; but it's close. Actually, most iterator contexts want an object that will generate an iterator when the new iter() built-in function is applied to it. To have a user-defined class (that has the requisite .next() method) return an iterator, you need to have an __iter__() method return self. The examples will make this all clear. An iterator's .next() method might decide to raise a StopIteration exception if the iteration has a logical termination. A generator is a little more complicated and general. But the most typical use of generators will be for defining iterators; so some of the subtlety is not always worth worrying about. A generator is a function that remembers the point in the function body where it last returned. Calling a generator function a second (or nth) time jumps into the middle of the function, with all local variables intact from the last invocation. In some ways, a generator is like the closures which were discussed in previous installments of this column discussing functional programming (see Resources). Like a closure, a generator "remembers" the state of its data. But a generator goes a bit further than a closure: a generator also "remembers" its position within flow-control constructs (which, in imperative programming, is something more than just data values). Continuations are still more general since they let you jump arbitrarily between execution frames, rather than returning always to the immediate caller's context (as a generator does). Fortunately, using a generator is much less work than understanding all the conceptual issues of program flow and state. In fact, after very little practice, generators seem as obvious as ordinary functions. Taking a random walk Let's consider a fairly simple problem that we can solve in several ways -- both new and old. Suppose we want a stream of positive random numbers less than one that obey a backward-looking constraint. Specifically, we want each successive number to be at least 0.4 more or less than the last one. Moreover, the stream itself is not infinite, but rather ends after a random number of steps. For the examples, we will simply end the stream when a number less than 0.1 is produced. The constraints described are a bit like one might find in a "random walk" algorithm, with the end condition resembling a "statisficing" or "local minimum" result -- but certainly the requirements are simpler than most real-world ones. In Python 2.1 or earlier, we have a few approaches to solving our problem. One approach is to simply produce and return a list of numbers in the stream. 
This might look like:

    import random
    def randomwalk_list():
        last, rand = 1, random.random() # init candidate elements
        nums = []                       # empty list
        while rand > 0.1:               # threshold terminator
            if abs(last-rand) >= 0.4:   # accept the number
                last = rand
                nums.append(rand)       # add latest candidate to nums
            else:
                print '*',              # display the rejection
            rand = random.random()      # new candidate
        nums.append(rand)               # add the final small element
        return nums

Utilizing this function is as simple as:

    for num in randomwalk_list():
        print num,

There are a few notable limitations to the above approach. The specific example is exceedingly unlikely to produce huge lists; but just by making the threshold terminator more stringent, we could create arbitrarily large streams (of random exact size, but of anticipatable order-of-magnitude). At a certain point, memory and performance issues can make this approach undesirable and unnecessary. This same concern got xreadlines() added to Python in earlier versions. More significantly, many streams depend on external events, and yet should be processed as each element is available. For example, a stream might listen to a port, or wait for user inputs. Trying to create a complete list out of the stream is simply not an option in such cases.

One trick available in Python 2.1 and earlier is to use a "static" function-local variable to remember things about the last invocation of a function. Obviously, global variables could do the same job, but they cause the familiar problems with pollution of the global namespace, and allow mistakes due to non-locality. You might be surprised here if you are unfamiliar with the trick -- Python does not have an "official" static scoping declaration. However, if named parameters are given mutable default values, the parameters can act as persistent memories of previous invocations. Lists, specifically, are handy mutable objects that can conveniently even hold multiple values. Using a "static" approach, we can write a function like:

    import random
    def randomwalk_static(last=[1]):    # init the "static" var(s)
        rand = random.random()          # init a candidate value
        if last[0] < 0.1:               # threshold terminator
            return None                 # end-of-stream flag
        while abs(last[0]-rand) < 0.4:  # look for usable candidate
            print '*',                  # display the rejection
            rand = random.random()      # new candidate
        last[0] = rand                  # update the "static" var
        return rand

This function is quite memory friendly. All it needs to remember is one previous value, and all it returns is a single number (not a big list of them). And a function similar to this could return successive values that depend (partly or wholly) on external events. On the down side, utilizing this function is somewhat less concise, and considerably less elegant:

    num = randomwalk_static()
    while num is not None:
        print num,
        num = randomwalk_static()

New ways of walking

"Under the hood", Python 2.2 sequences are all iterators. The familiar idiom for elem in lst: now actually asks lst to produce an iterator. The for loop then repeatedly calls the .next() method of this iterator until it raises a StopIteration exception. Luckily, Python programmers do not need to know what is happening here, since all the familiar built-in types produce their iterators automatically. In fact, dictionaries now have the methods .iterkeys(), .itervalues(), and .iteritems() to produce iterators; the first of these is what gets used in the new idiom for key in dct:.
Likewise, the new idiom for line in file: is supported via an iterator that reads lines only as they are needed. But given what is actually happening within the Python interpreter, it becomes natural to write custom classes that produce their own iterators rather than exclusively use the iterators of built-in types. A custom class that combines the direct for-loop usage of randomwalk_list() with the element-at-a-time parsimony of randomwalk_static() looks like this:

    import random
    class randomwalk_iter:
        def __init__(self):
            self.last = 1                       # init the prior value
            self.rand = random.random()         # init a candidate value
        def __iter__(self):
            return self                         # simplest iterator creation
        def next(self):
            if self.rand < 0.1:                 # threshold terminator
                raise StopIteration             # end of iteration
            else:                               # look for usable candidate
                while abs(self.last-self.rand) < 0.4:
                    print '*',                  # display the rejection
                    self.rand = random.random() # new candidate
                self.last = self.rand           # update prior value
                return self.rand

Use of this custom iterator looks exactly the same as for a true list generated by a function:

    for num in randomwalk_iter():
        print num,

In fact, even the idiom if elem in iterator is supported, which lazily tries only as many elements of the iterator as are needed to determine the truth value (if it winds up false, it needs to try all the elements, of course).

Leaving a trail of crumbs

The above approaches are fine for the problem at hand. But none of them scale very well to the case where a routine creates a large number of local variables along the way, and winds its way into a nest of loops and conditionals. If an iterator class or a function with static (or global) variables depends on multiple data states, two problems come up. One is the mundane matter of creating multiple instance attributes or static list elements to hold each of the data values. The far more important problem is figuring out how to get back to exactly the relevant part of the flow logic that corresponds to the data states. It is awfully easy to forget about the interaction and codependence of different data. Generators simply bypass the whole problem. A generator "returns" with the yield statement, but "remembers" the exact point of execution where it returned. The next time the generator is called, it picks up where it left off before -- both in terms of function flow and in terms of the values of its local data. One does not directly write a generator in Python 2.2+.
To my mind, a new keyword -- such as generator in place of would have been a better choice. Quibbles over syntax aside, generators have the good manners to automatically act as iterators when called on to do so. Nothing like the .__iter__() method of classes is needed here. Every yield encountered becomes a return value for generator's .next() method. Let's look at the simplest generator to make >>> from __future__ import generators >>> def gen(): yield 1 >>> g = gen() >>> g.next() 1 >>> g.next() Traceback (most recent call last): File "<pyshell#15>", line 1, in ? g.next() StopIteration Let's put a generator to work in our sample problem: from __future__ import generators # only needed for Python 2.2 import random def randomwalk_generator(): last, rand = 1, random.random() # initialize candidate elements while rand > 0.1: # threshhold terminator print '*', # display the rejection if abs(last-rand) >= 0.4: # accept the number last = rand # update prior value yield rand # return AT THIS POINT rand = random.random() # new candidate yield rand # return the final small element The simplicity of this definition is appealing. You can utilize the generator either manually or as an iterator. In the manual case, the generator can be passed around a program, and called wherever and whenever needed (which is quite flexible). A simple example of the manual case is: gen = randomwalk_generator() try: while 1: print gen.next(), except StopIteration: pass Most frequently, however, you are likely to use a generator as an iterator, which is even more concise (and again looks just like an old-fashioned sequence): for num in randomwalk_generator(): print_short(num) It will take a little while for Python programmers to become familiar with the ins-and-outs of generators. The added power of such a simple construct is surprising at first; and even quite accomplished programmers (like the Python developers themselves) will continue to discover subtle new techniques using generators for some time, I predict. To close, let me present one more generator example that comes from the test_generators.py module distributed with Python 2.2. Suppose you have a tree object, and want to search its leaves in left-to-right order. Using state-monitoring variables, getting a class or function just right is difficult. Using generators makes it almost >>>> # A recursive generator that generates Tree leaves in in-order. >>> def inorder(t): ... if t: ... for x in inorder(t.left): ... yield x ... yield t.label ... for x in inorder(t.right): ... yield x - Read the previous installments of Charming Python. - Get the third alpha release of Python 2.2. - Regarding the last several Python versions, Andrew Kuchling has written his usual excellent introduction to the changes in Python 2.2; read What's New in Python 2.2. - Read the definitive word on Simple Generators in the Python Enhancement Proposal, PEP255. - The real dirt on Iterators is in PEP234. - The code demonstated in this column installment can be found in a single source file. - Read related developerWorks articles by David Mertz: - Browse more Linux resources on developerWorks. - Browse more Open source resources on developerWorks.
<urn:uuid:ffe34bcd-d742-4412-97d6-2fb3ab4b3acb>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/library/l-pycon/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00295-ip-10-171-10-70.ec2.internal.warc.gz
en
0.853956
3,039
3.53125
4
"Computer viruses may have contributed to the Spanair passenger plane crash which killed 154 people in Madrid two years ago", reports the Spanish newspaper El Pais. "The Spanair central computer which registered technical problems in airplanes was not functioning properly because it had been contaminated by harmful computer programs", the magazine continues. We cannot confirm whether malware played a part, nor do we know which particular malware it could have been. However, over the years, we have seen real-world infrastructure affected by computer problems. In most cases, this has been just a side effect; the malware behind the problem wasn't trying to take systems down, it just did. This was especially bad in 2003, when we saw malware induced problems in real-life systems unprecedented in their severity. The main culprits were network worms Slammer and Blaster. The network congestion caused by Slammer dramatically slowed down the network traffic of the entire Internet. One of the world's largest automatic teller machine networks crashed and remained inoperative over the whole weekend. Many international airports reported that their air traffic control systems slowed down. Emergency phone systems were reported to have problems in different parts of the USA. The worm even managed to enter the internal network of the Davis-Besse nuclear power plant in Ohio, taking down the computer monitoring the state of the nuclear reactor. The RPC traffic created by Blaster caused big problems worldwide. Problems were reported in banking systems and in the networks or large system integrators. Also, several airlines reported problems in their systems caused by Blaster and Welchi, and flights had to be canceled. Welchi also infected Windows XP-based automatic teller machines made by Diebold, which hampered monetary transactions. The operation of the US State Department's visa system suffered. The rail company CSX reported that the worm had interfered with the train signaling systems stopping all passenger and freight traffic. As a result of this, all commuter trains around the US capital stopped on their tracks. There was a lot of attention to the indirect effects of Blaster on a major power blackout in the Northeastern USA which occurred during the outbreak week. According to the report of the blackout investigative committee there were four main reasons behind the power failure, one of them being specifically computer problems. We believe these problems were to a great extent caused by the Blaster. It is important to note that even though the system problems caused by Slammer and Blaster were truly considerable, they were only byproducts of the worms. The worms only tried to propagate: they were not intended to affect critical systems. The malware affected environments that had nothing to do with Windows: it was the massive network traffic caused by the worms that alone disrupted normal operations.
<urn:uuid:d0d84652-9044-4f4d-9dbe-a533821fcf99>
CC-MAIN-2017-04
https://www.f-secure.com/weblog/archives/00002013.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz
en
0.978357
540
2.515625
3
We are losing the battle for cyberspace. Not because malicious actors are taking over the digital world, but because we are forgetting what is the element that makes us feel safe and secure in any world: the ability to trust. There is an urgent need to address trust questions in cyberspace, if we want to slow down, or preferably reverse, the ongoing slide towards omnipresent suspicion. Trust that bases on realistic estimations needs to be actively built and upheld. According to the EU statistics only 12 per cent of European internet users feel completely safe online. In the US, an AP‒GfK poll found that 58 per cent of people have deep concerns about the safety of online shopping; 62 per cent about spending money via smart phone applications. Sixty per cent of people say they value privacy over anti-terrorism acts, and less than a third trusts others much in everyday encounters. Yet, trust is the factor that makes society work where cyberspace is the backbone of its contemporary model. Trust is a basic building block of all security, including cybersecurity. Yet particularly trust in digital products and services is underpinned by cybersecurity. As cyberspace is something very new ‒ commercial internet only emerged in 1995 ‒ we are still learning to live with it. People do not have the time or interest to familiarize themselves with complex information and communication technology for which reason they can trust or distrust it blindly. Misplaced trust easily leads into compromised security. This is where ICT manufacturers and vendors, as well as law enforcement, governments and international organizations come to play a role. Cyberspace needs to be what it is promised to be and function as expected for realistic trust only emerges from experience. Still the question of trust is often neglected or only partially understood. As today we are experiencing the dawn of cyber era it is natural that both distrust and blind trust coexist. When being asked people say they do not trust cyberspace, yet their daily lives are fully dependent on it. Society’s critical infrastructure is controlled through cyberspace, multiple services we are used to only exist there, and information needed to run our daily businesses is stored and exchanged online. The world is tightly interconnected ‒ not to mention that the “internet of everything” is just emerging. Over the past few years states have become active players in cyberspace. This has raised the weight of digital issues on the agenda of (inter)national politics. Administrations and companies are also waking up to the dangers of cyberspace, yet sadly often forgetting its vast opportunities. At the beginning of the 1990s the situation was very different: globalization and ICT revolution were seen to help overcome almost any difficulty in life. There was plenty of trust on ICT (even if sometimes exaggerated). Gradually this trust has crumbled or, at least, become more reserved. Malicious actors have learned to use cyberspace, companies have not been prepared for this development, there is a lack of transparency and states are defining the digital world as an arena of power struggle and warfare. Rivalry and covering of security breaches only reinforce mistrust. What we need to do is to turn this development around. We need to find ways to build trust in cyberspace. Alongside ICT companies this is the task of states, international organizations and corporations. 
Reinforcing digital trust, that is developing technological solutions to induce trust, is one of the means. In addition, there is a need for regulation that addresses the manifold questions of cybercrime or cyberwar, but does not hinder the development of ICT sector. No single actor can overcome omnipresent digital problems alone but cooperation and information sharing are a necessity. The EU is building its digital agenda on the aforementioned abutments. For example, its Horizon 2020 programme aims at developing “trustworthy ICT solutions ensuring a secure and reliable digital environment in Europe”. It is both to promote innovation and economic growth and to protect society, economy and people’s rights. In the US, for example, the National Science Foundation has similarly reasoned research programmes. Alongside innovation enhancing transparency is a way to induce trust. Instead of denying intrusions, companies and administrations should be honest about them. Highlighting what has been done to address problems and prevent security breaches in the future should become the yardstick of trustworthiness. They also need to be resilient enough to continue operating under cyber-attack. This is the sole approach to build trust in a world in which everyone knows that anyone can be breached at any time. In addition, ICT manufacturers and operators need deliver what they promise, help customers in making the right decisions, and also take the responsibility when something goes wrong. Security should become a built in feature in cyberspace. Cybersecurity ‒ and trust as an integral part of it ‒ is a topical issue right now. We are just learning to live in societies penetrated by cyberspace. Both today and tomorrow actions in digital world have consequences in physical world that we have to deal with. This changes our traditional understandings of, for instance, war, peace, security and privacy. There is a need to re-organize our conventional world view ‒ the decisions we make today have long standing influence and consequences. We have to find ways to reinforce trust as it is the thing holding societies together – today and tomorrow. Related Reading: The Cost of Failed Trust Report
<urn:uuid:5dc350f8-851a-44a1-8126-e52ee1093f3d>
CC-MAIN-2017-04
http://infosecisland.com/blogview/23689-We-Have-to-Find-Ways-to-Reinforce-Trust-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00469-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960411
1,102
2.84375
3
Social networking sites continue to become more popular. Facebook is expected to hit its one billionth user in August 2012. While social networking sites can offer many benefits, such as a good way to make connections with people with similar interests and goals and to exchange ideas and information, it is important to remember some basic security tips. It really comes down to basic common sense and applying the same principles you would apply when sharing personal information with anyone you may not know well or someone you just met.
- Limit what you share. Most social sites allow you to limit what information you share and who you share it with. By setting your privacy settings appropriately, you can prevent your profile from being public to the world.
- Friend only people you know. While some people think it's great to have 5,000 friends, it's best to only connect with people you really know.
- Limit personal information. Everyone loves to be told "Happy birthday," but do you really need to list the year you were born, the state, or even your home town? This same information can be used by attackers to guess passwords or gain access to online accounts.
- Limit your trust. Most studies show that users are much more cautious about clicking on links in emails yet don't always show the same level of caution when clicking on links in social networking sites.
- Don't announce all your travel plans. While it's nice to share the fact that you are going on that long-anticipated two-week vacation to Hawaii, crooks can use that information to plan when to break into your house.
Practicing basic common sense can help you enjoy social networking while protecting your personal information; this will also help keep you safer from hackers!
<urn:uuid:e2671d40-7c73-42c6-b537-090bf47d3970>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/08/09/5-steps-to-help-you-stay-safe-on-social-networking-sites/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00285-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953561
361
3.125
3
Father of the Photocopy 1945

10-22-38 Astoria. So read the world's first photocopy on Oct. 22, 1938, in Astoria, N.Y. Physicist Chester Floyd Carlson and his assistant Otto Kornei, while dabbling in photoconductivity, poured sulfur across a zinc plate and zapped it with a white-hot light. Carlson then blew the remaining sulfur from the sheet, and voilà: the paper read "10-22-38 Astoria," an exact duplicate of the scribbling on a microscope slide that lay across the plate.

Carlson pitched his invention to companies such as GE, IBM, Kodak and RCA but was rejected by all of them. (Rejection and loss became a familiar theme for Carlson during the next six years: his wife left him, his assistant Kornei left him, and a heap of debt accumulated, all thanks to his copying pursuits.) But then, in 1944, the nonprofit Battelle Institute offered Carlson $3,000 for further research in exchange for three-quarters of future royalties. After the rights to his invention were purchased by the Haloid Co. (later Xerox) and the Greek term xerography (translation: "dry writing") was coined, Carlson went on to bank $150 million and became the father of possibly the most ubiquitous piece of contemporary office equipment. -Daniel J. Horgan

Other Notable Events
4 - The Soviets jump to an early lead in the space race with the launch of Sputnik on this day in 1957.
5 - The first radio conversation with a submerged submarine happens in 1919. The U.S.S. H-2 radioed the destroyer Blakey from the depths of the Hudson River.
14 - From the confines of the Apollo 7 spacecraft, the first live telecast from space takes place in 1968.
18 - Thomas Edison, inventor of the electric lightbulb, the universal stock ticker and the motion picture camera, dies in 1931.
19 - The Justice Department's antitrust trial against Microsoft gets underway in 1998. Microsoft is accused of bullying PC makers into providing Explorer as the default browser instead of rival Netscape Navigator.

Sources: About.com, Fairbanks North Star Borough School District, History Channel, HowStuffWorks, Tripod
<urn:uuid:32f30d90-ac60-4bdd-9462-1c61fc3271c5>
CC-MAIN-2017-04
http://www.cio.com/article/2441914/consumer-technology/chester-floyd-carlson--father-of-the-photocopy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00285-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949859
472
3.328125
3
As we go through a tough economic and financial climate, attention is drawn to the cost of government and how slashing it would significantly contribute both to reducing public debt and to injecting more resources into the economy. Although to varying degrees across the world, government is usually considered less efficient than the private sector. Sometimes this is a wild generalization, since government deals with a level of complexity that is unparalleled in almost any other industry sector, and is the service provider of last resort in a number of areas, from health care to industry subsidies, from education to public safety. It is also true that productivity could be increased significantly in many different areas, at least using criteria that would apply in the private sector.

The fundamental difference between the private and the public sector is that the former is no democracy. There is a CEO, a board, a set of priorities that are meant to maximize shareholder value in clear, monetary terms (e.g. earnings per share, increase of stock value, top- and/or bottom-line increase). Sure, if the board underperforms, shareholders will ask for a change, so there is a form of democracy. But shareholders are people (or institutions) who take a deliberate decision to invest in a particular company. In government, every single citizen is a shareholder, and that's not a choice. Therefore the diversity of viewpoints, needs and priorities is huge, and is reflected in how democracy works.

Elected political leaders both need to get the job done (i.e. implement the priorities that were at the basis of their election platform) and prepare for the next election. Where there are multiple tiers of government, there could be one or more elections per year, and the behavior of political leaders across all tiers may be influenced by the upcoming elections in one of those. In well-functioning democracies, like the U.S., the four-year cycle of the federal administration is quite clear: the first year goes into getting settled and launching the first change initiatives, two more years go into running the program, and the fourth year is spent campaigning for election (or re-election). In less mature democracies, like Italy, timeframes for decisions are even shorter and the number of exceptions to be handled and fires to be extinguished skyrockets. But in both cases the pace of democracy is not necessarily compatible with what needs to be urgently done.

The cost of democracy is particularly evident in IT-intensive programs. Shared services, for one, require a fair amount of time and a stable, well-thought-out governance framework to deliver business value. But the cost of democracy either prevents implementing strong, centralized governance, or challenges the ability to keep it in place for a long enough time. Digital agendas, information society programs and e-government programs also require perseverance to pursue culture change, business transformation and major changes in the IT portfolio. No surprise that they underperform, and mostly focus on spending on infrastructure or external services rather than making change really happen.

On the other hand, look at places where democracy functions differently (and some might say there is less democracy): China, Singapore, the UAE, to name a few. Although they are not immune from problems, their forms of more or less benevolent dictatorship lead to quite remarkable results when it comes to technology deployment and use.
While I am not suggesting that we give up the democratic processes that many of our ancestors fought very hard for, we need to understand that the total cost of ownership of democracy is high and that its intangible value does not play very well in an ROI calculation. We just need to set realistic objectives when it comes to government efficiency.
<urn:uuid:583b40aa-cdd9-46d4-bdc2-64b5488bd2b3>
CC-MAIN-2017-04
http://blogs.gartner.com/andrea_dimaio/2012/06/27/the-main-obstacle-to-government-efficiency-is-the-cost-of-democracy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00405-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949919
855
2.546875
3
Software Defined Networking

Software Defined Networking (SDN) is a set of technologies for allowing greater control of how networks operate. Rather than a fairly static network that can only be controlled by proprietary, vendor-specific protocols, with sometimes limited visibility into the internals of layer 2 devices like switches, SDN allows for experimentation in optimizing and configuring how the network functions. Additionally, SDN can be controlled using commodity server hardware, which can add to the practicality and cost savings. This flexibility in network design is in part accomplished by separating the switch's control plane from the data plane. Having this new level of control can be of great benefit to security engineers, and we will cover some potential use cases for SDN and information security. With this flexibility come potential pitfalls as well: the logic in the code that runs the controller may have faults of its own that could be taken advantage of by attackers, or a given rule may have unintended side effects because of bugs in its implementation. This paper will cover a few potential possibilities, both negative and positive, of how SDN relates to security. As OpenFlow seems to be the prevalent SDN architecture in research, this paper will focus on it.

Software Defined Networking vs. the Autonomous Systems model

Most traditional networks follow the Autonomous Systems (AS) model. In the AS model things are hierarchical, and its routing design has been compared to the post office, with set paths and delegation of who routes to whom on a fairly static basis, based on simple communication protocols between the neighboring components. Each part of the network does its job, with little "awareness" of the other equipment's duties. While the AS model is very scalable, it is not necessarily flexible. If a given network node is moved from one section of the network to another, it may take some time for the new location to be figured out and adjusted for by traditional Autonomous Systems, and some interruption of service is to be expected. In cloud architectures where servers are expected to be spun up and down as demand changes, perhaps even moved from data center to data center, and still be transparent to the end user, greater levels of control are needed for controlling the path of network communications. Another common use case for SDN is mobile data to devices such as cell phones and tablets. Mobile devices frequently change their location, but data flows to the ever-changing endpoints have to be managed in a reliable manner. One intent of SDN is to allow for this sort of flexibility and control.

Standards vs. Roll Your Own

In a way, network administrators and security engineers have had some of the power of SDN in the past. If desired, and if the hardware in place had the functionality to support it, administrators could script something to detect changes and automate configuration adjustments on the network. The author of this paper has implemented things similar to this in the past to manage Cisco ASAs using the Python scripting language and the Pexpect library. The problem with some of these homebrew solutions is that they are not very standardized, may be specific to one vendor's hardware, and may be incomprehensible to someone else trying to follow the collections of scripts, detection probes and improvised ways of making changes.
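To give a feel for what such a homebrew approach can look like, here is a minimal Pexpect-style sketch in the spirit of the scripts described above. It is not the author's original tooling; the device address, credentials and prompts are placeholders.

    # Illustrative only: drive a CLI-managed device over SSH and push a command.
    # Address, passwords and prompt strings are made-up examples.
    import pexpect

    child = pexpect.spawn('ssh admin@192.0.2.1')   # connect to the firewall (example IP)
    child.expect('assword:')                       # wait for the password prompt
    child.sendline('example-password')
    child.expect('>')                              # user EXEC prompt
    child.sendline('enable')
    child.expect('assword:')
    child.sendline('example-enable-password')
    child.expect('#')                              # privileged prompt
    child.sendline('show running-config')          # or push a configuration change here
    child.expect('#')
    print(child.before.decode())                   # capture the command output
    child.sendline('exit')

The fragility is easy to see: the script is welded to one vendor's prompts and command syntax, which is exactly the problem a standardized interface is meant to solve.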
APIs and standardized systems like OpenFlow can potentially help with this by allowing for a common interface amongst different switch vendors and allowing network admins who inherit a network from a predecessor a better understanding of what is being done on the controller side of a network. Most security practitioners the author has conversed with have never heard of OpenFlow or Software Defined Networking. This small section of the paper will be somewhat of a massive oversimplification of how OpenFlow works. We hope Software Defined Networking experts will excuse the diversion, but we would like to relay some of the concepts and core ideas of SDN to those who are unfamiliar with the concept so that they might better understand the model before we continue on to the more security related concepts. At its core, OpenFlow lets a network administrator define rules on a switch that allow them specify “if these conditions apply, send the packet out this port on the switch”. There are many variables that can be matched on, and some OpenFlow frameworks (like POX) have their own short-hands and extensions, but in OpenFlow version 1.3 a controller can set a switch to make matches on at least the following required fields from the OpenFlow spec: The administrator can set rules so that if certain matches are found, different actions can be taken. The most common are: Forward the packet out a given port or ports, encapsulate the packet and forward to the controller so a decision can be made about it, drop the packet, or just send the packet to the normal switch processing pipeline. Commonly, if the packet matches no rules in the flow table, the packet can be sent to the controller and the controller can decide what to do with it. The controller can then send new flow rules to the switch to tell it how to handle those sorts of packets in the future. This forwarding to the controller for instruction does cause a slow down in performance initially, but once a flow rule has been placed in the TCAM (Ternary Content-Addressable Memory) future packets should be sent on with little delay. Statistics about the switch can also be collected via the OpenFlow protocol which may be of interest to a security engineer. Potential uses and potential pitfalls The flexibility of Software Defined Networks/OpenFlow allows for a security engineer to try things they have not been able to easily do before. If they have an idea for how to rearrange data paths on the network to get better visibility on the network, or how to detect and shape possibly malicious traffic on the network, they can now experiment with these ideas. Of course, flexibility of control also allows for mistakes and unintended outcomes in configuration. For a comparison most security practitioners would recognize, consider stacks in memory and SQL backbends. Stacks and SQL are not necessarily insecure technologies in and of themselves, but badly designed applications that use them can bring with them vulnerabilities like buffer overflows and SQL Injection. Is it possible to find similar implementation issues in how people might configure OpenFlow? Currently looking for logic flaws or security possibilities in OpenFlow/SDN implementations would be a very niche endeavor, but over time, as more equipment supports it and more networks utilize it, it could become quite an interesting security topic for researchers. With greater control of the network though standardized means, a security engineer has new possibilities and pitfalls to face. 
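As a concrete illustration of the match/action model just described, the following sketch shows roughly what pushing a single rule from a POX controller looks like. POX exposes OpenFlow 1.0 style field names; the priority, timeout and port numbers here are arbitrary examples, not values from any particular deployment.

    # Sketch: install one OpenFlow rule that matches Telnet traffic (TCP port 23)
    # and drops it. Assumes POX's OpenFlow 1.0 bindings.
    import pox.openflow.libopenflow_01 as of

    def install_drop_telnet(connection):
        msg = of.ofp_flow_mod()
        msg.priority = 100
        msg.match.dl_type = 0x0800        # match IPv4
        msg.match.nw_proto = 6            # TCP
        msg.match.tp_dst = 23             # destination port 23 (Telnet)
        msg.idle_timeout = 300            # rule ages out of the flow table if unused
        # No actions appended: in OpenFlow, an empty action list means "drop".
        connection.send(msg)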
If a researcher wants to experiment with ways to mitigate against common layer 2 attacks like ARP poisoning, they now have the hooks into the networking equipment to do it. An idea in a similar vein is ELK , which is intended to cut back on unneeded ARP congestion by having a more centrally managed ARP table for the network. As a proof of concept the author has put together a simple implementation of a switch that is hardened again ARP poisoning as a proof of concept for this paper. The Anti-Arp-Poisoning switch demo is implemented in POX , a Python based framework for creating OpenFlow controllers, and is based on the GPLed POX code from the OpenFlow Tutorial page at OpenFlow.org . The Anti-Arp-Poisoning switch actually had to be opened up somewhat from the switch made in the original OpenFlow tutorial to allow for more flexibility. As the tutorial switch was written originally, flows did not time out so the first MAC address to seize a set of identifiers would always have them. This is not very flexible if the same IP has to be reassigned later to a different MAC address, or moved to a different port on the switch. In the Anti-Arp-Poisoning switch POX implementation OpenFlow’s idle time option for flows is used, along with a table in memory to track IP to MAC address mappings. If a device continues to use the same MAC address and IP within the idle timeout period, the switch continues to consider it as having the rights to use those setting. If an attempt is seen to map a MAC address to an IP that is already taken according to the IP to MAC address table, the switch can detect this and take counter measures. These countermeasures could be anything from the shunning of connections from the (perceived) spoofing MAC address, forwarding to a switch port that has an IDS on it, scanning the host for malware using tools on the controller box initiated by the Python/POX script, or just simply alerting the controller’s admin. For the demo, the Anti-Arp-Poisoning switch simply alerts the console and sets up a shun flow to port 100 (unused) that takes twice as long to time out as a normal flow. If the controller receives a message that a given flow has timed out because it was idle for too long, the IP to MAC address mapping in the table for that IP is removed so that the IP is free to be used by another device attached to the switch. Of course, there are potential race conditions where an unintended MAC address could claim an IP first, but the logic of the switch makes this hard for ongoing connections that do not idle out. As mentioned, shunning of a perceived attacker is possible, but this should be done cautiously. Spoofed packets and bad logic in the controller’s code could lead to legitimate network devices being shunned and cause potential Denial of Service (DoS) problems to occur. ARP Poisoning protection is just one possibility, and is used only as an example that’s relatively easy to demonstrate in code. If a network administrator wants to control how switch ports are mirrored to Intrusion Detection Systems, Intrusion Prevention Systems or Data Loss Prevention Systems, OpenFlow can potentially be used to implement this sort of functionality as well. Some work has already been done in controlling flows to these sorts of monitoring devices in the form of CloudwWatcher . There is also the possibility of having systems that seem to be behaving oddly automatically quarantined via OpenFlow, or perhaps placed in a sandboxed honeynet for further study of what the host is doing. 
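To make the demo logic above more concrete, here is a compressed, illustrative sketch of an anti-ARP-poisoning controller written against POX. It is not the author's original code; the handler names follow POX conventions, and the timeouts and shun port (100) are just example values matching the description above.

    # Compressed sketch of the anti-ARP-poisoning logic described above.
    # Based on the POX tutorial/l2_learning pattern; illustrative, not production code.
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    log = core.getLogger()
    ip_to_mac = {}              # authoritative IP -> MAC bindings seen so far
    IDLE_TIMEOUT = 60           # seconds before an unused binding/flow ages out

    class AntiArpPoisonSwitch (object):
        def __init__(self, connection):
            self.connection = connection
            connection.addListeners(self)

        def _handle_PacketIn(self, event):
            packet = event.parsed
            arp = packet.find('arp')
            if arp is None:
                return          # normal learning-switch forwarding would go here
            claimed_ip, claimed_mac = arp.protosrc, arp.hwsrc
            known_mac = ip_to_mac.get(claimed_ip)
            if known_mac is not None and known_mac != claimed_mac:
                # Someone is claiming an IP that is already bound to another MAC.
                log.warning("Possible ARP poisoning: %s claimed by %s (expected %s)",
                            claimed_ip, claimed_mac, known_mac)
                shun = of.ofp_flow_mod()
                shun.match = of.ofp_match.from_packet(packet, event.port)
                shun.idle_timeout = IDLE_TIMEOUT * 2                    # shun lasts twice as long
                shun.actions.append(of.ofp_action_output(port=100))     # unused port
                self.connection.send(shun)
                return
            ip_to_mac[claimed_ip] = claimed_mac      # first legitimate claim wins

        def _handle_FlowRemoved(self, event):
            # When a flow idles out, release the IP so another host may claim it.
            ip = event.ofp.match.nw_src
            if ip in ip_to_mac:
                del ip_to_mac[ip]

    def launch():
        def start_switch(event):
            AntiArpPoisonSwitch(event.connection)
        core.openflow.addListenerByName("ConnectionUp", start_switch)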
We have covered some of the positive possibilities of SDN for network security, so let us consider some of the potential negative ramifications of bad implementations as well. As an example of an unintended consequence of badly implemented SDN control, let us consider an application that can control an OpenFlow switch. Because of how OpenFlow works, this would likely be a piece of shim software on the application’s client host that communicates to the controller, which then communicates to the switch. With OpenFlow and flow tables, a firewall of sorts can be made out of an OpenFlow enabled switch by telling it to drop or pass packets depending on which ports and protocols are in use. A dynamic firewall based on OpenFlow messages could be seen as a little like a SOHO (Small Office/Home Office) router with UPnP (Universal Plug and Play) support, but with hopefully more authentication (depending on the implementation). Let us continue to use UPnP as a comparison technology to explain how things could potential go wrong security wise. Many SOHO wireless routers seem to leave UPnP on, which allows a client to set up a port-forwarding rule on the NAT device dynamically. The problem here is that a piece of malicious code, or a malicious user, may open up ports for things that are not wanted by the network administrator, like ports for back doors and file sharing. For example, BitTorrent may be slow at someone’s local coffee shop without forwarding a port for ratio reasons, and the user at the coffee shop likely doesn't have the admin password to set up a static port forward on the wireless router’s interface directly, but if UPnP is enabled the user can open the port anyway using the UPnP protocol. Can a similar situation appear in SDN? It depends greatly on the intended users. With the subject of intended users comes the subject of authentication, and OpenFlow has a fairly simple model. Communication between a switch and the controller happens over TCP on port 6633 (canonical, this could be configured differently). The option is there for using TLS (Transport Layer Security) and mutual authentication via certs on both the switch and the controller. However, based on what the author has read from the documents of one controller implementer, it seems that many switch vendors do not support the TLS options yet, so the control channel would be in the clear. Without TLS, what security features are in place to keep just anyone from configuring an OpenFlow enabled switch? The author’s understanding is the switch starts the communications, so it has to know the IP/host name of the controller. An important question for security is how is this configured? If it's by host name an attacker could think of doing some DNS shenanigans to have the host name map to their IP, thus gaining control of the switches. Or the attacker may knock out the real controller, and if the attacker is on the same LAN, take its IP for themselves. TLS with signed certs on each switch and controller would prevent these attacks, though having the communications between the controller and the switches be on its own channel/out of band would also help. Some potential pitfalls also exist depending on how the Software Defined Networking system is configured, where it is allowed to be configured from, and who is allowed to do the configuring. As a mental exercise, let’s say an application is given the power to decide on its own network path via some extensions/shims that communicate with an OpenFlow controller. 
This could be useful in many situations. For example, let's say the data is of an immediate nature, like Voice over IP. An application like VoIP may choose a path that is not as reliable but has lower latency and a lot of bandwidth. If the data is important from a confidentiality standpoint, but time is not of the essence, then a slower set of links routed around the more exposed sections of the network could be preferred. However, a user or application being able to choose the network path data takes could lead to potential problems, depending on how authentication and repudiation are handled.

As a comparison from history, IPv4 has source routing options that allow the host to specify the path of the packet through routers on the Internet. Let's say Mallory wants to talk to Bob while pretending she is Alice, using a connection-based protocol. Normally spoofing IP packets is easy (Scapy, Nemesis and HPing are common tools for this task) if the attacker does not care whether they get a response back. Mallory sending a packet with Alice's IP is not a problem using raw sockets, but when Bob gets the message he will reply in a way that is routed to Alice's host, not Mallory's, so Mallory never sees the return traffic and is blind as to what to send next (sometimes not so blind, as with vulnerable services like the R tools suite). If Mallory controls a router in the path, however, and uses source routing to specify the path for the connection to return through, she can see the returning traffic and keep spoofing a session-based connection as Alice. The author wonders if something similar can be done with applications that use OpenFlow to choose their own path. For security reasons many modern Internet routers are configured to drop source-routed packets to avoid these sorts of spoofing attacks. Can there be a modern equivalent?

XSP (eXtensible Session Protocol) is a framework that leverages OpenFlow to allow an application to adjust its own path through the network. The author's understanding is that XSP has mitigations in place to thwart these sorts of attacks, but what if someone designs a similar system and is not as careful? There are plenty of implementation details that could have security consequences, and as with cryptographic implementations, even seemingly minor details can matter. A few potential security-related questions include: Is each application responsible for having credentials to make calls for path changes, or is the authentication built into the OS, so that any application the user is running has to make an OS call to use the SDN features?

This paper has attempted to spur an interest in Software Defined Networking amongst security practitioners. We have tried to provide a few ideas, and some practical examples, of how SDN can be both used and misused. Many of the ideas put forth in this paper are just conceptual for the time being, but we hope it inspires others to research these topics in more depth. We highly recommend going to the OpenFlow tutorial site, downloading the VM, and working through the projects just to get a feel for the possibilities. After that, experiment with how OpenFlow/Software Defined Networking can be used in securing your network environment.

Thanks to my professor/advisor Martin Swany for feedback on this paper, and Brent Salisbury for answering some of my OpenFlow/SDN questions.
<urn:uuid:50d1be6a-588c-487c-8c4f-288fee056c8d>
CC-MAIN-2017-04
http://www.irongeek.com/i.php?page=security/security-and-software-defined-networking-sdn-openflow
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944731
3,594
3.0625
3
I recently received an email from a learner who is studying for his CCNA Routing-and-Switching Certification and he had a few excellent questions about the OSI model and how, exactly data moves from one-layer to the next. I figured my response might prove valuable to others studying for their CCNA so…here it is! - Learner-Question: In video of the osi model, you said that the session layer should provide the source and destination port number but the fields of those ports are at the transport header- my question is how does the session layer put this number on field which does not exist in that time (when i send the date the encapsulation process goes down from the app layer)? In order to thoroughly answer all of your questions below, one really needs to know about computer programming, APIs, etc…which frankly, I know very little about. But what I do know, I’ll try to explain. From my understanding, there are some kind of software “links” or “hooks” which are used to allow a program at one layer of the OSI model to communicate with a program at another layer. Many applications have software built-in that provide multi-layer functionality. For example, imagine that you open some kind of Terminal Client (like Hyperterminal, SecureCRT, PuTTY, etc). That software you’ve opened technically does not reside at ANY of the OSI layers. That software just provides the graphical display such as the buttons you can press, the pulldown menus available, etc. Now imagine that within PuTTY or Hyperterminal you press a button to initiate a Telnet connection. At that moment, the PuTTY software informs your CPU that the CPU must start the Telnet program. PuTTY provides the interface so you can see…and control…what is going on, but PuTTY itself is NOT Telnet. It’s simply the user-interface so you can control Telnet. The functionality of Telnet actually is actually composed of an Application-Layer process as well as a Session layer process, all rolled into one. At the Application layer, the Telnet protocol answers such questions as, “what is a “username” and what is a “password” and is that required? Shall it send data downstream to lower-levels of the OSI model one-bit-at-a-time or several bytes-at-a-time? How is the user supposed to know when Telnet is waiting for input, versus currently transmitting output?” etc etc. The Session-Layer component of Telnet knows that it should be “listening” for incoming sessions on port-23. And when initiating outgoing sessions, it should use a destination port-23. At some point, the Telnet protocol creates a hook (I think these are called APIs) that allows it to invoke the Transmission Control Protocol (TCP). TCP knows that as part of the datastructure it creates, it must reserve 2-bytes for a “destination port-number” field and another 2-bytes for a “source port-number field” but what TCP DOESN’T know is what numbers to place in those fields. So this API (or whatever it is) allows the Session-Layer component of Telnet to convey to TCP that it place the value of “23” in either the Source or Destination Port Number field (depending on who is initiating the Telnet session). You may now be thinking, “but what about the Presentation Layer? You didn’t include that in the Telnet process?”. I believe that once SecureCRT (or PuTTY or Hyperterminal) invoke your Application-Layer protocol (such as Telnet or SSH) that SecureCRT/Hyperterminal will provide the Presentation Layer-component. 
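A quick aside on the port-number "hook" described above: in everyday programming that hook is the sockets API, where the application (not TCP itself) supplies the destination port, and the TCP implementation then fills in the port-number fields of the segments it builds. A rough Python illustration, with a made-up host address:

    # The application hands TCP the destination port (23 for Telnet) through the
    # sockets API; TCP places that value into the destination-port field of its header.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # ask the OS for a TCP socket
    s.connect(('192.0.2.10', 23))    # "build TCP segments with destination port 23"
    s.sendall(b'\r\n')               # application data handed down for encapsulation
    print(s.recv(1024))              # whatever the Telnet server sends back
    s.close()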
SecureCRT knows if, when you press a keystroke on your keyboard, that key should be represented by ASCII or EBCDIC, SecureCRT/Hyperterminal also knows if you pressed the button indicating that encryption should be used. So it kind of “merges” or “blends” all of that information into Telnet thus providing the Presentation-Layer components. I’m not sure HOW it does this…but it does. - This question is about the type code field which lays at the llc sublayer, I understood that it purpose is to provide the upper layers what protocol is “talking”‘, how does it happens if the nic strips off the frame header in the decapsulation process? Basically what I wrote above happens in reverse here. There is some kind of internal software “hook” (probably another API) that allows your Layer-2 protocol (Ethernet) to communicate the value in the EtherType field to the CPU. In this way the CPU knows if it needs to invoke a Layer-3 procoess (like IP) or…if that process is already running…to take the Data from the Telnet frame and forward it to the correct layer-3 process. So IP itself does NOT see that Ethernet frame or any of the fields within it. But that “hook” (API???) provides the interface so that Ethernet data can be transferred upstream to the IPv4 process. At this point, my knowledge of the specific details of how this works ends. - If the type code provides the protocol(and its version), why does the IP header has “vers” field? Once again, to answer this question I believe it’s all about the APIs that allow protocols at different layers to talk to each other. Moving downstream (from Layer-3 to Layer-2) when IPv4 (as an example) has created a full IP Packet, it will “call” the API that allows it to hook into the Layer-2 protocol. IP doesn’t even CARE what that Layer-2 protocol is. It probably does something like, “Hey Layer-2 hooking API!! I’ve got some data here. Please pass it on to whatever protocol is operating at the Datalink Layer for me!!” The API, because it is talking to IPv4 will then invoke whatever layer-2 protocol is running (Ethernet, HDLC, Frame-Relay, etc) and say, “I’ve got some Layer-3 data for you!!”. At that point, the Layer-2 protocol (Ethernet in this case) will say, “Great! Can you give me some number that I can shove into my Ethertype field that indicates WHICH Layer-3 protocol created the data?? I don’t really care personally…but the device at the other end of the link receiving this data will need to know!”. So the API (that was originally called by the IPv4 process and was DESIGNED to be an interpreter between IPv4 and Ethernet) will say, “sure…the number you need is 0×800!” and thus…Ethernet places that value into the Ethertype field. Receiving an Ethernet frame would work the same way but in reverse. This time the Layer-2 protocol would “call” that L2-to-L3 API and provide the data, ALONG WITH the value of the Ethertype field to that API. In turn, the API would then know it needs to call-out to IPv4 and transfer the data upstream. The following question was recently sent to me regarding PPP and CHAP: At the moment I only have packet tracer to practice on, and have been trying to setup CHAP over PPP. It seems that the “PPP CHAP username xxxx” and “PPP CHAP password xxxx” commands are missing in packet tracer. I have it set similar to this video… (you can skip the first 1 min 50 secs) As he doesn’t use the missing commands, if that were to be done on live kit would it just use the hostname and magic number to create the hash? 
Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?

Here was my reply:

When using PPP CHAP keep in mind four fundamental things:

- The "magic number" that you see in PPP LCP messages has nothing to do with authentication or CHAP. It is simply PPP's way of trying to verify that it has a bi-directional link with a peer. When sending a PPP LCP message, a random Magic Number is generated. The idea is that you should NOT see your own Magic Number in LCP messages received from your PPP peer. If you DO see the same magic number that you transmitted, that means you are talking to yourself (your outgoing LCP CONFREQ message has been looped back to you). This might happen if the Telco that is providing your circuit is doing some testing and has temporarily looped back your circuit.
- At least one of the devices will be initiating the CHAP challenge. In IOS this is enabled with the interface command "ppp authentication chap". Technically it only has to be configured on one device (usually the ISP router that wishes to "challenge" the incoming caller) but with CHAP you can configure it on both sides if you wish to have bi-directional CHAP challenges.
- Both routers need a CHAP password, and you have a couple of options on how to do this.
- The "hash" that is generated in an outgoing PPP CHAP Response is created as a combination of three variables, and without knowing all three values the Hash Response cannot be generated:
  - A router's Hostname
  - The configured PPP CHAP password
  - The PPP CHAP Challenge value

I do all of my lab testing on real hardware so I can't speak to any "gotchas" that might be present in simulators like Packet Tracer. But what I can tell you is that on real routers the side that is receiving the CHAP challenge must be configured with an interface-level CHAP password. The relevant configurations are below as an example.

ISP router that is initiating the CHAP Challenge for incoming callers:

username Customer password cisco
!
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y
!

Customer router placing the outgoing PPP call to ISP:

hostname Customer
!
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

If you have a situation where you expect that the Customer router might be using this same interface to "call" multiple remote destinations, and use a different CHAP password for each remote location, then you could add the following:

Customer router placing the outgoing PPP call to ISP-1 (CHAP password = Bob) and ISP-2 (CHAP password = Sally):

hostname Customer
!
username ISP-1 password Bob
username ISP-2 password Sally
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

Notice in the examples above, the "username x password y" commands supersede the interface-level command "ppp chap password x". But please note that the customer (calling) router always needs the "ppp chap password" command configured at the interface level. A global "username x password y" in the customer router does not replace this command. In this situation, if the Customer router placed a call to ISP-3 (for which there IS no "username/password" statement) it would fall back to using the password configured at the interface level.
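To see how those ingredients come together, here is a small sketch of the response calculation as RFC 1994 defines it. One subtlety worth noting: the MD5 input is the one-byte CHAP Identifier, the shared secret and the challenge; the router's hostname travels in the separate Name field of the Response, which is how the challenger knows which "username/password" entry (and therefore which secret) to hash against. The values below are made up for illustration.

    # Sketch of the CHAP response computation per RFC 1994 (illustrative values).
    import hashlib

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # Response = MD5(Identifier || secret || Challenge)
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    challenge = bytes.fromhex('0f1e2d3c4b5a69788796a5b4c3d2e1f0')  # sent by the challenging router
    resp = chap_response(0x01, b'cisco', challenge)                 # 'cisco' = shared CHAP password
    print(resp.hex())  # this 16-byte value, plus the Name (e.g. "Customer"), goes back in the Response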
Lastly, the “username x password y” command needs to be viewed differently depending on whether or not it is configured on the router that is RESPONDING to a Challenge…or is on the router that is GENERATING the Challenge: - When the command “username X password Y” is configured on the router that is responding to the CHAP Challenge (Customer router), the router’s local “hostname” and password in this command (along with the received Challenge) will be used in the Hash algorithm to generate the CHAP RESPONSE. - When the command “username X password Y” is configured on the router that is generating the CHAP Challenge (ISP Router), once the ISP router receives the CHAP Authentication Response (which includes the hostname of the Customer/calling router) it will match that received Hostname to a corresponding “username X password Y” statement. If one is found that matches, then the ISP router will perform its own CHAP hash of the username, password, and Challenge that it previously created to see if its own, locally-generated result matches the result that was received in the CHAP Response. Lastly, you asked, “ Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?” Hopefully from my explanations above it is now clear that in the case of bi-directional authentication, the passwords do indeed have to be the same on both sides. Hope that helps!
<urn:uuid:1e02c374-304e-468a-9dc5-642f32a88e5e>
CC-MAIN-2017-04
http://blog.ine.com/author/kbogart/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921473
2,882
3.0625
3
Optical amplifiers are devices that can amplify an optical signal directly, without the need to first convert it to an electrical signal. They are a key enabling technology for optical communication networks. Together with wavelength-division multiplexing (WDM) technology, which allows the transmission of multiple channels over the same fiber, optical amplifiers have made it possible to transmit many terabits of data over distances from a few hundred kilometers up to transoceanic distances, providing the data capacity required for current and future communication networks. Optical amplifiers are important in optical communication and laser physics. Today's commonly used optical amplifiers include the Erbium-Doped Fiber Amplifier (EDFA), the Raman Amplifier, and the Semiconductor Optical Amplifier (SOA).

Erbium-Doped Fiber Amplifier (EDFA)

An EDFA works through a trace impurity in the form of a trivalent erbium ion that is inserted into the optical fiber's silica core to alter its optical properties and permit signal amplification. The trace impurity is known as a dopant, and the process of inserting the impurity is known as doping. Pump lasers inject light into the doped fiber at a 980 or 1480 nanometer (nm) wavelength (the pumping bands), producing gain, or amplification, in the 1550 nm range, which is the optical C-band. The 1480 nm band is usually used in amplifiers with greater power. Pump lasers can operate bidirectionally. This action amplifies a weak optical signal to a higher power, effecting a boost in signal strength. However, EDFAs are usually limited to no more than 10 spans covering a maximum distance of approximately 800 kilometers (km), and they cannot amplify wavelengths shorter than 1525 nanometers (nm). The EDFA was the first successful optical amplifier and a significant factor in the rapid deployment of fiber optic networks during the 1990s.

Raman Amplifier

In a Raman amplifier, the signal is amplified due to stimulated Raman scattering (SRS). Raman scattering is a process in which light is scattered by molecules from a lower wavelength to a higher wavelength. When sufficiently high pump power is present at a lower wavelength, stimulated scattering can occur in which a signal with a higher wavelength is amplified by Raman scattering from the pump light. SRS is a nonlinear interaction between the signal (higher wavelength; e.g. 1550 nm) and the pump (lower wavelength; e.g. 1450 nm) and can take place within any optical fiber. In most fibers, however, the efficiency of the SRS process is low, meaning that high pump power (typically over 1 W) is required to obtain useful signal gain. Thus, in most cases Raman amplifiers cannot compete effectively with EDFAs. Raman amplification provides two unique advantages over other amplification technologies. The first is that the amplification wavelength band of the Raman amplifier can be tailored by changing the pump wavelengths, and thus amplification can be achieved at wavelengths not supported by competing technologies. The other, more important advantage is that amplification can be achieved within the transmission fiber itself, enabling what is known as distributed Raman amplification (DRA). Raman amplifiers are most often used together with EDFAs to provide ultra-low-NF combined amplifiers, which are useful in applications such as long links with no inline amplifiers, ultra-long links spanning thousands of kilometers, or very high bit-rate (40/100 Gb/s) links.
Semiconductor Optical Amplifier (SOA)

SOAs are amplifiers that use a semiconductor to provide the gain medium. They operate in a similar manner to standard semiconductor lasers (without the optical feedback that causes lasing), and are packaged in small semiconductor "butterfly" packages. Compared to other optical amplifiers, SOAs are pumped electronically (i.e. directly via an applied current), and a separate pump laser is not required. However, despite their small size and potentially low cost due to mass production, SOAs suffer from a number of drawbacks that make them unsuitable for most applications. In particular, they provide relatively low gain (
<urn:uuid:b4a3e3fe-cd65-45f6-adb4-33aee754cbba>
CC-MAIN-2017-04
http://www.fs.com/blog/optical-amplifiers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931524
851
4.03125
4
Space reclamation is the ability to reclaim disk space back to the storage pool after data has been removed from a server (or servers). For example, if I create a 100GB LUN, map it to a server, and then load 50GB of data onto that LUN, I have essentially taken up 50GB worth of disk space from my storage pool. Now let's say I delete that 50GB of data: without space reclamation, that 50GB of space is still taken from my storage pool. Yes, that 50GB is still available on that server for future use, but nowhere else. If I want to allocate that 50GB to another server, I cannot do so without space reclamation. If that is what you prefer, then you may as well use JBOD instead of an intelligent SAN.

Ever wonder what happens to disk space when you hit 'Delete' on a tablespace or a database? Does the space occupied get freed up automatically so the next set of data can use it? The answer is: it depends! How so, you might ask – let's find out!

I remember back in 2004, Oracle came out with a tool (ASRU) to reclaim space in Oracle ASM. As a matter of fact, Oracle and 3PAR teamed together to extend storage efficiency for Oracle database environments. Keep in mind that space reclamation has a prerequisite on the storage array side, and that is thin provisioning. Here is an excerpt from the Oracle ASRU white paper with a very good explanation of the benefits of thin provisioning, and why space reclamation is a good thing:

"Thin provisioning is a feature common to many storage arrays. Thin provisioning offers a different approach for provisioning storage where the storage array allocates capacity to the thin provisioned storage volume as the application writes data over time, not upfront at the time of storage volume creation. This approach is justified because for most enterprises, written storage utilization for most applications operates between 20 and 45 percent. With thin provisioning, written storage utilization can be increased to near 100 percent. As a result, IT users can significantly reduce the storage capacity that is required to support their Oracle databases. Oracle has actively supported storage vendors' thin provisioning feature in conjunction with a database feature known as "autoextend." Autoextend enables a file's size used for a tablespace to grow when applications add new data to database tables. Oracle is introducing new capability to ASM for extending support of thin provisioning. Previously, space inside a storage array supporting thin provisioning could only grow as a tablespace became larger. If a tablespace shrunk or even if an entire database were removed from an ASM disk group, space that had previously been allocated inside the array could not be freed for allocation to a different application. This new capability takes the form of an administrative ASM command that enables a storage array to detect unused space after it is freed by ASM and return that space to an unallocated status inside the storage array."

Below is the link to the Oracle ASRU white paper in case you want to learn the inner workings of Oracle DB space deallocation:
http://www.oracle.com/technetwork/database/oracle-automatic-storage-management-132797.pdf

Given the benefits of thin provisioning, Nimble Storage supports it natively on all volume allocations by default. Now what about space reclamation for an Oracle DB running on ASM? You bet!
Here is proof, along with instructions on how to go about validating it.

Simple steps taken to validate:
- Provision a thin provisioned volume to the DB server
- Create a diskgroup on the thin provisioned volume
- Create a tablespace on the diskgroup
- Bulk insert data into the tablespace
- Check space usage/free on ASM AND the Nimble volume
- Drop the tablespace
- Reclaim space using ASRU (this is for Oracle ASM)
- Enjoy the newly reclaimed free space

Detailed steps with supporting diagrams:
1. Created a thin provisioned LUN and mapped it to a database server.
2. Created a diskgroup named "TESTDG".
3. Created a tablespace named "TESTTS".
4. Inserted 58,000,000 records into a table.
5. Checked the tablespace size.
6. Checked the diskgroup size.
7. Checked the physical LUN size with compression. The reason you see only 4GB used on the LUN vs. 22GB on the diskgroup is because of compression on the Nimble Storage.
8. Checked the physical LUN size without compression.
9. Dropped tablespace TESTTS.
10. Checked the ASM diskgroup size after the drop. Notice the diskgroup size is freed up but the physical LUN size is not. Also, the used disk space increased a tiny bit because of compression without data. When the tablespace was deleted, the Oracle ASM extents no longer contain data. (Screenshots: with compression after the tablespace was deleted; without compression after the tablespace was deleted.)
11. Executed the ASRU command.
12. After the ASRU operation completed, the physical LUN space is freed up.

Not only can I reclaim space on the physical LUN, but I got a very good compression rate on the LUN too. Take a look at the TESTDG diskgroup space usage after I loaded 58,000,000 records into a table: the TESTDG diskgroup took up 22GB of space while the physical LUN showed only 4GB was taken. Now, how is that for storage efficiency?
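As a footnote, here is a rough sketch of how the database-side steps above could be scripted. The connection details, credentials and sizes are placeholders; the queries use the standard DBA_DATA_FILES and V$ASM_DISKGROUP views, and ASRU itself is still run separately against the ASM instance as described in Oracle's ASRU documentation.

    # Illustrative sketch only; not the exact commands used in the post.
    import cx_Oracle

    db = cx_Oracle.connect("system", "example-password", "dbhost/ORCL")
    cur = db.cursor()

    # Create the tablespace on the +TESTDG diskgroup (bulk insert of rows would follow).
    cur.execute("CREATE TABLESPACE testts DATAFILE '+TESTDG' SIZE 1G AUTOEXTEND ON")

    # Check space usage as seen by the database and by ASM.
    cur.execute("SELECT tablespace_name, bytes/1024/1024 AS mb FROM dba_data_files "
                "WHERE tablespace_name = 'TESTTS'")
    print(cur.fetchall())
    cur.execute("SELECT name, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'TESTDG'")
    print(cur.fetchall())

    # Drop the tablespace: ASM now shows the space as free, but the array does not (yet).
    cur.execute("DROP TABLESPACE testts INCLUDING CONTENTS AND DATAFILES")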
<urn:uuid:48716a18-f666-4fc1-83c5-3b441c4d83b4>
CC-MAIN-2017-04
https://connect.nimblestorage.com/people/tdau/blog/2013/10/16/asm-space-reclamation-utility-asru-on-nimble-storage
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896409
1,180
2.578125
3
Ghostly Boomerang Nebula / October 29, 2013

Captured by telescope from an international astronomical observatory in Chile, the Boomerang Nebula is the coldest known place in the universe, at about 1 kelvin, the equivalent of minus 458 degrees Fahrenheit. The image provides more detailed information on the shape and true nature of the nebula than was previously available. "This is important for the understanding of how stars die and become planetary nebulas," said Raghvendra Sahai, a researcher and principal scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "Using ALMA, we were quite literally, and figuratively, able to shed new light on the death throes of a sun-like star." The Atacama Large Millimeter/submillimeter Array (ALMA) facility is a partnership between the Republic of Chile, Europe, North America and East Asia. Photo courtesy of NRAO/AUI/NSF/NASA/STScI/JPL-Caltech.
<urn:uuid:bfae7b72-93a1-48d0-b184-ac51690f10e6>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Ghostly-Boomerang-Nebula.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905773
209
3.28125
3
The goal of this series is to try to answer an age-old question that is often asked and rarely answered. Namely: is the TLS protocol provably secure? While I find the question interesting in its own right, I hope to convince you that it's of more than academic interest. TLS is one of the fundamental security protocols on the Internet, and if it breaks lots of other things will too. Worse, it has broken — repeatedly. Rather than simply patch and hope for the best, it would be fantastic if we could actually prove that the current specification is the right one. Unfortunately this is easier said than done.

In the first part of this series I gave an overview of the issues that crop up when you try to prove TLS secure. They come at you from all different directions, but most stem from TLS's use of ancient, archaic cryptography; gems like, for example, the ongoing use of RSA-PKCS#1v1.5 encryption fourteen years after it was shown to be insecure. Despite these challenges, cryptographers have managed to come up with a handful of nice security results on portions of the protocol. In the previous post I discussed Jonsson and Kaliski's proof of security for the RSA-based TLS handshake. This is an important and confidence-inspiring result, given that the RSA handshake is used in almost all TLS connections.

In this post we're going to focus on a similarly reassuring finding related to the TLS record encryption protocol — and the 'mandatory' ciphersuites used by the record protocol in TLS 1.1 and 1.2 (nb: TLS 1.0 is broken beyond redemption). What this proof tells us is that TLS's CBC mode ciphersuites are secure, assuming… well, a whole bunch of things, really. The bad news is that the result is extremely fragile, and owes its existence more to a series of happy accidents than to any careful security design. In other words, it's just like TLS itself.

Records and handshakes

Let's warm up with a quick refresher. TLS is a layered protocol, with different components that each do a different job. In the previous post I mostly focused on the handshake, which is a beefed-up authenticated key agreement protocol. Although the handshake does several things, its main purpose is to negotiate a shared encryption key between a client and a server — parties who up until this point may be complete strangers. The handshake gets lots of attention from cryptographers because it's exciting. Public key crypto! Certificates! But really, this portion of the protocol only lasts for a moment.

Once it's done, control heads over to the unglamorous record encryption layer which handles the real business of the protocol: securing application data. Most kids don't grow up dreaming about a chance to work on the TLS record encryption layer, and that's fine — they shouldn't have to. All the record encryption layer does is, well, encrypt stuff. In 2012 that should be about as exciting as mailing a package. And yet TLS record encryption still manages to be a source of endless excitement! In the past year alone we've seen three critical (and exploitable!) vulnerabilities in this part of TLS. Clearly, before we can even talk about the security of record encryption, we have to figure out what's wrong with it.

Welcome to 1995

[Image: Development of the SSLv1 record encryption layer]

The problem (again) is TLS's penchant for using prehistoric cryptography, usually justified on some pretty shaky 'backwards compatibility' grounds.
This excuse is somewhat bogus, since the designers have actually changed the algorithms in ways that break compatibility with previous versions — and yet retained many of the worst features of the originals. The most widely-used ciphersuites employ a block cipher configured in CBC mode, along with a MAC to ensure record authenticity. This mode can be used with various ciphers/MAC algorithms, but encryption always involves the following steps:

- If both sides support TLS compression, first compress the plaintext.
- Next compute a MAC over the plaintext, record type, sequence number and record length. Tack the MAC onto the end of the plaintext.
- Pad the result with up to 256 bytes of padding, such that the padded length is a multiple of the cipher's block size. The last byte of the padding should contain the padding length (excluding this byte), and all padding bytes must also contain the same value. A padded example (with AES) might look like:
  0x MM MM MM MM MM MM MM MM MM 06 06 06 06 06 06 06
- Encrypt the padded message using CBC mode. In TLS 1.0 the last block of the previous ciphertext (called the 'residue') is used as the Initialization Vector. Both TLS 1.1 and 1.2 generate a fresh random IV for each record.

To get an idea of what's wrong with the CBC ciphersuite, you can start by looking at the appropriate section of the TLS 1.2 spec — which reads more like the warning label on a bottle of nitroglycerin than a cryptographic spec. Allow me to sum up the problems.

First, there's the compression. It's long been known that compression can leak information about the contents of a plaintext, simply by allowing the adversary to see how well it compresses. The CRIME attack recently showed how nasty this can get, but the problem is not really news. Any analysis of TLS encryption begins with the assumption that compression is turned off.

So ok: no TLS 1.0, no compression. Is that all? Well, we still haven't discussed the TLS MAC, which turns out to be in the wrong place — it's applied before the message is padded and encrypted. This placement can make the protocol vulnerable to padding oracle attacks, which (amazingly) will even work across handshakes. This last fact is significant, since TLS will abort the connection (and initiate a new handshake) whenever a decryption error occurs in the record layer. It turns out that this countermeasure is not sufficient.

To deal with this, recent versions of TLS have added the following patch: they require implementers to hide the cause of each decryption failure — i.e., make MAC errors indistinguishable from padding failures. And this isn't just a question of changing your error codes, since clever attackers can learn this information by measuring the time it takes to receive an error. From the TLS 1.2 spec:

  "In general, the best way to do this is to compute the MAC even if the padding is incorrect, and only then reject the packet. For instance, if the pad appears to be incorrect, the implementation might assume a zero-length pad and then compute the MAC. This leaves a small timing channel, since MAC performance depends to some extent on the size of the data fragment, but it is not believed to be large enough to be exploitable."

To sum up: TLS is insecure if your implementation leaks the cause of a decryption error, but careful implementations can avoid leaking much, although admittedly they probably will leak some — but hopefully not enough to be exploited. Gagh!
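Before moving on, here is a toy Python sketch of the MAC-then-pad-then-encrypt ordering described above. It uses HMAC-SHA1 and AES via the third-party "cryptography" package, random made-up keys, and it glosses over the record header details; it is meant only to make the ordering of the steps concrete, not to be an interoperable TLS implementation.

    # Toy sketch of TLS 1.1/1.2 CBC record construction: MAC, then pad, then encrypt.
    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16   # AES block size in bytes

    def encrypt_record(plaintext, mac_key, enc_key, seq_num, rec_type=23, version=b'\x03\x03'):
        # 1. MAC over sequence number, type, version, length and the plaintext
        header = (seq_num.to_bytes(8, 'big') + bytes([rec_type]) + version +
                  len(plaintext).to_bytes(2, 'big'))
        mac = hmac.new(mac_key, header + plaintext, hashlib.sha1).digest()
        # 2. Pad: last byte holds the pad length, and every pad byte repeats that value
        data = plaintext + mac
        pad_len = BLOCK - (len(data) + 1) % BLOCK
        data += bytes([pad_len]) * (pad_len + 1)
        # 3. CBC-encrypt with a fresh random IV (TLS 1.1+); the IV is sent with the record
        iv = os.urandom(BLOCK)
        enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        return iv + enc.update(data) + enc.finalize()

    ciphertext = encrypt_record(b'hello', os.urandom(20), os.urandom(16), seq_num=0)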
At this point, just take a deep breath and say ‘all horses are spherical‘ three times fast, cause that’s the only way we’re going to get through this. Accentuating the positive Having been through the negatives, we’re almost ready to say nice things about TLS. Before we do, let’s just take a second to catch our breath and restate some of our basic assumptions: - We’re not using TLS 1.0 because it’s broken. - We’re not using compression because it’s broken. - Our TLS implementation is perfect — i.e., doesn’t leak any information about why a decryption failed. This is probably bogus, yet we’ve decided to look the other way. - Oh yeah: we’re using a secure block cipher and MAC (in the PRP and PRF sense respectively).** And now we can say nice things. In fact, thanks to a recent paper by Kenny Paterson, Thomas Ristenpart and Thomas Shrimpton, we can say a few surprisingly positive things about TLS record encryption. What Paterson/Ristenpart/Shrimpton show is that TLS record encryption satisfies a notion they call ‘length-hiding authenticated encryption‘, or LHAE. This new (and admittedly made up) notion not only guarantees the confidentiality and authenticity of records, but ensures that the attacker can’t tell how long they are. The last point seems a bit extraneous, but it’s important in the case of certain TLS libraries like GnuTLS, which actually add random amounts of padding to messages in order to disguise their length. There’s one caveat to this proof: it only works in cases where the MAC has an output size that’s greater or equal to the cipher’s block size. This is, needless to say, a totally bizarre and fragile condition for the security of a major protocol to hang on. And while the condition does hold for all of the real TLS ciphersuites we use — yay! — this is more a happy accident than the result of careful design on anyone’s part. It could easily have gone the other way. So how does the proof work? Good question. Obviously the best way to understand the proof is to read the paper itself. But I’d like to try to give an intuition. First of all, we can save a lot of time by starting with the fact that CBC-mode encryption is already known to be IND-CPA secure if implemented with a secure block cipher (PRP). This result tells us only that CBC is secure against passive attackers who can request the encryption of chosen messages. (In fact, a properly-formed CBC mode ciphertext should be indistinguishable from a string of random bits.) The problem with plain CBC-mode is that these security results don’t hold in cases where the attacker can ask for the decryption of chosen ciphertexts. This limitation is due to CBC’s malleability — specifically, the fact that an attacker can tamper with a ciphertext, then gain useful information by sending the result to be decrypted. To show that TLS record encryption is secure, what we really want to prove is that tampering gives no useful results. More concretely, we want to show that asking for the decryption of a tampered ciphertext will always produce an error. We have a few things working in our favor. First, remember that the underlying TLS record has a MAC on it. If the MAC is (PRF) secure, then any ciphertext tampering that results in a change to this record data or its MAC will be immediately detected (and rejected) by the decryptor. This is good. Unfortunately the TLS MAC doesn’t cover the padding. 
To continue our argument, we need to show that no attacker can produce a legitimate ciphertext, and that includes tampering that messes with the padding section of the message. Here again things look intuitively good for TLS. During decryption, the decryptor checks the last byte of the padded message to see how much padding there is, then verifies that all padding bytes contain the same numeric value. Any tampering that affects this section of the plaintext should either: - Produce inconsistencies in some padding bytes, resulting in a padding error, or - Cause the wrong amount of padding to be stripped off, resulting in a MAC error. This all seems perfectly intuitive, and you can imagine the TLS developers making exactly this argument as they wrote up the spec. However, there's one small exception to the rule above, which can turn up in TLS implementations that add an unnecessarily large amount of padding to the plaintext. (For example, GnuTLS.) To give an example, let's say the unpadded record + MAC is 15 bytes. If we're using AES, then this plaintext can be padded with a single byte. Of course, if we're inclined to add extra padding, it could also be padded with seventeen bytes — both are valid padding strings. The two possible paddings are, respectively, a single byte containing 0x00, or seventeen bytes each containing 0x10. You see, if TLS MACs are always bigger than a ciphertext block, then all messages will obey a strict rule: no padding will ever appear in the first block of the CBC ciphertext. Since the padding is now guaranteed to start in the second (or later) block of the CBC ciphertext, the attacker cannot 'tweak' it by modifying the IV (this attack only works against the first block of the plaintext). Instead, they would have to tamper with a ciphertext block. And in CBC mode, tampering with ciphertext blocks has consequences! Such a tweak will allow the attacker to change padding bytes, but as a side effect it will cause one entire block of the record or MAC to be randomized when decrypted. And what Paterson/Ristenpart/Shrimpton prove is that this 'damage' will inevitably lead to a MAC error. This 'lucky break' means that an attacker can't successfully tamper with a CBC-mode TLS ciphertext. And that allows us to push our way to a true proof of the CBC-mode TLS ciphersuites. By contrast, if the MAC was only 80 bits (as it is in some IPSEC configurations), the proof would not be possible. So it goes. Now I realize this has all been pretty wonky, and that's kind of the point! The moral to the story is that we shouldn't need this proof in the first place! What it illustrates is how fragile and messy the TLS design really is, and how (once again) it achieves security by luck and the skin of its teeth, rather than secure design. What about stream ciphers? The good news — to some extent — is that none of the above problems apply to stream ciphers, which don't attempt to hide the record length, and don't use padding in the first place. So the security of these modes is much 'easier' to argue. There's probably a lot more that can be said about TLS record encryption, but really… I think this post is probably more than anyone (outside of the academic community and a few TLS obsessives) has ever wanted to read on the subject. * One thing I don't mention in this post is the TLS 1.0 'empty fragment' defense, which actually works against BEAST and has been deployed in OpenSSL for several years. The basic idea is to encrypt an empty record of length 0 before each record goes over the wire.
In practice, this results in a full record structure with a MAC, and prevents attackers from exploiting the residue bug. Although nobody I know of has ever proven it secure, the proof is relatively simple and can be arrived at using standard techniques.
** The typical security definition for a MAC is SUF-CMA (strongly unforgeable under chosen message attack). This result uses the stronger — but also reasonable — assumption that the MAC is actually a PRF.
<urn:uuid:a93c5551-30dd-4eff-ba72-34d68e6cc292>
CC-MAIN-2017-04
https://blog.cryptographyengineering.com/2012/09/28/on-provable-security-of-tls-part-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911127
3,241
2.5625
3
To get good fiber optic splices or terminations, especially when using the pre-polished connectors with internal splices, it is extremely important to cleave the fiber properly. If the fiber ends are not precisely cleaved, the ends will not mate properly. To prepare a fiber end for a connector or splice, the end of the fiber must be cleaved to a 90 degree flat end. For technicians the problem is that the end of the fiber strand is so small that it is impossible to tell with the naked eye whether the strand has a flat end. So in order for this to happen, you must use a cleaving tool called a fiber optic cleaver. Some background on fiber optic cleavers is provided in this article. What Is a Fiber Optic Cleaver? A cleave in an optical fiber is a deliberate, controlled break, intended to create a perfectly flat end face, perpendicular to the longitudinal axis of the fiber. A fiber optic cleaver is a tool that holds the fiber under low tension, scores the surface at the proper location, then applies greater tension until the fiber breaks. Usually, after the fiber has been scored, the technician will use the cleaver to either bend or pull the fiber end, stressing the fiber. This stress will cause the fiber to break at the score mark, leaving a 90 degree flat end if all goes well. So the cleaver doesn't cut the fiber. In fact, it just breaks the fiber at a specific length. Two Types of Fiber Optic Cleavers We know that the closer to 90 degrees the cleave is, the more success you will have with matching it to another cleaved fiber to be spliced or mated by a connector. So it's important to use the proper tool with good technique to consistently achieve a 90 degree flat end. Good cleavers are automatic and produce consistent results, irrespective of the operator. The user need only clamp the fiber into the cleaver and operate its controls. Some cleavers are less automated, making them more dependent on operator technique and therefore less predictable. There are two broad categories of fiber optic cleavers, scribe cleavers and precision cleavers. A traditional cleaving method, typically used to remove excess fiber from the end of a connector before polishing, uses a simple hand tool called a scribe. Scribe cleavers are usually shaped like ballpoint pens with diamond tipped wedges or come in the form of tile squares. The scribe has a hard, sharp tip, generally carbide or diamond, that is used to scratch the fiber manually. Then the operator pulls the fiber to break it. Since both the scribing and breaking processes are under manual control, this method varies greatly in repeatability. Most field and lab technicians shy away from these cleavers as they are not accurate. However, in skilled hands, a scribe cleaver offers a significantly smaller investment for repairs, installation, and training classes. Precision cleavers are the most commonly used cleavers in the industry. They use a diamond or tungsten wheel/blade to provide the nick in the fiber. Tension is then applied to the fiber to create the cleaved end face. The advantage to these cleavers is that they can produce repeatable results through thousands of cleaves by simply rotating the wheel/blade accordingly. Although more costly than scribe cleavers, precision cleavers can cut multiple fibers while increasing speed, efficiency, and accuracy.
In the past, many cleavers were scribes, but over time, as fusion splicers became available and a good cleave became the key to low splice loss, precision cleavers were developed to support various applications and multiple fiber cleaving with blades that have a much longer life span. Which One to Use: Scribe Cleaver or Precision Cleaver? While both types perform the functions above, the difference between the two categories of cleavers is the percentage yield of good cleaves. An experienced fiber optic technician will achieve approximately 90% good cleaves with a scribe cleaver, while the precision cleaver will produce 99% good cleaves. The difference may not seem like much, so it can be hard to make a decision on yield alone. My suggestion is to buy precision cleavers if you plan to use a lot of mechanical splices or pre-polished splice/connectors. It will pay for itself in no time. If you decide to use the inexpensive scribe cleavers, you must learn how to use them properly. Follow directions, but also do what comes naturally to you when using the device, as they are sensitive to individual technique. Inspect the fibers you cleave to see how good they are and keep practicing until you can make consistently good cleaves. To find pricing and more information on the different fiber optic cleavers currently available, please visit www.fs.com. Sign up to get informative news, posts and deals regarding current products in the fiber optic field. You can also contact our friendly staff members at email@example.com to learn more about all the fiber optic cleavers with the best value that are present in the industry today.
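To put the 90% versus 99% yield figures into more concrete terms, here is a quick back-of-the-envelope calculation; the 288-fiber job size is just an example, and the arithmetic assumes each re-cleave succeeds at the same rate as the first attempt.

```python
# Expected extra cleave attempts over a whole job, given a per-cleave yield.
def expected_recleaves(fiber_count, good_rate):
    # Each fiber needs 1/good_rate attempts on average (geometric distribution),
    # so the extra attempts per fiber are 1/good_rate - 1.
    return fiber_count * (1.0 / good_rate - 1.0)

for rate in (0.90, 0.99):
    print(f"{rate:.0%} yield over 288 fibers -> ~{expected_recleaves(288, rate):.0f} re-cleaves")
# 90% -> roughly 32 extra cleaves; 99% -> roughly 3
```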
<urn:uuid:f415f034-8c93-4c89-a6d0-38b9cdf07bcf>
CC-MAIN-2017-04
http://www.fs.com/blog/a-good-fiber-optic-cleaver-helps-cut-out-costly-mistakes.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00112-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942324
1,050
2.8125
3
How to Effectively Manage Storage and Protect Data in the Cloud Organizations of all sizes have to deal with an economic reality when it comes to cloud computing: cloud computing requires storage. These budgets continue to remain relatively flat even as demand for cloud storage capacity grows at a rate of nearly 60 percent per year. Here, Knowledge Center contributor Stephen Wojtowecz explains how organizations can effectively manage and protect the data stored in cloud environments. The shift to cloud brings new challenges to data storage, which is already complicated by virtualized systems, tape storage, network-attached storage (NAS) and other data storage formats. Because all data in a cloud lives in the same shared system, management of the data becomes paramount in maintaining service levels and securing critical business information. Organizations should evaluate how their storage resources can most effectively be used in the cloud. Before they can do that, it's best to categorize the model of cloud computing in the organization. Three types of cloud computing dominate the landscape: private (in which a company hosts, owns and manages its own cloud infrastructure), public (in which a third party owns and manages the infrastructure) and hybrid (in which the public and private models are combined). In hybrid models, the public cloud often acts as an overflow facility for the private cloud or is used to satisfy other application needs such as off-site information protection. The underlying characteristic of each is that cloud services need to be available and reliable to users, while effectively optimizing resources and providing a pay-as-you-go delivery model. Keys to effective cloud storage management Despite advantages of the cloud, not all organizations gain the maximum benefits. When outsourcing business processes to the cloud, organizations can select service options such as performance and capacity levels that best suit an organization's particular needs. Crucial components for storing critical data in the cloud are storage management, data protection and disaster recovery. For example, a retail company could opt to store and manage data (such as in-store transactions, online purchases and supplier details) on a private cloud because it allows for better control and access to sensitive data. The retailer, however, might decide that keeping copies of data for disaster recovery on a public cloud service is a lower-risk option. Whether it chooses to leverage a public, private or hybrid cloud model, the company needs to ensure that their cloud has automated data lifecycle management (DLM), built-in data reduction and advanced application protection, to name a few.
<urn:uuid:c5eabdb7-4dcb-4b7e-ad1b-f1ca90dcabcd>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Cloud-Computing/How-to-Effectively-Manage-Storage-and-Protect-Data-in-the-Cloud
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930213
504
2.65625
3
Numerical weather prediction was one of the original computing problems. When the ENIAC, the first electronic general-purpose computer, came online in 1947, simulations of the atmosphere (along with missile trajectories) was one of the first problems scientists ran on the system. James Kinter, director of the Center for Ocean-Land-Atmosphere Studies at the Institute of Global Environment and Society, presented this historical tidbit on the second morning of the recent XSEDE12 conference in Chicago. He then showcased the latest advances in climate and weather modeling enabled by the Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation (NSF)-supported cyberinfrastructure for open science. His talk, “Benefits and Challenges of High Spatial Resolution Climate Models,” included the results of simulations of climate runs between 2008 and 2011 on TeraGrid and XSEDE systems (TeraGrid was the predecessor to XSEDE). The presentation covered three major research projects funded by the NSF: (1) Project Athena – Resolving Mesoscales in the Atmosphere; (2) PetaApps Team – Resolving Ocean Eddies; and (3) CMMAP – Super-Parameterization and Resolving Clouds, a project led by David Randall at Colorado State University. Cumulatively, these projects, each of which involves dozens of researchers internationally, show the ability of simulations and scientific visualization to depict our warming Earth on a regional scale with uncanny accuracy. “You might think there’s a debate about climate change,” Kinter said. “But in my community, we’ve gotten past the point of it being a debate. However, our climate models are not perfect.” Climate change deniers leap on these imperfections to challenge whether we can trust the models. “To answer this question, we have to prove the case,” he said. In the last 50 years, the field of climate and weather modeling has taken advantage of the million-fold increase in computing power to make three improvements to the codes that mimic the atmosphere. According to Kinter, scientists have improved our understanding of the physical processes involved in atmospheric modeling and incorporated these insights into the evolving codes. They have developed better data assimilation methods to incorporate information from satellites, Doppler radar and ocean monitoring sensors into their models. And they have increased spatial resolution, or the amount of fine-grained detail, that can be included in the simulations. There is evidence that this last step — enhanced spatial resolution — can not only improve climate model fidelity, but also change our understanding of climate dynamics both qualitatively and quantitatively. The big question, though, is: “What’s the bang for the buck when you start looking at high resolution?” To test this, Kinter and his colleagues simulated a variety of climate scenarios at resolutions ranging from 7 kilometers (the most fine-grained) to 125 kilometers (the most coarse-grained). To accomplish this massive computing feat, Kinter’s team was granted a special allocation of computing time on the Athena supercomputer at the National Institute for Computational Sciences (NICS) in 2009 and 2010. For six months, the entire 18,048-core system was at the disposal of the team. Based on those runs and follow-ups on other high performance computing systems, his group has published more than a half dozen publications that run the gamut from the dynamics of tropical storm and cyclone formation to global and regional rainfall forecasts. 
Among the results he presented at the conference were simulations that represented boreal summer climatology at 7-kilometer resolution over the course of eight summers. Previously researchers had only been able to simulate a single week or month at this level of detail. Animation of boreal summer 2009 simulation at 7 km resolution using the NICAM model from JAMSTEC and University of Tokyo. Earlier simulations produced by many groups around the world showed trends of modeled surface temperature change over the last century that have a statistically significant separation at the global and large continental scale between simulations that include the human influence on climate (increasing greenhouse gases and aerosols) and those that don’t. This was “the smoking gun of whether humans are responsible for the rise in temperature,” Kinter said. However, the trends at regional scale are not as discernible. Is that because the trends are not there or because the models lack the acuity to see them? Kinter and his colleagues’ investigations of high spatial resolution shed light on this question. Other simulations explored the probability of extreme drought in the Midwest, Europe and elsewhere in the future. By his estimates, the Midwest will experience the levels of extreme drought it is currently experiencing in 20 years out of every 50 — a four-fold increase. “This drought will be the norm at the end of the 21st century,” Kinter said, “according to these simulations.” He also presented a number of key examples where increases in model resolution impacted the clarity and content of results. For instance, he cited research by collaborators that showed how low-resolution models of the East Coast Gulf Stream put rain associated with the weather pattern in the wrong place, whereas high-resolution models delineate the bands of rain off the East Coast with accuracy. After outlining the advantages of higher-resolution models, Kinter elaborated on the challenges that such a change generates. Biases in the models, the parameterization of small time and spatial scale effects (like clouds), and the coupling of global climate models with cloud resolving models, are all difficult, but not impossible, to overcome. However, the primary challenge that Kinter’s group and the community are dealing with is the “exaflood of data” produced by high-resolution and highly complex coupled models. For Project Athena, the total data volume generated and now resident at NICS is 1.2 petabytes. However, the total data volume on spinning disk at the Center for Ocean-Land-Atmosphere Studies for Project Athena is capped at 50 terabytes. This creates difficulties. Running on TeraGrid systems at large-scale for the first time with so much data, “everything broke,” Kinter said. He and his colleagues had to find ad hoc solutions to complete the simulations. The next step, he said, is to take those ad hoc solutions and use them to develop systematic, repeatable solutions. Put another way: to deal with the exaflood, the community needs to progress from Noah’s Ark to a professional shipping industry. “We need exaflood insurance,” Kinter concluded. 
“That’s what we’re calling on the XSEDE team to help us with.” The following contributed to the work described in this article: Deepthi Achutavarier, Jennifer Adams, Eric Altshuler, Troy Baer, Cecilia Bitz, Frank Bryan, Ben Cash, William Collins, John Dennis, Paul Dirmeyer, Matt Ezell, Christian Halloy, Mats Hamrud, Nathan Hearn, Bohua Huang, Emilia Jin, Dwayne John, Pete Johnsen, Thomas Jung, Ben Kirtman, Chihiro Kodama, Richard Loft, Bruce Loftis, Julia Manganello, Larry Marx, Martin Miller, Per Nyberg, Tim Palmer, David Randall and the CMMAP Team, Clem Rousset, Masaki Satoh, Ben Shaw, Leo Siqueira, Cristiana Stan, Robert Tomas, Hirofumi Tomita, Peter Towers and Mariana Vertenstein, Tom Wakefield, Nils Wedi, Kwai Wong, and Yohei Yamada.
<urn:uuid:666f489c-3ac4-453d-8421-688030937494>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/08/02/proving_the_case_for_climate_change_with_hi-res_models/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924607
1,607
3.421875
3
Researchers Link Most Spam To Only 50 ISPsDiscovery that spammers are using only a relative handful of Internet providers suggests new ways of stopping botnets. (click image for larger view) Slideshow: How Firesheep Can Hijack Web Sessions Only 50 Internet service providers (ISPs) host the majority of the world's spam, according to a new study, and that finding could reshape private and public approaches to combating the botnets that infect computers and then use them as spam mailers. The study was conducted for the Organization for Economic Cooperation and Development (OECD) by researchers at Delft University of Technology in the Netherlands and Michigan State University, who examined 109 billion spam messages from 170 million unique IP addresses, gathered via a "spam trap" from 2005 to 2009. One major finding is that where there's spam, you'll find an infected -- aka zombie -- machine. That's because according to the study data, on average 80% to 90% of the world's spam comes from infected machines. Researchers also found that the 33 member countries that comprise the OECD, as well as Estonia, the Russian Federation, Brazil, China, India, Indonesia, and South Africa, "harbor over 60% of all infected machines worldwide registered by the spam trap." In other words, the majority of infected machines aren't laying low in countries nearly off the grid. But perhaps the biggest surprise, said the researchers, was that "we discovered that infected machines display a highly concentrated pattern." In particular, "the networks of just 50 ISPs account for around half of all infected machines worldwide." In other words, "the bulk of the infected machines are not located in the networks of obscure or rogue ISPs, but in those of established, well-known ISPs." The results suggest a formidable new way to block botnets. With a caution that historical data is no guarantee of future botnet behavior, the researchers said that "current efforts to bring about collective action -- through industry self-regulation, co-regulation, or government intervention -- might initially achieve progress by focusing on the set of ISPs that together have the lion's share of the market." In other words, if policymakers want to maximize their bang for buck, start by improving the security practices of the 50 ISPs that host half the world's spam.
<urn:uuid:e953b099-a397-48c5-9835-6803cbb9a864>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/researchers-link-most-spam-to-only-50-isps/d/d-id/1094284
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945028
473
2.671875
3
Anatomy of an SQL Injection What is SQL Injection? South Carolina recently made the news as more than 3.6 million social security numbers were stolen from a public facing website. While it hasn’t been verified by SC officials, most information security experts believe the most likely method of data exfiltration in this case was an SQL injection. SQL injection is a favored attack vector for hackers wanting to steal protected information. Over the last year we have seen repeated reports of disclosed information ranging from email addresses to credit card numbers all delivered via this type of exploit. SQL injection attacks can be launched against web servers that use a backend database to generate web pages. These web servers normally only use specific portions of the information in the database and access to the data is controlled by the web application’s coding. An SQL injection attack allows the attacker to bypass that coding and communicate directly with the backend database server using the web server as a conduit. Intelligent processing of NetFlow data can both detect and prevent data loss that occurs by SQL injection. Let’s take a look at how Lancope’s StealthWatch System and NetFlow can provide actionable intelligence into every stage of an SQL injection attack. Phase 1: Discovery and Fingerprinting Before an attack can occur an attacker must first answer a series of questions: - How many web servers are there? - What TCP ports are web services running on? - Which web servers have access to the backend database? - What web server is running (IIS, Apache, etc.)? - What programming technology is running on the server (PHP, ASP.NET, Ruby, etc.)? - Is the application out-of-the-box, open source or custom? The attacker needs to answer these questions to determine what type of SQL injection may work. To accomplish this, the attacker will begin scanning the subnet the web servers are in to answer the first question. This portion of the attack will generate what StealthWatch calls Concern Index (CI) events. CI events are actions a host performs that are reason for concern. Some actions are more concerning than others and subsequently will accumulate points commensurate with the degree of concern. In the case of our attacker, 1.7M points of CI were logged as he was attempting to answer the first two questions. That accumulation of CI resulted in a “High Concern Index” alarm against his machine less than two minutes into the attack. If set up to do so, this alarm could have also triggered an automatic mitigation response in StealthWatch (i.e. sending a SHUN command to the firewall) that would have prevented the attacker from getting any further. Once the attacker had discovered the web server to exploit, the process of fingerprinting the web service to find exploitable vulnerabilities began. The attacker tried multiple Layer 4 types of scans to better understand the service to be exploited. This further increased the Concern Index. Since the attacking host had already developed a bad reputation (high CI), once it established a communication with the web server, an alarm was raised on the web server called a Touched Host alarm. A Touched Host alarm occurs when a suspicious host begins to have bidirectional conversations with a network resource. This is another condition that would notify operators and could trigger mitigation. The number of connections that are necessary for this type of fingerprinting generates alarms as well. 
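The Concern Index mechanic described above can be pictured as a per-host score that accumulates as suspicious flow events are observed and trips an alarm once it passes a threshold. The sketch below is purely illustrative: StealthWatch's real point values, event taxonomy and thresholds are proprietary, so every number and event name here is invented.

```python
from collections import defaultdict

# Hypothetical point values and threshold; the real ones are product-specific.
CI_POINTS = {"syn_scan": 500, "port_probe": 200, "odd_flag_combo": 100}
HIGH_CI_THRESHOLD = 1_000_000

ci_scores = defaultdict(int)

def raise_alarm(name, host, score):
    print(f"ALARM [{name}] host={host} ci={score}")
    # an automated response could fire here, e.g. pushing a block rule to a firewall

def observe_flow_event(src_host, event):
    """Called once per suspicious event derived from NetFlow analysis."""
    ci_scores[src_host] += CI_POINTS.get(event, 0)
    if ci_scores[src_host] >= HIGH_CI_THRESHOLD:
        raise_alarm("High Concern Index", src_host, ci_scores[src_host])

# Example: a host scanning a subnet racks up points very quickly.
for _ in range(2000):
    observe_flow_event("203.0.113.7", "syn_scan")
```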
Once the service has been profiled, the attacker attempts thousands of different requests designed to determine what exploit will gain access to the database via SQL injection. This can be observed by the number of TCP connections the attacking host has with the target. Phase 2: Steal the Data Once the attacker is able to craft an effective exploit, the information will leave the database, flow through the web server and be delivered to the attacker. Graphing the traffic between the web clients and web server as well as the web server and database server shows the exfiltration clearly. Zooming in on those spikes and looking at the flows causing them, we can determine that the attacker downloaded 6.38GB of information from the web server and the web server received 4.83GB of information from the database. These communications each would trigger anomaly alarms because of their deviation from normal, baselined traffic profiles. This is another step where the download could be disrupted through mitigation or response. In addition to these alarms on the servers, the attacking host will also flag an alarm for high total traffic. These types of alarms against outside hosts that are consuming abnormal amounts of data from the network are clear indicators of data exfiltration. They deliver actionable intelligence for internal investigations and to provide to law enforcement. Utilizing data provided by the StealthWatch FlowSensor Virtual Edition (VE) installed on the VMWare ESX server hosting the web server and on the database server, we are able to pull deep packet inspection details that show us that the page that was exploited on the web server was called “comment.php.” Intelligent NetFlow analysis can provide deep visibility into an SQL injection at several stages of the assault. The mitigation can be automatic or authorized by an operator. Mitigation can range from blocking the traffic at a firewall to advanced actions including routing traffic into a Honeynet. NetFlow analysis provides the actionable intelligence needed to prevent, mitigate and respond to data exfiltration of this type. For more information on detecting data loss with StealthWatch, to go: http://www.lancope.com/solutions/security-operations/data-loss/.
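The post never shows what the underlying flaw in a page like "comment.php" might look like, so here is a generic, hypothetical illustration (in Python with SQLite, purely for brevity) of the coding mistake that makes SQL injection possible, next to the parameterized form that prevents it. It is not the actual vulnerable code from this incident.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER, body TEXT)")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "0 UNION SELECT 1, ssn FROM users"   # attacker-controlled value

# VULNERABLE: the input is pasted into the SQL text, so the attacker's
# UNION clause executes and pulls data from an unrelated table.
leaked = conn.execute(
    "SELECT id, body FROM comments WHERE id = " + user_input).fetchall()
print("vulnerable query returned:", leaked)          # [(1, '123-45-6789')]

# SAFE: a parameterized query treats the whole input as a single literal
# value, so the UNION text is never interpreted as SQL.
safe = conn.execute(
    "SELECT id, body FROM comments WHERE id = ?", (user_input,)).fetchall()
print("parameterized query returned:", safe)         # []
```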
<urn:uuid:9eecbc76-0e0c-4134-ae41-52d355cf40e2>
CC-MAIN-2017-04
https://www.lancope.com/blog/sql-injection
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934115
1,146
2.96875
3
The software, known as Direct-To, was developed at NASA's Ames Research Center and promises to let airlines save fuel and reduce emissions by identifying flight route shortcuts that are acceptable to air traffic controllers. Boeing said it incorporated the technology into its new subscription-based Direct Routes service which is being offered as part of the company's overarching InFlight Optimization Services which the company says will help airlines save fuel and increase environmental efficiency. Direct Routes automatically alerts an airline's operations center and flight crew when a simple, more fuel-efficient path opens up along an airplane's intended route. Describing the trials, David McNally, the project principal investigator at Ames, said, "We estimated a potential combined savings of about 900 flying minutes per day for all aircraft in the demonstration airspace." Initial Boeing projections show that Direct Routes can save more than 40,000 minutes of flight time per year for a medium-size U.S. airline -- the equivalent of operating hundreds of flights that use no fuel and produce no emissions. According to NASA, rather than being able to fly the most efficient route to a destination, aircraft operators in today's air traffic control system are usually constrained to follow established airways that are often composed of inefficient route segments. Current air traffic control user interface inefficiencies inhibit controllers from issuing user preferred routes, even under light traffic conditions. According to a NASA paper on Direct-To, accounting for the wind is an essential element of the Direct-To algorithm. According to NASA, its flight control software, the Center-TRACON Automation System (CTAS), receives hourly updates of the National Oceanic and Atmospheric Administration's Rapid Update Cycle atmospheric model, which represents the highest accuracy wind model currently available. For each candidate aircraft, CTAS computes the time to fly to the Direct-To fix along the flight plan route and the time to fly direct to the fix. If the savings along the direct route is greater than one minute, the clearance advisory is added to the Direct-To List. Aside from Direct Routes, the service includes Wind Updates, which Boeing says increases fuel efficiency and improves aircraft performance by sending data link messages directly to the flight deck with real-time, flight-customized wind information. These messages enable the airplane's flight management computer to recalculate flight control inputs based on more accurate and precise information, Boeing stated. Boeing said it collaborated not only with NASA but also with Continental Airlines and Southwest Airlines in the development of Direct Routes to ensure operational viability and assess the benefits, and that it shared details of the project and its findings with the U.S. Federal Aviation Administration.
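Working only from the decision rule quoted above (compare the time to the fix along the flight plan against the direct time, and advise when the saving exceeds one minute), here is a rough Python sketch of that logic. It is a toy: the real CTAS algorithm works on great-circle trajectories through a gridded, hourly-updated wind model, while this uses flat 2-D geometry, a single wind vector and a made-up cruise speed.

```python
from math import hypot

CRUISE_KTS = 450.0       # assumed cruise speed, knots
THRESHOLD_MIN = 1.0      # advisory threshold from the description above

def leg_time_min(p, q, wind=(0.0, 0.0)):
    """Minutes to fly leg p -> q; points in NM (east, north), wind in knots."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = hypot(dx, dy)
    if dist == 0.0:
        return 0.0
    along_track_wind = (dx * wind[0] + dy * wind[1]) / dist
    return 60.0 * dist / (CRUISE_KTS + along_track_wind)

def direct_to_advisory(position, flight_plan, fix_index, wind):
    """Minutes saved by flying direct to flight_plan[fix_index], or None."""
    fix = flight_plan[fix_index]
    route = [position] + flight_plan[:fix_index + 1]
    on_route = sum(leg_time_min(a, b, wind) for a, b in zip(route, route[1:]))
    direct = leg_time_min(position, fix, wind)
    saved = on_route - direct
    return saved if saved > THRESHOLD_MIN else None

# A dogleg route where cutting the corner saves several minutes.
plan = [(60.0, 80.0), (120.0, 90.0), (200.0, 0.0)]
print(direct_to_advisory((0.0, 0.0), plan, 2, wind=(20.0, -5.0)))
```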
<urn:uuid:e47aea33-3a25-4678-be30-fa6798a7908b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227577/security/boeing-adopts-nasa-software-to-boost-airline-fuel-efficiency.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924786
556
2.796875
3
Data Masking Made Simple with DataSunrise Dynamic Data Masking Tool Data masking enables database owners to protect sensitive data of any kind by “masking” it. It means replacing actual data, the database outputs, with some useless values. You could probably see the ATM receipts where credit card number is replaced with asterisks or Xs — that’s the most obvious example of data masking. As you may guess from its name, DataSunrise Data Masking tool is used to mask data a database contains. In this article we will highlight some data masking related points. What is data masking used for? In simple words, data masking procedures used to protect sensitive data from exposure to individuals not authorized to view it. Let’s take a certain “Acme” company for example. Acme’s database contains some clients’ and employees’ personal info like national identification numbers or credit card numbers. Of course the database may contain sensitive data of other kind like production data, accounting info and so on. Since the company’s database could be used for reporting, analysis, software development or testing procedures, the sensitive data should be protected from being exposed. The point is that software developers as a rule don’t need actual data the database contains — in most cases they need just a “dummy” database, that looks like a real one and works like a real one. And the best way to create such a dummy is to use data masking. Some people confuse data masking with data encryption but it is not the same. Encryption makes data completely unreadable for an unauthorized person or application. Masked data, in turn, should remain readable and appear consistent while being, actually, a fake. Data masking does not prevent access to the data, it just hides the actual data like a mask. Static and Dynamic data masking There are two methods used to mask data in a database: static and dynamic. And here is the difference: Static data masking procedures involve creating a copy of live database and replacing actual data with fake one. It is the only method of data masking used by companies sending their databases to outsourced software specialists for testing. Static data masking method has some serious drawbacks. First, before masking applied, the real data should be extracted from database for evaluation and inspection, so this situation poses a potential threat of data exposure. Beyond that, database duplication requires some empty space on company’s server or even a new server, so this method of data masking could be pretty pricy. And last but not least: the “dummy” database lags behind the live database. Of course, it can be updated periodically but this process requires additional time and can be related with some issues. Dynamic data masking, in turn, involves replacing sensitive data with fake values on-the-fly, while the data is being transferred to a client. In other words, data masking software changes the way database responds a query, so it requires no interference to the database itself and the real database entries remain untouched. The data is masked before it exists the database so it is a very secure and reliable method. Dynamic Data Masking with DataSunrise As you can see, the dynamic data masking method is much more versatile and that’s why we use it in our product. DataSunrise suite works as a proxy — it intercepts SQL-queries to the protected database and modifies these queries in such a way, that the database outputs not actual, but random or predefined data. 
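As a concrete picture of the on-the-fly approach (this is not DataSunrise's actual implementation, whose query-rewriting logic is proprietary), the small Python sketch below shows what a masking proxy conceptually does to result rows on their way back to the client. The stored data is never touched; only the response is altered.

```python
import re

CARD_RE = re.compile(r"\b(\d{4})[ -]?(\d{4})[ -]?(\d{4})[ -]?(\d{4})\b")

def mask_card(text):
    """ATM-receipt style masking: keep only the last four digits of a card number."""
    return CARD_RE.sub(lambda m: "XXXX-XXXX-XXXX-" + m.group(4), text)

def mask_result_set(rows, masked_columns):
    """Mask selected columns of a result set on the fly, leaving storage untouched."""
    for row in rows:
        yield {col: (mask_card(str(val)) if col in masked_columns else val)
               for col, val in row.items()}

rows = [{"name": "Alice", "card": "4111 1111 1111 1234"},
        {"name": "Bob",   "card": "5500-0000-0000-0004"}]
print(list(mask_result_set(rows, {"card"})))
# [{'name': 'Alice', 'card': 'XXXX-XXXX-XXXX-1234'},
#  {'name': 'Bob', 'card': 'XXXX-XXXX-XXXX-0004'}]
```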
Before you use DataSunrise Data Masking you need to determine which database entries need protection and where they are located. Note that DataSunrise can mask a complete database or just the data in selected columns. DataSunrise logs all the actions, so you can check what is happening anytime. Using the DataSunrise data masking tool is very easy. All you need to do is to enter the DataSunrise dashboard and create some masking policies. Here you need to enter the information required to create a data masking rule. You can define the application whose requests will be processed by the firewall. Then you need to define the SQL statements to be filtered and select the masking type to be implemented, that is, the method used to generate the fake entries. Then you should select the database elements (schemas, tables or columns) to be protected. This can be performed manually via the handy database elements explorer or by using regular expressions. And that's all. Quite simple. DataSunrise data masking provides you with another reliable tool for information protection. Along with the DataSunrise database firewall and SQL injection prevention tool it can become an additional line of defence against digital threats. DataSunrise supports all major databases and data warehouses such as Oracle, Exadata, IBM DB2, IBM Netezza, MySQL, MariaDB, Greenplum, Amazon Aurora, Amazon Redshift, Microsoft SQL Server, Azure SQL, Teradata and more. You are welcome to download a free trial if you would like to install it on your premises. In case you are a cloud user and run your database on Amazon AWS or Microsoft Azure, you can get it from the AWS Marketplace or Azure Marketplace. For more information about DataSunrise Database Security capabilities please refer to the DataSunrise user guide or email us at email@example.com
<urn:uuid:fad6828a-6dc3-4908-9126-f082212d11f8>
CC-MAIN-2017-04
https://www.datasunrise.com/blog/data-masking-made-simple/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.865342
1,127
2.640625
3
The first WDM systems were two-channel ones that used 1310 nanometer (nm) and 1550 nm wavelengths. Shortly afterward came multi-channel systems that used the 1550 nm region, where the fiber attenuation is lowest. Depending on their wavelength patterns, WDM systems are typically divided into coarse wavelength-division multiplexing (CWDM) and dense wavelength-division multiplexing (DWDM). Infinera’s founding vision is to enable an infinite pool of intelligent bandwidth that the next communications infrastructure is built upon, and the company is well-recognized in the technology segments of optical WDM transport and packet optical transport network (OTN) switching. Infinera pioneered a new approach for networks with photonic integration, which provides massive WDM capacity in a small power and space footprint to handle growing bandwidth needs. CWDM typically has the capability to transport up to 16 channels (wavelengths) in the spectrum grid from 1270 nm to 1610 nm with 20 nm channel spacing. Each channel can operate at either 2.5, 4 or 10 gigabits per second (Gb/s). CWDM cannot be amplified as most of the channels are outside the operating window of the erbium-doped fiber amplifier (EDFA) used in DWDM systems. This results in a shorter overall system reach of approximately 100 kilometers (km). However, due to the broader channel spacing in CWDM, less sophisticated transceiver designs can be used, giving a cost advantage over DWDM systems. Infinera uses both CWDM and DWDM technologies as a means of transporting different types of services, e.g. Ethernet, Synchronous Digital Hierarchy (SDH)/Synchronous Optical Networking (SONET), and Fibre Channel (FC) in metro networks. CWDM is the more cost-efficient of the two WDM variants, but has limitations in the distance over which the traffic is transported and in total channel count. Infinera’s XTM Series is CWDM- and DWDM-agnostic. This means a CWDM network can initially be deployed with either product series and when required, the network can be simply upgraded to a hybrid CWDM/DWDM network using common cards and pluggable optics. Therefore, by deploying Infinera’s CWDM- or DWDM-based solutions, the lowest possible day one cost is enabled without sacrificing the scalability of the network. DWDM puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light wavelength. Using DWDM, 80 (and more) separate wavelengths or channels of data can be multiplexed into a light stream transmitted on a single optical fiber. Since each channel is demultiplexed at the end of the transmission back into the original source, different data formats being transmitted at different data rates can be transmitted together. Specifically, Internet (IP) data, SONET and Ethernet data can all be traveling at the same time within the optical fiber. A super-channel is an evolution in DWDM in which several optical carriers are combined to create a composite line-side signal of the desired capacity, and which is provisioned in one operational cycle. This multi-carrier approach to building a DWDM network delivers scalability to terabits and beyond. Infinera DWDM line systems have the capability to transport 128 channels (wavelengths) across the extended C-band channel spectrum over thousands of kilometers (typically 4000 km). 
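One way to see why one flavor is called "coarse" and the other "dense" is to convert channel spacing into wavelength terms. CWDM uses a fixed 20 nm spacing, while DWDM grids are defined in frequency, commonly 100 GHz or 50 GHz per channel on the ITU grid; the quick conversion below shows how much tighter that is around 1550 nm. The exact grid any given platform uses is product-dependent, so treat these as ballpark figures.

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_spacing_nm(freq_spacing_ghz, centre_nm=1550.0):
    """Approximate wavelength spacing for a frequency-defined DWDM grid."""
    centre_m = centre_nm * 1e-9
    return (centre_m ** 2) * (freq_spacing_ghz * 1e9) / C * 1e9  # metres -> nm

print(f"DWDM 100 GHz grid: ~{wavelength_spacing_nm(100):.2f} nm between channels")
print(f"DWDM  50 GHz grid: ~{wavelength_spacing_nm(50):.2f} nm between channels")
print("CWDM grid:           20 nm between channels")
# Roughly 0.8 nm and 0.4 nm versus 20 nm -- hence 'dense' versus 'coarse'.
```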
With core network traffic increasing at about 40 percent per year, service providers need a long-haul transmission technology that will deliver scalable, cost-effective capacity without compromising on optical reach. The DWDM industry is now aligned on the fact that multi-carrier super-channels are the way to quickly access the spectral capacity that coherent transmission delivers in a way that will scale WDM without scaling operations. Today, the Infinera Intelligent Transport Network delivers 500 Gb/s FlexCoherent™ super-channels and is designed to support terabits in the future, powered by large-scale photonic integrated circuits (PIC).
<urn:uuid:eac0a6be-64c6-4bf5-8ddf-cbe36f293f63>
CC-MAIN-2017-04
https://www.infinera.com/technology/wdm-wavelength-division-multiplexing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916369
873
2.859375
3
Sandia tool puts disaster models into one picture - By Henry Kenyon - Sep 12, 2011 This season's steady stream of natural disasters has underscored how important it is for first responders to plan and train for — and respond to — a variety of scenarios. But coordinating and sharing data can be difficult because of the incompatibility of many modeling and simulation systems. New software from an Energy Department lab could help solve the problem, allowing these different models to work together for the first time to create more realistic and precise models of disasters. And better models could improve coordination and response. The Standard Unified Modeling, Mapping and Integration Toolkit (SUMMIT) was developed by the Sandia National Laboratories with funding from the Homeland Security Department's Science and Technology Directorate and the Federal Emergency Management Agency's National Exercise and Simulation Center. The origins of the software came from the need to coordinate the multiple disaster planning, simulation and management programs that federal agencies had developed over the years, said Karim Mahrous, Sandia's SUMMIT project lead. It became evident that there was a large gap in how disaster response exercises are currently planned and managed, he said. Multiagency exercises rely on scenario planning, which traditionally involves numerous experts meeting to discuss and plot out issues. Although this produces good results for a single event, Mahrous noted that the data is not reusable since the scenario and information used in one regional exercise cannot be moved to another. SUMMIT is designed to remedy this problem by quickly porting disaster data and information from one exercise into another. For example, Mahrous said that the University of Arkansas created a model of a chemical plume from a derailed freight train for a local exercise. The model and data for the plume were then pulled and reused for another exercise in another state via SUMMIT, he said. Federal agencies such as FEMA and DHS have spent years designing models and simulations for disaster planning, but these various systems cannot work together or share information. SUMMIT is designed to knit together these different models to allow planners to quickly swap data and to set up new scenarios with existing information within minutes or even seconds. "SUMMIT's entire role is to leverage that federal investment across the board," Mahrous said. The software for SUMMIT is platform-independent. It can run on desktop and laptop computers or on handheld devices such as smart phones, Android devices and iPads. In a recent earthquake response exercise, the organization managing the event wanted to create a tool that allowed organizations to modify the scenario to meet their objectives. The event planners then wanted to be able to move this planning data to first responders on the ground, which allowed them to see buildings damaged by the simulated earthquake. SUMMIT allowed this tool to be moved seamlessly to handheld devices. The porting to the iPad was just a convenience move for us, Mahrous explained. Before SUMMIT, disaster response teams had to rely on written descriptions or other information to tell them about the extent of damage depicted in a scenario. This often led personnel on the ground to make up information because they had no immediate tools to model or assess a situation besides some maps and charts.
For the first time they were able to see the post-disaster world that they were role-playing, Mahrous said. Stitching together different models allows exercises to become more fine-grained and flexible. During FEMAs recent National Level Exercise 2011 (NLE-11), SUMMIT allowed organizers to model casualties created by different disaster types and their impact on hospital resources. However, another important potential application for SUMMIT is to allow these capabilities to be used to support first responders in a real event. Based on data from NLE-11, in a real situation such as a building collapse, FEMA could use the modeling tools connected by SUMMIT to quickly estimate casualties and contact local hospitals to alert them about incoming patients in real time, Mahrous said. Besides modeling disasters, FEMA's next initiative is to take a whole-community approach to better prepare citizens to respond to emergencies. Although Sandia is in the early stages of working out the approach with FEMA, Mahrous said that the goal is to create a publicly accessible website that would allow entire communities to crowdsource data during a disaster. SUMMIT would be vital in helping to structure and map the data posted on such a site.
<urn:uuid:8add9016-4b30-431c-94d3-02af89fae3d8>
CC-MAIN-2017-04
https://gcn.com/articles/2011/09/12/sandia-summit-unified-disaster-modeling.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955865
898
2.65625
3
Increasing economic disparity over the past decade has caused profound changes in the American family, according to an analysis by an Ohio State University researcher. The study by sociology professor Zhenchao Qian provides yet more evidence of the negative impact caused by economic disparity and rapid erosion of the middle class in the United States. "The state of American families has become increasingly polarized," Qian says in a statement. "Race and ethnicity, education, economics and immigration status are increasingly linked to how well families fare." As a result of changes during the 2000s, he says, "there is no longer any such thing as a typical American family." The Great Recession that began in 2008 had a particularly dramatic impact on American families, Qian says. "There is no doubt that the gap between America's haves and have-nots grew larger than ever during the 2000s," he says. "It influences the kind of families we live in and the kind of family environment in which we raise our children." Among the findings of the report, titled Divergent Paths of American Families: * The number of cohabiting couples, which increased to 3.8 million in 2000 from 400,000 in 1960, has leveled off. Between 12% and 14% of never-married adults were living with a partner from 2008 to 2010, roughly the same percentages as in the year 2000. * The percentage of women ages 20 to 24 who have ever married dropped to 19% in 2008-2010 from 31% in 2000. In those same time frames, the percentages for men declined to 11% from 21%. * More Americans are remarrying after divorce, with some doing it more than once. Among currently married men, those who are remarried increased to 25% in 2008-2010 from 17% in 1980. Percentage changes for women were similar, the study reports. * Minorities, the uneducated and the poor have seen their family situations become less stable during the 2000s when compared to whites, the educated and the economically secure. Is there anything promising in the numbers? Seemingly not. "Economic inequality is key to the polarization of American families, and the disadvantages of children living in single and unstable families will just worsen the racial and ethnic inequalities we already have in this country," Qian says. The analysis was based on data from the 2000 U.S. Census and the 2008-2010 American Community Survey. Qian's research was sponsored by the Russell Sage Foundation and Brown University.
<urn:uuid:61c8f271-284a-4e16-87fb-ca5e86dbf038>
CC-MAIN-2017-04
http://www.itworld.com/article/2704237/enterprise-software/the-ravages-of-economic-inequality-on-american-families.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00222-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975572
508
2.703125
3
Searching the Internet for an Individual This page describes several search techniques which may help you look for an individual. Using the Internet to search for an individual can be a challenge. Obviously the more you know about the person, the more likely it will be that you can find them using the Internet. Google - You can try searching for the person's name in Google. This will only work if that person's name has been typed onto a web page, and that web page has also been discovered by Google. You have to ask yourself: "How 'public' is this person, how likely is it that my target person has built his own web page, or has been mentioned on someone else's web page?". Another drawback to this approach is that there may be hundreds/thousands of people with the same exact name as your target person. See my Overview of search tools for a description of how search engines work. Google - You can also try searching for the person's email address in Google. Email addresses are unique (one of a kind). A regular Google search may show the person's email address on a web page, or in an online guest book. Google_groups - Google has a full-text, searchable archive of all the text of all the messages posted to the usenet newsgroups for the past 20 years. You can search these bulletin boards for your person's name or person's email address. See my Overview of Usenet for more information. Specific searches - In most searches for individuals, you will have to get into Specific Databases due to the shape of the Internet's Information Space. There are thousands of specialized databases linked on my Specialized Search Tools page. Here are some examples: - public records: searchsystems - links to 20,000+ different public record databases. - Phone books - Most people have a telephone. Online databases such as anywho can quickly locate anyone who has a listed phone number. Anywho can also show you: a map to the person's house, and a list of their neighbors' names, addresses, phone numbers. Anywho also allows you to do a reverse lookup (enter the phone number and find out who owns it). - Country specific search tools and Phone books: as listed on my country-specific search page. - Real Estate Tax Records - In many counties/states this is public information, and the county may provide that information via the Internet. See Fairfax County Tax Assessment as an example. - Law Enforcement - They sometimes make information available to the public about criminals such as Offender registries. For example, Virginia's List includes current address, and color photograph. - Court Records - Most of these are considered public records and may be available online. You would have to know which court system your target person may have had dealings with. - Fee-Based Services - There are numerous pay-services which do a simultaneous search of the public records listed above. Examples: many more are listed at Yahoo. - Other public databases - The Fee-based services often describe their sources. For example, discreetresearch.com mentions many of the resources they search (state criminal records, Motor vehicle records, etc.) If the Fee-service can search a state's DMV records, there is a good chance that you too can search the state's DMV records. You would need to find the DMV's website and browse around for a records search. Most state home pages, County home pages, etc. can easily be found using a directory such as - Personalized searches. - What else do you know about the person.
For example, if you know they went to a particular college, search for the college's web site, and see if they have an alumni directory. - Credit Check - If you are "a business", you might be able to run a credit check on someone if you know enough of their information. Examples include Experian and Trans Union. - Misc articles - These describe many other ways someone might be tracked.
<urn:uuid:73f406a2-7bfa-4769-80e4-df3d9d30cf43>
CC-MAIN-2017-04
http://navigators.com/search_individual.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00130-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912259
883
2.59375
3
Consulting firm Deloitte's Canadian Technology, Media & Telecommunications (TMT) Predictions 2013 report covers a range of technology predictions, including the outlook for subscription TV services and 4K televisions, but the vulnerabilities in today's password practices top the list of things to consider in 2013. The problem, researchers said, is that everything that we thought to be true must be reconsidered given advances in technology. "Passwords containing at least eight characters, one number, mixed-case letters and non-alphanumeric symbols were once believed to be robust," said Duncan Stewart, a director of research for the report. "But these can be easily cracked with the emergence of advanced hardware and software." For instance, a machine running readily available virtualization software and high-powered graphics processing units can crack any eight-character password in about five hours, he noted. But as ever, human behavior gets in the way when it comes to being safe. Specifically, the inability to remember multiple unique 24-character password strings. The limitations of most humans' ability to remember complex credentials mean that there is a tendency for password re-use, which also puts password security at risk. If a hacker cracks even an innocuous account, like a grocery store loyalty card, the credentials are likely to have been used elsewhere, like for online banking. Once a hacker has a password, he or she can potentially have the keys to the cyberkingdom based on most consumers' behavior. "Moving to longer passwords or to truly random passwords is unlikely to work, since people just won't use them," Stewart said. However, all hope is not lost: Multifactor authentication using tokens, cellphones, credit cards and more is a likely solution. That means that having additional passwords sent through SMS to a phone, a requirement for fingerprints and other biometrics, or even 'tap and go' credit cards may be the norm in the future, he concluded.
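Some quick arithmetic shows the scale behind the "eight characters in about five hours" claim, and why each additional character helps so much. The implied guess rate below is back-calculated from the report's own figures rather than measured, and rates like this are only plausible against fast, unsalted hash functions; a slow, salted password hash drags the same hardware down by orders of magnitude.

```python
import string

alphabet = len(string.ascii_letters + string.digits + string.punctuation)
print("printable-symbol alphabet:", alphabet)                    # 94

keyspace_8 = alphabet ** 8
print(f"8-character keyspace: {keyspace_8:.2e} combinations")    # ~6.1e15

five_hours_s = 5 * 3600
print(f"implied rate: {keyspace_8 / five_hours_s:.1e} guesses/second")

for n in (10, 12):   # each extra character multiplies the work by ~94
    print(f"{n}-character keyspace: {alphabet ** n:.2e}")
```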
<urn:uuid:3ebb2c25-9a39-4d5b-9635-706700d32063>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/90-of-passwords-can-be-cracked-in-seconds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00434-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944216
404
2.8125
3
LinuxBIOS Brings Clones One Step Closer to Freedom No matter how hard you try to build a 100 percent Free/Open Source (FOSS) computer, there are a few bits you can't reach that contain non-FOSS code: ROM chips. These power the BIOS (Basic Input/Output System), Ethernet interfaces, SCSI controllers, and SATA/PATA drive controllers. Today we're taking a look at the LinuxBIOS project, which gives us a modern GPL alternative to the two remaining proprietary BIOS vendors, AMI and Phoenix. Even if you don't care about the openness of your system BIOS, you might be concerned about its inflexibility and legacy baggage. The x86 BIOS still thinks it's supporting MS-DOS and performs a number of tasks, such as probing hardware and loading drivers, that modern operating systems (including Linux and Windows) now handle better, faster and with a lot more flexibility and user control. This can take 30-50 seconds. The operating system ignores what the BIOS does and then re-runs the same tasks. Newer PCs shortcut this with the "fast boot" setting, which skips most of the usual steps and gets to a boot prompt in about ten seconds. LinuxBIOS takes a completely different approach. Rather than merely skipping over the useless bits, LinuxBIOS is new from the ground up. It's designed to perform just enough hardware initialization to start the system, and then it hands off to the next step in the boot process. LinuxBIOS calls this executing a payload, which is any executable capable of starting a kernel. It's not limited to booting Linux, but can boot a number of operating systems including Windows 2000. It can also start memtest, enable netbooting, Open Firmware, or anything you care to code it to boot. What is the BIOS? "BIOS" is a shortcut name for a complex of programs and chips, including three programs stored on a Flash ROM chip. BIOS, Setup and Power-on Self-test (POST), plus Setup variables are stored on a CMOS (Complementary metal-oxide-semiconductor) chip. This is volatile memory; it loses its contents when the power goes off. So it needs a battery to maintain settings when the power is off, usually a soldered barrel battery, a lithium battery, or a coin cell battery. Batteries last several years; you can tell when one is getting weak because your settings, such as the time and date, are not retained between power cycles. Motherboards with barrel batteries usually have connectors for a replacement battery, so you don't have to solder in a new battery. The code in these programs is called firmware or microcode (the term favored by IBM). They mean the same thing. The common phrase for updating or replacing this code is flashing the firmware. LinuxBIOS adds an excellent twist to this, which we'll get to in a moment. LinuxBIOS is written in C rather than assembly language, which makes it easier to write and debug. There are no licensing fees, and it occupies a smaller footprint than proprietary BIOSes. A very large advantage, at least to me, is it foils Trusted Computing. Whoever controls the BIOS controls your computer, and you can call me a crazy hairy old anarchist paranoid hippie all you want—I still won't cede control of my computer to a consortium of ethics-free globalcorps who view me as a fleecy animal. Bugs in LinuxBIOS get fixed. The development toolchain is relatively inexpensive and accessible. Open code means it's auditable and has nowhere for home-phoning nasties to hide. It's also completely customizable, so you don't have to depend on non-responsive vendors for extra features. 
Very fast booting—the project page claims 3 seconds from power-on to console. You get real, live, useful debugging output to the serial console. It is maintained by a community of developers and users who are more interested in making something inventive, flexible and excellent than locking in users and vendors. You can update LinuxBIOS over the network. (This is the cool thing I referred to earlier.) Imagine the traditional way of changing settings in the PC BIOS: one system at a time, monitor and keyboard required. Now imagine logging in remotely to any machine (workstation, server, cluster node) and making changes with a few keystrokes from the comfort of your personal lair. You don't need a Windows PC or floppy disk to flash the LinuxBIOS. Nobody was stopping the proprietary vendors from implementing new and excellent features, but that's the way it always seems to be. Who Uses It? LinuxBIOS booting an embedded Linux kernel was the original BIOS for the One Laptop Per Child (OLPC) project. In 2006 the Linux kernel was replaced with Open Firmware. A number of commercial products are already using LinuxBIOS; visit the Vendors & Products section for a partial list. In the beginning phases of the LinuxBIOS project it was a straight uphill job with no support from hardware vendors, and with the legal hassles perhaps outweighing the technical challenges; because even though reverse-engineering is legal, it only takes one small herd of corporate lawyers to ruin your life. You'd think that hardware vendors would be interested in expanding their customer base painlessly, but no, they weren't, as is so often the case in high tech. But finally, after seven years of hard work and persistence, a number of high-profile names have come aboard: Google, AMD, Acer, VIA, Newisys, and several others. Trying It Yourself This is very much a work in progress, and a chance to exercise some serious geekery. There don't appear to be any desktop motherboards that ship with LinuxBIOS as an option. The Vendors & Products section lists a number of embedded boards that use it. So testing it out yourself safely on a PC will require a few pieces of specialty hardware and a lot of reading before you start. Don't use a motherboard you can't afford to write off. Use a supported motherboard with a socketed, or removable, BIOS chip; this is always recoverable if you mess it up. Boards with soldered BIOS ROMs will be unusable with no cost-effective way to fix them if you make a mistake. Be absolutely paranoid about static electricity and follow all the precautions. If your motherboard is not listed then check the list of supported chipsets, because the same few chipsets appear on all motherboards. (Portland, Oregon residents, visit FreeGeek for all the inexpensive second-hand hardware you could ever use.) Linux on embedded devices is a fast-growing field, so this could be your introduction to a new and fascinating profession, and I think it is the next frontier of open source. Consider documenting your adventures and sharing them on the LinuxBIOS Wiki or mailing list. And maybe let your favorite motherboard vendor know that you want them to offer LinuxBIOS as an option.
<urn:uuid:7313a8c2-510c-4cce-9938-d417571589ee>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3706881/LinuxBIOS--Brings-Clones-One-Step-Closer-to-Freedom.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00552-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933846
1,465
2.65625
3
[Photo: Organized mass feeding stations, such as this one during Hurricane Katrina, were too few along the Gulf Coast. Photo courtesy of Win Henderson/FEMA.] When Hurricane Katrina tore through the United States’ Gulf Coast in August 2005, the people of New Orleans experienced devastation for which few were prepared. Mayor Ray Nagin ordered an evacuation on Aug. 28, but when the storm made landfall in Louisiana two days later, those who remained had to deal with chaos. Within days, 80 percent of New Orleans was underwater and thousands of people sought shelter in the Louisiana Superdome. After Katrina passed, the Louisiana Department of Health and Hospitals estimated that 1,464 lives were lost. But the tragedy didn’t lie just with the storm — it was also the failure of emergency management forces to adequately feed the survivors. According to a U.S. House of Representatives report, most shelters and hospitals lacked adequate food or potable water for days after the hurricane’s landfall. The mayor called the Superdome a “refuge of last resort,” not intended to house and provide food and water for thousands of people over several days. Other evacuation points, like the Ernest N. Morial Convention Center, lacked food or water. A September 2005 USA Today editorial claimed that “every level of government that was supposed to prepare for the storm and its aftermath failed miserably.” And the disasters kept coming. CNN reported that in 2008 — the year of hurricanes Ike and Gustav — there was a major hurricane every month from July to November in the North Atlantic. “In 2008, when Ike and Gustav hit Louisiana and Texas, there were multiple problems in the delivery of feeding,” said Michael Whitehead, the state mass care officer for the Florida Department of Business and Professional Regulation. “The food was getting to the people, but the process was very ugly, and there was a lot of unnecessary pain and suffering by the emergency managers.” So he and some colleagues got to thinking — what if there were a way for disaster responders to coordinate feeding efforts whenever crises occur that are too big for one organization to handle? They convened following the 2008 hurricane season and after a lengthy brainstorming process, composed the Multi-Agency Feeding Plan Template, a document designed to make mass feeding across jurisdictions easier. “Past disasters like Hurricane Ike and earlier mass feeding efforts have taught us that a comprehensive plan that includes our federal, state and local partners, including the private sector, is vital to making sure that people are fed during and in the aftermath of a disaster,” said Peggy Mott, a specialist in mass care at FEMA. Mott was one of several emergency management professionals who worked to get the correct words down on paper. She said a work group started with five volunteer organizations that met for a daylong strategy session, followed by biweekly webinars. This expanded to 50 participants from the private sector and all levels of government. The feeding plan template has undergone multiple iterations, and a recent version was released in spring 2010. But some people came to the planning table with feeding mishaps not related to hurricanes. Kevin Smith, state disaster services director of the Salvation Army, recalled problems people had with the 2008 Iowa flood. “For 10 days, they expected around 100,000 or more people to be without resources for food because of the flood taking out all of Cedar Rapids at the time,” he said. 
“No one had enough resources to deal with that quantity of people for that sustained amount of time, so we started trying to pull pieces from everywhere to try to pull it together.” FEMA facilitated this collaboration, which Whitehead was a part of along with groups including the Salvation Army, Southern Baptist Disaster Relief and the American Red Cross. The template is a 50-plus-page, how-to guide instructing regional emergency management forces on how to work with the federal government to feed a public that’s in chaos. It’s customizable, so any group can adapt it to their need and region. An earthquake in California, for example, might involve different feeding players than a hurricane on the Gulf Coast or a tornado in Kansas. “I think it’s absolutely significant, especially in those that are large municipalities when you think of Atlanta or New York,” Smith said. “Anyone who has an emergency management function within a municipality should consider that plan.” The template is customizable and ready to go for any city, county or state group that’s going to be in the field helping, since most feeding efforts will invariably involve more than one group with multiple, overlapping areas to service. “It’s a template for each agency to create a feeding plan,” Whitehead said. He and co-workers in Florida took the national document, adapted it locally and tested it during a June 2009 hurricane exercise. Whitehead said Florida is encouraging other states to use its template to develop feeding plans. Once the feeding plan has been adapted for a specific region, the local feeding plan should call for the creation of a Feeding Task Force to coordinate the organizations that will supply food, water and action plans. The meals will come from a variety of forces, including contracts with commercial facilities, mobile kitchens, mobile delivery vehicles, churches, community organizations and local businesses. Ideally the players in the task force and supporting organizations will have hammered out the plan details and established these relationships before a disaster occurs. The template is meant as a proactive guide, not a reactive one. “I hope it will spur discussions at all levels, so that before a disaster happens, we have a good working relationship to make sure that the transitions and the delivery of necessary feeding to the affected communities happens seamlessly,” said Scott Meyer, a mass care and feeding senior associate in the Disaster Services division of the American Red Cross. He said the template’s goal is to keep things short and sweet — get everyone on the same page and working together smoothly. “That’s the point of it,” he said. “We want to make sure that those who are involved with disaster feeding on any level are at the table and are working in concert with one another so that the people who are affected are getting what they need.” The template also specifies feeding phases that organizations need to identify — immediate, sustained and long term — to determine how the process will work. Whitehead thinks officials in Florida would have a good handle on things. “Step No. 1 is, we’re going to define how big this disaster is,” he said. “Is this a 50,000-meal-a-day disaster, a 200,000-meal-a-day disaster or a 500,000-meal-a-day disaster?” Whitehead and company are spreading the word about the template to other emergency management forces by giving presentations about it at conferences, and Whitehead introduced the multiagency feeding plan concept during training he’s conducted. 
He and others behind the template’s creation also have developed a Feeding Task Force document to help adopters create task forces to help implement the steps outlined in the Multi-Agency Feeding Plan Template. “We expect that the documents will be updated as needed,” Mott said. “New lessons learned will be incorporated as they arise to ensure that we are always meeting the needs of disaster survivors and communities in the event of an emergency.” They expect the documents to benefit emergency management in the future, but it’s too soon to analyze a laundry list of disasters that have been handled by those who’ve adopted the teachings. Still, Whitehead and Smith said people have had the chance to modify it based on lessons learned from Hurricane Gustav, past floods and training like the hurricane exercise Whitehead spoke of in Florida. Planning will come in handy with organizations that are new to the mass-feeding arena. “Challenges happen when other agencies that may not have been in disasters before step in and want to get involved,” Meyer said. “And it’s not that we don’t want them involved. It’s that they may not understand the mechanisms and intricacies of doing a disaster operation, and the demands that a disaster operation places on an entity to do sheltering, feeding or any aspect.” And even those organizations with disaster experience don’t always see eye to eye. “People aren’t always talking to each other, and different agencies have different expectations of what other agencies are supposed to be doing,” Whitehead said. “By doing the coordination required to develop the plan among all the stakeholders, you establish that communication you need in a disaster.” Download the Multi-Agency Feeding Plan Template at www.nvoad.org/index.php/rl/mass-care.html.
<urn:uuid:6a500d0c-37cb-4a90-8e65-581b0b3810a1>
CC-MAIN-2017-04
http://www.govtech.com/em/disaster/Emergency-Managers-Mass-Feeding-Disasters.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96244
1,857
3.046875
3
Most people never take the time to read an End User Licensing Agreement (EULA) when installing or updating their software. For example, here is a link to the Microsoft Vista EULA that you probably skipped right over, and a great article on how the EULA prevents people from using their purchased Windows operating system the way they wish. In fact, would you buy a car with a list of rules that tell you when and how you can use it? In comparison, here is the most common open source software license, the GNU General Public License (GPL). The difference between the two legal documents is easy to see: one tells you what you cannot do and the other tells you the simple rules to follow when adding to or distributing the product. The GPL ensures that any developer who works on a piece of software is given copyright to the portion they worked on and gives users and other developers the right to copy, distribute, and/or modify it. So, if I find a piece of open source software that I find useful, I can legally send copies of that program to all my friends and colleagues, who can use it and pass it along as well. Just try this with Microsoft Vista, whose EULA places strict limits on the number of installations allowed. Another popular open source license is the Apache license from the Apache Software Foundation. This license preserves the copyright of the developers but allows anyone to take the source code and add their own code, either open source or proprietary, and distribute the product. Thus, I can take a piece of Apache-licensed software and package it, change the licensing model and sell it without needing permission or paying the copyright owners of the software. This is different from the GPL, which does not allow me to change the license or add proprietary software to the solution. As you can see, there is a large gap between open source software licenses and proprietary software licenses in terms of restrictions on how you can use and distribute the software. Even within open source licenses, there is plenty of variety to meet the needs of the various development and distribution models. Of course, I kept this blog posting simple; lawyers are involved in this whole process, and we all know how they confuse everything for job security, so I will let you explore the variations on these licenses in more detail. For a complete list of open source licenses, visit the Open Source Initiative site at http://www.opensource.org/licenses and to see a chart on license usage go to http://johnhaller.com/jh/useful_stuff/open_source_license_popularity/.
<urn:uuid:df71a14e-4593-4217-a32f-4c177118b791>
CC-MAIN-2017-04
http://www.networkworld.com/article/2231458/opensource-subnet/what-s-in-a-license-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92407
525
2.546875
3
A computer worm is a type of malware that is capable of propagating or replicating itself from one system to another. It can do this in a number of ways. Unlike viruses, worms don't need a host file to latch onto. After arriving and executing on a target system, a worm can perform a number of malicious tasks, such as dropping other malware, copying itself onto devices physically attached to the affected system, deleting files, and consuming bandwidth. A Trojan is malware that uses simple social engineering tricks in order to tempt users into running it. It may pretend to be a legitimate piece of software (spoofing products by using the same icons and names). It may also come bundled with a cracked application or even within freeware. Once it is installed on the computer, it performs malicious actions such as backdooring the computer, spying on its user, and doing various types of damage. Trojans are not likely to spread automatically. They usually stay on the infected host only. Downloaders and droppers are helper programs for various types of malware such as Trojans and rootkits. Usually they are implemented as scripts (VB, batch) or small applications. They don't carry out any malicious activity themselves, but just open a way for attack by downloading/decompressing and installing the core malicious modules. To avoid detection, a dropper may also create noise around the malicious module by downloading/decompressing some harmless files. Very often, they auto-delete themselves after the goal has been achieved. The term "rootkit" comes from "root kit," a package giving the highest privileges in the system. It is used to describe software that allows for the stealthy presence of unauthorized functionality in the system. Rootkits modify and intercept typical modules of the environment (the OS or, going even deeper, the boot process in the case of bootkits). Rootkits are used when attackers need to backdoor a system and preserve unnoticed access for as long as possible. In addition, they may record system activity and alter typical behavior in any way desired by the attacker. Depending on the layer of activity, rootkits can be divided into the following types: Usermode (Ring 3): the most common and the easiest to implement, it uses relatively simple techniques, such as IAT and inline hooks, to alter the behavior of called functions. Kernelmode (Ring 0): the "real" rootkits start from this layer. They live in kernel space, altering the behavior of kernel-mode functions. A specific variant of kernelmode rootkit that attacks the bootloader is called a bootkit. Hypervisor (Ring -1): runs at the lowest level, the hypervisor, which is essentially firmware. The kernel of a system infected by this type of rootkit is not aware that it is interacting not with real hardware but with an environment altered by the rootkit. The rule is that a rootkit running in a lower layer cannot be detected by detection software running in any of the layers above it. Remote Access Trojans are programs that provide the capability to allow covert surveillance or the ability to gain unauthorized access to a victim PC. Remote Access Trojans often mimic the behavior of keylogger applications by allowing the automated collection of keystrokes, usernames, passwords, screenshots, browser history, emails, chat logs, etc.
Remote Access Trojans differ from keyloggers in that they provide the capability for an attacker to gain unauthorized remote access to the victim machine via specially configured communication protocols which are set up upon initial infection of the victim computer. This backdoor into the victim machine can allow an attacker unfettered access, including the ability to monitor user behavior, change computer settings, browse and copy files, utilize the bandwidth (Internet connection) for possible criminal activity, access connected systems, and more.
<urn:uuid:810f9784-873f-4bb2-9fc9-8c0ce74c887c>
CC-MAIN-2017-04
https://blog.malwarebytes.com/threat/malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00261-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917734
793
3.375
3
New NASA program furthers space exploration Less than a month after the Space Shuttle made its final landing, NASA announced that operations of the International Space Station as well as human exploration will be getting some extra focus. The new program named the Human Exploration and Operations Mission Directorate will specifically focus on areas beyond low-Earth orbit and will manage the Space Station commercial crew and cargo developmental programs, building a spacecraft made to travel beyond low-Earth orbit, developing a heavy-lift rocket and more, a NASA news release said. "America is opening a bold new chapter in human space exploration," NASA Administrator Charles Bolden said. Connect with the FCW staff on Twitter @FCWnow.
<urn:uuid:3d35592a-7c4c-44f5-b2a2-4d8eda79ccda>
CC-MAIN-2017-04
https://fcw.com/articles/2011/08/15/agg-new-nasa-program-furthers-space-exploration.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00261-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919889
139
2.734375
3
LAS VEGAS -- A researcher at Black Hat has revealed a vulnerability in the most common corporate router protocol that puts networks using it at risk of attacks that compromise data streams, falsify network topology and create crippling router loops. The problem is serious not only because of the damage an attacker might do but also because the protocol, OSPF, is used so pervasively that many networks are vulnerable. Open Shortest Path First (OSPF) is the most popular routing protocol used within the roughly 35,000 autonomous systems into which the Internet is divided. Typically, large corporations, universities and ISPs run autonomous systems. The only remedies are using another protocol such as RIP or IS-IS, or changing OSPF to close the vulnerability, says Gabi Nakibly, a researcher at Israel's Electronic Warfare Research and Simulation Center, who discovered the problem. Nakibly says he has successfully carried out an exploit against the vulnerability on a Cisco 7200 router running software version IOS 15.0(1)M, but that the exploit would be equally effective against any router that is compliant with the OSPF specification. He says he chose a Cisco router to underscore the extent of the problem, since Cisco dominates the router market. The flaw lies in the OSPF protocol itself, which allows uncompromised routers to be tricked into propagating false router-table updates known as link state advertisements, or LSAs. The attack is such that the false tables persist over time. The false tables can be crafted to create router loops, send certain traffic to particular destinations or snarl a network by making victim routers send traffic along routes that don't exist in the actual network topology, he says. The attack requires that one router on the network is compromised. “[T]he true novelty of the attacks are their ability to falsify the routing advertisements of other routers which are not controlled by the attacker while still not triggering the fight-back mechanism by those routers,” Nakibly says in an email. He and his team initiated the attack from a phantom router connected to their test network – in this case a laptop. The phantom router sends to the victim router a spoofed LSA that appears to be the last one the victim router sent out. The spoofed LSA is accepted as legitimate because it has been crafted to have the appropriate LSA sequence number, checksum and age – the three things OSPF checks to determine the legitimacy of LSAs. At the same time the phantom sends to a second router on the network an LSA that looks like it came from the victim router. The LSA is tagged with the sequence number that will be assigned to the next LSA that the victim router sends out. Meanwhile, the victim router rejects the spoofed LSA from the phantom router and sends out a fight-back LSA, which is a copy of its last legitimate LSA. When the fight-back LSA reaches the second router, it appears identical to the disguised LSA the second router just received from the phantom router. This is because the fight-back LSA and the disguised LSA have identical sequence numbers, checksums and age. The second router rejects the fight-back LSA (which contains legitimate route tables) and refloods the network with the disguised LSA (which contains attacker-crafted tables) it received earlier from the phantom router. The net result is that the second router propagates a false LSA that other routers accept as genuine.
Because OSPF sends out LSAs every half hour, the attack must be relaunched every half hour so the false tables persist. To initiate the attack the phantom router introduces itself as being adjacent to the victim router, which must be the designated router on the network. Designated routers store complete topology tables for the network, and they multicast updates to the other routers. Nakibly introduced a second attack that is not as effective, but similarly takes advantage of a vulnerability in the OSPF specification.
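To make the fight-back race described above concrete, here is a small, hypothetical Python model of the LSA acceptance logic. It is only a sketch, not an OSPF implementation: real routers follow the detailed instance-comparison and flooding rules of the OSPF specification, while this model keeps just enough of that logic (sequence number, checksum and age matching) to show why the second router drops the genuine fight-back as a duplicate. All class names, field names and values are invented for illustration.

    from dataclasses import dataclass

    MAX_AGE_DIFF = 900  # seconds; copies whose ages differ by less are treated as equal

    @dataclass(frozen=True)
    class LSA:
        origin: str    # router ID the LSA claims to describe
        seq: int       # LS sequence number
        checksum: int
        age: int       # seconds
        payload: str   # stand-in for the advertised link/route data

    def same_instance(a, b):
        # Two copies count as the same instance if sequence number and checksum
        # match and their ages are close enough.
        return (a.seq == b.seq and a.checksum == b.checksum
                and abs(a.age - b.age) < MAX_AGE_DIFF)

    class Router:
        def __init__(self, rid):
            self.rid = rid
            self.lsdb = {}  # origin router ID -> currently installed LSA

        def receive(self, lsa):
            have = self.lsdb.get(lsa.origin)
            if have is not None and same_instance(have, lsa):
                return "duplicate: discarded"        # the genuine fight-back dies here
            if have is None or lsa.seq > have.seq:   # the newer instance wins
                self.lsdb[lsa.origin] = lsa
                return "installed and flooded"
            return "older: discarded"

    # The disguised LSA arrives first, already carrying the sequence number the victim
    # will use for its fight-back; the real fight-back then looks like a duplicate.
    disguised = LSA("victim", 101, 0xAB, 5, "attacker-crafted routes")
    fight_back = LSA("victim", 101, 0xAB, 5, "legitimate routes")

    router_b = Router("B")
    print(router_b.receive(disguised))    # installed and flooded
    print(router_b.receive(fight_back))   # duplicate: discarded

The only point of the sketch is the ordering: because the attacker can predict the victim's next sequence number, the crafted advertisement is installed first and the legitimate correction is silently dropped until the next periodic refresh.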
<urn:uuid:3590979b-5378-4510-be88-0278a8745d0b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2179879/security/black-hat--routers-using-ospf-open-to-attacks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00169-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933494
822
2.59375
3
Scientists' plan for real-time virtual Earth would simulate...everything Living Earth Simulator would draw on databases, supercomputing to create a global model of human activity - By Henry Kenyon - Jan 11, 2011 Modeling and simulation has become a vital scientific tool capable of predicting weather patterns or charting chemical reactions. But a new effort launched by an international team of researchers plans to simulate everything that happens on the planet. The Living Earth Simulator is part of a European program designed to collect, aggregate and fuse data from a variety of sources, such as NASA’s Planetary Skin project, into one all-encompassing model. The simulator is part of the Future ICT Knowledge Accelerator program at the Swiss Federal Institute of Technology (ETH) in Zurich. The Future ICT effort seeks to team hundreds of the top scientists in Europe in a 10-year, 1 billion euro project to explore social life on Earth and everything it relates to. The effort is funded by the European Commission with the goal of becoming operational by 2022. According to the BBC, the goal of the Living Earth Simulator is to further scientific understanding about events on the planet by looking at the human actions that affect societies and the environmental forces at work. Dr. Dirk Helbing, chairman of the Future ICT Project, told the BBC that many of today’s global problems, such as wars, social and economic instability and the spread of disease, are related to human behavior. But he noted that there is a fundamental lack of understanding about how societies and the economy work, adding that we know more about the early universe than our own planet. Borrowing a term from physics, Helbing described the modeling projects as a “knowledge accelerator” that will bring different branches of science together to produce data much in the same way (metaphorically speaking) that a particle accelerator smashes atoms together to unlock scientific secrets. One of the key goals of the simulator is to model the behavior of entire economies or ecosystems in real time to detect and, hopefully, head off any crises. MIT Technology Review compared the effort to a kind of Google Earth for society. But instead of using the map function to zoom into a home, a similar function could be applied to monetary transactions, health trends, global tourism patterns and carbon dioxide emissions. Helbing described this process as “reality mining.” Besides modeling current events, the simulator will be used to predict potential future events such as financial crises and pandemics. With access to vast amounts of data, the model is also intended as a tool for finding solutions to such problems. According to Technology Review, Helbing also wants to establish situation rooms that will allow world political and business leaders to view and manage crises as they occur. Future ICT’s Living Earth Simulator will pull data from some of the world’s largest supercomputers. Some of the machines that will supply computing and modeling muscle include ETH Zurich’s Brutus supercomputing cluster and supercomputers at the Los Alamos National Laboratory and the Brookings Institution in the United States. Science Daily reported that European researchers involved with the simulator are already running separate simulation programs, studying the travel activities of all of Switzerland’s population or the origins of international conflicts. All of these massive databases and powerful machines could be tapped to support the Living Earth Simulator. 
Making sense of all of the data will be a challenge. Helbing told Daily Tech that his team will feed massive amounts of data to various global supercomputer clusters to cover all of the daily activity on the planet. He said that much of this data already exists and that his team is currently using more than 70 online information sources, such as Google Maps and Wikipedia. After the data is collected and integrated, the research team’s computer and social scientists and engineers will build an architecture to convert the data into a real-time model of planetary activity. Although supercomputers will help make sense of some of the information, Helbing told Daily Tech that researchers will use Semantic Web technology to encode data, which allows the simulator to better understand what it is reading. Some of the technology necessary for the project, especially the analysis computing elements, will not become available in the coming decade, he said. Although the simulator will follow human behavior, all personal data will be scrubbed out of the model. An ethics committee and targeted research will be put in place to ensure that personal data is not misused. The goal of the process is to identify statistical interconnections when many people interact, but not to track or predict individual behavior, Science Daily reported.
<urn:uuid:862de0ac-0fd4-4e2c-84d4-64f3efbecdbd>
CC-MAIN-2017-04
https://gcn.com/articles/2011/01/11/scientists-plan-a-real-time-virtual-earth-simulation.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00563-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928606
940
3.515625
4
Perform an IP trace with mtr, an advanced traceroute tool that uses multiple ICMP pings to test the connectivity to each hop across the Internet. What is Traceroute? Traceroute is a network testing term that is used to examine the hops that communication will follow across an IP network. It also is commonly referred to by the name of the tools used to perform the trace; typically traceroute on Linux-based systems and tracert on Windows operating systems. There are also variations on these, such as tcptraceroute. The tools all perform a similar function but have different capabilities or methods for performing the trace. Why would I run a Traceroute? A popular reason one might perform a traceroute is for interest's sake. It is a good way to see the path your network connection is taking as it traverses the globe. However, the most common reason is its use by networking and computer professionals to diagnose problems in a network path, as the traceroute can pinpoint routers with high response times, possibly indicating network congestion or other problems. What techniques are used to measure the IP Path? Probe packets are sent across the network with a TTL (time to live) value that starts at 1 and is incremented for each successive probe. Each hop (router) along the path decrements the TTL, and the router at which the TTL expires replies with an ICMP Time Exceeded message, revealing itself and allowing its response time to be measured. The packets are usually ICMP or UDP packets; another version uses the TCP protocol. One advantage of using the different protocols is that on some networks a router or firewall may block the packets, thereby giving you an incomplete path across the network. By using different protocols you may be able to get past some of the systems that are blocking the other types of packets. TCP Traceroute is popular as it can be used to trace a path to a TCP service that has to have an open port for that service to be operational, for example a web server on port 80 or a mail server on port 25. GeoIP Location Finding of a network path We have an online traceroute tool hosted at traceroute-online.com where we include a map of the IP trace as it traverses the world. This is done by looking up each responding router's IP address against GeoIP-based services. These services are not always reliable; therefore always take the mapping data with a grain of salt and beware of strange results that bounce back and forth. Host Name / IP Address Responding hops are resolved in DNS; if they have a PTR record, the host name is included in the table, while those with no record will show the IP address. Packet Loss and Response Times As is pretty obvious, these are the results from the traceroute. The packet loss can show if packets are being dropped either along the path or at the router, and the response time shows the latency. API access to MTR Traceroute All of our IP Tools have an easy-to-use API that allows access to remote Traceroute by requesting a URL and receiving the results back in a simple text-based output. Access the API using curl, a browser or even netcat if that floats your boat. 🙂 Access to the MTR Traceroute API As with all the API calls, access is completely free; however, you are limited to a total of 50 queries per day from a single IP address.
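As a rough illustration of the TTL mechanism described above (not of mtr itself), the following Python sketch sends UDP probes with increasing TTL values and listens for the ICMP Time Exceeded replies that identify each hop. It is a simplified example built on stated assumptions: it requires root privileges for the raw ICMP socket, uses an arbitrary destination port and timeout, sends only one probe per hop, and omits the retries, loss statistics and DNS lookups that a real tool such as mtr performs.

    import socket
    import time

    def traceroute(dest_name, max_hops=30, timeout=2.0, port=33434):
        dest_addr = socket.gethostbyname(dest_name)
        for ttl in range(1, max_hops + 1):
            # Raw ICMP socket to catch the Time Exceeded reply (requires root).
            recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                      socket.IPPROTO_ICMP)
            recv_sock.settimeout(timeout)
            # UDP socket for the outgoing probe, with the TTL set for this hop.
            send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            start = time.time()
            send_sock.sendto(b"", (dest_addr, port))
            try:
                _, addr = recv_sock.recvfrom(512)   # reply comes from the hop's address
                elapsed = (time.time() - start) * 1000.0
                hop = addr[0]
                print(f"{ttl:2d}  {hop:15s}  {elapsed:6.1f} ms")
            except socket.timeout:
                hop = None
                print(f"{ttl:2d}  *")
            finally:
                send_sock.close()
                recv_sock.close()
            if hop == dest_addr:
                break  # the destination itself answered, so the trace is complete

    if __name__ == "__main__":
        traceroute("example.com")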
<urn:uuid:95555aa9-a5cd-4b4a-bb20-9bf9660c8181>
CC-MAIN-2017-04
https://hackertarget.com/ip-trace/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00105-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948179
683
3.265625
3
Physicists at UC Santa Barbara (UCSB) have taken a huge leap forward towards what they refer to as a “fully functional quantum computer” – long-considered the holy grail of both physics and computing. The five cross-shaped elements are the Xmon variant, so named by the team, of the transmon qubit placed in a linear array. A team from the esteemed lab of John Martinis, UCSB professor of physics, have demonstrated a new level of reliability for superconducting qubits, paving the way for large-scale, fault-tolerant quantum circuits. The details of the research appear in this week’s issue of the journal Nature. Quantum computers promise unimaginable speedups compared with today’s fastest number-crunchers, but at this stage, the technology suffers from reliability issues due to the fragile nature of quantum states. Thanks to the strange laws of quantum mechanics and a phenomenon called superposition, the qubit (“quantum bit”) can exist in multiple states at once. Instead of being relegated to a one or a zero, like the classical bit, the qubit can represent a one and a zero and all points in between. A computer that is comprised of qubits is thus inherently parallel and theoretically capable of conducting multiple computations simultaneously. The trouble with qubits, though, is their instability – they tend to “forget” their state very quickly. Quantum error correction, which distributes a logical state among many qubits by means of quantum entanglement, goes a long way to protecting the state, but until now fidelity targets were still shy of the 99 percent goal. This week in the journal Nature, the UCSB physicists report that they’ve created a small quantum computing array that performs with enough accuracy to make error correction viable. “Quantum hardware is very, very unreliable compared to classical hardware,” notes Austin Fowler, a staff scientist in the physics department, whose theoretical work prompted the experiments. “Even the best state-of-the-art hardware is unreliable. Our paper shows that for the first time reliability has been reached.” The experimental system, comprised of five superconducting qubits arranged in a linear array, is the first of its kind to cross the 99 percent accuracy threshold, setting the stage for even larger quantum arrays. The team achieved an average fidelity of 99.92 percent for a single-qubit logic gate and 99.4 percent for a two-qubit logic gate. Error correction was implemented with a surface code approach, which is based on nearest-neighbour coupling and rapidly cycled entangling gates. “Motivated by theoretical work, we started really thinking seriously about what we had to do to move forward,” says John Martinis, a professor in UCSB’s Department of Physics. “It took us a while to figure out how simple it was, and simple, in the end, was really the best.” The UCSB team’s superconducting multi-qubit processor is a representative architecture for a “universal quantum computer,” one that can handle any algorithm given to it. This stands in contrast to the quantum annealing machines made by the Canadian company D-Wave, which are only good at solving a specific set of tasks, called optimization problems. Having passed this crucial threshold, the team will continue to work on reducing errors while scaling the system. Will a practical quantum computer be far off? “If you want to build a quantum computer, you need a two-dimensional array of such qubits, and the error rate should be below 1 percent,” Fowler explains. 
“If we can get one order of magnitude lower – in the area of 10^-3 or 1 in 1,000 for all our gates – our qubits could become commercially viable. But there are more issues that need to be solved. There are more frequencies to worry about and it’s certainly true that it’s more complex. However, the physics is no different.”
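For readers who want an intuition for why pushing gate errors below a threshold matters, here is a toy Python calculation. It deliberately uses a simple repetition code with majority voting rather than the surface code discussed in the article, so the numbers are illustrative only: the point is that once the physical error rate is small, adding more redundant qubits drives the logical error rate down very quickly.

    from math import comb

    def logical_error_rate(p, n):
        # Probability that a majority of n independent copies are corrupted,
        # given a per-copy (physical) error probability p.
        return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
                   for k in range(n // 2 + 1, n + 1))

    # 0.006 is roughly the two-qubit gate error quoted above; 0.05 is far worse hardware.
    for p in (0.05, 0.01, 0.006):
        rates = ", ".join(f"n={n}: {logical_error_rate(p, n):.2e}" for n in (3, 5, 9))
        print(f"physical error {p:.3f} -> {rates}")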
<urn:uuid:449ad61b-7e49-4633-9192-f8a1e14603d3>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/04/25/quantum-processor-hits-reliability-target/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00407-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95088
850
3.359375
3
A technology that enables a single optical fiber to carry multiple data channels (or wavelengths). Commercial DWDM systems can have as many as 160 separate channels. Packet optical transport (POT) networks are optical super-highways that use the most advanced technological enablers, from reconfigurable optical add/drop multiplexers (ROADMs) to amplifiers—and from very high bit rates to dense wavelength-division multiplexers (WDMs). Packet traffic on submarine networks is steadily on the rise and submarine network operators must meet exacting services while optimizing performance. Link characterization is essential for verifying that fiber infrastructure will support new equipment being placed into service when upgrading networks from 2.5 Gbit/s to 10 Gbit/s and even to 40 Gbit/s. Although these powerful systems can take network performance to the next level, they require proper installation, tuning and maintenance. Network service providers need to perform several tests to properly characterize their links when planning to increase the capacity of their network.
<urn:uuid:42c5de00-9992-4bfe-8c0e-b53fc078e6e4>
CC-MAIN-2017-04
http://exfo.com/glossary/dense-wavelength-division-multiplexing-dwdm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00315-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928513
235
2.640625
3
Cha E.S.,Emory University | Kim K.H.,University of Pittsburgh | Lerner H.M.,Emory College | Dawkins C.R.,Emory University | And 3 more authors. American Journal of Health Behavior | Year: 2014 Objectives: To examine relationships among health literacy, self-efficacy, food label use, and dietary quality in young adults aged 18-29. Methods: Health literacy, self-efficacy, food label use, and dietary quality were assessed. Participants were categorized into low, medium and high health literacy groups based on Newest Vital Sign score. Results: Self-efficacy and health literacy were predictors of food label use, which positively predicted dietary quality. The low health literacy group had significantly lower use of food labels than the high health literacy group. However, there was no significant difference between medium and high health literacy groups. Conclusion: Strategies to enhance health literacy, self-efficacy and food label use should be developed to improve dietary quality and health outcomes. Source News Article | April 19, 2016 L'Hernault, chair of Emory College's Department of Biology, researched sperm proteins (not male hormones) in nematode worms. He and fellow researchers were able to establish a connection between fertilization in mammals, including humans, and nematodes. It was a highly unexpected outcome, given the two animal groups last shared a common ancestor about a billion years ago. The conclusion, which some think could eventually lead to the equivalent of "the pill" for men, provides new insights on the basic mechanics of sperm and egg fertilization. It was recently reported in the journal Current Biology. "At the end of the day, fertilization in humans seems to share some fundamental features with fertilization in worms," L'Hernault said. "Specifically, a similar protein is found on the sperm surface in humans and worms and, if a drug could be discovered that interfered with its function, we might be able to prevent sperm from fertilizing the egg. "The worm may offer an inexpensive way to find such a drug," he added. "Women have borne more than their fair share in that category of contraception, so the idea is to look at what might be possible for men." In mammals, such as mice and humans, this protein is called Izumo, named for a shrine in Japan where newly married couples visit seeking luck in having children. The Izumo equivalent in worms, named SPE-45, allows the sperm to be recognized by the egg, so that fertilization can occur. Without it, the sperm can move and do other processes normally, but they cannot fertilize the egg. Worms with a mutation affecting SPE-45 are sterile. If you do "gene therapy" by expressing the worm SPE-45 protein in mutant worms, fertility is restored. The challenge was to show that mammalian Izumo was functionally similar to SPE-45. L'Hernault says that he and his team of researchers worked for seven years, focusing on whether there was something specific that connected the two that allowed for fertilization. Both SPE-45 and Izumo proteins have an Ig region that probably allows the sperm to adhere to the egg. Ig regions are widely found in many proteins of all animals, where they provide "stickiness" to proteins. So, L'Hernault and his team took the Ig region from the mouse Izumo protein and used it to replace the Ig region in the worm SPE-45 protein, making a "hybrid" protein. Surprisingly, this "hybrid" protein can be expressed in a worm SPE-45 mutant and it will partially restore fertility to the worm SPE-45 mutant. 
In contrast, if the Ig domain from a worm skin protein is used to replace the Ig domain of the worm SPE-45 protein, this "hybrid" does not restore fertility. In other words, not any Ig domain, with its associated "stickiness," will allow SPE-45 to fertilize an egg. It must be either the natural worm SPE-45 Ig domain or the Ig domain from a similar mammalian gene. "One useful way to think about Ig domains is that they are all keys and, like real keys that look similar, some specifically open your house, while others only open your car," L'Hernault added. His research shows that the mouse Izumo and worm SPE-45 Ig domains are near-identical "keys." All animals produce sperm that stick to and fertilize eggs from that species, but, generally, sperm from one animal cannot fertilize eggs from another species. That means L'Hernault's work extends well beyond any potential connection to birth control and could provide more understanding on the basic underpinnings of fertility. "Knowing how sperm stick to and fertilize eggs will provide key insights into what has changed and what has remained similar as animals have evolved," L'Hernault concluded. Pace T.W.W.,Emory University | Negi L.T.,Emory College | Sivilli T.I.,Emory Collaborative for Contemplative Studies | Issa M.J.,Emory Collaborative for Contemplative Studies | And 3 more authors. Psychoneuroendocrinology | Year: 2010 Increasing data suggest that meditation impacts stress-related physiological processes relevant to health and disease. For example, our group recently reported that the practice of compassion meditation was associated with reduced innate immune (plasma interleukin [IL]-6) and subjective distress responses to a standardized laboratory psychosocial stressor (Trier Social Stress Test [TSST]). However, because we administered a TSST after, but not prior to, meditation training in our initial study, it remained possible that associations between practice time and TSST outcomes reflected the fact that participants with reduced stress responses prior to training were more able to practice compassion meditation, rather than that meditation practice reduced stress responses. To help resolve this ambiguity, we conducted the current study to evaluate whether innate immune, neuroendocrine and behavioral responses to a TSST conducted prior to compassion meditation training in an independent sample of 32 medically health young adults would predict subsequent amount of meditation practice time during a compassion meditation training protocol identical to the one used in our first study. No associations were found between responses to a TSST administered prior to compassion meditation training and subsequent amount of meditation practice, whether practice time was considered as a continuous variable or whether meditators were divided into high and low practice time groups based on a median split of mean number of practice sessions per week. These findings contrast strikingly with our original study, in which high and low practice time meditators demonstrated marked differences in IL-6 and distress responses to a TSST administered after meditation training. 
In addition to providing the first published data regarding stress responsivity as a potential predictor of subsequent ability/willingness to practice meditation, the current study strengthens findings from our initial work by supporting the conclusion that in individuals who actively engage in practicing the technique, compassion meditation may represent a viable strategy for reducing potentially deleterious physiological and behavioral responses to psychosocial stress. © 2009 Elsevier Ltd. All rights reserved. Source Pace T.W.W.,Emory University | Negi L.T.,Emory College | Dodson-Lavelle B.,Emory College | Ozawa-de Silva B.,Emory College | And 5 more authors. Psychoneuroendocrinology | Year: 2013 Background: Children exposed to early life adversity (ELA) have been shown to have elevated circulating concentrations of inflammatory markers that persist into adulthood. Increased inflammation in individuals with ELA is believed to drive the elevated risk for medical and psychiatric illness in the same individuals. This study sought to determine whether Cognitively Based Compassion Training (CBCT) reduced C-reactive protein (CRP) in adolescents in foster care with high rates of ELA, and to evaluate the relationship between CBCT engagement and changes in CRP given prior evidence from our group for an effect of practice on inflammatory markers. It was hypothesized that increasing engagement would be associated with reduced CRP from baseline to the 6-week assessment. Methods: Seventy-one adolescents in the Georgia foster care system (31 females), aged 13-17, were randomized to either 6 weeks of CBCT or a wait-list condition. State records were used to obtain information about each participant's history of trauma and neglect, as well as reason for placement in foster care. Saliva was collected before and again after 6 weeks of CBCT or the wait-list condition. Participants in the CBCT group completed practice diaries as a means of assessing engagement with the CBCT. Results: No difference between groups was observed in salivary CRP concentrations. Within the CBCT group, practice sessions during the study correlated with reduced CRP from baseline to the 6-week assessment. Conclusions: Engagement with CBCT may positively impact inflammatory measures relevant to health in adolescents at high risk for poor adult functioning as a result of significant ELA, including individuals placed in foster care. Longer term follow-up will be required to evaluate if these changes are maintained and translate into improved health outcomes. © 2012 Elsevier Ltd. Source Tan X.,Sun Yat Sen University | Tan X.,Emory University | Sidell N.,Emory University | Mancini A.,Emory College | And 6 more authors. Reproductive Sciences | Year: 2010 Curcumin, a component of turmeric, has been reported to exhibit potential antitumor activities. This study assessed the effects of a novel synthetic curcumin analog, EF24, on proliferation, apoptosis, and vascular endothelial growth factor (VEGF) regulation in platinum-sensitive (IGROV1) and platinum-resistant (SK-OV-3) human ovarian cancer cells. EF24 time- and dose-dependently suppressed the growth of both cell lines and synergized with cisplatin to induce apoptosis. Although treatment with EF24 had no significant effect on VEGF messenger RNA (mRNA) expression, VEGF protein secretion into conditioned media was dose-dependently reduced with EF24 demonstrating ~8-fold greater potency than curcumin (P <.05).
EF24 significantly inhibited hydrogen peroxide (H2O2)-induced VEGF expression, as did the phenolic antioxidant tert-butylhydroquinone (t-BHQ). EF24 upregulated cellular antioxidant responses as observed by the suppression of reactive oxygen species (ROS) generation and activation of antioxidant response element (ARE)-dependent gene transcription. Given its high potency, EF24 is an excellent lead candidate for further development as an adjuvant therapeutic agent in preclinical models of ovarian cancer. © The Author(s) 2010. Source
<urn:uuid:4d3d1b8a-15f6-433f-8413-9e1bae12603f>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/emory-college-1275610/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00067-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944715
2,244
3.078125
3
These questions are derived from the Self Test Software Practice Test for CompTIA's RFID+ exam. SubObjective: Given a scenario, interpret a site diagram created by an RFID architect describing interrogation zone locations, cable drops, device mounting locations Single Answer, Multiple Choice A shirt manufacturing company plans to deploy an RFID system to automate the product packaging system. Different packaging will be used for shirts of different price ranges. According to the RFID system plan, shirts will be tagged. Based on the category and price information stored in a tag, appropriate packaging will be used. The following logical architecture diagram has been designed for this RFID system: What does the actuator in the logical architecture diagram do? - It automatically turns the interrogator on and off. - It rings an alarm when a tagged item is not read successfully. - It monitors and controls a packaging line. - It decreases the interrogator signal strength. C. It monitors and controls a packaging line. The actuator highlighted in the logical architecture diagram will monitor and control the packaging line. An actuator is a mechanical device that is used to either control or move an object. For example, an actuator may be used to open an access gate when an interrogator successfully reads a tag. A programmable logic controller (PLC) is one of the most versatile actuators that can be used in RFID systems to automate a process. A variety of mechanical tasks, such as monitoring and controlling a product packaging line or applying a predetermined amount of torque to nuts in an automobile production line, can be performed by using a PLC. In this scenario, the interrogator, after reading a tagged item, will instruct the actuator to perform a specific task, such as moving the tagged item to an appropriate packaging line depending upon the category and price of the item. The actuator highlighted in the logical architecture diagram will not automatically turn the interrogator on and off. Sensors are used to automatically start and stop interrogators based on the occurrence of an external event that is detected by the sensors. The actuator highlighted in the logical architecture diagram will not ring an alarm when a tagged item is not read successfully. Annunciators are used to set off audio-visual signals. An annunciator is an electrically controlled signaling device that may be used to generate audio-visual signals in response to events, such as audible alarms and light stacks. The actuator highlighted in the logical architecture diagram will not decrease the interrogator signal strength. Attenuators are used to reduce the power of the RF signal emitted by the interrogator as the signal travels from the interrogator to the antenna through the cable. RFID Sourcebook, Chapter 1: Technology Overview, Section 1.2.5: Sensor, Annunciator, and Actuator, p. 41.
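As a purely hypothetical illustration of the interrogator-to-actuator hand-off described in the explanation above, the Python sketch below shows the kind of dispatch logic a PLC-style actuator might carry out: the interrogator's read event supplies the tag's category and price, and the actuator diverts the shirt to a packaging line. All names, thresholds and the ConsoleActuator stand-in are invented for the example; a real deployment would drive an actual PLC and load its rules from the packaging plan.

    from dataclasses import dataclass

    @dataclass
    class TagData:
        epc: str        # the tag's electronic product code
        category: str
        price: float

    def choose_packaging_line(tag):
        # Illustrative business rules only.
        if tag.price >= 100.0:
            return "premium-line"
        if tag.category == "designer":
            return "boxed-line"
        return "standard-line"

    class ConsoleActuator:
        # Stand-in for a real PLC-driven diverter; it just logs the command.
        def divert(self, epc, line):
            print(f"divert item {epc} -> {line}")

    def on_tag_read(tag, actuator):
        # Called after the interrogator successfully reads a tag; the actuator then
        # moves the item to the selected packaging line.
        actuator.divert(tag.epc, choose_packaging_line(tag))

    on_tag_read(TagData("EPC-EXAMPLE-001", "designer", 59.99), ConsoleActuator())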
<urn:uuid:18577ac0-27d7-4ddd-8ecf-83f3a12b1db7>
CC-MAIN-2017-04
http://certmag.com/installation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00095-ip-10-171-10-70.ec2.internal.warc.gz
en
0.858049
586
2.625
3
/ October 15, 2007 Nuclear power expansion exploded in the United States during the 1970s, heralded as a cheap, unlimited source of clean energy. Dramatic electricity usage predictions during that decade prompted aggressive plans for further expansion. By the mid-1980s, those predictions proved overblown, and mass cancellations followed. A few accidents during the 1970s helped sour America on nuclear power development for more than 20 years. The unthinkable nearly happened in 1979 at Three Mile Island Nuclear Generating Station in Dauphin County, Pa. The plant suffered a partial core meltdown, but the meltdown caused no deaths or injuries. Before that, a fire at the Unit 1 reactor at Browns Ferry Nuclear Power Plant, near Huntsville, Ala., prompted a scare in 1975. Now, however, nuclear power may be poised for a dramatic comeback. Supply and Demand Volatile natural gas prices, updated nuclear plant designs, demand for a carbon-free energy source and massive government incentives may enable the nuclear power industry to rise again in the South. After more than 20 years of inactivity, the Unit 1 Browns Ferry reactor resumed splitting atoms in May. So far, utilities have announced plans to request federal licenses in the next two years to build up to 30 reactors, mostly in the South. Just receiving a license does not commit a utility to actually building a plant, though. Energy giant Southern Co. announced plans to pursue several new licenses, but won't commit to building any plants. "We are in negotiations right now with Westinghouse to determine if nuclear energy is the best option for our customers," said Beth Thomas, spokesperson for Southern Co. "Nuclear is going to have to be cost-competitive with the other base-load sources." A base-load power plant is one that, in theory, is available 24 hours a day, seven days a week, and primarily operates at full capacity. Coal, natural gas and nuclear plants are common forms of base-load electricity sources. "[Getting a license] is a very low-cost way of going through step one," said Jerry Taylor, senior fellow at the Cato Institute. However, "low-cost," in this case, means a $50 million investment, according to Marilyn Kray, president of NuStart Energy Development, a consortium of 10 power companies seeking nuclear plant licenses from the U.S. Nuclear Regulatory Commission (NRC). She said preparing an application for a license takes nearly two years of expensive research, and if printed, the "application" would be roughly 25 volumes of 4-inch binders. Interest in nuclear power is growing because the Energy Information Administration (EIA) projects electricity generation in the United States will increase by 40 percent in the next 25 years. Natural gas - which powered nearly every U.S. electricity plant constructed during the 1990s - has long been the preferred option for electric power generation. But as the third millennium arrived, the United States' natural gas supply plummeted far below the "proven reserve" estimates made during the '90s. Utilities responded with plans to revert to coal-fired plants, infuriating environmentalists. Other opponents contend that power-generation needs can be satisfied by better energy efficiency, more conservation and greater use of renewable energy sources - wind, biomass, geothermal and solar. But those technologies, some say, are too immature to satisfy future energy demands. 
Meanwhile, utilities insist they will need more "base-load capacity," meaning construction of massive new high-performance power plants is unavoidable. Since the American public currently appears to favor carbon-free electricity, some utilities are starting to view nuclear power as a reasonable balance. But according to critics, nuclear technology remains unsafe, and some contend the projected need for base-load capacity is overblown, like it was during the 1970s. Though the serious accidents of the 1970s gave nuclear power a black eye, manufacturers say they have since simplified plant designs to make them safer. And, although the United States generates relatively little electricity from nuclear power, American plant designs currently dominate the industry. Though France gets 78.5 percent of its energy from nuclear power and the United States gets roughly 19 percent, France and other countries seek American plant designers for design advice, according to Adrian Heymer, senior director of new plant deployment at the Nuclear Energy Institute (NEI). "We've monitored what's been going on overseas, and we've learned from them, and they've learned from us," he said. "In the last six years, more have been coming over to the U.S. to find out how we're operating the plants, rather than us going overseas. In the early '90s, it was the other way around. Now, most - like the Japanese, the French, the Taiwanese and others - have come to the U.S." Heymer said the primary improvement in the new plant designs was their simplicity - they typically use gravity, convection and conduction, as opposed to multiple pumps and valves, to inject water into the reactors. The fewer pumps and valves a plant uses, the fewer areas it has that could malfunction, making nuclear power safer. American nuclear plants also dramatically improved their efficiency due to various material upgrades, Heymer said, which also extended the plants' life spans. "Our capacity factors have gone from around the 65- to 70-percent mark, as they were in the late '80s and early '90s, to an industry average today of 90 percent. We've maintained that for six years," he said, adding that nuclear power plants reduced their staff by roughly 40 percent over the past 15 years, in terms of how many employees it took to produce one Watt of electricity. The power industry and the federal government project a dramatic increase in electricity demand over the next 25 years. But nuclear opponents say part of that demand would disappear if Southern states tightened energy efficiency standards. Georgia could delay its extra base-load capacity needs if it increased its energy efficiency standards, said Sara Barczak, safe energy director for the Southern Alliance for Clean Energy, and a Georgia resident. "Energy efficiency is our No. 1 source of energy that can be quickly tapped, and it's the most affordable because we are the least energy-efficient region in the country," she said. "If you look at all the states where the nuclear plants are being proposed in the South, we rank poorly in terms of money invested in electricity efficiency programs or building code measures. And we use the most energy." The resulting reprieve from extra base-load construction, she said, would give renewable energy technologies time to mature and expand their capacity capabilities. And further-developed renewables, combined with energy efficiency increases and conservation, could possibly satiate the South's forthcoming power demand increases. 
The Cato Institute's Taylor said he wouldn't rule out that possibility, but doubted its success. Other than frowning on the high expense of renewable energy sources and artificial, government-supported market demand for them, he said many of the technologies aren't conducive to satisfying intense demand, such as wind. "The majority of the wind we get from a wind-fired power plant comes during the night and during low-pressure periods. The demand for energy is primarily during peak, and peak demand is when you get the least amount of energy out of that wind power plant," Taylor said. "That's why each of these facilities either has to have its own stand-alone fossil fuel backup system so they can provide electricity when it's needed, or they have to contract with someone else to get it." Though NuStart encourages utilities to develop renewable energy, Kray said it would be far more expensive to produce the volume of power a nuclear plant can produce, using renewable energy sources. Heymer agrees energy efficiency has been lackluster in the South, but insists it's on the rise, and could temporarily delay base-load construction. However, he doubts that renewable energy, increased efficiency and conservation could handle the projected demand alone. "If you look at Progress Energy, their growth rate is in the area of 40,000 new customers a year. They're going forward with significant new programs for conservation and energy efficiency, but they still need base-load generation," Heymer said, adding that Florida is also in dire need of base-load capacity. "You have 1,000 people a day moving to Florida. They need base-load generation." Some say the EIA's 40 percent usage increase projections are overblown. Jon Block, project manager for nuclear technology and climate change at the Union of Concerned Scientists (UCS), points to the industry's similar demand projections during the 1970s that proved false during the 1980s. Utilities canceled many of the nuclear plants originally ordered due to those false predictions. He said he sees no reason to believe the usage projections this time around. "Energy forecasting is a practice for drunk monkeys. They have a better record than most of the forecasters," Taylor said. "No matter how smart the analyst, no matter how blue the blue-ribbon commission might be, no matter how well credentialed the academic, no matter how steeped in the industry they might be, if you look at the history of past prognostication, with regard to energy price, technology or demand, you'll find that they are unerringly incorrect." NuStart's Kray admits predictions are risky, but said she believes results will be different this time. "What might be different is, if you look at the capacity investment that has been made during the last 10 years," she said, "you'll find that there really hasn't been much, if any, significant base-load generation put onto the U.S. grid." Efficiency gains in plants during the mid-1990s saved utilities from needing new plants as demand rose, she said, and at this point, the industry has gotten most of the maximum extra output from increased efficiencies. "During the '80s, the capacity factor was down in the '70s, so you had a lot of room for improvement," Kray said. "That enabled nuclear to keep up with the growth in demand. But, if you look forward, and if you want nuclear to continue to uphold a 20 percent contribution to the power supply, then you're going to have to add new plants." 
The volatility of natural gas prices was a strong incentive to grow nuclear power, coal and renewable energy, she said, to maintain stable energy prices. "You don't want to be completely hostage to one fuel-type, whether it's coal, nuclear or natural gas." Nuclear Welfare State Federal government subsidies will play a vital role in determining whether Wall Street investors back new nuclear plant construction in the United States. Congress aimed to guarantee 100 percent of the first six construction loans with the federal Energy Policy Act of 2005. However, the Department of Energy recently demonstrated that the legislation's final draft only obligated it to cover 80 percent. Taylor said the nuclear industry reacted furiously, insisting that without 100 percent coverage, no new nuclear plants would be built. Taylor questioned newly proposed plants' economic viability if private lenders demand that the federal government pay 100 percent of any potential losses. "If nuclear power were economically attractive and made sense, you wouldn't need to guarantee that loan. People like Goldman Sachs and Deutsche Bank would be happy to loan you the money," he said. "The argument was that there is so much risk in the market - and so much uncertainty going forward with this technology - that the first five or six plants have an unusually high hurdle." The loan guarantees were more critical for utilities in unregulated energy states, Kray said, like Maryland and Texas, than for Southern states, many of which regulate their power industries, mandating when utilities must grow capacity. Regulated utilities can get affordable interest rates, due to those growth requirements, and guaranteed ratepayers to support them. Deregulated utilities don't have either of those, and without government underwriting, get astronomical interest rates. The Safety Debate Interest in nuclear power may be growing, but debate continues over its safety. Nuclear proponents say, overall, nuclear power boasts a relatively high safety record in the United States, given that the two accidents happened during the 1970s. But the UCS, which was influential in safety upgrades mandated during the 1970s, paints a grimmer picture. The NRC shut down 38 plants for a year or longer to raise safety standards, which occurred after the Three Mile Island accident. And the UCS contends that these 38 instances cannot be considered a good record of the NRC's effectiveness. Taylor disputes that argument. "The fact that plants shut down here or there is not particularly remarkable - so do coal mines; so do gas-fired power plants. All plants shut down - need maintenance - have things go wrong," he said. "They take care of them. We don't really care about that. What we care about are China Syndrome-type incidences, and things like that." The UCS points to a dangerously close call at the Davis-Besse Nuclear Power Plant, owned by Ohio utility FirstEnergy, in 2002. According to the Nuclear Information and Resource Service (NIRS), following the plant's February shutdown for refueling and inspection, operators discovered a cavity had eaten through 6 inches of carbon steel on the top of the 6.5-inch thick reactor pressure vessel. It was the apparent result of corrosive coolant leakage from the reactor core. 
Less than a half inch of the reactor vessel's stainless steel liner remained in the bottom of the 4-inch by 5-inch by 6-inch cavity separating the reactor's highly radioactive and pressurized internal environment from blasting into the reactor containment building. This could have damaged safety equipment, and possibly set into motion a core meltdown accident, according to the NIRS. Initial company inspections also found cracks in the welds on five of the 69 nickel alloy sleeves that penetrate the reactor pressure vessel head to allow for control rod insertion to safely shut down the reactor. "FirstEnergy pushed this reactor beyond all reasonable safety margins, and the NRC basically allowed it," said Paul Gunter, director of the Reactor Watchdog Project for the NIRS, in a statement. "This was a dangerous nuclear experiment on public safety that came damn close to exceeding the strength of a fundamental piece of reactor safety equipment, the reactor pressure vessel." According to the NIRS, the NRC had earlier granted operators at Davis-Besse a delay from a Dec. 31, 2001 inspection report deadline on the same vessel head area of the reactor pressure vessel. Repairs on the Davis-Besse plant took two years. The plant went back online in 2004. NRC spokesman Scott Burnell defended the NRC's handling of that incident. "The fact remains that it was detected before an accident occurred," he said. "We have taken steps since that point to enhance the inspections that will prevent a recurrence of that event." The NRC implemented better-defined procedures for inspectors to share information about what they found at particular plants. "For example, if an inspector at 'plant A' sees a buildup of material on a certain valve, that inspector can then check with other inspectors to ask,
<urn:uuid:0b312e91-1c7e-4317-a5a1-4b38eba3d126>
CC-MAIN-2017-04
http://www.govtech.com/photos/100917599.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00518-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964342
3,120
3.140625
3
In 2009, the US Department of Energy (DOE) launched a bold experiment, a $32 million program to assess the benefit of cloud computing to the scientific community. A distributed testbed infrastructure, named Magellan, was established at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC) to provide a tool for computational science in a cloud environment. Magellan, with funding from the American Recovery and Reinvestment Act, was to help major research organizations answer the classic cloud question: is it better to rent or buy? "What we're exploring is the question of whether the DOE or other government agencies should be buying their own clusters ... or whether those kinds of purchases should be done in a more consolidated way," said NERSC Director Kathy Yelick in a previous article. Despite high hopes and community support, in late 2011, we learned that the Magellan project was being discontinued, leaving many wondering what happened. Now we have some answers in the form of a 169-page report, sponsored by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR), which funded the study to assess what Magellan tells us about the role of cloud computing for scientific applications. Since industry was already benefiting from the cloud model, from the economies of scale generated by a shared pool of network-accessible resources, the Magellan team members initially set out to determine if cloud would hold the same potential for science. As stated in the executive summary: The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. Specifically, Magellan was tasked with addressing the following questions:
- Are the open source cloud software stacks ready for DOE HPC science?
- Can DOE cyber security requirements be met within a cloud?
- Are the new cloud programming models useful for scientific computing?
- Can DOE HPC applications run efficiently in the cloud? What applications are suitable for clouds?
- How usable are cloud environments for scientific applications?
- When is it cost effective to run DOE HPC science in a cloud?
It should be noted that Magellan was not a typical commercial cloud; rather, this "science cloud" was purpose-built for the special requirements of scientific computing. Magellan was based on the IBM iDataplex chassis using Intel processor cores for a theoretical peak performance of over 100 teraflop/s. Other components include:
- High bandwidth, low-latency node interconnects (InfiniBand).
- High-bin processors tuned for performance.
- Preinstalled scientific applications, compilers, debuggers, math libraries and other tools.
- High-bandwidth parallel file system.
- High-capacity data archive.
During Magellan's two-year run, the staff at NERSC and Argonne National Laboratory examined how different aspects of cloud computing infrastructure and technologies could be harnessed by various scientific applications.
They evaluated cloud models such as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), virtual software stacks, MapReduce and its open-source implementation (Hadoop), as well as resource provider and user perspectives. Using a wide range of applications as benchmarks, the researchers compared the Magellan cloud with various other architectures, including a Cray XT4 supercomputer, a Dell cluster system, and Amazon's EC2 commercial cloud offering. Despite the testbed moniker, a lot of important production science took place, contributing to advances in particle physics, climate research, quantum chemistry, plasma physics and astrophysics. Science workloads, by their nature, tend to be cloud-challenged, although to varying degrees. The report outlines the three major classifications of computational models, beginning with large-scale tightly-coupled science codes, which require the power of traditional supercomputers and take a big penalty working in a virtualized cloud environment. Then, there are the mid-range tightly-coupled applications, which run at a smaller scale and tend to be good candidates for cloud, although there is some performance loss. The final category, high-throughput workloads, usually involves asynchronous, independent computations, and in the past relied on desktops and small clusters for processing. But due to an explosion in sensor data, cloud is a good fit, especially when you factor in the fact that these high-throughput and data-intensive workloads do not fit into current scheduling and allocation policies. The two-year Magellan project led to these key findings:
- Scientific applications have special requirements that require cloud solutions that are tailored to these needs.
- Scientific applications with minimal communication and I/O are best suited for clouds.
- Clouds require significant programming and system administration support.
- Significant gaps and challenges exist in current open-source virtualized cloud software stacks for production science use.
- Clouds expose a different risk model requiring different security practices and policies.
- MapReduce shows promise in addressing scientific needs, but current implementations have gaps and challenges.
- Public clouds can be more expensive than in-house large systems. Many of the cost benefits from clouds result from the increased consolidation and higher average utilization.
- DOE supercomputing centers already achieve energy efficiency levels comparable to commercial cloud centers.
- Cloud is a business model and can be applied at DOE supercomputing centers.
From this list, it is apparent that cloud was unable to measure up to a centralized supercomputer system in many ways, but the delivery model does have its place. According to the report, "users with applications that have more dynamic or interactive needs could benefit from on-demand, self-service environments and rapid elasticity through the use of virtualization technology, and the MapReduce programming model to manage loosely coupled application runs." In other words, cloud excels when it comes to flexibility and responsiveness. In fact, the report found that "for users who need the added flexibility offered by the cloud computing model, additional costs may be more than offset by the increased flexibility.
Furthermore, in some cases the potential for more immediate access to compute resources could directly translate into cost savings.” However, when it comes to the potential cost savings of using a public cloud versus the costs of hardware acquisition, the report makes the point that DOE procurement costs are often significantly discounted, which offsets some of the potential savings: Existing DOE centers already achieve many of the benefits of cloud computing since these centers consolidate computing across multiple program offices, deploy at large scales, and continuously refine and improve operational efficiency. Cost analysis shows that DOE centers are cost competitive, typically 3-7x less expensive, when compared to commercial cloud providers. Because the commercial sector constantly innovates, DOE labs and centers should continue to benchmark their computing cost against public clouds to ensure they are providing a competitive service. “Cloud computing is ultimately a business model,” state the authors. “But cloud models often provide additional capabilities and flexibility that are helpful to certain workloads. DOE labs and centers should consider adopting and integrating these features of cloud computing into their operations in order to support more diverse workloads and further enable scientific discovery, without sacrificing the productivity and effectiveness of computing platforms that have been optimized for science over decades of development and refinement.” The authors further suggest that when an integrated approach is not sufficient, a private cloud solution should be considered based on its ability to provide many of the benefits of commercial clouds while avoiding some of the pitfalls, such as security, data management, and performance penalties. To recap: cloud services are a good complement to centralized computing resources, but not a replacement. This should not come as a surprise to our community. This is HPC, high-performance computing, and whenever you add additional layers, i.e., virtualization, the application takes a performance hit. However, as the report makes clear, there are good use cases for cloud services, such as “scientific groups needing support for on-demand access to resources, sudden surges in resource needs, customized environments, periodic predictable resource needs (e.g., monthly processing of genome data, nightly processing of telescope data), or unpredictable events such as computing for disaster recovery.” The report goes on to note that “cloud services essentially provide a differentiated service model that can cater to these diverse needs, allowing users to get a virtual private cluster with a certain guaranteed level of service.” Magellan was billed as an exploratory project, set to go for two years. In fact, the project was named Magellan in honor of the Portuguese explorer Fernão de Magalhães, the first person to lead an expedition across the Pacific. The original “clouds of Magellan” refers to two small galaxies in the southern sky. The current-day Magellan, as the first major scientific cloud testbed, also navigated uncharted waters and documented the journey for the benefit of future generations.
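The MapReduce programming model that the report singles out is worth a concrete illustration. The sketch below is a minimal, self-contained Python approximation of the map, shuffle and reduce steps (a word count over a few text records); it is only an illustration of the model the report evaluated, not Magellan's Hadoop deployment.

    from collections import defaultdict
    from itertools import chain

    def map_phase(record):
        # Emit (key, value) pairs; here, one (word, 1) pair per word in a record.
        return [(word.lower(), 1) for word in record.split()]

    def shuffle(pairs):
        # Group values by key, as the framework does between the map and reduce steps.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # Combine all values for one key; here, a simple sum.
        return key, sum(values)

    if __name__ == "__main__":
        records = ["climate model run A", "climate model run B", "plasma run C"]
        mapped = chain.from_iterable(map_phase(r) for r in records)
        totals = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
        print(totals)  # e.g. {'climate': 2, 'model': 2, 'run': 3, ...}

The appeal for the high-throughput workloads described above is that each map task is independent, so the framework can scale them out across many loosely coupled nodes.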
<urn:uuid:8e89cc6d-08da-4915-8749-b48e8e9f97f1>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/02/01/learning_from_clouds_past_a_look_back_at_magellan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00426-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938422
1,925
2.765625
3
Fiber optic splicing involves joining two fiber optic cables together. The other, more common, method of joining fibers is called termination or connectorization. Fiber splicing typically results in lower light loss and back reflection than termination, making it the preferred method when the cable runs are too long for a single length of fiber or when joining two different types of cable together, such as a 48-fiber cable to four 12-fiber cables. Splicing is also used to restore fiber optic cables when a buried cable is accidentally severed. There are two methods of fiber optic splicing: fusion splicing and mechanical splicing. If you are just beginning to splice fiber, you might want to look at your long-term goals in this field in order to choose which technique best fits your economic and performance objectives. In order to protect and support the fusion splice point, a fiber optic splice protector plays an important role. A fusion splice protection sleeve is made up of a cross-linked polyolefin heat-shrinkable tube, a hot-melt tube and a stainless steel needle. It provides vital protection at the fused juncture between two fusion spliced optical fibers. In addition to providing a substitute for the original fiber optic cable jacket, splice protection sleeves provide a degree of rigidity that prevents the spliced area from bending or flexing. Consisting of cross-linked polyolefin, hot-melt tubing and a stainless steel reinforcing rod, the sleeve preserves the optical transmission properties of the fiber and enhances the protection of the splice. The fiber is easy to handle during installation without damage, and the clear sleeve makes it easy to inspect the splice before shrinkage. The sealed structure keeps the splice free from the influence of temperature and humidity in harsh environments.
<urn:uuid:c681384a-6820-454f-8e7a-7de3141e6f1c>
CC-MAIN-2017-04
http://www.fs.com/blog/the-application-of-fiber-optic-splice-protector.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908457
346
3.03125
3
In information technology, a network is a series of points, called nodes, that are interconnected by some kind of communication path. Networks can interconnect with other networks and contain subnetworks. Simple Mail Transfer Protocol (SMTP) is the Internet standard for the transmission of e-mail over an IP (Internet Protocol) network. A switch can be used in a local area network (LAN) as a multi-port bridge device; it works at Layer 2 (the data-link layer) of the OSI reference model. A NIC (network interface card) is a device that works as a bridge between computers; in other words, this piece of equipment is used to join multiple computer systems together in a local area network (LAN). What is a LAN? Connecting two PCs for the purpose of sharing data and attached peripherals means you have established a LAN (local area network). The number of computers connected within a LAN can vary, up to a limit of several hundred computer systems.
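As a small illustration of the SMTP role mentioned above, the sketch below uses Python's standard smtplib and email modules to hand a message to a mail server over an IP network. The host name, port and addresses are placeholders for illustration, not real systems.

    import smtplib
    from email.message import EmailMessage

    # Build a simple e-mail message.
    msg = EmailMessage()
    msg["From"] = "alice@example.com"   # placeholder sender
    msg["To"] = "bob@example.com"       # placeholder recipient
    msg["Subject"] = "SMTP example"
    msg.set_content("Hello from a minimal SMTP example.")

    # Hand the message to an SMTP server over the network
    # (the server name and port here are assumptions, not a real server).
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)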
<urn:uuid:d886d397-46b3-42e8-b2ff-cbe917bbda9e>
CC-MAIN-2017-04
https://howdoesinternetwork.com/tag/network
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914815
209
3.796875
4
2.1.2 What is secret-key cryptography? Secret-key cryptography is sometimes referred to as symmetric cryptography. It is the more traditional form of cryptography, in which a single key can be used to encrypt and decrypt a message. Secret-key cryptography not only deals with encryption, but it also deals with authentication. One such technique is called message authentication codes (MACs; see Question 2.1.7). The main problem with secret-key cryptosystems is getting the sender and receiver to agree on the secret key without anyone else finding out. This requires a method by which the two parties can communicate without fear of eavesdropping. However, the advantage of secret-key cryptography is that it is generally faster than public-key cryptography.
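To make the two ideas above concrete, here is a minimal Python sketch of symmetric encryption with a single shared key, plus a message authentication code built from a second shared secret. It is an illustration only, not any particular cryptosystem's reference implementation, and it assumes the third-party cryptography package for the encryption part; the MAC part uses only the standard library.

    import hmac
    import hashlib
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # A single shared secret key is used both to encrypt and to decrypt.
    key = Fernet.generate_key()
    cipher = Fernet(key)
    ciphertext = cipher.encrypt(b"meet at noon")
    assert cipher.decrypt(ciphertext) == b"meet at noon"

    # The same shared-secret idea supports authentication: both parties
    # compute a MAC over the message, and the receiver compares the tags.
    mac_key = b"another shared secret"
    tag = hmac.new(mac_key, b"meet at noon", hashlib.sha256).digest()
    expected = hmac.new(mac_key, b"meet at noon", hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)

The sketch also shows why key distribution is the hard part: both sides must already hold key and mac_key before any of this can work.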
<urn:uuid:7631296a-cb97-4aab-8ab0-ab6cc022e21f>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-secret-key-cryptography.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00390-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953516
155
3.875
4
Joined: 06 Jan 2004 Posts: 247 Location: Hyderabad I hope you are a beginner in coding COBOL programs, because many people face the same problem in their learning phase. You might have coded your program as follows: perform a loop until EOF, where the loop contains a read (at end, set EOF) and a write. If this is the way you have coded your program, it will certainly write the last record again: even though it reaches the end of file, there is another write following, so the last record is written twice. Instead of this, read the file once before the loop, and then inside the loop keep a write followed by a read. That way, when EOF is reached, the program comes out of the loop without writing. And Pradeep, one small suggestion: it is better to provide your code instead of simply describing your problem.
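The fix described above is the classic priming-read pattern, and it is language-independent. Here is the same control flow sketched in Python rather than COBOL (the file names are made up for illustration): the read happens once before the loop and again at the bottom of it, so end-of-file is detected before any extra write can occur.

    # Priming read: read one record before entering the loop.
    with open("input.dat", "r") as infile, open("output.dat", "w") as outfile:
        record = infile.readline()
        while record:                       # the loop ends as soon as a read hits end-of-file
            outfile.write(record)           # write the record we already have in hand
            record = infile.readline()      # read the next record last, so EOF exits before another write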
<urn:uuid:a86a5e16-fca0-4115-bb56-d118532d2d48>
CC-MAIN-2017-04
http://ibmmainframes.com/about1036.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950845
170
2.71875
3
Summer is heating up, and so are our supercomputers. The insatiable drive for more computational performance means servers are becoming ever denser, and correspondingly hotter. Today, a rack of high-end blades can dissipate 30 kilowatts or more. And with the era of coprocessor acceleration upon us, many HPC servers are being fitted with 200-watt GPUs, further adding to the heat load. This wouldn’t matter so much if we kept our machines in the pool, but air being what it is (a poor conductor of heat), the burden on the cooling infrastructure keeps escalating. Keeping the machinery at a comfortable temperature can represent from a third to a half of a facility’s power consumption. Even with the most recent recommendations from the American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE) to crank up the datacenter thermostat from 77F to 80F, HPC machine rooms are reaching their thermal limits. That’s why liquid cooling has been such a big part of supercomputing. These machines, especially the proprietary designs, have always been on the leading edge of computational density, and sometimes wouldn’t survive on air flow alone. In fact, since the days of the early Cray systems in the 1970s, a lot of the top-end supercomputers have had water or some other liquid coolant running through the hardware. That’s why the father of supercomputing, Seymour Cray, referred to himself as “an overpaid plumber.” A couple of recent stories point to a new direction for liquid cooling. Instead of just running coolant through the racks, it’s now being funneled directly onto the hottest components: the processors themselves. IBM’s Aquasar supercomputer, which was recently delivered to the Swiss Federal Institute of Technology Zurich (ETH Zurich), is an example of one such system. The 6-teraflop Aquasar machine uses customized water-cooled BladeCenter servers that sport both Intel Nehalem CPUs and IBM PowerXCell processors. Water is piped into a heat exchanger that sits right on top of the chips. Because of the intimate contact with the processors, the water does not need to be chilled, and can be as warm as 60C. That’s 140F for those of you keeping score in the USA. The idea is to keep the processors below their critical maximum of 85C (185F). At ETH Zurich, the heated (waste) water is piped away to help warm the buildings at the facility. IBM claims the carbon footprint of such a system is reduced by as much as 85 percent compared to a conventionally-cooled computer setup. A more general case involves what Google is doing — or thinking about doing. The company recently filed to patent a server assembly design in which two motherboards sandwich a liquid-cooled heat sink. In this setup, the processors are being cooled via the heat sink, while the other components, like the memory chips, are air cooled. According to a report in Data Center Knowledge: The design is among a number of Google patents on new cooling techniques for high-density servers that have emerged since the company’s last major disclosure of its data center technology in April 2009. Several of these patents deal with cooling innovations using either liquid cooling or air cooling applied directly on server components. In 2007, Google filed a patent for a different sort of liquid-cooling arrangement. The “Water Based Data Center” design outlined sea-based computing facility that floats on the water, employs the waves to help generate electricity, and uses the sea water to help provide cooling for the computers. 
That patent was granted in May 2009. Perhaps an even more novel method is immersion cooling, in which the whole server is submerged in an inert liquid, such as mineral oil. That, too, is not a new concept. Some of the early supercomputing systems, including the Cray-2*, used immersion cooling. A modern version is being offered by Austin, Texas-based Green Revolution Cooling, which claims its horizontal rack design and "GreenDef" oil coolant can manage power densities as high as 100 kilowatts per rack. Bring on the GPUs! The company is claiming its immersion system uses 95 percent less power than conventional cooling. Some of that can be attributed to the fact that all the internal server fans can be yanked out, which alone should reduce the power draw by 5 to 25 percent. The company recently installed some test units at the Texas Advanced Computing Center (TACC). If the Green Revolution offering pans out as advertised, maybe we'll see more supers taking the plunge. *The original post incorrectly specified the Cray-1 as one of the early supercomputers using immersion cooling. It was the Cray-2 design that introduced this cooling approach. Hat tips to readers Richard Lakein and Max Dechantsreiter for pointing out the gaffe. — Michael
<urn:uuid:3cc2974d-cf81-4fcd-a4e4-f04078fb73d3>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/07/15/supercomputers_when_they_sizzle/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00252-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945162
1,042
2.96875
3
For an attacker to maintain a foothold inside your network they will typically install a piece of backdoor malware on at least one of your systems. The malware needs to be installed persistently, meaning that it will remain active in the event of a reboot. Most persistence techniques on a Microsoft Windows platform involve the use of the Registry. Notable exceptions include the Startup Folder and trojanizing system binaries. Examining malware persistence locations in the Windows Registry and startup locations is a common technique employed by forensic investigators to identify malware on a host. Each persistence technique commonly seen today leaves a forensic footprint which can be easily collected using most forensic software on the market. The persistence technique I'll describe here is special in that it doesn't leave an easy forensic trail behind. A malware DLL can be made persistent on a Windows host by simply residing in a specific directory with a specific name, with no trace evidence in the registry or startup folder and no modified system binaries. There isn't just one directory location and DLL filename that are candidate locations for this persistence mechanism but rather a whole class of candidate locations exist for any given system. On my laptop (Windows 7 64-bit) there are no less than 1032 such path and DLL name combinations where a DLL could be placed such that it would automatically be loaded at some point during my normal boot-up, and that's just for a 32-bit DLL! If you had a 64-bit malware DLL the number would be much higher as I have many more 64-bit processes running at boot time. So how does this work? DLL Search Order Hijacking When an application requests to load a DLL either statically via an import table in its executable file, or dynamically via the LoadLibrary() function the operating system will look for the DLL in a predefined sequence of locations. This sequence is defined in the MSDN documentation here: http://msdn.microsoft.com/en-us/library/ms682586(VS.85).aspx. The most important tidbit of information to take away from that document is that the first place the application looks for a DLL is the location of the executable itself. This isn't always the case though. If the DLL name that is requested is listed in the "\.KnownDlls" object then it will always load from a fixed location (the System32 folder). This object is populated at boot-time using data from the Registry at the following location: Microsoft employee Larry Osterman describes this in a blog post (http://blogs.msdn.com/b/larryosterman/archive/2004/07/19/187752.aspx). He states in the post that the KnownDlls object will be larger in memory than what is in the Registry key and will be built recursively from the statically imported DLLs from any DLL listed in the registry. In the limited testing I've done on Windows XP and Windows 7 systems, the KnownDlls object in memory is identical to the list provided by the KnownDLLs registry key. Casual browsing of the KnownDlls key will reveal a short list of about 30-35 of the most commonly used DLLs. For example, the low level networking API DLL "ws2_32.dll" is contained in this list. Whenever any application attempts to load a DLL named "ws2_32.dll" it will always load it from the System32 folder because it is listed in this key, regardless of where the application was launched from. 
The KnownDlls system provides a thin layer of security for this small set of critical DLLs because an attacker can't simply place a DLL named "ws2_32.dll" inside a folder containing an application which uses ws2_32.dll and expect their local copy to be loaded. The KnownDlls system is far too limited to provide any realistic sense of DLL loading security, though. For example, even though we can guarantee that the copy of ws2_32.dll that will be loaded will always be the one from System32, other components loaded when ws2_32.dll is loaded (such as iphlpapi.dll and mswsock.dll) are not guaranteed because they are not covered by KnownDlls. Let's imagine that we had a legitimate program called update.exe which ran from the location "C:\Program Files\MyCompany" and loaded ws2_32.dll. All we would have to do to make update.exe load our malware DLL is place our malware in the "C:\Program Files\MyCompany" directory and give it the name "iphlpapi.dll". When the update.exe program runs it loads ws2_32.dll, which in turn loads iphlpapi.dll, which it loads from the application directory first before checking the System32 folder where it legitimately resides. All the malware author needs to do is make sure their malicious iphlpapi.dll eventually loads the real thing, and the user of the system (and most likely a forensic analyst) will have no idea that malware has been loaded. You might have come to the conclusion in reading the description of the problem above that executables which reside in the System32 folder are not susceptible. If you thought that, you'd be correct. If you also thought that there is no real practical problem because all consistent and reliably placed startup binaries exist in the System32 folder, you'd be incorrect. Case in point: Explorer.exe. Strangely, this binary resides in C:\Windows (I assume for historic reasons). So when explorer.exe launches and it requests a DLL that is not protected by KnownDlls, the first place the system looks to find the DLL is the C:\Windows directory. Thus far, the most common place we've found this malware persistence technique being used is in the location and name "C:\Windows\ntshrui.dll". The real ntshrui.dll is located in the System32 folder, but since this DLL is loaded by Explorer.exe and not protected by KnownDlls, it's unfortunately susceptible to DLL search order hijacking.
The Extent of the Problem
Once you really understand the nature of the problem, it may occur to you that it's a very widespread and pervasive issue. It has always existed in Windows and will likely exist for the foreseeable future. Altering the DLL search path mechanism could have severe backward-compatibility problems for Windows and is most likely not going to happen, due to the high value Microsoft has always placed on compatibility (We love you Raymond Chen!). I've written a program to identify all locations and filenames where a DLL could be placed to achieve persistence on a given system. The idea is that you can run this program on a clean (Gold Image) system and forensically search for any DLL name listed in the output on a machine you suspect of being compromised with this method of persistence. Similar programs may be developed to attempt to identify hijacked DLLs on a live system. I chose to write this program first, however, because its output helps to explain the extent of the problem. I ran the program on my laptop and it produced output which contained 1032 lines, each describing a location and filename where a DLL could be placed to be loaded at boot time by my system.
On a clean XP SP2 machine I get 91 locations listed. Here are a few lines from the output from my laptop:
Hijackable Location: C:\Program Files (x86)\iTunes\SspiCli.dll
Hijackable Location: C:\Program Files (x86)\iTunes\CRYPTBASE.dll
Hijackable Location: C:\Program Files (x86)\iTunes\CoreFoundation.dll
Hijackable Location: C:\Program Files (x86)\iTunes\MSVCR80.dll
According to this output, some program that loads when my system boots (most likely iTunes) attempts to load the DLL named "CRYPTBASE.DLL", which is commonly found in the System32 folder, but an attacker could place a malicious DLL in the iTunes folder and that would be loaded instead. The program examines running processes and determines hijackable DLL locations by the following properties (applied to each loaded DLL in every running process in the system):
- The process executable that loaded the DLL is not located in the System32 folder
- The DLL name is not found in the KnownDlls object
- The DLL is not found in the same directory as the executable
Any loaded DLL that meets all three properties is susceptible to being trumped by search order hijacking. The tool (compiled and source) to identify possibly malicious 32-bit DLL locations from a clean system can be found here.
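The three properties above translate fairly directly into a scanner. The Python sketch below is a rough approximation of that logic, not the author's actual tool: it assumes the third-party psutil package to enumerate processes and their mapped modules, and the KNOWN_DLLS set is deliberately abbreviated (a real scan would read the full KnownDLLs list from the system).

    import os
    import psutil  # third-party: pip install psutil

    SYSTEM32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32").lower()
    KNOWN_DLLS = {"ntdll.dll", "kernel32.dll", "user32.dll", "gdi32.dll", "ws2_32.dll"}  # abbreviated

    def hijackable_locations():
        hits = set()
        for proc in psutil.process_iter(["exe"]):
            try:
                exe = proc.info["exe"]
                if not exe or os.path.dirname(exe).lower() == SYSTEM32:
                    continue  # property 1: ignore executables located in System32
                app_dir = os.path.dirname(exe)
                for mod in proc.memory_maps():
                    path = mod.path
                    if not path.lower().endswith(".dll"):
                        continue
                    name = os.path.basename(path).lower()
                    if name in KNOWN_DLLS:
                        continue  # property 2: protected by the KnownDlls object
                    if os.path.dirname(path).lower() == app_dir.lower():
                        continue  # property 3: already loaded from the application directory
                    # A DLL with this name dropped into app_dir could be loaded instead.
                    hits.add(os.path.join(app_dir, os.path.basename(path)))
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue
        return sorted(hits)

    if __name__ == "__main__":
        for location in hijackable_locations():
            print("Hijackable Location:", location)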
<urn:uuid:ea2b1879-6527-468b-8a34-0056728c1029>
CC-MAIN-2017-04
https://www.fireeye.com/blog/threat-research/2010/07/malware-persistence-windows-registry.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93292
1,849
2.625
3
Researchers are developing machine-to-machine (M2M) communication technology that allows cars to exchange data with each other, meaning vehicles will soon know what the cars all around you are doing on the highway. Your car, for instance, could "see" the velocity of nearby vehicles and react when they turn or brake suddenly. And with computer algorithms and predictive models, your car will be able to predict where other vehicles are going and measure the other drivers' skills -- ensuring you're safe from their bad moves. "We're even imagining in the future cars would be able to ask other cars, 'Hey, can I cut into your lane?' Then the other car would let you in," said Jennifer Healey, a research scientist with Intel. Intel is working with National Taiwan University on M2M connectivity between vehicles as a way to make roads more predictable and safe. "Car accidents are the leading cause of death in people 16 to 19 in the United States. And 75% of these accidents have nothing to do with drugs or alcohol," said Healey, who delivered a TED Talk on the subject in March (see video below). She recounted her first accident when she was a young driver: the driver she was following on a highway slammed on his brakes, and the resulting collision totaled her car. "I think we can transform the driving experience by letting our cars talk to each other," she said. That idea came from caravanning, Healey said, citing an available but not-yet-deployed technology that uses direct line-of-sight infrared (IR) and a range finder to automatically adjust the speed of cars so they can travel at a measured distance from each other. In other words, they're electronically tethered to one another. Instead of using IR, the researchers wanted something that is omnidirectional. They tried radio communications, but quickly discovered that omnidirectional radio signals tend to bounce off vehicles, making them unreliable at high speeds. So Healey and university researchers began using unique Internet Protocol addresses for vehicles, which would allow them to be instantly identifiable to nearby cars on the same network. "Imagine a group of cars traveling down the road together as an ad hoc network," she said. "Let's say you are three cars ahead of me and I get those IP packets that say I'm the packet from the blue car whose GPS position is here. Now I can associate my position with the unique ID of that physical blue object." Along with a steady stream of data about the GPS location of cars around you, your car could also know drivers' intentions.
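The packet Healey describes, a unique vehicle ID tied to a GPS position and velocity, can be pictured as a small broadcast datagram. The Python sketch below is purely illustrative: the field names, port number and JSON encoding are assumptions for the sake of the example, not the format the researchers actually use.

    import json
    import socket
    import time

    BEACON_PORT = 47000  # arbitrary port chosen for illustration

    def broadcast_beacon(vehicle_id, lat, lon, speed_mps, heading_deg):
        # Serialize the vehicle's state as a small JSON datagram.
        beacon = {
            "id": vehicle_id,
            "lat": lat,
            "lon": lon,
            "speed_mps": speed_mps,
            "heading_deg": heading_deg,
            "timestamp": time.time(),
        }
        payload = json.dumps(beacon).encode("utf-8")
        # Broadcast it so any nearby vehicle on the ad hoc network can receive it.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, ("255.255.255.255", BEACON_PORT))

    if __name__ == "__main__":
        broadcast_beacon("blue-car-01", 25.0330, 121.5654, 13.4, 92.0)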
<urn:uuid:6e993140-1735-409b-b4ed-70aca065875c>
CC-MAIN-2017-04
http://www.computerworld.com/article/2497184/emerging-technology/when-cars-talk--this-is-what-they-ll-tell-each-other.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972826
536
3.109375
3
I’ve spoken before on how mobile health apps and devices can expand the ability of healthcare providers to customize treatment for patients. An important part of making that happen is the ability of app developers to quickly get innovative products to market. But one big hurdle has been the uncertainty around regulation. Mobile health apps exist in a gray zone between medical devices, which are highly regulated, and computer applications, which aren’t regulated much at all. When an app is used to facilitate communication between a medical device, such as a blood pressure monitor and a mobile phone that transmits data to a physician’s office, does that app become part of a medical device? When does a mobile health app warrant regulation by the Food and Drug Administration (FDA) and other regulatory bodies around the world? This is not a trivial question. The potential of medical apps to improve care and lower costs is enormous. We need creative, energetic app developers working on new ways to make this technology work for all of us. But if developers don’t know how these apps will be regulated, they’re going to spend their time on other pursuits rather than gamble on what regulators might do in the future. FDA issues guidelines, but some question if the framework is appropriate for apps In July 2012, the U.S. Congress passed The Food and Drug Administration Safety and Innovation Act of 2012 (FDASIA), which directed the FDA to come up with a proposed strategy and recommendations within 18 months. It directed the FDA to create a risk-based regulatory framework pertaining to health IT that promotes innovation, protects patient safety, and avoids regulatory duplication. The FDASIA report is due to be released near the end of this month. A more detailed risk-based framework is proposed in a report by the Bipartisan Policy Center’s Health Project, which urged an approach that they believe protects patient safety without creating unnecessary regulation. In September 2013 the FDA issued guidelines that regulate mobile apps based on the FDA’s assessment of potential to harm patients. Under these guidelines, an app that only records your diet and exercise information wouldn’t rise to the level of a regulated device. But an app that tells you to adjust your insulin dose based on a reading from a glucometer will likely be regulated. Gray area or appropriate flexibility? In gray areas, where the FDA guidance still provides no clear answer, the agency has said it will exercise discretionary enforcement, essentially leaving open regulatory options if they believe the situation warrants it. Some observers think this gives developers room to innovate, while allowing the FDA to step in where patients might be at risk. Others (including some members of Congress) see this gray area as just the kind of ambiguity and uncertainty that will discourage innovation, and they believe that the FDA should be precluded from regulating certain categories of apps altogether. The fast pace of app development, combined with the melding of lifestyle/medical and consumer/clinician functionality, offers great possibility for innovation; at the same time, it makes regulation of this area a challenge and stretches the limits of existing regulatory frameworks. The most recent FDA guidelines are not likely to be the final word on the subject, though they are the current rules of the road for developers. 
In the meantime, some app developers will forge ahead despite ambiguity, while others may balk at the risks created by the remaining uncertainty. As well, some doctors may hesitate to use products that lack the FDA’s formal approval and associated liability protections. It is critical that, in the end, a regulatory balance is found to allow innovations to improve health while maintaining patient safety protections. How this debate plays out also has global significance. Regulatory agencies across the globe are watching what happens in the U.S., and the FDA’s actions will likely influence how other governments balance patient safety and the need to foster innovation. Opportunity for mHealth startups To help promising mHealth innovators get their products to market faster, Dell and Intel are sponsoring a pitch challenge for healthcare and life sciences startups developing mobile technologies that improve patient outcomes. Technologies of interest include healthcare analytics tools, clinical workflow management tools, mobile health applications, cloud-based solutions and wireless health monitoring devices. For more information, visit the Center for Entrepreneurs.
<urn:uuid:c4bfb8d6-9f3d-4dee-812a-91f62c529382>
CC-MAIN-2017-04
http://www.computerworld.com/article/2476087/healthcare-it/when-is-a-mobile-app-a-medical-device--the-future-of-healthcare-may-depend-on-the-answ.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00519-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942429
875
2.703125
3
Enterprise Perimeter Security and Firewall Systems The perimeter of the enterprise and the first line of defense, firewall systems, are vital to the security of any business. The perimeter of the enterprise establishes the boundary between the inside of the business and the outside. While businesses are vulnerable to insider and outsider threats—both of which are significant—the enterprise perimeter and the firewall system are designed to protect the inside from outsider attacks. So how do you secure your internal network from an external network, such as the Internet? One potential solution is to set up a firewall system. A firewall is designed to keep intruders from getting into your internal network. A firewall is one or more systems, which may be a combination of hardware and software, that serve as a security mechanism to prevent unauthorized access between trusted and untrusted networks. Firewall systems are typically the first line of defense between an organization’s internal network and its connection to the Internet. Firewall systems are typically the primary tool used to enable an organization’s security policy to prevent unauthorized access between networks. An organization may choose to deploy one or more systems that function as firewalls. A firewall refers to a gateway that restricts the flow of information between the external Internet and the internal network. The trusted internal network may include several LAN and WAN subnets—a firewall is a system or systems that separate an autonomous network from the external network. Firewalls may be internal or external. Firewall systems can protect against attacks that pass through network interfaces. Firewall systems cannot protect against attacks that do not pass through the firewall. For example, consider an organization’s internal network, which may include several LAN and WAN subnets. The WAN subnets may be used to provide connectivity to the corporate network. Thus, technologies such as frame relay, ISDN or dedicated point-to-point circuits (56 kbps, fractional T-1, T-1, T-3) may be used to provide connectivity between branch offices and the corporate network. If access to the Internet is through a router on the corporate network and that is where the firewall system architecture is defined, it is possible for the firewall system to control inbound and outbound access to the Internet on the basis of filters (rules) that have been defined. Types of Firewalls There are several types of firewall systems. These include: - Packet-filtering firewalls - Stateful-inspection firewalls - Application-proxy gateway firewalls A packet-filter firewall is a lower-layer firewall device that includes access-control functionality for system addresses and communication sessions. An example of a packet-filtering firewall system is a boundary router. This typically is deployed on the “edge” of the enterprise network. Its advantages are that it is fast and flexible. It can filter out unwanted protocols, perform simple access control and then pass data to other, more advanced firewalls. Stateful-inspection firewalls represent a superset of packet-filter firewall functionality. These firewalls can interpret and analyze the information in layer-four headers (transport layer). The firewall creates a directory of outbound TCP connections along with each session’s “high-numbered” client port. This state table information is used to validate any inbound traffic. 
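One way to see the difference between a simple packet filter and stateful inspection is a toy state table. The Python sketch below is a teaching illustration only, not how any vendor implements it: outbound TCP sessions are recorded, and inbound packets are admitted only if they match a recorded session or an explicit static rule.

    class ToyStatefulFirewall:
        def __init__(self, inbound_allowed_ports=(80, 443)):
            self.inbound_allowed_ports = set(inbound_allowed_ports)  # static packet-filter rules
            self.sessions = set()  # state table of outbound TCP connections

        def outbound(self, src_ip, src_port, dst_ip, dst_port):
            # Record the session so matching replies can come back in.
            self.sessions.add((dst_ip, dst_port, src_ip, src_port))
            return True

        def inbound(self, src_ip, src_port, dst_ip, dst_port):
            # Stateful check: is this a reply to a session an internal host initiated?
            if (src_ip, src_port, dst_ip, dst_port) in self.sessions:
                return True
            # Otherwise fall back to the simple packet-filter rules.
            return dst_port in self.inbound_allowed_ports

    fw = ToyStatefulFirewall()
    fw.outbound("10.0.0.5", 51000, "203.0.113.7", 443)         # internal host opens a session
    print(fw.inbound("203.0.113.7", 443, "10.0.0.5", 51000))   # True: matches the state table
    print(fw.inbound("203.0.113.9", 6667, "10.0.0.5", 51000))  # False: unsolicited and not allowed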
Application-proxy gateway firewalls are highly advanced firewalls that combine the capabilities of access control provided at the lower layers with application layer functionality. Typically, these have extensive logging capabilities and can authenticate users directly. These devices are less vulnerable to spoofing attacks. Firewall systems sometimes provide an organization with centralized control in today’s highly decentralized computing environment. This implies that security tools for logging events, auditing transactions and defining alarms for threats detected can all be defined and controlled centrally as a part of the firewall system. In large, multifaceted organizations that are made up of more or less “independent” subsidiaries, centralized firewall controls may not be in place. Rapid consolidation of some businesses has been facilitated by continual merger-and-acquisition activity. This has left some large organizations with numerous connections to external data communications networks, each having some level of firewall infrastructure, yet without effective coordination. This presents such organizations with a significant risk—consider the “hacker” saying, “You have to plug every real and probable hole across your organization, but I only need to find and exploit one to win.” Also, keep in mind that a firewall infrastructure must perform an incredibly difficult task. Remember when we said above, “A firewall is designed to keep intruders from getting into your internal network.” That is absolutely true. The problem is that firewalls must also pass data traffic. At the traffic flow—or network—level, all communications tend to look the same. Consider, for example, the standard TCP/IP session that consists of the three-way handshaking process, transfer of data and then the session teardown. Many modern firewalls can be configured to disallow session initiation from one or another side of the network boundary layer. Some firewalls can also detect and drop (or reject/deny) malicious attempts to send “mid-session” TCP/IP frames into a network from the outside. (This technique can be used to help map the resources that are available inside your network.) Other firewall infrastructures may sometimes include programs called “proxies” that accept traffic destined for “the other side of the firewall” and examine the higher-level details of specific application communications and then either pass valid traffic along to the intended destination or drop (or reject/deny) malicious or otherwise inappropriate activity. Risk-conscious companies are installing systems that are able to identify a wide range of malicious activities. The systems react by initiating actions that will help employees effectively deal with the threat. Today’s firewall systems protect sites from vulnerabilities in the TCP/IP protocol suite. They are also able to integrate capabilities that can not only provide access control on TCP/IP packets, but also filter the content of traffic entering or leaving the enterprise. Some examples of firewall vendors include Check Point, Cisco and SonicWALL. Firewall systems require expert knowledge to implement and configure. Most security certification programs, as well as those offered specifically by firewall vendors, include training designed to acquire skills to deploy firewalls successfully. Each organization must defend its perimeter—its connections with the outside world. Firewall systems are the first line of defense. 
The design of the firewall system architecture, the selection of the firewall solution that meets your enterprise requirements and the configuration and management of the system will be critical to “close and lock” entry and exit points. Security is only as strong as the weakest link—firewall systems can help make your enterprise security architecture a lot more formidable at the perimeter. Uday O. Ali Pabrai, CEO of ecfirst.com, created the CIW program and is the co-creator of the Security Certified Program (www.securitycertified.net). Pabrai is also vice-chair of CompTIA’s Security+ and i-Net+ programs and recently launched the HIPAA Academy. E-mail him at email@example.com.
<urn:uuid:5ec852b5-7b6d-4fe4-93c1-9179e05340a6>
CC-MAIN-2017-04
http://certmag.com/enterprise-perimeter-security-and-firewall-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930395
1,521
3.03125
3
Playtime: Learning IT Through Gaming Whether they are old standbys such as chess, poker or UNO, timeless games such as Trivial Pursuit and Monopoly, video game classics such as “Pitfall!,” “Donkey Kong” and “Dragon Warrior,” or modern video games such as “The Sims,” “Grand Theft Auto” and “Final Fantasy,” games are a dominant source of entertainment for people of all ages. Although the entertainment factor is the main draw for playing games, they can offer players a good deal of education, as well. For example, Monopoly teaches individuals at an early age how to manage money and acquire properties. “The Sims” teaches individuals to fend for themselves, as they are required to shelter, clothe, nourish, protect and entertain virtual people in a simulated, real-world environment. Adult-education providers, however, had not accepted “learning games” until recently. In fact, the budding professionals entering today’s business world are driving this acceptance of gaming solutions as a viable tool for adult learners to practice their skills and increase their competence as professionals. As the first generation bred on personal computers, the Internet and sophisticated video games, Generation Y favors alternative learning methods such as gaming solutions simply because they mirror their upbringing. “From our perspective, clients are starting to recognize that there is this shift in the demographics of the worker base, and that shift is driven in part by the fact that all of these workers — workers under 35 — grew up playing video games,” said Jim Wexler, executive vice president of marketing at BrandGames, a provider of custom-developed learning games and simulations. “This shift has in part led to a change in the way they need to be spoken to, messaged and trained. This dividing line of workers under 35 who have spent the last 10 to 15 years in their basements playing on their PlayStations, PCs and Game Boys think differently. The old passive ways and the old ways of training them aren’t as effective anymore.” David Serdynski, director of e-learning at Root Learning, a provider of education, communications and change management solutions wrapped in nontraditional delivery methods, agrees. “Older workers are accustomed to an environment where an instructor pushes information out to them, whereas the younger generation has grown up with gaming, grown up with the Internet and are attuned to pulling out short nuggets of information when they want it, and when they need it,” he said. For most educators, however, the acceptance of gaming solutions for practice and learning has been slow in coming because of cost and lack of awareness. “Money is the primary driver why many schools and corporations have not adopted such complex solutions, but it is not the only reason,” Serdynski said. “Awareness is really another key factor. These more advanced simulations or gamelike simulations are relatively new — only within the last 12 to 18 months have these solutions really taken off. There’s been a lot of buzz about it, but when you start to talk about the dollars that are involved with developing a truly complex Xbox-like game, the conversation tends to turn to, ‘What can we do for our $300,000 budget?’” In information technology, the exercise of gaming solutions has been just as slow — the Cisco Systems CCNA Prep Center is the only certification vendor to offer such learning solutions to date. 
Cisco Manager of Learning and Development Christine Yoshida said her company continuously aims to increase its learning offerings and preparation options as a way to not only appeal to the learners of Generation Y, but to make learning fun. “We are trying to appeal to learners of a younger generation who have grown up playing learning games on their computers — you know that’s how they learned to read or to perform math equations,” Yoshida said, “so there’s an expectation that learning should be fun for these younger folks, and those are the ones we are trying to appeal to.” The CCNA Prep Center, which is a Cisco Internet portal designed to assist networking professionals preparing for the CCNA certification, offers more than seven different learning games — some more conceptual than technical. Yoshida said the games that coincide with the CCNA certification particularly target the areas where certification candidates tend to struggle most. “Our first games really focused on building awareness of a particular technology solution rather than teaching a skill,” she said. “The Cisco ‘IPC Rockin’ Retailer’ game is an example of a game whereby playing the game, you start to realize how a Cisco voice solution could really improve efficiencies within a store or business.” Don Field, director of certifications, Cisco, cited another game that has served a functional purpose. “The Binary Game, as an example, is one that we created specifically in order to teach a skill,” Field said. “Many of the learning offerings that we have in the CCNA Prep Center are more about reinforcing a skill or giving practice in a skill. The Binary Game is a little bit different because it actually teaches skills in understanding how binary numbers work. “We have another game that is more oriented toward practice called the ‘Certification Multi-Player Challenge Game: CCNA.’ During this game, users are being hit with questions, similar to what they would see on our exams. The neat thing about this game is users can actually play on the Internet with somebody else.” Although the Cisco family of games isn’t as flashy and high-tech as some of today’s high-tech video games such as “The Elder Scrolls IV: Oblivion,” the games’ sole purpose is to enhance certification candidates’ skills in a fun, interactive way. “For education in configuring different devices, entering commands and navigating through menus, online games or computer-enabled games really lend themselves well to that because you are on a device, and you can simulate the experience with the software,” Yoshida said. “So when you are trying to study for a timed exam that includes a simulation, you are better prepared, and you won’t run out of time. It is so helpful to have tools to practice at home and to have a tool that makes it fun because it can be pretty dry. It helps people prepare, and it gives them more options to learn and practice. It can also motivate people to do their practicing to be successful in the exam.” Wexler agrees gaming solutions are excellent for technical-type training and practice because like Yoshida, he said these areas of study are not always very interesting. “Certification can be very dry — it’s often a ‘check a box’ or Q&A scenario, which doesn’t necessarily fit with today’s up-and-coming professionals,” Wexler said. “So how do we make this highly multitasking, intelligent and sophisticated worker pay attention when it really matters? Games are one answer — they speak their language, they’re relevant, entertaining and fun. 
For that reason, learners are likely to participate, be focused and find the experience worthwhile. That is the big benefit of games.” Serdynski said gamelike simulations allow people to comprehensively apply their knowledge and practice their skills in a safe environment. “Users can practice over and over again without ever having to have a conversation or making a sales call to a customer, for example,” he said. “So, essentially, users practice without ever having a negative experience with a real customer. During such simulations or games, it’s the repetition and targeted feedback that allow them to become successful in executing those skills before they ever hit the marketplace.” Although gaming solutions are thought to be an excellent way to practice and further a professional’s knowledge and skills, Yoshida, Field, Wexler and Serdynski said they are best used in conjunction to other training modalities because not everyone
<urn:uuid:38251ec3-ff9b-4c7f-9b31-2ab04f6e581e>
CC-MAIN-2017-04
http://certmag.com/playtime-learning-it-through-gaming/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963243
1,706
2.640625
3
Green K.E.,Bristol Zoological Society | Daniel B.M.,Durrell Wildlife Conservation Trust | Lloyd S.P.,Imperial College London | Said I.,Dahari | And 5 more authors. Bird Conservation International | Year: 2015 Although birds are among the best studied taxa, many of the globally threatened species lack the information required to fully assess their conservation status and needs. One such species is the Anjouan Scops Owl Otus capnodes which was presumed extinct until its rediscovery to science in 1992. Based on the limited extent and decline of the moist forests in the highlands of Anjouan in the Comoro Islands, a population size of only 100-200 pairs was estimated and the species was classified as 'Critically Endangered'. The current study is the first comprehensive survey ever conducted on this species, and aimed to establish the current distribution and population size. Point counts with distance sampling were conducted across the agroforestry and forest zones of Anjouan in both a dry and wet season. A niche suitability model predicted the species distribution to be wider than expected with owls observed as low as 300 m altitude and in highly modified agroforestry habitats. However, the encounter rate in natural relatively undisturbed forest was significantly greater than in other habitats. The wider than expected geographic range of O. capnodes supports a possible downlisting of this species on the IUCN Red List to 'Endangered'. Population size was found to be far greater than previously thought, at approximately 3,450 individual owls in the dry season and 5,450 in the wet season. These results show the importance of investing in robust surveys of poorly known and cryptic bird species, and provide up to date and important information for landscape scale conservation planning in the Comoros Islands. Copyright © BirdLife International 2014. Source
<urn:uuid:e4c81fc4-b55a-47e2-9fff-aaeeec9f353a>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/dahari-2013597/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00115-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933541
387
3.484375
3
With the launch of NASA's lunar mission tonight, the space agency is taking a big step in an effort to create an outer space Internet. "This is akin to the fact that you have a fiber-optic connection into your house and you get faster downloads. We want to get faster downloads in space," said Don Boroson, principal investigator for NASA's lunar mission and a Laboratory Fellow at MIT's Lincoln Laboratory. "We're told by scientists that they have cameras to send to Mars, but they're stuck because radio systems are not capable of sending much of the data and images collected there back to Earth. With higher download speeds they could send all of their data back. They could do much more with more downloads." At 11:27 p.m. ET, NASA is scheduled to launch its Lunar Atmosphere and Dust Environment Explorer (LADEE) observatory from the Wallops Flight Facility on Wallops Island, Va. NASA will provide live coverage and commentary on the LADEE launch beginning at 9:30 p.m. ET tonight on NASA TV. The robotic space probe's main mission is to study the moon's atmosphere. However, it also has another mission. About a month after launch, the spacecraft will begin a limited test of a high-data-rate laser communication system. If that system works as planned, similar systems are expected to be used to speed up future satellite communications, as well as deep space communications with robots and human exploration crews. Don Cornwell, Lunar Laser Communications Mission Manager at NASA's Goddard Space Flight Center, told Computerworld that this will be the space agency's first laser communications test. In 2017, NASA is expected to launch a Laser Communications Relay Demonstration, which is expected to run tests for two to five years. Using a laser for communications, instead of radio systems, would enable robots -- similar to the Mars rover Curiosity -- as well as astronauts to send and receive far greater data loads, whether they're in orbit around Earth, on the moon or a distant asteroid. The two-way laser communications system can deliver six times more data with 25% less power than what can be done with the best radio systems, according to Cornwell. He also said that laser communications would use devices that weigh half of what radio devices on rockets, rovers and spacecraft weigh today. When scientists are sending spacecraft into space, weight is a critical factor. "When you send satellite communicationss up in space, every pound really counts," said Boroson. "You could send a mission very far away but the size of the radio [on the rocket] grows and grows. This is a fraction of the weight... Every ounce counts. The fact that we could be smaller and deliver a much higher deliver rate is very important." And space exploration is largely about the data. Rovers and astronauts are expected to take measurements, shoot images and video of distant planets and asteroids. However, if they can't get that data back to scientists on Earth, it's scientifically limiting the entire mission. "Right now, there's often a bottleneck with the radio system," said Cornwell. "Scientists have to pick and choose which images and data they want because it's not optimal to send it all back. With laser systems, they shouldn't have to choose. And as NASA goes further and further out in space, we'd like to... send more of that data back." He added that the large pipe that laser communications give them will become increasingly important as explorations travel farther from Earth. 
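A rough, back-of-the-envelope calculation shows what that bigger pipe means in practice. The figures in the Python sketch below are illustrative assumptions, not published link budgets; only the six-to-one ratio comes from the article.

# Compare downlink times for one science data set over radio vs. laser.
# The absolute rates and data volume are assumptions chosen for illustration.

radio_rate_mbps = 6.0                    # assumed radio downlink, megabits per second
laser_rate_mbps = radio_rate_mbps * 6    # article: lasers carry about six times the data
data_volume_gb = 500.0                   # assumed size of a science data set, gigabytes

def hours_to_downlink(volume_gb, rate_mbps):
    bits = volume_gb * 8e9               # gigabytes -> bits
    return bits / (rate_mbps * 1e6) / 3600.0

print(f"radio link: {hours_to_downlink(data_volume_gb, radio_rate_mbps):6.0f} hours")
print(f"laser link: {hours_to_downlink(data_volume_gb, laser_rate_mbps):6.0f} hours")

Under those assumptions the same data set takes roughly 185 hours over radio but about 31 hours over the laser link — the difference between picking and choosing which images to send home and sending all of them.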
Cornwell said that some commercial operations are looking into testing laser communications in an effort to create an Earth-orbiting network of satellites that would speed data transmissions using lasers. "We're really in the formulation phase," he said. This article, NASA's lunar mission could lead to Internet in space, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com. Read more about networking in Computerworld's Networking Topic Center. This story, "NASA's lunar mission could lead to Internet in space" was originally published by Computerworld.
<urn:uuid:c07c177f-ddb5-42e3-baab-56630103cc8b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2169707/smb/nasa--39-s-lunar-mission-could-lead-to-internet-in-space.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00079-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954796
896
3.265625
3
Keeping Accurate Time on Linux
The network time protocol can ensure accurate, synchronized time across your entire network. Carla Schroder explains how to set up a local time server on Linux and how to configure it so that every PC on your network synchronizes with the server.
Keeping accurate, synchronized time across your network is important for all sorts of reasons: for accurate time stamps in logs, for ensuring that processes run on time, and also for applications that depend on keeping good time. Fortunately, it doesn't have to be a difficult process to manage, as ntpd, the NTP (network time protocol) daemon, can do all the work for you. Typically, with ntpd you can "set it and forget it." The hard part is wading through the documentation and finding the bits that tell you how to implement and configure it, and what programs and configuration files you'll need. You know the old cliché about asking the time and being told how to build a watch? I think the NTP project may have inspired it. In this article we'll cover how to set up a local time server on a LAN, without discussing the protocol itself at all, even though it is very cool. Instead, we shall stay focused on telling the time, not on building a watch.
Most Linux distributions include a motley collection of time and date utilities: hwclock.sh, date, 822-date, tzselect, tzsetup, vcstime, uptime, zdump, ddate, rdate, ctime, and doubtless several more. These tools have all sorts of odd, specialized functions, and are fun to play with on occasion. In the olden days we kept time with hwclock.sh, rdate, or ntpdate. They ran at boot or were put in cron jobs for periodic updating. ntpd replaces hwclock.sh, rdate, and ntpdate. I recommend disabling any of these that are set to run automatically, whether from init or cron, and instead let ntpd be your sole timekeeper. With one exception — for ntpdate, don't delete those init scripts; save them.
After installation, all you need to do is:
- Add some public time servers to /etc/ntp.conf
- Set your time zone. Make a symlink from /etc/localtime to the appropriate file in /usr/share/zoneinfo
- Make sure that UDP port 123 is open through your firewall
- Run ntpdate to set the system time
- Start up ntpd
Let's take these steps one at a time.
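Before walking through those steps, it can help to see what an NTP exchange actually looks like on UDP port 123. The following Python snippet is a bare-bones SNTP query — a teaching aid only, not a substitute for ntpd, since it grabs a single timestamp and does none of the ongoing drift correction the daemon performs. The server name is just an example; substitute one of the public servers you put in /etc/ntp.conf.

import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"      # example public server
NTP_TO_UNIX = 2208988800         # seconds between the 1900 NTP epoch and the 1970 Unix epoch

def sntp_time(server=NTP_SERVER):
    # 48-byte client request: LI=0, VN=3, Mode=3 packed into the first byte (0x1b).
    request = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(512)
    # The Transmit Timestamp starts at byte 40: 32-bit seconds plus a 32-bit fraction.
    seconds, fraction = struct.unpack("!II", reply[40:48])
    return seconds - NTP_TO_UNIX + fraction / 2**32

if __name__ == "__main__":
    server_time = sntp_time()
    print("server:", time.ctime(server_time))
    print("local :", time.ctime())
    print("offset: %+.3f seconds" % (server_time - time.time()))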
<urn:uuid:b49a9a7e-1138-4456-addc-2ec1bd87fe24>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsysm/article.php/3302411/Keeping-Accurate-Time-on-Linux.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00565-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922152
556
2.890625
3
Ly S.,Seoul National University of Science and Technology | Kim N.J.,Seoul National University of Science and Technology | Youn M.,Advanced Scientific Research Group | Kim Y.,Advanced Scientific Research Group | And 3 more authors. Toxicological Research | Year: 2013 A method of detecting lead was developed using square wave anodic stripping voltammetry (SWASV) with a DNA-carbon nanotube paste electrode (CNTPE). The results indicated a sensitive oxidation peak current of lead on the DNA-CNTPE. The curves were obtained within a concentration range of 50 ng/L to 20 mg/L, with preconcentration times of 100, 200, and 400 sec for concentrations at the mg/L, μg/L, and ng/L levels, respectively. The observed relative standard deviation was 0.101% (n = 12) at a lead concentration of 30.0 μg/L under optimum conditions. The low detection limit (S/N) was pegged at 8 ng/L (2.6 × 10⁻⁸ M). Results showed that the developed method can be used for real-time in vivo assays without requiring any pretreatment, and for pharmaceutical samples, food samples, and other materials requiring water-source contamination analysis. Source
<urn:uuid:d7e05e39-ac0a-406e-88eb-cf9c6ae6dba7>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/advanced-scientific-research-group-1765281/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.883328
269
2.53125
3
It was only a matter of time before it happened, but a group of researchers has cracked 768-bit RSA encryption. An international team of cryptographers, scientists and mathematicians spent two-and-a-half years with hundreds of computers, and finally broke the encryption. Just the first step of the computation took the equivalent of 1,500 years on one core of an x86 processor, resulting in 5 terabytes of data. The explanation of how the encryption was broken would give the most technical of us a headache, but the implications should be considered. You would be hard pressed to find 768-bit encryption in use today, but 1024-bit encryption is widely used. So how long before 1024-bit encryption is broken? That timeline is up for debate. While the effort will be 1,000 times more difficult than for 768-bit, some believe it can be broken within ten years. Entrust SSL certificates are issued from 2048
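To see why factoring the modulus finishes RSA off, here is a deliberately tiny Python sketch. The toy key below is absurdly small — real keys are hundreds of digits long, which is the whole point — but the steps are the same ones the researchers carried out at 768-bit scale with far more sophisticated mathematics (the general number field sieve) and hardware.

# Breaking a toy RSA key by brute-force factoring. Only the tiny size of the
# numbers makes this feasible; it is an illustration, not an attack tool.

def factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("no factor found")

# A textbook-sized public key (n, e) that an attacker can see.
n, e = 3233, 17                     # n = 53 * 61
ciphertext = pow(42, e, n)          # someone encrypted the message 42

# The attack: factor n, rebuild the private exponent, decrypt.
p, q = factor(n)
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
print(factor(n), d, pow(ciphertext, d, n))   # -> (53, 61) 2753 42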
<urn:uuid:e68739ae-032a-4bb9-87bf-9638237b16f3>
CC-MAIN-2017-04
https://www.entrust.com/768-bit-rsa-encryption-finally-broken/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00409-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945162
189
3.234375
3
In Word 2010 (I think 2007 did the same thing), the spacing between words is different depending on the words and how they fit together on the line. I assume that Word is automatically trying to fit as much information on one line as possible by adding or subtracting small amounts of space (smaller than what the spacebar can do) between words. I understand the purpose of this, but it is getting very distracting and confusing in my written pieces. For example, some words look almost as if they are one large word (e.g. 'teaching position' looks like 'teachingposition' because of the small amount of space between the two words) while others look as if a small word (like 'a' or 'I') could possibly fit between them (e.g. 'teaching [ ] position'). This creates a strange looking set of words across a line when the spacing is different between each word. I have looked up and attempted to execute line spacing and character spacing to no avail. I have no problem with single and double spacing nor the spacing between characters within the same word. I am only having an issue with the distance between the words themselves. I do not want Word automatically adjusting the spacing between words in order to make a "pretty" line of words. I would rather like a fixed rate at which words are spaced. As I said, I have looked this up for weeks trying different search queries and different combinations of terms to try and find what I am looking for. However, all I get is character and line spacing troubleshooting, which is not what my problem is. Does anyone have a good idea why Word does this (as it apparently does it by default) and how to turn it off? Edited by HamSandwich, 16 June 2011 - 06:54 PM.
<urn:uuid:f6109a86-9a15-4db9-ab66-247362f8014b>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/404267/microsoft-word-spacing-between-words/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00005-ip-10-171-10-70.ec2.internal.warc.gz
en
0.973435
370
2.546875
3
Apologies in advance, this is a bit of a connective blog entry – this is a big topic, and it needs some scene setting, basic understanding and several weeks' worth to get the most out of it.
We live in a connected world now – my other half was showing me a washing machine with a WiFi connection and an associated iPhone App that would allow you remote control of, and reporting about, your intimate garments' spin cycle ! I wonder if that is really necessary to be honest, as even if it has finished, knowing that while I'm in the office and the washing machine is at home is a complete waste of electrons.
The network, and the connected nature of things, is what allows us as penetration testers to attempt to compromise the security of a company without going anywhere near it. There are other aspects to full scale penetration testing as I've alluded to before – with social engineering and physical attack ( lock picking, not baseball bat ) parts of such a scope – but a majority of the work is computer and network based. To that end, a good understanding and working knowledge of networking is pretty much a job pre-requisite. So, rather than giving you a lesson myself, I'll give you a quick and dirty set of online references – this won't make you an expert by any stretch of the imagination, but hopefully it will get us through the rest of this section without too much head scratching.1
- The OSI Model
- Internet Protocol (IP)
- Transmission Control Protocol (TCP)
- User Datagram Protocol (UDP)
I would apologise for the laziness on my part, however I subscribe to Larry Wall's school of thought that it is a virtue – if someone else has done it well enough already, why spend time re-inventing the wheel. The corollary of that is, if you find that there isn't a good explanation of something in that set that you'd like to understand better – add a comment on the bottom of this post and we'll bring it up to scratch ( perhaps both here and at Wikipedia 😉 ).
So seeing as you all now fully understand TCP/IP packet structure and know your URG from your SYN … ( It's ok, I'm only joking. ) We are fortunate that in reality, we have some amazing tools available to us that include all of the low level things done for us already. I am going to profess a view though that, like forensics, you shouldn't rely on the output of a tool that you don't understand the inner workings of and that you couldn't reproduce and/or verify the results of at a binary level. There are plenty of PenTest ( and Forensic ) companies out there who get cheap, unqualified labour to run automated tools and then publish the results as gospel – occasionally with disastrous results – please, please, please don't add to them.
To that end, I'm going to introduce a few tools this week, and next time we are going to build a small lab and run a few scans and look at the network traffic and the results.
First off, our listening post, Wireshark (nee Ethereal). Wireshark is a network protocol analyser; given a promiscuous network port on a machine it will sit and listen to all traffic that it can see on its segment.2 I love Wireshark, and actually, as a general purpose network trouble shooting tool, it's pretty hard to beat. It can colour, track and decode flows across a wide range of protocols and applications, and best of all – it's free.
Secondly, our port scanner, NMap. Whilst, as they say on the BBC, other products exist, frankly I don't see any reason to use them.
NMap has been around for nearly as long as I have, with early editions out in 1997, and it has grown since then into one of the most comprehensive ( if not the most comprehensive ) tools of its type. There are graphical front ends and countless enhancements, and it is cross platform with clients for pretty much anything you might want to run it on, and it plugs into dozens of other PenTest tools ( MetaSploit and Nessus [ which we will get to later ] amongst them ).
For now, I'm going to leave it there I'm afraid; I am trying to keep this in bite-size chunks and if I go into any more detail today I'm really going to overrun. As a preview though, next time we are going to build a test lab using virtualisation, which we are going to continue to use for subsequent exercises, and we are going to run a range of port scans using NMap and see what we get back and what we can see in Wireshark while we do it. I'm also hoping to get some usage out of my new toy and see if we can't get some demo video tutorials available to go with the text content. I intend to make downloadable VMs that you can easily use on a number of platforms, so hopefully this won't be too painful an experience !
1. Above all other material in this area I would recommend, without hesitation, TCP/IP Illustrated: The Protocols v. 1. This is a phenomenally detailed book, that actually isn't that bad to read, and is an excellent reference moving forward. In fact, I now own two copies, as I've found out through writing this that it has been updated to cover IPv6 late last year – so I've put my money where my mouth is !
2. Where a network is broken down into sections or segments with routers and switches ( rather than hubs ) – traffic is actively filtered by the networking devices, restricting the amount that can be seen by a sniffing device – worth remembering if you are wondering why you can't see something and also if you are designing a secure network …
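As a small taste of what we will be automating with NMap in the lab next time, the snippet below is the simplest possible TCP "connect" scan, written in Python using nothing but the standard socket library. It is slow, noisy and trivially detectable — which is exactly why it will be interesting to watch in Wireshark — and the target address is a placeholder for a VM in your own lab: only ever scan machines you own or are explicitly authorised to test.

import socket

TARGET = "192.168.56.101"     # example lab VM address -- substitute your own
PORTS = [21, 22, 23, 25, 80, 110, 139, 443, 445, 3389]

def connect_scan(host, ports, timeout=1.0):
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        # connect_ex returns 0 only when the full three-way handshake succeeds.
        if sock.connect_ex((host, port)) == 0:
            open_ports.append(port)
        sock.close()
    return open_ports

if __name__ == "__main__":
    for port in connect_scan(TARGET, PORTS):
        print(f"{TARGET}:{port} is open")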
<urn:uuid:e7f60a5f-948d-4eff-b22f-4dcda8207e93>
CC-MAIN-2017-04
https://articles.forensicfocus.com/2012/07/17/introduction-to-penetration-testing-part-3a-active-reconnaissance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00126-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953495
1,260
2.515625
3
Optical fibers are made of extremely pure optical glass. We think of a glass window as transparent, but the thicker the glass gets, the less transparent it becomes due to impurities in the glass. However, the glass in an optical fiber has far fewer impurities than window-pane glass. One company's description of the quality of glass is as follows: if you were on top of an ocean that is miles of solid-core optical fiber glass, you could see the bottom clearly.
Understanding how fiber optics are made and how they function in everyday life is an interesting work of art combined with science. Optical fibers are fabricated from materials that transmit light, and are made from a bundle of very thin glass or plastic fibers enclosed in a tube. One end is at a source of light and the other end is at a camera lens, and the bundle is used to channel light and images around bends and corners. Each fiber has a highly transparent core of glass or plastic encircled by a covering called "cladding". Light is introduced from a source at one end of the fiber, and as the light travels through the tube, the cladding is there to keep it all inside. A bundle of fibers may be bent or twisted without distorting the image, as the cladding is designed to reflect the light back from the inside surface. This fiber optic light path can carry light over great distances, ranging from a few inches to over 100 miles.
There are two types of optical fiber: single-mode fibers and multi-mode fibers. Single-mode fibers have small cores (about 3.5 × 10⁻⁴ inches or 9 microns in diameter) and transmit infrared laser light (wavelength = 1,300 to 1,550 nanometers). Multi-mode fibers have larger cores (about 2.5 × 10⁻³ inches or 62.5 microns in diameter) and transmit infrared light (wavelength = 850 to 1,300 nm) from light-emitting diodes (LEDs).
Fiber optics are most commonly and widely used in communication systems. Fiber optic communication systems have a variety of features that make them superior to systems that use traditional copper cables: they offer a larger information-carrying capacity, they are not troubled by electrical interference, and they require fewer amplifiers than copper cable systems. Fiber optic communication systems are installed in large networks of fiber optic bundles all around the world and even under the oceans. Many fiber optic testers are available to provide you with the best fiber optic equipment.
In fiber optic communication systems, lasers are used to transmit messages in numeric code by flashing on and off at high speeds. This code can constitute a voice or an electronic file containing text, numbers, or illustrations, all carried over fiber. The light from many lasers can be added together onto a single fiber, enabling thousands of streams of data to pass through a single fiber optic cable at one time. The data travels through the fiber to interpreting devices that convert the messages back into the form of their original signals.
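The "cladding keeps the light inside" behaviour is total internal reflection, and the key numbers fall out of two short formulas. The Python sketch below uses typical, assumed refractive indices for a silica core and its cladding — illustrative values, not a manufacturer's specification — to compute the critical angle and the numerical aperture, which describes the cone of light a fiber will accept.

import math

# Assumed refractive indices for a typical multi-mode silica fiber.
n_core = 1.48
n_clad = 1.46

# Rays hitting the core/cladding boundary at more than the critical angle
# (measured from the normal) are totally internally reflected and stay in the core.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: the sine of the half-angle of the acceptance cone in air.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
acceptance_half_angle = math.degrees(math.asin(numerical_aperture))

print(f"critical angle        : {critical_angle:5.1f} degrees")
print(f"numerical aperture    : {numerical_aperture:5.3f}")
print(f"acceptance half-angle : {acceptance_half_angle:5.1f} degrees")

With these values, any ray striking the boundary at more than about 81 degrees from the normal is trapped, and the fiber accepts light arriving within roughly a 14-degree half-angle cone.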
<urn:uuid:e57a52a8-0609-4c7b-ba14-bb7d947418fc>
CC-MAIN-2017-04
http://www.fs.com/blog/a-simple-introduction-of-fiber-optic-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00052-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928797
627
3.828125
4
In the US, it’s been a very long election cycle. Don’t worry. I’m not going to get political here. However, if we put aside all the issues of the election, there is one very interesting thing to think about from an information security centric point of view. If you think about what an election is, you can see that it’s largely about data quality. Millions of people create a few pieces of data each. All that data flows through a multi-layered system and ends up being part of a very critical dataset that drives very real world matters. Is there any place where the security and integrity of information is more important? Another thing I am not going to do is dive into the possibility of tampering with the election by hackers. This was a very real threat in the minds of many. I will touch on why it seems remote at best to me in a little bit. Right off the top, though, it’s easy to say that the notion of tampering with the actual voting machines seems slim. There were reports of machines that did do bad things. But the common theme in each case was that the problem was caught immediately by the voter and dealt with by the people staffing the polling location. In other words, the humans were able to make up for the flaws in the machines. The source of the flaws – malfunction or malfeasance – are irrelevant from the security context if their effects are canceled out. The broader implications are for consideration elsewhere. In fact, the ability of the humans in the system to make up for problems elsewhere is part of the larger point. The mechanisms through which a US election happens are large, fault tolerant and very resilient. Hearing the claims made from many quarters both before and after the election took place about things being rigged made me curious. I looked into it and was impressed with what I found. There are many local variations. But generally there are several local, regional, and state level layers of checking and double checking of election results. The folks who do this are mostly volunteers. And it seems there are many efforts made in every place to ensure a balance of political parties are represented. Perhaps most important, there appear to be very clear and prescriptive policies that tell people how to handle contingencies. This is a policy driven, heavily audited, and thoroughly monitored process. Isn’t that the heart of what good information security is supposed to be? Cybersecurity Can Learn From the Security Practices of Elections There are several standard practices election monitors and polling staffers use that should be interesting to security folks. The most obvious is the use of multiple checksums that are compared to ensure validity. In the world of elections, this means using a combination of both paper and electronic results. Both are tallied and then compared to ensure they agree. In privileged identity management programs that my company develops, we always advise that folks use our own auditing as one source of truth and that they also watch the system level monitoring for another, as a comparison. If we say that Bob took root password at 10am, then the system says root logged in at 10:01am, it seems like things are OK. But if there were a record of a root login without a corresponding check out in our system, that may spell trouble. Another practice of election protection is to make people interact with several people during the process of casting their vote. 
You must sign in, talk to the people at the machine, and perhaps go through more steps depending on your area's practices. Again, the similar mechanism of having people use a request-and-grant model in security can go a long way. If you've got bad intent in mind, it's harder to hide it when you have to ask just before you commit the crime. Also, people are surprisingly good at sniffing out when someone is up to no good. Most people can have a gut feeling about that. And, similar to the Israeli airport security that bucks the better-scanner trend in favor of better people, the people in your organization will also likely get better at spotting when a request just doesn't seem right. The last thing to think about is something we all already know. When there are issues, the thing that makes life easier for election officials is that there are clear policies that dictate what to do. Of course, there are places where judgment is called for (which hole is punched?). But for most decisions, there is a rubric that ensures fair process. This is the gold standard for security. If we can have policy that tells people what to do to keep things secure, then we remove the most unstable elements of our equations. Elections have had 200 years or so to get this right. Most organizations haven't had quite that much time to consider cybersecurity. If we keep that policy-driven process as a goal, though, we are on the right track to having security that we can trust with the most important information we all have.
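The "two checksums" idea — comparing the privileged-password check-out log against the host's own login records — is straightforward to automate. Here is a small, hypothetical Python sketch; the record layouts and field names are invented for illustration and are not taken from any real product's audit format.

from datetime import datetime, timedelta

# Hypothetical audit sources: check-outs granted by the password vault,
# and privileged logins reported by the host itself.
checkouts = [
    {"user": "bob", "account": "root", "time": datetime(2016, 11, 14, 10, 0)},
]
logins = [
    {"account": "root", "time": datetime(2016, 11, 14, 10, 1)},
    {"account": "root", "time": datetime(2016, 11, 14, 2, 37)},   # no matching check-out
]

def unexplained_logins(checkouts, logins, window=timedelta(hours=1)):
    """Flag privileged logins that no recent check-out accounts for."""
    flagged = []
    for login in logins:
        covered = any(
            c["account"] == login["account"]
            and timedelta(0) <= login["time"] - c["time"] <= window
            for c in checkouts
        )
        if not covered:
            flagged.append(login)
    return flagged

for event in unexplained_logins(checkouts, logins):
    print("investigate:", event["account"], "login at", event["time"])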
<urn:uuid:e10ce10b-fd4f-40d9-b13f-052d8d18e4e6>
CC-MAIN-2017-04
http://www.identityweek.com/cybersecurity-can-learn-from-elections/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975609
1,031
2.578125
3
Kaspersky Lab, a leading developer of secure content management solutions, has published an article entitled "The botnet busines" by senior virus analyst Vitaly Kamlyuk. The article describes how botnets (also known as zombie networks), which have evolved into one of the most serious information security threats, are created and operated. The article is the first in a series of publications on the botnet problem. Botnets have been in existence for about 10 years; experts have been warning the public about the threat posed by botnets for more or less the same period. Nevertheless, the scale of the problem caused by botnets is still underrated. A botnet is a network uniting computers which are infected with a malicious program that enables cybercriminals to remotely control the infected machines. Malicious programs that are designed specifically to create botnets are called bots. The owner of a botnet can control the computers which make up the network from anywhere in the world – from another city, country or even another continent. Importantly, the Internet is structured in such a way that a botnet can be run anonymously. The owner of an infected machine usually does not even suspect that the computer is being used by cybercriminals. Most zombie machines are home users’ PCs. Botnets can be used by cybercriminals for conducting a wide range of malicious activities, from sending spam and engaging in blackmail and phishing to attacking government networks. Today, cybercriminals need neither specialized knowledge nor large amounts of money to get access to a botnet. The underground botnet industry provides everyone who wants to use a botnet with everything they need, including software, ready-to-use zombie networks and anonymous hosting services, at low prices. Today, botnets are among the main sources of illegal income on the Internet and are powerful weapons in the hands of cybercriminals. It is totally unrealistic to expect that criminals will relinquish such an effective tool; security experts, anticipating the continued development of botnet technologies, view the future with some trepidation. It's not only cybercriminals who have an interest in creating international botnets. Such botnets can be used by governments or individuals to exert political pressure on opponents. Networks which unite the resources of tens or hundreds of thousands or even millions of infected computers, pose a potentially very serious threat. This potential has not yet been fully exploited. Virtually all this cyber power stems from infected home computers, which make up the overwhelming majority of zombie machines exploited by cybercriminals. The complete article can be found on Viruslist.com. The Executive Summary is available on the Kaspersky Lab corporate website.
<urn:uuid:23cd27d9-19d4-4e35-9e20-a12489d2b74e>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2008/Kaspersky_Lab_releases_a_new_article_The_botnet_business_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948453
544
3.359375
3
Linda Adams, secretary for environmental protection of the California Environmental Protection Agency, spoke at the Green California Summit and Exposition on Tuesday about increasing green development during tough economic times. "'Green' is a much-used term these days," Adams said. "It's a frame of mind." She said California is the first state to actively engage the United Nations on the topic of sustainability, and that the state's goal is to reduce its greenhouse gas emissions to the levels they were in 1990 by 2020. Dan Kammen, director of the University of California, Berkeley Renewable and Appropriate Energy Laboratory, also spoke about how the country needs diverse, measurable cases of low-carbon development. He said 20 percent of California's energy is used to move water, and although there's not carbon embedded in the water, carbon's used in the processes of moving and cleaning water. Processes like these must be re-examined. He also said the country needs to unlearn its old thinking of things like solar energy. Technology is allowing traditional thinking to be undone. Kammen's example was that solar panels no longer must be linked together to gain the most efficiency. Traditionally when people think of installing solar panels on their roofs, they imagine them side by side. However, through the use of microinverters -- which attach to the back of each solar panel and convert the panel's power current -- solar panels work individually to capture power and can be more efficient.
<urn:uuid:a85fe69c-d4de-4229-9825-a3964f0ff095>
CC-MAIN-2017-04
http://www.govtech.com/technology/Green-Summit-Features-California.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963209
298
2.953125
3
The idea of building a robotic manufacturing facility in space might have been in the realm of a Star Wars, Star Trek or other science fiction story, but like some of the technologies in those tales, reality may soon imitate art. First off, you may recall that NASA is looking for an asteroid weighing about 500 tons that could be moved to within the moon's orbit so astronauts can examine it as early as 2021. [MORE: The sizzling world of asteroids] Because asteroids are loaded with minerals that are rare on Earth, near-Earth asteroids and the asteroid belt could become the mining centers for remotely operated excavators and processing machinery. "In 20 years, an industry barely imagined now could be sending refined materials, rare metals and even free, clean energy to the Earth from asteroids and other bodies," according to NASA scientists in a recently published paper entitled "Affordable, Rapid Bootstrapping of the Space Industry and Solar System Civilization." The scientists say two fundamental developments make this prospect possible: robotics, and the discovery that the fundamental elements needed to make plastic, rubber and metals exist throughout space. Another critical technology is coming in at just the right time: manufacturing in the form of 3D printers that can turn out individual pieces that can be assembled into ever-more-complex machinery and increasingly capable robots. "Now that we know we can get carbon in space, the basic elements that we need for industry are all within reach," said one of the paper's authors, NASA physicist Phil Metzger. "That was game-changing for us. The asteroid belt has a billion times more platinum than is found on Earth. There is literally a billion times the metal that is on the Earth, and all the water you could ever need. The idea is you start with resources out of Earth's gravity well in the vicinity of the Earth. But what we argued is that you can establish industry in space for a surprisingly low cost, much less than anybody previously thought." Metzger said that when the scientists wrote the paper they were focused on the moon as a source of near-Earth resources, but near-Earth asteroids work equally well and offer several additional advantages. "It takes less fuel to bring resources away from the lower gravity of an asteroid, and since the ultimate goal is to move the industry to the asteroid main belt, starting with asteroids first will help develop the correct technologies," Metzger said. A near-Earth asteroid or other nearby body presumably will contain enough material to allow a robotic system to mine the materials and refine them into usable metal or other substances. Those materials would be formed into pieces and assembled into another robot system that would itself build similar models and advance the design. "The first generation only makes the simplest materials, it can include metal and therefore you can make structure out of metal and then you can send robots that will attach electronics and wiring onto the metal," Metzger said. "So by making the easiest thing, you've reduced the largest amount of mass that you have to launch." Metzger said the first generation of machinery would be akin to the simple mechanical devices of the 1700s, with each new generation advancing quickly to the modern vanguard of abilities. They would start with gas production and the creation of solar cells, vital for providing a power source.
Each new robot could add improvements to each successive model and quickly advance the mining and manufacturing capabilities. It would not take long for the miners to produce more material than they need for themselves, and they could start shipping precious metals back to Earth, riding on heat shields made of the leftover soil that doesn't contain any precious material. Perhaps the most unusual aspect of the whole endeavor is that it would not take many launches from Earth to achieve, Metzger said. Launch costs, which now run at best $1,000 per pound, would be saved because robots building themselves in space from material gathered there wouldn't need anything produced by people. Very quickly, only the computer chips, electronics boards and wiring would need to come from Earth. "We took it through six generations of robotic development and you can achieve full closure and make everything in space," Metzger said. "We showed you can get it down to launching 12 tons of hardware, which is incredibly small." For comparison, that would be less than half the weight of the Apollo command and service modules flown on a moon mission. The operation, the scientists acknowledge, would take years to establish, but not as long as one might think. The payoff for Earth would be felt when the first shipments of materials began arriving from space. A sudden influx of rare metals, for instance, would drive down the price of those materials on Earth and allow a similarly drastic reduction in manufacturing costs for products made with the materials, Metzger stated. The article was published in the Journal of Aerospace Engineering.
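To put the quoted figures in rough perspective, the short Python calculation below combines the article's best-case launch price with its 12-ton bootstrap payload. Treating the "12 tons" as metric tons is my assumption — the article does not say which ton it means — so the result is an order-of-magnitude estimate, nothing more.

# Back-of-the-envelope launch cost for the bootstrap hardware.
LB_PER_KG = 2.20462
hardware_kg = 12 * 1000        # assumption: "12 tons" read as metric tons
cost_per_lb = 1000             # dollars, the article's best-case launch price

pounds = hardware_kg * LB_PER_KG
launch_cost = pounds * cost_per_lb
print(f"{pounds:,.0f} lb to orbit ~ ${launch_cost / 1e6:.0f} million in launch costs")

Roughly $26 million of launch cost under those assumptions — which is the sense in which the paper calls bootstrapping a space industry "surprisingly low cost."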
<urn:uuid:ef8b93c7-ecc3-43e5-9a5e-ec98e3e122ea>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224711/data-center/nasa--asteroid-based-manufacturing-not-science-fiction.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00108-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9671
988
4
4
In earlier posts I provided an overview of the various phases of the drug approval process. In a nutshell this consists of drug discovery, development, and testing. Across all of these phases various clinical trials are performed to test the drug's effects on people and how effective it is. These clinical trials, which are part of the overall drug development process, consist of three distinct types:
Phase I Clinical Development (Human Pharmacology) – Thirty days after a biopharmaceutical company has filed its IND, it may begin a small-scale Phase I clinical trial unless the FDA places a hold on the study. Phase I studies are used to evaluate pharmacokinetic parameters and tolerance, generally in healthy volunteers. These studies include initial single-dose studies, dose escalation and short-term repeated-dose studies.
Phase II Clinical Development (Therapeutic Exploratory) – Phase II clinical studies are small-scale trials to evaluate a drug's preliminary efficacy and side-effect profile in 100 to 250 patients. Additional safety and clinical pharmacology studies are also included in this category.
Phase III Clinical Development (Therapeutic Confirmatory) – Phase III studies are large-scale clinical trials for safety and efficacy in large patient populations. While Phase III studies are in progress, preparations are made for submitting the Biologics License Application (BLA) or the New Drug Application (NDA). BLAs are currently reviewed by the FDA's Center for Biologics Evaluation and Research (CBER); NDAs are reviewed by the Center for Drug Evaluation and Research (CDER).
While the logistics of all types of trials are very similar, Phase III trials are the most time-consuming and expensive of the three. These trials are meant to provide proof to the FDA of the drug's actual effectiveness and can require thousands of test subjects and take years to complete. Out of the entire drug development process, a significant portion of the expense is incurred in completing the Phase III tests.
Here is a brief synopsis of some of the issues and tasks involved in executing long-term Phase III clinical trials:
- Protocol design – creating the overall design of the trial as to patient profiles, drug dosage, administration, tracking of patients, data capture, managing adverse events or side effects reporting, and trial supply chain management
- Enrolling patients – in many cases thousands of patients are required to be involved in the trial to get the information required to obtain FDA approval
- Complex logistics – scheduling all of the patient visits and getting the test supplies and drug products to the research centers for administration to patients
- Geographically dispersed – most trials are held in multiple locales, with many companies now performing clinical trials overseas
- Expensive – as mentioned before, it can cost tens to hundreds of millions of dollars to complete a Phase III trial
- Time consuming – trials can run for years, especially for drugs to be taken continuously for chronic disease conditions
- Patient data security – ensuring the security and integrity of the patients' personal and health information
- Data access – providing appropriate and secure access to the data for the scientists, researchers and primary investigators
- Data management – all trials create large amounts of data, including Case Report Forms (CRFs) where the results of each patient interaction are recorded
- Regulatory compliance – ensuring that the supporting systems and processes meet both 21 CFR Part 11 and HIPAA regulatory compliance guidelines
To support large-scale clinical trials, the life science CIO has to deal with the challenges of provisioning and supporting the necessary hardware and software infrastructure. While there are a number of applications on the market for managing clinical trials, many life science companies are looking to cloud-based offerings to reduce the complexity along with the time and expense of performing these trials. Several companies are stepping into this space and providing cloud-based SaaS applications that can drastically cut the time and costs required to put into place the systems and processes needed to support the clinical trial process. Examples include Cmed with its eClinical system, ClinPlus with its CTM application and Clinical Systems with its Clinical Trials Management Software package. To alleviate concerns about putting patient data in the public cloud, many vendors are providing their applications via a private cloud where security, validation and data protection can be ensured and access can be limited to properly trained users as part of their Part 11 compliance efforts. These cloud-based clinical trial applications provide a number of advantages:
– No need to provision hardware and provide associated infrastructure
– Data security and disaster recovery are built in
– FDA compliance is a part of the overall environment
– Support and maintenance of the system is provided by the vendor
– Many of these systems are quickly configurable so that new trials and associated protocols can be quickly defined and made ready for use
– Centralized control of the entire environment
– Easier access for the sharing of data and results
By utilizing cloud-based applications to facilitate Phase III clinical trials, life science CIOs can drastically reduce both the costs and the time required to get new medications to market.
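Several of the compliance points above reduce to one mechanical habit: never overwrite clinical data in place, and always record who changed what, when and why. The fragment below is a deliberately simplified Python sketch of an append-only, audit-trailed case report form store. The class and field names are invented for illustration, and a real 21 CFR Part 11 implementation would add electronic signatures, access control and validation documentation on top of this idea.

from datetime import datetime, timezone

class AuditedCRF:
    """Append-only store for one patient's case report form entries."""

    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.history = []                  # every change is kept; nothing is overwritten

    def record(self, field, value, user, reason=""):
        self.history.append({
            "field": field,
            "value": value,
            "user": user,                  # who made the change
            "reason": reason,              # why (required for corrections)
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def current(self):
        """Latest value of each field, derived from the full audit trail."""
        state = {}
        for entry in self.history:
            state[entry["field"]] = entry["value"]
        return state

crf = AuditedCRF(patient_id="SITE01-0042")
crf.record("systolic_bp", 142, user="nurse.jlee")
crf.record("systolic_bp", 124, user="dr.patel", reason="transcription error corrected")
print(crf.current())          # latest values for review
print(len(crf.history))       # complete change history retained for auditors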
<urn:uuid:253c3841-ced9-40f5-b648-a5e4401dbb8a>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/08/24/managing_clinical_trials_in_the_cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00016-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935098
1,006
2.78125
3
There is a general interest in quantifying biology to help grow the economy and make healthcare affordable without causing much damage to the environment. This interest exists at both the political and the scientific level. From nutritional security to biofuel and beyond, all have roots in the quantification of biology: we need to understand the theory behind the biological events that happen around us, whether within a cell due to a pathogen or in the environment due to toxic industrial waste. To manage this range of biological events, we need to understand their cause-and-effect equation. This is the reason biology as a whole is embracing technology at an unprecedented rate. The main challenge in quantifying biology is that most of the biology problems are NP-hard; they need supercomputers to solve almost any problem. Therefore, biosciences research has always been the domain of computational elites who have access to large supercomputers. Though many algorithms have been perfected in the recent past, the voluminous data from an NGS (Next Generation Sequencer) and the increase in metadata have offset the benefits of these smarter algorithms. Newer, more efficient algorithms combined with high performance cloud computing open up the opportunity for smaller labs and under-privileged researchers and clinicians to engage in biology research and join the mainstream. The cloud also opens up the possibility of personalized medicine for small clinics that otherwise could not afford a supercomputer. New age genomics, and NGS technology in particular, pushes the realm of computing like never before. The challenges are immense, as there is a need for parallel and efficient algorithms and tools to handle the data tsunami that the NGS machines offload. For example, the latest HiSeq 2000 from Illumina Inc. is expected to generate about 600 gigabases to 1 terabase per run. This data deluge, coupled with the NP-hard nature of biological problems, forces computer scientists to innovate newer and better techniques for transferring, managing, processing, decoding and analyzing the NGS data haze to unearth meaningful insights that can be applied to improve the quality of life. In the last few decades Web and network-delivered services have changed people's lives. This technology has effectively "shrunk the world" and brought it into the pockets of individuals; it also helped technology in a different way – it moved the center stage of technology from giant technology companies to technologists and technology consumers. Today an underprivileged entrepreneur somewhere in the world can innovate and open a shop on the Web and be successful. Likewise, HPC (High Performance Computing) accelerated by the cloud will transform biotechnology and life-sciences research, disease prognosis, and disease therapeutics. Unlike other domains, in the life sciences almost everything is available in the open domain – most of the journals are open-access and free, even nicely catalogued in PubMed; almost all software is open-domain if not open source; even better, all experimental data are available for verification, download, and use. There are databases like NCBI (National Center for Biotechnology Information), HapMap, and SMD (Stanford Microarray Database) that archive data from genome to protein, microarray to microRNA. Anything anybody needs is available for free. The only component that was missing in this whole equation is the supercomputer.
The cloud bridges this gap – a researcher can now do almost anything on the cloud. The cloud not only addresses the CPU power needs of an NP-hard problem, but also addresses the hundreds of terabytes of storage a biology experiment might need. Though communication technology is not yet ready to transfer such large volumes of data online, the cloud vendors have perfected the transfer of data offline. High Performance Cloud Computing (HPCC) is poised to become the disruptive technology of the 21st century for the life sciences; cloud computing in particular will become an essential tool for biotechnology research, for farmers and clinicians alike – from high-yield crops to industrial enzymes to highly productive livestock and, finally, personalized medicine. HPCC solutions comprising Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) will play a pivotal role in translating life science into affordable applications. Cloud computing will be the highway that reduces the divide between the computational elite and the computational underprivileged. With cloud computing one can realize flexible, on-demand, and low-cost compute infrastructure whenever you want, wherever you want. Another trend that will emerge along these lines very soon is the outsourcing model in biotechnology. Today the computational elites have monopolized biotech research – they have large supercomputers with large teams of bioinformaticians. This will soon change – bioinformatics will graduate to computational quantitative biology, with many smart researchers offering their solutions in the cloud, mainly as SaaS. Cloud computing is of particular benefit to small and medium-sized research centers, farmers, and clinicians who wish to outsource their data-center infrastructure, or large centers who wish to get peak load capacity without incurring the higher cost of ownership of larger internal data centers. In both instances, service consumers will use what they need on the Internet and pay only for what they use. The next generation of life sciences, biotechnology, and healthcare applications will be a conglomeration of a gamut of tools including but not limited to systems biology, high performance cloud computing systems, computer algorithms, mathematics, statistics, biological networks, molecular interaction pathways, protein-enzyme simulations, etc. Each of these techniques has a pivotal role to play in deciphering the complex human–environment system and thereby providing enough insight to translate research into application, or science into discovery, be it personalized medicine, industrial bio-products, agri products or livestock systems. However, the question that we need to answer is how to make this future wellbeing system accessible, omnibus, and affordable for everybody. The answer is in the integration of engineering, technology and science. Here there is a unanimous choice for high performance cloud computing, as it enables and ignites affordable next generation genomics applications to reach the masses in the form of new therapies, drugs, better crops, a sustainable environment, and proactive and preventive medicine.
About the Author
Asoke Talukder is an author, professor, and a practicing computational geneticist. He worked in technology domains for companies like ICL, Fujitsu-ICIM, Microsoft, Oracle, Informix, Digital, Hewlett Packard, Sequoia, Northern Telecom, NEC, KredietBank, iGate, Cellnext, etc.
He Internet-enabled Microsoft PowerPoint, engineered the first 64-bit database (Informix), engineered Oracle Parallel Server for fault-tolerant computers, and developed many killer technologies and products. He set up the first X.25 network in India for the Department of Telecommunications (currently BSNL & MTNL), and the first Java Competency Centre in India. He engineered the Network Management System for the Queen's Award winning PDMX. He is the recipient of many international awards for innovation and professional excellence, including the ICIM Professional Excellence Award, ICL Excellence Award, IBM Solutions Excellence Award, Simagine GSMWorld Award, and All India Radio/Doordarshan Award. Asoke has been listed in "Who's Who in the World", "Who's Who in Science and Engineering", and "Outstanding Scientists of the 21st Century". He has authored many research articles, book chapters, and textbooks. Asoke holds an M.Sc in Physics with a biophysics major and a Ph.D in Computer Engineering. He was the DaimlerChrysler Chair Professor at IIIT-Bangalore, and is currently Adjunct Faculty at the Indian Institute of Information Technology & Management, Gwalior, the Department of Computer Engineering, NITK Surathkal, and the Department of Computer Science & Engineering, NIT Warangal. His current domain of expertise is computational genomics and statistical bioinformatics. Along with teaching, he is the founder of Geschickten Biosciences, a company in the domain of computational quantitative biology focusing on omic-sciences analytics, GenomicsCloud, and personalized/holistic medicine and wellbeing.

About Geschickten Biosciences

To solve the challenges posed by current genomics technology such as Next Generation Sequencing, Geschickten has designed GenomicsCloud, a novel cloud-based software-as-a-service application for managing, analyzing and visualizing NGS data. A simple yet powerful software engine for NGS data analytics, GenomicsCloud will make the power of a supercomputer accessible through a mobile device. One area that remains a concern in the cloud is data security and conformance to the regulatory requirements governing transfer of genomics data across geographical boundaries. To mitigate this challenge, Geschickten has added an additional layer to the cloud computing stack that addresses these security requirements, termed the cloud vendor layer. Cloud vendors will primarily be cloud resource aggregators, who will aggregate the services of many original cloud providers and offer them to the end biologist at an affordable price that conforms to the regulatory and taxation requirements of the end user and the geography. The cloud vendor will take care of data transfer, data security, data management, and charging. This layer will also address some of the concerns around multi-tenancy in the cloud. Geschickten Biosciences (www.geschickten.com) is a niche scientific intelligence company from Bangalore, India. The first computational quantitative biology company from India, Geschickten offers a wide range of products and scientific services to independent researchers, sequencing centers and industry, including but not limited to biotech, pharmaceutical, chemical, FMCG, and biofuel companies. As experts in NGS data analytics and microarray data analysis, Geschickten combines engineering, technology and science to translate research into discovery. Geschickten offers innovative technological solutions for agriculture research, animal biology, environmental science and human genetics.
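To make the outsourcing workflow described in this article concrete, here is a minimal, hypothetical sketch of how a small lab might push raw NGS reads to cloud object storage and queue an alignment job. It uses present-day AWS services (via the boto3 library) purely as an illustration; the bucket, queue, job definition, and wrapper script names are made up, and this is not a description of GenomicsCloud or any particular vendor's offering.

```python
# Sketch only: stage a FASTQ file in cloud object storage and queue an
# analysis job on a pay-per-use compute service. All resource names
# (bucket, queue, job definition, wrapper script) are hypothetical.
import boto3

s3 = boto3.client("s3")
batch = boto3.client("batch")

# 1. Stage the raw reads in object storage. (Very large runs are often
#    shipped to the provider on physical media instead, as noted above.)
s3.upload_file(
    "sample_R1.fastq.gz",                 # local file from the sequencer
    "my-lab-ngs-data",                    # hypothetical bucket
    "runs/run42/sample_R1.fastq.gz",
)

# 2. Submit an alignment job to a managed batch queue. The container image
#    behind "ngs-align:1" is assumed to hold the aligner plus a wrapper
#    script that downloads the reads, aligns them, and writes results back.
response = batch.submit_job(
    jobName="align-run42-sample",
    jobQueue="ngs-analysis-queue",
    jobDefinition="ngs-align:1",
    containerOverrides={
        "command": [
            "run_alignment.sh",
            "s3://my-lab-ngs-data/runs/run42/sample_R1.fastq.gz",
            "hg19",
        ]
    },
)
print("Submitted job:", response["jobId"])
```

The specific tools matter less than the economics: the lab pays only for the storage used and the compute hours the job consumes, which is exactly the IaaS/SaaS consumption model the article argues will open genomics to smaller groups.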
<urn:uuid:af558a35-52a9-44ff-8e8b-98ecdd1f46ea>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/04/11/boosting_biology_with_high_performance_clouds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00410-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918704
2,105
2.75
3
A brief background

Over the past decade, the password has been repeatedly declared defunct by industry leaders and pundits. During this same period, technology and networks have evolved, but one thing has remained the same: 99% of access to IT infrastructure is with a password. Despite strident proclamations of the demise of the password, passwords are nowhere near dead.

What you'll learn

This entertaining and informative white paper explains:
- Techniques hackers use to exploit passwords
- Human factors that weaken passwords
- Why passwords aren't the problem
- How to mitigate password risk
<urn:uuid:6dd028ca-c0b0-40c1-841b-916db5bc7f42>
CC-MAIN-2017-04
http://marketing.crossmatch.com/acton/media/6999/mitigate-password-risk-with-authentication-white-paper
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00346-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937871
119
2.765625
3
The security of a company's networks and devices is a common issue many managers and owners face. There is a near-constant stream of new threats brought to light, some more important than others. Most of the time these threats target software, but a recent one is hardware based and puts millions of systems at risk. At the end of January, numerous news and tech media outlets issued warnings about UPnP (Universal Plug and Play) enabled devices. This was treated as a big issue because of the widespread adoption of these devices and the fact that many of them have little to no security measures, which could open whole systems to attack. Many business owners and managers are wondering what exactly UPnP is and how it can open systems to attack.

UPnP is a protocol that allows networked devices like laptops, computers, Wi-Fi routers, and many modern mobile devices to search for and discover other devices connected to, or wanting to connect to, the same network. The protocol also allows these devices to connect to one another and share information, Internet connections and media. A good example of UPnP in use is your laptop. When you first connect your laptop to your router, you likely have to enter a password and maybe even the router's network name. Without UPnP you would have to find the network and enter the password each time you want to connect to the Internet. With UPnP, your laptop can automatically connect whenever it's in range.

Why is UPnP a security threat?

UPnP has been in use for the better part of seven years and has come to be found in nearly every device that connects to the Internet – pretty much everything. While it was written for devices in the home, e.g. Wi-Fi routers, many businesses also use these devices because they are often easier to set up and cost less than their enterprise counterparts. Given the sheer number of devices that use this protocol, and the fact that it is engineered to respond to any connection request, it is easy to see how this could become a security issue.

A recent study tested the security of UPnP and revealed some interesting results. Rapid7, the company that conducted the study, sent UPnP discovery requests to every routable IPv4 address. (IPv4, Internet Protocol version 4, is the set of protocols used to send information from one computer to another on the Internet; a routable IPv4 address is one that can be contacted by anyone on the Internet.) They found that over 80 million addresses responded to UPnP, and 17 million of these exposed the protocol that enables easy connection to the system or device. This can be easily exploited by hackers. In other words, 17 million systems, many of which could be businesses, are open to attack through a UPnP device. This exposure opens networks to denial-of-service attacks, which make resources, including the Internet connection, unavailable to the user. One common example of a denial-of-service attack is a hacker making your website unavailable to others.

Can we do anything?

Most experts recommend that you disable UPnP on your networked devices. The first thing you should do, however, is conduct a scan for vulnerable UPnP devices on your network. Tools like ScanNow (for Windows) can help you search. For many, this is a daunting prospect, as the chance of creating more issues is just too great. We recommend contacting an expert like ourselves, who can conduct a security analysis and advise you on steps you can take to ensure you are secure.
So, if you are worried about the security of your systems, give us a call today. We may have a solution for you.
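For technically inclined readers, the discovery mechanism that makes UPnP so convenient, and so risky, is easy to see in action. The sketch below sends a standard SSDP (Simple Service Discovery Protocol) M-SEARCH request to the UPnP multicast address and prints whatever devices answer. It is only a minimal illustration of the kind of probe that tools like ScanNow automate, not a full vulnerability scanner, and the timeout and search-target values are just reasonable defaults.

```python
# Minimal UPnP/SSDP discovery sweep: ask every UPnP device on the local
# network to identify itself, then print the responders.
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)  # stop listening a few seconds after the last reply
sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(65507)
        # Each reply is an HTTP-style header block describing the device.
        print(f"UPnP device at {addr[0]}:")
        print(data.decode("ascii", errors="replace"))
except socket.timeout:
    pass  # no more replies within the timeout window
finally:
    sock.close()
```

Any device that answers here will also answer an attacker on the same network segment, and, as the Rapid7 study showed, millions answer requests from the open Internet. If devices you did not expect show up, disabling UPnP on them (or at least on the router's Internet-facing side) is the usual first step.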
<urn:uuid:a591ee0e-386b-4a2b-8ae1-1e16278c9d6a>
CC-MAIN-2017-04
https://www.apex.com/upnp-devices-security-risk/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00282-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960283
776
2.96875
3
If you want to retrieve more than one row through a COBOL program, you have to use a CURSOR.

CURSOR: its basic function is to retrieve more than one row from the table.

CURSOR BLOCK: there are four steps to using a cursor. First, the cursor has to be declared using the DECLARE command. Second, the cursor has to be opened using the OPEN command. Third, the rows have to be fetched one at a time using the FETCH command. Lastly, the cursor has to be closed using the CLOSE command.

Can't we retrieve more than one row using a SELECT query?

An embedded SELECT in a COBOL program can return only a single row into the host variables, so from the set of rows identified by the SELECT, a cursor still retrieves only one row at a time. I think the CURSOR is just like a pointer to each row: because we name the host variables in the FETCH, it retrieves only one row per FETCH, and by putting the FETCH in a loop it retrieves the rows one by one. OK, please check it out and if anything is wrong let me know.
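To make this concrete, here is a small, hypothetical example of the four cursor steps in a DB2-style COBOL program. The EMPLOYEE table, its columns, and the host variables are made-up names used only for illustration; the pattern that matters is DECLARE, OPEN, FETCH in a loop until SQLCODE says there are no more rows (+100), then CLOSE.

```cobol
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-DEPTNO       PIC X(03).
       01  WS-EMPNO        PIC X(06).
       01  WS-EMPNAME      PIC X(30).

      * 1. DECLARE the cursor; the SELECT may identify many rows.
           EXEC SQL
               DECLARE EMP-CUR CURSOR FOR
                   SELECT EMPNO, EMPNAME
                     FROM EMPLOYEE
                    WHERE DEPTNO = :WS-DEPTNO
           END-EXEC.

       PROCEDURE DIVISION.
      * 2. OPEN the cursor (positions it before the first row).
           EXEC SQL OPEN EMP-CUR END-EXEC.

      * 3. FETCH one row per call; the loop ends when SQLCODE is
      *    non-zero (+100 means "no more rows", anything else is an error).
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH EMP-CUR
                    INTO :WS-EMPNO, :WS-EMPNAME
               END-EXEC
               IF SQLCODE = 0
                   DISPLAY WS-EMPNO ' ' WS-EMPNAME
               END-IF
           END-PERFORM.

      * 4. CLOSE the cursor.
           EXEC SQL CLOSE EMP-CUR END-EXEC.
           GOBACK.
```

Each FETCH moves the cursor (the pointer) forward one row and copies that row's columns into the host variables, which is why the loop is needed to work through the whole result set.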
<urn:uuid:6e43b310-5f8b-468c-abd8-9a74ea9a5c9a>
CC-MAIN-2017-04
http://ibmmainframes.com/about1025.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00190-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926918
227
2.921875
3
Getting NASA's astrobiology community to agree on a set of specific future space life exploration goals must be like herding cats - there are over 500 members - but the group is currently trying to do just that by defining the most important questions it wants to focus on for the 2014 Strategic Plan.

The NASA group, which defines astrobiology as the study of the origin, evolution, distribution, and future of life in the universe, has in the past been successful in exploring a host of significant areas, including:
- How did life on Earth emerge and how did early life evolve with its changing environment?
- What are the environmental limits of life as we know it?
- By what evolutionary mechanisms does life explore the adaptive landscape shaped by those limits?
- What principles will shape life in the future?
- Do habitable environments exist elsewhere in the Universe?
- By what signatures may we recognize life on other worlds as well as on early Earth?

NASA says the success of the group's work over the past five years "can be measured in terms of peer-reviewed publications (more than 5,000), graduate and post-graduate students trained (hundreds), increased public awareness and interest as well as in how NASA funds have been leveraged to create new intellectual property (approximately half a dozen invention disclosures) and at least one start-up company."

The group began constructing its 2014 Strategic Plan - set to be released next April - in May and has whittled down hundreds of topics to 21, which include questions such as:
- What are the common attributes of extant living systems, and what can they tell us about all living systems?
- How did bio-relevant elements evolve into molecules?
- How can we best overcome our ignorance about microbial life on Earth?
- How would we find and identify an inhabited planet?
- How can we enhance the utility of biosignatures as a tool to search for life in the Solar System & beyond?

From NASA: The next steps in the creation of the new Strategic Plan will move the process back online. Starting in September, the 21 working documents will be published on the astrobiologyfuture.org website and the astrobiology community will be invited to review them. One webinar will be held for each document, after which community members will be able to provide comments. Community members will also be able to add documents if a compelling case can be articulated for a gap in the existing documents. A face-to-face integration workshop will be held in late February to create a first draft of the Strategic Plan. This draft will be reviewed by the Planetary Science Subcommittee of the NASA Advisory Council and, possibly, an ad hoc committee of the National Research Council. Following consideration of comments arising from these reviews, a final draft will be published in April 2014.

Are there other questions NASA should be asking?
<urn:uuid:118cd67a-2d3f-4358-a744-8dad20e12181>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225178/data-center/what-are-the-next-big-space-life-questions-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00006-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94625
610
3.015625
3
A report published today says the UK Government needs to explore ways the British workforce can deal with social and ethical challenges in an AI-led world. Many believe that robots and AI have the potential to replace human jobs in the future, but according to the UK's Science and Technology Committee, the government needs to do more to address the threats. The MPs on the committee have proposed the idea of a separate commission, where the issues surrounding AI and robotics would be investigated and scrutinised. They want action to be taken as soon as possible.

Attention is needed

If AI and robots are to replace jobs in the future, the committee expects the government to find ways to help humans thrive alongside them. To do this, new skills suitable for an AI-led world need to be found. Although the report outlines the need for consideration of AI's future, it also explores the current state of the technology. In particular, it questions the ethical issues of AI-based decision-making, as well as privacy and safety implications.

Dr Tania Mathias MP, the acting chair of the committee, said artificial intelligence is still in its early stages but expects it to gain more momentum over the next few decades. However, she has concluded that the government needs to start investigating now.

"Artificial intelligence has some way to go before we see systems and robots as portrayed in films like Star Wars. At present, 'AI machines' have narrow and specific roles, such as in voice-recognition or playing the board game Go," she said in a statement.

"But science fiction is slowly becoming science fact, and robotics and AI look destined to play an increasing role in our lives over the coming decades.

"It is too soon to set down sector-wide regulations for this nascent field but it is vital that careful scrutiny of the ethical, legal and societal ramifications of artificially intelligent systems begins now," continued the MP.

Change is to be expected

Roger Bou, director of IoT Solutions World Congress, said it's normal for technological revolutions to bring major change and that we've been here before. He points to the invention of the wheel and the industrial revolution.

"Concerns about AI and robotics fundamentally changing – or eliminating entirely – some roles are realistic, but the fact of the matter is that every major technological change in the history of industry has had this effect," he said.

"The invention of the wheel, around 3,500 BC, displaced some by requiring fewer labourers, but increasing the productivity of an individual worker. In the industrial revolution the UK's great cottage industries like textiles were automated and subsequently decimated by factories.

"Production lines created new jobs for millions, but many skilled workers were also left high and dry. This cycle was repeated in the deindustrialisation that has left many communities feeling forgotten since the 1980s.

"Automation brought about by technologies such as AI, robotics, machine learning and IoT will also bring about profound change. But we need to give ourselves the best possible chance of understanding what these effects might be.

"In enterprise and industry, the 'beta testing' phase happens in testbeds – an area where we simulate real-world conditions to test these technologies."

AI shouldn't be feared

Mark Barrenechea, CEO of enterprise information management firm OpenText, believes that AI shouldn't be feared and that it will have big benefits for businesses.
"This Digital Revolution will bring an increasing reliance on self-service technology, machine-to-machine communication (M2M) and artificial intelligence," he said in response to today's report.

"These will completely transform the workplace as menial tasks, and some non-routine jobs, are digitalised through robotics and process automation.

"As many as 25 to 40 million jobs globally will disappear as a direct result of extreme automation and extreme connectivity, with the greatest losses occurring in white-collar office and administrative roles.

"We shouldn't, however, fear this disruption. M2M communications will enable machines to process data and make decisions based on this data as we move toward more intelligent, cognitive systems.

"In many cases, the intelligence these systems deliver will be more accurate, immediate and safer than humanly capable."

He added: "The economic impact of digital is vast. Businesses that use the internet tend to grow more quickly, export two times as much as those that don't, and create more than twice as many jobs.

"Yet many companies are off to a poor start on the journey toward digital transformation. While organisations are taking advantage of digital technologies, many economies remain digitally immature. This means that the ability to unlock the value of digital is far from being realised."

Gerry Carr, CMO of UK-based Ravelin, which uses AI and machine learning to detect and prevent fraud, added that it's too early to look deeply at the consequences of an AI world.

"As one of those start-ups actually working in AI (we use machine learning techniques to detect fraudulent payments for online merchants), it feels premature to spend a lot of time looking at the ethical implications of an industry that is really new. For instance, the insistence on 'transparency'. In practical terms for machine learning this means a choice of certain techniques over others, and can often mean choosing a sub-optimal technique so it can be 'explained'.

"Neural networks, for instance, are hard to interrogate. Is the committee suggesting the UK should not explore their capabilities? I doubt this is the committee's intention but it might well be the result. Trying to impose barriers to discovery when we are only now beginning to understand what is possible seems needlessly cautious. What we can commend is the call for a strategy to equip the UK with the skills to develop and use artificial intelligence products and services."
<urn:uuid:c3d3d2bb-2a5b-48ea-8032-da2a536b406c>
CC-MAIN-2017-04
https://internetofbusiness.com/humans-need-new-skills-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947708
1,256
3.03125
3
Dorey N.R., University of Florida | Mehrkam L.R., University of Florida | Tacey J., Busch Gardens | Zoo Biology | Year: 2015

It is currently debated whether or not positive reinforcement training is enriching to captive animals. Although both husbandry training and environmental enrichment (EE) have been found to benefit animal welfare in captivity, to date no systematic investigation has compared an animal's preference for performing a trained behavior to engaging freely with stimuli provided as EE. In the current paper, we used four captive wolves to (1) test the efficacy of a paired-stimulus preference assessment to determine preference for engaging in a trained behavior as a choice; and (2) use a paired-stimulus preference assessment to determine whether or not individuals prefer to engage in a previously trained behavior versus previously encountered EE stimuli. Of the four subjects tested, visual inspection of the graphs revealed that two of the subjects preferred trained behavior stimuli and two of the subjects preferred EE stimuli; only one of the wolves had a statistically higher preference for an EE stimulus over a trained behavior. We believe that letting the animals choose between these two events is the first step in answering the question of whether or not training is enriching; however, more research needs to be done, and suggestions for future research are discussed. © 2015 Wiley Periodicals, Inc.

Schmidt D.A., Lincoln Park Zoo | Barbiers R.B., Lincoln Park Zoo | Ellersieck M.R., University of Missouri | Ball R.L., Busch Gardens | And 7 more authors. Journal of Zoo and Wildlife Medicine | Year: 2011

Serum chemistry analyses were compared between captive and free-ranging giraffes (Giraffa camelopardalis) in an attempt to better understand some of the medical issues seen with captive giraffes. Illnesses, including peracute mortality, energy malnutrition, pancreatic disease, urolithiasis, hoof disease, and severe intestinal parasitism, may be related to zoo nutrition and management issues. Serum samples were collected from 20 captive giraffes at 10 United States institutions. Thirteen of the captive animal samples were collected from animals trained for blood collection; seven were banked samples obtained from a previous serum collection. These samples were compared with serum samples collected from 24 free-ranging giraffes in South Africa. Differences between captive and free-ranging giraffes, males and females, and adults and subadults were analyzed using a 2 × 2 × 2 factorial and Fisher's least significant difference for mean separation; when necessary, variables were ranked and analyzed via analysis of variance. Potassium and bilirubin concentrations and alanine aminotransferase (ALT) activities were different between captive and free-ranging giraffes, but all fell within normal bovid reference ranges. The average glucose concentration was significantly elevated in free-ranging giraffes (161 mg/dl) compared with captive giraffes (113 mg/dl). All giraffes in this study had glucose concentrations higher than bovine (42-75 mg/dl) and caprine (48-76 mg/dl) reference ranges. Differences were also seen in lipase, chloride, and magnesium, though these findings are likely not clinically significant. There were no differences detected between sexes. Adults had higher concentrations of potassium, total protein, globulins, and chloride and higher gamma glutamyltransferase activities, whereas subadults had higher concentrations of phosphorus.
Within the captive group, nonimmobilized animals had higher concentrations of total protein and globulins. Captive giraffe diets need further investigation to determine if the differences seen in this study, especially glucose and bilirubin concentrations and ALT activities, may result in some health problems often seen in captive giraffes. © 2011 American Association of Zoo Veterinarians.

Koutsos E.A., Mazuri Exotic Animal Nutrition, PMI Nutrition Intl LLC | Armstrong D., Omaha's Henry Doorly Zoo | Ball R., Busch Gardens | Dikeman C., Omaha's Henry Doorly Zoo | And 5 more authors. Zoo Biology | Year: 2011

In response to new recommendations for feeding giraffe in zoos, giraffe (n = 6) were transitioned from a typical hoofstock diet to diets containing reduced starch, protein, Ca and P and added n3 fatty acids. This diet was fed as a 50:50 mix with alfalfa and grass hay. Over the next 4 years, serum Ca, P, and fatty acids were measured every 6 months (summer and winter). Serum Ca was not affected by season (P = 0.67) or by diet (P = 0.12). Serum P was not affected by season (P = 0.14), but was reduced by diet (P<0.01), and serum Ca:P was also increased by diet (P<0.01). The ratio of serum Ca:P tended to be affected by season (P = 0.07), with animals tending to have greater Ca:P during the summer vs. the winter. The diet transition resulted in reduced serum saturated fatty acids (including lauric, myristic, palmitic, arachidic, and behenic acids), and increases in n6 fatty acids (including linolenic and arachidonic acids) and n3 fatty acids (docosahexaenoic acid) (P<0.05 for each). Overall, this diet transition resulted in blood nutrient profiles that more closely match values found in free-ranging giraffe. © 2010 Wiley Periodicals, Inc.

After years of pressure, SeaWorld made a surprise announcement: It no longer breeds killer whales in captivity and will soon stop making them leap from their pools or splash audiences on command.

Surrendering Thursday to a profound shift in how people feel about using animals for entertainment, the SeaWorld theme parks have joined a growing list of industries dropping live animal tricks. Ringling Bros. and Barnum & Bailey Circus is retiring all of its touring elephants in May. Once-popular animal shows in Las Vegas have virtually disappeared.

"Society's attitude toward these very, very large, majestic animals under human care has shifted for a variety of reasons, whether it's a film, legislation, people's comments on the Internet," SeaWorld Entertainment CEO Joel Manby said. "It wasn't worth fighting that. We needed to move where society was moving."

SeaWorld's 29 killer whales will remain in captivity, but in "new, inspiring natural orca encounters," according to the company. SeaWorld's orcas range in age from 1 to 51 years old, so some could remain on display for decades.

Attendance at SeaWorld's parks declined after the 2013 release of "Blackfish," a highly critical documentary. Some top musical acts dropped out of SeaWorld-sponsored concerts at the urging of animal rights activists, who kept up a visible presence demonstrating outside the parks' gates.

Still, the decision shocked advocates who have spent decades campaigning against keeping marine mammals captive, and it represents a sharp U-turn from SeaWorld's previous reaction to the documentary. In August 2014, SeaWorld announced major new investments in the orca program, including new, larger tanks, first in San Diego and then at its parks in Orlando and San Antonio, Texas.
But the California Coastal Commission didn't approve the $100 million expansion until last October, and when it did, it banned orca breeding as part of the decision. SeaWorld sued, arguing that the commission overstepped its authority, but said it would end its San Diego orca shows by 2017.

Meanwhile, SeaWorld brought in a new leader with more experience in regional theme parks than zoos and aquariums, which have been fending off such protests for decades. Manby was hired as SeaWorld CEO last March 19 after running Dollywood and other musically themed parks. He said Thursday that he brought a "fresh perspective" to the killer whale quandary, and soon realized that "society is shifting here."

Orcas have been a centerpiece of the SeaWorld parks since shows at the Shamu stadium in San Diego became the main draw in the 1970s. But criticism has steadily increased in the decades since, and it became sharper after an orca named Tilikum battered and drowned trainer Dawn Brancheau after a "Dine with Shamu" show in Orlando in 2010. Her death was highlighted in "Blackfish," and it wasn't the first for Tilikum. The whale also killed an animal trainer and a trespasser in the 1990s.

"Blackfish" director Gabriela Cowperthwaite said she applauds SeaWorld's decision, "but mostly I applaud the public for recalibrating how they feel ethically about orcas in captivity."

The new orca shows will begin next year at the San Diego park, before expanding to its San Antonio park and then to Orlando in 2019, Manby said.

What about shows involving dolphins and other marine mammals? "Stay tuned on that," Manby said. "A lot of people don't understand how hard it is internally to make these kinds of decisions. We need to execute this well. We need to make sure we have the organization in the same direction. Then we will apply those learnings elsewhere."

SeaWorld has not only discontinued breeding orcas through artificial insemination; it also feeds the whales birth control medication, Manby said. One of SeaWorld's most prolific breeders has been Tilikum. The 35-year-old whale has sired 14 calves during his 23 years in Orlando, but he's gravely ill now and not expected to live much longer.
"SeaWorld must open its tanks to the oceans to allow the orcas it now holds captive to have some semblance of a life outside these prison tanks," PETA spokeswoman Colleen O'Brien said in a statement. Manby countered that no captive dolphin or orca has been successfully released into the wild. SeaWorld is abandoning plans to expand its orca tanks now that the breeding program has ended, the company said. A spokeswoman for the California Coastal Commission praised this, and suggested that SeaWorld drop its lawsuit as well. Manby said SeaWorld's three marine parks may move closer to the balance of rides, shows and animals found at the company's Busch Gardens parks. They need a mixture of experiences to keep a family at the park all day, he said. "I do think you have to have more rides," Manby said. "Some of these messages about animal welfare ... You can't hit them with that all day because sometimes it's a heavy message. You have to balance it." Greco B.J.,University of California at Davis | Greco B.J.,Aware Inc | Meehan C.L.,Aware Inc | Miller L.J.,Chicago Zoological Society Brookfield Zoo | And 5 more authors. PLoS ONE | Year: 2016 The management of African (Loxodonta africana) and Asian (Elephas maximus) elephants in zoos involves a range of practices including feeding, exercise, training, and environmental enrichment. These practices are necessary to meet the elephants' nutritional, healthcare, and husbandry needs. However, these practices are not standardized, resulting in likely variation among zoos as well as differences in the way they are applied to individual elephants within a zoo. To characterize elephant management in North America, we collected survey data from zoos accredited by the Association of Zoos and Aquariums, developed 26 variables, generated population level descriptive statistics, and analyzed them to identify differences attributable to sex and species. Sixty-seven zoos submitted surveys describing the management of 224 elephants and the training experiences of 227 elephants. Asian elephants spent more time managed (defined as interacting directly with staff) than Africans (mean time managed: Asians = 56.9%; Africans = 48.6%; p<0.001), and managed time increased by 20.2% for every year of age for both species. Enrichment, feeding, and exercise programs were evaluated using diversity indices, with mean scores across zoos in the midrange for these measures. There were an average of 7.2 feedings every 24-hour period, with only 1.2 occurring during the nighttime. Feeding schedules were predictable at 47.5% of zoos. We also calculated the relative use of rewarding and aversive techniques employed during training interactions. The population median was seven on a scale from one (representing only aversive stimuli) to nine (representing only rewarding stimuli). The results of our study provide essential information for understanding management variation that could be relevant to welfare. Furthermore, the variables we created have been used in subsequent elephant welfare analyses. © 2016 Greco et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Source
<urn:uuid:bcb5db4b-52a7-4ac0-aee5-c2da79126556>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/busch-gardens-685021/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00521-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952647
2,941
3
3
Packaging made with captured carbon

As part of our effort to source 100 percent of our packaging materials from sustainable materials, Dell has turned to Newlight Technologies' AirCarbon® for a pilot project to manufacture protective bags that incorporate captured carbon emissions that would have otherwise become part of the air.

Today most plastics are made exclusively from oil or other fossil fuel derivatives. In fact, around 4 percent of the world's oil production is used as feedstock to make plastics, with a similar amount consumed as energy in the process.

AirCarbon is different. AirCarbon is a thermoplastic material made by combining industrial sources of methane-based carbon emissions, such as methane from dairy farms, digesters and landfills, with air. Using a process developed by Newlight over 10 years of research and now operating at commercial scale, a biocatalyst carries out a carbon sequestration process in a water-based reactor, where the carbon emissions are combined with air and converted into long-chain thermoplastic polymers at a yield that is 9 times higher than previous technologies. This provides a solution that does not increase cost relative to the oil-based process but in fact enables cost reduction.

Once produced, AirCarbon can be used as a standalone material or incorporated into existing materials, such as Dell's linear low density polyethylene (LLDPE) bags, resulting in products that capture carbon that would normally become part of the air we breathe and use that carbon to displace oil.

By capturing methane emissions, which are over 20 times more potent than carbon dioxide emissions as a greenhouse gas, AirCarbon is independently verified by SCS Global Services to sequester more carbon-dioxide-equivalent greenhouse gas than is emitted to produce it, meaning it is a carbon sequestration material on a net basis.

Dell is the first in the IT industry to use AirCarbon. While the initial pilot project will focus on packaging – specifically the protective bags for Dell Latitude notebooks shipped to the U.S. and Canada – AirCarbon's functional flexibility makes it attractive for other possible uses with Dell products. Our use of AirCarbon will also help Dell work toward the zero-waste packaging goals that are part of our Dell 2020 Legacy of Good Plan. AirCarbon will join other packaging solutions derived from bamboo, mushrooms and wheat en route to a 2020 packaging profile that is 100 percent sourced from sustainable materials and 100 percent recyclable or compostable.
<urn:uuid:ed3742f3-7721-4fcd-87fe-ceb486ba5dc0>
CC-MAIN-2017-04
http://www.dell.com/learn/us/en/uscorp1/corp-comm/air-packaging
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00063-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931358
536
2.640625
3