At the same time as Moore's Law progress is slowing down, the demand for processing data is at an all-time high. A full 90 percent of all the data in the world was created in the last two years, and datacenters are responsible for 2 percent of overall US electrical usage. Keeping all those warehouse-scale server farms and supercomputers at their optimal operating temperature is costly from both an economic and an environmental perspective. The challenge has some big iron operators giving their computers a bath, as highlighted by a recent piece in The New York Times.

Japan has been especially motivated by energy concerns. The country has been operating with a reduced electricity supply ever since the March 2011 earthquake and tsunami and the subsequent meltdown at the Fukushima Daiichi power plant. As a protective measure, the country limited the output of its other nuclear facilities, which curtailed electricity supplies. With energy budgets constrained, organizations with major electricity needs had to get creative. Such was the case at the Tokyo Institute of Technology. The university administration had capped power supplies, but the institute required more supercomputing capacity. Thus the idea for the oil-cooled Tsubame KFC was born. The supercomputer employs a mineral oil-based cooling solution, called CarnotJet, developed by Austin, Texas-based Green Revolution Cooling. Last November, Tsubame KFC was declared the most energy-efficient machine of its kind by the Green500 list. The system is 50 percent more powerful than an older machine with the same energy footprint.

Other companies involved in the space include Iceotope, a start-up based in Sheffield, England, which is promoting the use of liquid fluoroplastic rather than oil. Another vendor, the Hong Kong-based Allied Control, used 3M's "passive two-phase liquid immersion cooling system" for a 500kW datacenter that mines for Bitcoins.

Liquid cooling is not new. Compared to air, liquids are more efficient coolants because they are better thermal conductors and have a higher heat capacity. In fact, liquids are estimated to be about 4,000 times more effective at removing heat than air. Iconic American supercomputer maker Cray used submersion liquid cooling back in the 1980s, but the approach never fully caught on, both because of cost concerns and because the coolants of that era were known ozone-depleters. A combination of air conditioning and pipe-based water cooling worked "well enough" and reduced the cost and complexity of the datacenter build.

Now conditions have aligned to make immersive cooling attractive again. Supercomputer centers can easily spend tens of millions of dollars a year on energy bills. For the biggest corporate datacenters, the cost is even higher, running to hundreds of millions a year. In both cases, it's not uncommon for cooling to account for half of that bill.

At this time, immersive cooling has made it into more supercomputers than datacenters, according to Christiaan Best, chief executive of Green Revolution, who suggests that corporations are more risk averse than the academic crowd, which tends to be more open to experimentation. "You can imagine, if we walk in and say, 'Why don't you take your data center and put it in oil,' you have to have something pretty solid to point to," Mr. Best said. And yet Green Revolution's method has been part of several datacenter deployments, including a United States Department of Defense facility.
A one-year study of the system performed by Intel found that the servers suffered no ill effects, and power consumption dropped considerably.
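To put the coolant comparison in rough numbers, the back-of-envelope sketch below compares how much heat a given volume of mineral oil can soak up versus the same volume of air. The fluid properties are typical textbook values we are assuming, not figures from the article, and the article's "4,000 times" figure also reflects convective heat transfer, which this simple volumetric comparison ignores.

```python
# Back-of-envelope sketch of why liquids beat air as coolants.
# Densities and specific heats below are assumed textbook values.
air = {"density_kg_m3": 1.2,   "specific_heat_J_kgK": 1005.0}
oil = {"density_kg_m3": 850.0, "specific_heat_J_kgK": 1900.0}

def volumetric_heat_capacity(fluid):
    """Joules needed to warm one cubic metre of the fluid by one kelvin."""
    return fluid["density_kg_m3"] * fluid["specific_heat_J_kgK"]

ratio = volumetric_heat_capacity(oil) / volumetric_heat_capacity(air)
print(f"Mineral oil absorbs roughly {ratio:,.0f}x more heat per unit volume than air")
```

Even this crude comparison lands in the thousands-of-times range per unit volume, which is the intuition behind dunking servers in oil rather than blowing chilled air over them.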
https://www.hpcwire.com/2014/02/14/immersion-cooling-floated-green-energy-solution/
Early computer networks were cabled with copper wiring. Nowadays, however, fiber optic cable is more often used for new cabling installations and upgrades, including backbone, horizontal, and even desktop applications. Fiber is favored for today's high-speed data communications, such as Gigabit Ethernet, FDDI, multimedia, ATM, SONET, Fibre Channel, or any other network that requires the transfer of large, bandwidth-consuming data files, particularly over long distances. Fiber optic cables offer a number of advantages over copper.

Lower Cost – While fiber optic cable itself is cheaper than an equivalent length of copper cable, fiber optic connectors and the equipment needed to install them are more expensive than their copper counterparts.

Long Distance And High Capacity – Fiber optic cables carry communication signals using pulses of light, and only fiber can go truly long distances. Not only is fiber optic cable capable of carrying far more data than copper, it also has the ability to carry that information for much longer distances. Fiber to the Home (FTTH) installations are becoming more common as a way to bring ultra-high-speed Internet service (100 Mbps and higher) to residences.

Higher Bandwidth – Fiber has a higher bandwidth than copper. Even high-end twisted-pair copper cable (Category 7) is rated for bandwidth up to 600 MHz over 100 meters, which, in theory, could carry around 18,000 calls at the same time. Multimode fiber, on the other hand, offers a bandwidth of over 1000 MHz, which could carry almost 31,000 simultaneous calls.

Adaptable To Any Environment – Fiber optic cables don't mind roughing it. Because fiber optic cables are glass-based, they are immune to electrical interference and virtually free from the threat of corrosion, too. While copper cabling is sensitive to water and chemicals, fiber optic cabling runs almost no risk of being damaged by harsher elements. Fiber optic cables can be used outdoors -- and in close proximity to electrical cables -- without concern. As a result, fiber optic cable can easily endure "living conditions" that coaxial cable just can't, such as being put in direct contact with soil or in close proximity to chemicals.

For the reasons stated above, fiber optic cable is the more reliable means of communication. Still, the decision between copper and fiber can be difficult. It will often depend on your current network, your future networking needs, and your particular application, including bandwidth, distances, environment, and cost. In some cases, copper may be the better choice. Copper works well for simple ADSL connections, since there is not much distance from a modem to a phone jack on a wall. Copper usually transmits data without loss at distances of two kilometers or less. On top of that, the demand for bandwidth on an ADSL connection is often low enough (around 6 to 8 Mbps on average) to be served comfortably by copper wires. And as fiber optic cable production matures, fiber is becoming more affordable. In the end, choosing between fiber optic cable and copper wire comes down to your future networking needs and your particular application.
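That closing advice boils down to a distance-and-bandwidth rule of thumb. The sketch below is a hypothetical illustration of it in Python; the thresholds are lifted loosely from the article's ADSL and FTTH examples, not from any cabling standard.

```python
# Hedged sketch of the article's rule of thumb: copper for short, modest-bandwidth
# runs (e.g. ADSL at ~6-8 Mbps over well under 2 km), fiber for long distances or
# high capacity. The cut-off values are illustrative assumptions.
def choose_medium(distance_km: float, bandwidth_mbps: float) -> str:
    if distance_km <= 2.0 and bandwidth_mbps <= 100:
        return "copper"   # short run, modest bandwidth: cheaper to terminate
    return "fiber"        # long haul or high capacity: fiber's headroom pays off

print(choose_medium(0.1, 8))     # typical ADSL drop -> copper
print(choose_medium(10, 1000))   # backbone or FTTH feeder -> fiber
```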
http://www.fs.com/blog/choosing-fiber-optic-cable-or-copper-wire-for-communication.html
What does it bring to the party that's new? Well, 802.11n builds on the 802.11 spec by allowing for a new feature called Multiple Input/Multiple Output (MIMO). MIMO uses multiple transmitter and receiver antennas to improve system performance. To further enhance its capabilities over a legacy router (a/b/g), the new 802.11n spec can also use the 5GHz band, reducing the interference issues found on 802.11g routers, which are confined to the crowded 2.4GHz band.

For the past few years, the IEEE Standards Association -- the wireless spec governing body -- has issued a number of upgrades to the N spec, from pre-N to Draft-N to its current stage of development: Draft 2.0. Generally speaking, newer routers work on this new spec, which is the result of thousands of minute improvements to previous iterations. It's also worth noting that 802.11n has not yet been formally approved by the IEEE and should still be considered a work in progress.

Currently, there are two key questions to ask before purchasing any specific 802.11n router: Is it worth buying? And does it perform well enough to justify junking your 802.11g router and spending money on the new device?
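The article doesn't go into the arithmetic, but a rough sketch helps show why MIMO matters: each added spatial stream, and each doubling of channel width, multiplies the PHY data rate. The subcarrier counts and symbol times below are the published 802.11n values; the example configurations printed at the end are our own.

```python
# Sketch of the 802.11n PHY rate calculation: rate scales linearly with the number
# of MIMO spatial streams and with the number of data subcarriers (channel width).
def ht_phy_rate_mbps(streams, channel_mhz=20, bits_per_subcarrier=6,
                     coding_rate=5 / 6, short_gi=False):
    data_subcarriers = {20: 52, 40: 108}[channel_mhz]   # 802.11n HT values
    symbol_time_us = 3.6 if short_gi else 4.0           # short vs. long guard interval
    bits_per_symbol = streams * data_subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol / symbol_time_us             # megabits per second

print(ht_phy_rate_mbps(1))                      # ~65 Mbps: one stream, 20 MHz channel
print(ht_phy_rate_mbps(2, 40, short_gi=True))   # ~300 Mbps: typical Draft 2.0 router
print(ht_phy_rate_mbps(4, 40, short_gi=True))   # ~600 Mbps: the spec's upper bound
```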
http://www.networkcomputing.com/networking/review-80211n-wi-fi-routers/347255143
STEMNet – Learning from the Games

As part of Cisco's commitment to building a brilliant future for the UK after London 2012, we formed a partnership with STEMNet – an education scheme funded by the Department for Business, Innovation, and Skills and the Department for Education. This program aims to encourage young people across the UK to engage in STEM subjects (Science, Technology, Engineering and Maths) and take their first steps towards a career in technology.

The STEM Challenges
STEMNet developed the STEM Challenges as part of its aim to provide a contextual and cross-curricular approach to studying STEM subjects at Key Stage Three. The 10 challenges do this by focussing on different aspects of preparation for the London 2012 Games. They involve a mixture of hands-on testing, experimental work and research. Cisco worked with STEMNet on two of these challenges: the Cisco Website Challenge with gold-winning Paralympic GB handcyclist Rachel Morris, and the Maths and Science Challenge 2012.

The Cisco Website Challenge: Rachel Morris
STEMNet and Cisco invited 11-14 year olds to design a new online presence for Rachel. The judges had to whittle down 80 entries from schools across the UK to just eight finalists, who were asked to present their ideas in person on July 6th 2011 at Ravensbourne College, London. This was a fitting location because the college's students already use Cisco technology and benefit from the company's support. Each team was given the opportunity to present their ideas to Rachel and was quizzed by a panel of judges. As part of the day, there were group discussions about the behind-the-scenes Cisco technologies that make television and the internet work, plus lots of advice and encouragement for the finalists to consider a career in the digital economy. One finalist summed up the experience: "I've had such an exciting day and learned loads, but it was incredibly nerve-wracking presenting a website about Rachel to Rachel herself." The judges awarded first place to Blessed Edward Oldcorne Catholic College. As a reward for their hard work, the students won £1,000 worth of equipment for their school's STEM Club plus a trip to the World Skills 2011 Conference, courtesy of Cisco.

STEM Challenge 10 Winners Announced
All Hallows Catholic High School, Lancashire, were the final STEM challenge winners. In challenge 10, Cisco, STEMNet and The Pearson Foundation asked students to plan a large, sustainable event in their area. Nearly 200 state-maintained schools took part, with eight teams contesting the final at Cisco House, overlooking the Olympic Park in Stratford:
- All Hallows Catholic High School, Lancashire
- Angley School, Kent and Medway
- Kendrick School, Surrey and rest of Berkshire
- Perins School, Hampshire and Isle of Wight
- South Axholme School, Humberside
- The Elton High School, Greater Manchester
- The Gryphon School, Wiltshire, Swindon and Dorset
- Woodham Community Technology College, County Durham and Tees Valley

The teams pitched their events to a Dragon's-Den-style panel of experts, who judged them based on the core values of the Olympic and Paralympic games: excellence, friendship, respect, courage, determination, inspiration and equality. The final was closely contested, with pitches for everything from blind football to an aquatics centre. In the end, All Hallows Catholic High School won £2,500 for its STEM club and eight tickets for the Olympic hockey.
The team proposed a venue for swimming – an entry that stood out for its excellent data analysis. On the podium, Woodham Community Technology College finished second and won £1,000 for its STEM club and eight tickets for the Olympic canoeing, and South Axholme Academy won £500 for its STEM club and eight tickets for the Olympic rowing. The Elton High School was also highly commended by the judges for excellent use of research, sources and surveys.

STEM Challenge 10 forms part of Cisco's ‘Out of the blocks' project, which gives schools the chance to explore the events and venues of London 2012 whilst practising maths and science skills. More information about the ‘Out of the blocks Maths and Science Series 2012' project can be found at www.mathsandscience2012.co.uk.
http://www.cisco.com/cisco/web/UK/london2012/stem.html
"Open source" has come a long way and with the new administration adopting the open source content management system Drupal to power the recovery.gov Web site, open source's visibility will likely get another big boost. Speaking from the standpoint of a developer, the number of tools, utilities and programs available under open source licensing continues to be very exciting. But it is also true that confusions still persist about what it is and, in particular, about its costs. "Open source" and "free" are not synonymous -- though there is a relationship between the two terms. As with any engineering product, using software requires more than just having access to the application. To take a more concrete example, let's consider the task of building a bridge over a stream -- it involves more than just having a crew pull up to the river and start building. The environmental impact, the needs and concerns of the surrounding community, how to make a connection to the electric grid and even connecting to the existing roads are all factors that need to be taken into account. And that all occurs before the bridge is built. Once construction is done, it requires ongoing maintenance, inspection, repairs and a means of controlling the traffic on it. But let's get a bit more precise about the analogy. Before the bridge is built, someone needs to have done the engineering work to figure out how the bridge is put together, the size of the beams, etc. A fabrication operation then makes the beams and other pieces needed to do the construction. If the bridge is small, it might be assembled in a shop and transported to the target site. If it is larger, then the fabricated pieces will shipped to the site and assembled in place. Open source projects are similar: the architectural work has been done and has been made available for general use. Many of the pieces have been fabricated and often those pieces have been assembled and the "bridge" is just waiting to be transported to the installation location. And while that means a lot of work has already been done and made available without cost, it doesn't mean that the new bridge will be "free." The fact that a general blueprint exists is nice but it may need some tweaking to make it fit for the specific use. This requires a resource that can read and update blueprints. When working on open source projects, you can't necessarily depend on a vendor to supply that resource -- you may have to supply it yourself. To keep things in perspective, the following is a quick list of the items that commonly need to be taken into account when deciding on open-source software alternatives: Many open source software applications, like commercial software, require configuration which can require expert-level knowledge. For example, the Apache Web server requires administration which is primarily done by editing one or more configuration files. Configuring Apache is not difficult but if your staff doesn't have existing expertise in it, then the total cost of ownership will need to include either hiring experts or getting staff up to speed. By contrast, Microsoft's Web server also requires a great deal of customization but is all done through a graphical user interface. Both require expertise. This issue also shows itself when selecting an application that requires ongoing configuration or changes as part of routine use. For example, many open-source content management systems exist and are quite popular. 
But when evaluating options, it is important to look into the technologies on which they are built. For example, if your staff's expertise is in .NET or ColdFusion, the no-cost license for a PHP-based system may be appealing until you need to get something changed and find that the skills don't exist within your organization to make it happen.

It is safe to assume that any software will require support at some time or another. Commercial software vendors usually provide support for their products. With open source, support options can be less clear. Some open source projects have spawned companies that specialize in providing support, but if no company exists to support a specific open source application, it is important to factor in the true support costs. Large open source projects often have large communities of users who work to answer questions and address issues. But, in most cases, interacting with the community is most efficient when someone with technical knowledge is asking the questions. So, when making decisions about how to handle support, your planning should include having someone on staff or under contract who understands the product and who can, as needed, interact intelligently with the online community.

Particularly for government agencies, training is an important issue to consider when selecting software. Again, training on open source software is often available from traditional training companies -- at least for popular or large open source applications. But for smaller applications, no formal training may be available. This problem can be compounded by the tendency for some open source applications to focus more on functionality and performance than on user interface. For technologists, functionality and performance are key, and the user interface is something that can be adjusted or "lived with." But for end-users, the user interface is the application. The intelligence and usability of the navigation, the quantity and quality of online help, and access to solid training materials or classes can have a greater impact on application adoption than the application's features. As with support, open source projects often have online training, and frequently there are community members who dedicate themselves to helping with documentation and training. But the safest route for an agency is to have on staff someone with technical knowledge of the application and an ability to train others to fill in the training gaps when needed.

With commercial software products, it is possible to examine the financial health of a company to make a determination about whether the company might be around in five years. With open source, the same evaluation needs to be done, but there's often no company to examine, just an online community that is supporting and developing the application. The size and activity of that community may be part of the evaluation, the number of installations of the software may figure into it, and the general media buzz about it can also be a factor. The point is that the evaluation of viability and longevity is at least as important to do for open source software as for commercial, but the ways to evaluate the software are different.

None of this is to say that commercial software should necessarily be preferred over open source. Open source software provides great possibilities and in some cases is the preferred solution. But it does mean that agencies shouldn't make the mistake of equating "open source" with "free."
"Total cost of ownership" may be less with open source in the long run, but it is not "free," and each case needs to be evaluated against the business purpose, the availability of ancillary services and the level of in-house expertise available.

"Open source" in its strictest sense refers to the availability of the original work done by an application's developers. But there are other aspects of what "open source" means in terms of application licensing. Applications are written by developers in a language that is readable by humans. It may require specialized knowledge to read, but it is readable. In order for the application to actually run on a machine, it must be turned into a language that is understandable to the machine. The process by which the human-understandable form is translated into the computer-understandable form is called "compiling." Once the program has been compiled, it will run on a computer but is no longer readable by humans. Most of the desktop applications we're familiar with, such as Word, Excel, etc., are only available in compiled form -- Microsoft does not make the uncompiled version available. By contrast, open source software does include the uncompiled form, so anyone can make additions or changes to it and compile it themselves.

But, according to the Open Source Initiative, "open source" refers to more than just the handling of the application's code -- it also relates to the terms covering the way the application is distributed. Full details are available on the OSI site, but the general concept is that open source software must remain open source and freely available to anyone for any purpose. If it is used as the basis for other products, those derivative products must, in their turn, abide by the open source distribution rules. This could have implications for agencies that use open source software as the basis for their own application development projects. In most cases, it will not be a problem, but it certainly is an issue that needs to be considered.

So, though "open source" strictly speaking refers to the widespread availability of original developer work-product, it has come to mean much more as regards the ownership of software and the restrictions (or mandated lack of restrictions) on its distribution.
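The source-versus-compiled distinction is easy to demonstrate with a toy example. The sketch below uses Python's built-in compile step purely for convenience; a traditionally compiled language such as C is the more typical case the article has in mind, but the idea is the same: readable source text goes in, and a machine-oriented form that people are not expected to read comes out.

```python
# Toy illustration of "compiling": human-readable source is translated into a
# form meant for the machine. Python is used here only for convenience.
import dis

source = "def total(prices):\n    return sum(prices)\n"

code_object = compile(source, "<example>", "exec")  # source text -> executable code object
dis.dis(code_object)  # bytecode listing: meaningful to the interpreter, not to most humans
```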
http://www.govtech.com/pcio/Open-Source----Is-it-Free.html?page=2
The White House's second annual Safety Datapalooza showed off a plethora of new online ways to alert, educate and disseminate all manner of safety information to the public. The idea behind Datapalooza is to show off private, nonprofit, and academic products, services, and apps that have been developed using freely available government public safety data.

The list of new online safety products goes like this, from the White House Office of Science and Technology:

- New Emergency Information Hashtags: The White House, the Federal Emergency Management Agency, and the Department of Energy are launching standardized hashtags (#PowerLineDown, #NoFuel and #GotFuel) to enable citizens to report important emergency information, such as downed power lines or whether a gas station has fuel, across social media platforms during disasters. Hashtags will be used by FEMA, the Department of Energy, survivors, first responders, state and local officials and utility companies to improve disaster response and recovery efforts.
- Leveraging Data to Help Prevent Illness: An estimated 1.3 million illnesses can be attributed to Salmonella bacteria every year. In an effort to help reduce the number of illnesses associated with products regulated by USDA's Food Safety and Inspection Service (FSIS), USDA will host a Data Jam aimed at improving the accessibility and usability of FSIS data and empowering the public to leverage open data to help prevent illnesses.
- Expanding Access to Food and Drug Safety Data: The Food and Drug Administration is announcing openFDA, a new initiative that will provide easy access to FDA datasets that help educate the public and save lives. The project will make several valuable FDA public datasets -- including millions of adverse event and medication error reports on FDA-regulated drugs -- available to the public for the first time, via application programming interfaces (APIs) and raw structured files. (A brief sketch of querying this kind of API appears after this list.)
- Crowdsourcing to Improve Disaster Response: The Federal Emergency Management Agency and the National Geospatial-Intelligence Agency are announcing the development of "GeoQ", a tool that crowdsources geo-tagged photos of disaster-affected areas to assess damage over large regions. This information will allow the Federal Government to better allocate critical response and recovery resources to regions in need.
- Reducing Hazardous Noises: Every year, approximately 30 million people in the United States are occupationally exposed to hazardous noise, which can cause permanent hearing loss. In an effort to reduce hazardous noises in the workplace, the U.S. Department of Labor will convene subject matter experts for a Data Jam event aimed at highlighting relevant data sets and brainstorming data-driven ideas to reduce workplace-related hearing loss.
- Energy Emergency Preparedness: The U.S. Department of Energy is previewing "Lantern", a mobile app that provides helpful information and assistance during a disaster. The mobile app is designed to provide consumers timely disaster preparedness tips and recommendations, allow consumers to report and access information on power outages and fallen power lines, and help users find fuel and report the status of gas stations.
- Safety Warnings for Overseas Travel: The U.S. Department of State's Bureau of Consular Affairs has released an online service to share Travel Warnings and Travel Alerts so that U.S. citizens have information about international travel risks such as health alerts, ongoing crime and violence or frequent terrorist attacks. The new Application Programming Interface (API) lets developers access these data sets from the State Department and integrate them into websites and mobile applications such as tourism guides and online travel sites.
- Increasing Consumer Product Safety Awareness: The Consumer Product Safety Commission is announcing a "Safer Products App Challenge", calling on innovators to create applications and tools that help raise awareness of product reports submitted through SaferProducts.gov and of consumer product recalls. SaferProducts.gov has more than 17,000 consumer reports with information about the manufacturer as well as data about the date, location and type of incident. The CPSC plans to announce challenge winners in summer 2014.
- Monitoring Product-Related Injuries: The Consumer Product Safety Commission's National Electronic Injury Surveillance System allows users to review information submitted by hospitals about product-related injuries, including the products and body parts involved, injury type and diagnosis, and patient statistical information. CPSC is releasing XML data of its consumer product recalls dating back to 1973, including product description, hazard, and recall date.
- Mapping High-Risk Locations to Help Fight Crime: Rutgers University is showcasing a "Risk Terrain Modeling" tool that uses crime data to identify and map high-risk locations, with the aim of helping law enforcement officers and others anticipate places where illegal behavior will most likely occur, identify where new crime incidents may emerge, and help to develop intervention strategies and tactically allocate resources. The Risk Terrain Modeling app is being offered free of charge to law enforcement agencies and can be used to help evaluate the crime risk of specific locations and to help focus crime prevention efforts on the areas that need it most.
- Tracking Labor Law Violations: Created in response to the U.S. Department of Labor's "Labor Data Challenge", LaborSight is a tool designed to promote fair labor practices by allowing consumers and jobseekers to check whether local businesses have violated United States labor laws -- including the severity of the violations. The tool is powered by Occupational Safety and Health Administration (OSHA) Inspections and Wage and Hour Investigations open data.
- Disaster Response and Recovery Tech Corps Program: The Federal Emergency Management Agency will discuss the development of a Tech Corps Program with the goal of facilitating a national network of skilled, trained technology volunteers who can provide assistance during community response and recovery efforts following a federally declared disaster.
- Using Technology to Improve Criminal Justice Operations: The National Institute of Justice's $150,000 app challenge, "Ultra-High-Speed Apps: Using Current Technology to Improve Criminal Justice Operations," promotes the development, use and evaluation of criminal justice software applications that are compatible with ultra-high-speed (UHS) networks. New UHS applications can provide real-time information and support to criminal justice and public safety practitioners in emergency situations. The first round of submissions in response to the Challenge is due February 14, 2014.
- The Weather Channel: The Weather Channel will publicize to its 100 million web visitors and TV viewers the new standardized hashtags (#PowerLineDown, #NoFuel and #GotFuel) developed by the White House, the Federal Emergency Management Agency, and the Department of Energy to enable citizens to report important emergency information, such as downed power lines or whether a gas station has fuel, across social media platforms during disasters. The Weather Channel is also developing an app to help families be prepared for natural disasters and extreme weather events. The app will focus on hyper-local weather events and use both open government and proprietary data, including National Weather Service alerts, precipitation proximity alerts, and lightning data.
- TaskRabbit Disaster Recovery Assistance Program: TaskRabbit is a platform that lets users post small job offers to a community of those willing to pick up a task in exchange for payment. The company, which includes a network of over 20,000 vetted workers across the country, is announcing that it will provide a dedicated portal for specific recovery efforts during times of crisis, such as distributing food. The portal can serve as an interface for relief organizations to request help during a disaster and connect with high-skilled workers who are willing to volunteer their time. As part of this announcement, TaskRabbit will include a feature that allows workers to pledge their time as volunteers for relief organizations. TaskRabbit will not charge any fees for facilitating these connections.
- Geofeedia, a location-based social media monitoring service, is announcing a free version of its service for first responders, disaster survivors, utility companies, and local, state and Federal governments to use during a disaster to identify the location of downed power lines and gas stations with fuel through tweets and postings of photos. The service will allow first responders to access real-time intelligence about disaster-impacted areas and improve situational awareness.
- Crowdtilt, an online crowdfunding platform, has found that one of the most impactful ways to use crowdfunding in post-disaster scenarios is to help prevent displacement of local businesses. In the event of a disaster, Crowdtilt will empower business owners to fundraise on a rapid timeline and amplify their stories to help prevent displacement. Crowdtilt will also present a preview of the crowdfunding platform it is developing for small businesses affected by disasters.
- Getaround is a car sharing platform that allows users to find, rent, and access vehicles in their neighborhood using their mobile device. Getaround is committing to help get people and supplies out of harm's way during disasters by communicating instructions to its users, as advised by authorities, so that its customers know how to help or get help; notifying all vehicle owners via email and SMS to encourage them to make their cars available at discounted rates; and waiving its commission fee during the recovery.
- Keychain Logistics: Keychain Logistics is a business that directly connects companies looking to ship products with independent semi-truck owners and operators. Keychain uses freely available data from the Federal Motor Carrier Safety Administration and the U.S. Department of Transportation to verify that drivers are qualified, to ensure drivers have appropriate insurance coverage and equipment, and to match drivers to shipments with specific needs -- such as checking for drivers that are licensed to provide transportation services for hazardous materials. Keychain was started at Y Combinator, a Silicon Valley accelerator, and has grown from a company of one employee to nine in just over 18 months and expects to double in size by the end of 2014.
- BeSharp is an application developed through a White House Safety Datajam that measures police officer fatigue through a wrist monitor and pushes text message warnings as the officer becomes potentially impaired by fatigue while on duty. The app aims to improve safety and medical conditions, as well as provide objective data about officers' fatigue-related impairment in the field.
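To give a concrete sense of how a developer might use one of the datasets above, here is a hypothetical sketch of pulling a few records from the openFDA adverse-event endpoint. The URL, query syntax and field names reflect the public api.fda.gov service as it later shipped and are assumptions here, not details taken from the announcement.

```python
# Hedged sketch of querying openFDA adverse-event data; endpoint and field names
# are assumptions based on the public api.fda.gov service.
import json
import urllib.request

url = ("https://api.fda.gov/drug/event.json"
       "?search=receivedate:[20130101+TO+20131231]&limit=3")

with urllib.request.urlopen(url) as response:
    payload = json.load(response)

for report in payload.get("results", []):
    drugs = [d.get("medicinalproduct")
             for d in report.get("patient", {}).get("drug", [])]
    print(report.get("receivedate"), drugs)  # date received and the drugs named in the report
```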
http://www.networkworld.com/article/2226139/applications/white-house-goes-nuts-with-safety-hashtags--apps-and-other-online-tools.html
IoT Botnets & DDoS Attacks: What you need to know

The Internet of Things (IoT) brings the promise of efficiency and innovation to the enterprise. IoT also profoundly expands the threat surface for your organization. The issue of botnets that leverage Internet of Things devices was largely ignored until the internet itself came crashing down one morning not long ago. Some of the most popular destinations were suddenly unavailable. Within minutes, IoT botnets went from a relatively obscure and unappreciated technology story to front-page news.

IoT botnets are not a new phenomenon. Arbor has seen them for several years, used to launch DDoS attacks, send spam, engage in man-in-the-middle (MitM) credential hijacking and other malicious activities. What is new is attackers' interest in exploiting this fast-growing army of unsecured devices.
- 2014: A large IoT botnet would have 75,000 compromised devices.
- 2016: The now-infamous Mirai botnet that took down large portions of the internet was originally leveraging 500,000 devices. The botnet source code has been released, and attackers are innovating and growing the botnet on a daily basis.

No vendor has more experience or a better understanding of the Tactics, Techniques and Procedures (TTPs) of attackers who leverage IoT botnets. The stakes have changed. Learn how we can help you meet this challenge.
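For a rough sense of what those device counts mean for DDoS capacity, the sketch below multiplies botnet size by an assumed per-device upload rate; the 1 Mbps figure is purely illustrative and not from the article.

```python
# Back-of-envelope sketch: aggregate attack bandwidth from botnet size.
# The per-device upload figure is an assumption; real IoT devices vary widely.
def attack_gbps(devices: int, per_device_mbps: float = 1.0) -> float:
    return devices * per_device_mbps / 1000.0

print(attack_gbps(75_000))    # ~75 Gbps  (large IoT botnet, 2014 figure above)
print(attack_gbps(500_000))   # ~500 Gbps (Mirai-scale device count)
```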
https://www.arbornetworks.com/stakes
Make it smaller. That's what many DIY hackers cry out for when they see a new programming board. People even call for powerful mini-PCs like the Raspberry Pi 3 to shrink, as we saw in the reaction to our recent review. Well, get a load of this, makers: an Arduino clone that's literally the size of a AA battery.

Sweden-based coder Johan Kanflo was looking for a way to make the Tiny328, which he was using in his DIY projects as an ISM radio node, more portable. Kanflo wanted a case for the Tiny328 with a battery compartment, but he couldn't find anything that suited his needs. 3D printing wasn't in the cards either. That's when he decided to go out on his own, and shortly thereafter the aptly named AAduino was born.

This Arduino-compatible board fits neatly into a AA battery slot. Kanflo opted for this form factor to create a self-contained radio node that supplies its own power. In other words, he dropped the AAduino into one slot of a battery case with room for three AAs, and then put regular batteries in the others. To take power from the batteries that are sharing the compartment, Kanflo added Keystone battery terminals to the board, but with the poles reversed. He also had to file down some of the components without damaging them so they'd fit inside the battery compartment. The board itself has an 8-bit ATmega328P microcontroller running at 8 MHz, an RFM69C transceiver module for wide frequency communication, two temperature sensors, and an LED.

The impact on you at home: This is definitely not a Raspberry Pi replacement, but it is a nice little tool for any projects that require basic wireless communication between devices. If you'd like to create your own AAduino, Kanflo has posted all the information you need to get started on Github.

This story, "This crazy tiny Arduino clone nestles into a AA battery slot," was originally published by PCWorld.
http://www.itnews.com/article/3059312/open-source-tools/this-crazy-tiny-arduino-clone-nestles-into-a-aa-battery-slot.html
The US Army is warning soldiers about the risks of uploading geotagged photos to Facebook and other social networking sites. Geotagging is the process of adding geographical identification metadata to various media such as photographs, video, websites, SMS messages and QR codes. Many smartphones automatically embed the latitude and longitude within every picture you take.

By uploading geotagged photos to Facebook or checking in with social media applications such as Foursquare and Gowalla, soldiers broadcast the exact location of their unit or their family, said Steve Warren, deputy G2 for the Maneuver Center of Excellence, or MCoE. There is a real-world example that illustrates the risks of geotagging: in 2007, four US Army helicopters (AH-64 Apaches) were destroyed in Iraq after geotagged photos were posted on the Internet.

Facebook's new Timeline feature includes a map tab of all the locations a user has tagged. Anyone tagged as a friend on Facebook can get access to that information. "Some of those individuals have hundreds of 'friends' they may never have actually met in person," Staff Sgt. Dale Sweetnam, of the Online and Social Media Division, explained. By looking at someone's map tab on Facebook, you can see everywhere they've tagged a location. You can see the restaurants they frequent, the gym they go to every day, even the street they live on if they're tagging photos of their home. Honestly, it's pretty scary how much an acquaintance who becomes a Facebook 'friend' can find out about your routines and habits if you're always tagging locations to your posts.

According to a BBC report, the British army has banned the use of mobile phones in operational zones like Afghanistan, and cautions against soldiers taking pictures on smartphones in any circumstances. Soldiers are asked to disable the geotagging feature on their phones and to check the security settings on their social networking sites to make sure only real friends have access to their information. "A good rule of thumb when using location-based social networking applications is do not become friends with someone if you haven't met them in person," Sweetnam said. "Make sure you're careful about who you let into your social media circle."
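To see how little effort it takes to recover that embedded location data, here is a hedged sketch using the third-party Pillow library; the file name is hypothetical, and not every photo carries a GPS block.

```python
# Sketch: read the GPS coordinates many smartphones embed in photo EXIF metadata.
from PIL import Image
from PIL.ExifTags import GPSTAGS

img = Image.open("vacation_photo.jpg")   # hypothetical file name
exif = img._getexif() or {}              # raw EXIF tags, keyed by numeric tag id

gps_raw = exif.get(34853)                # 34853 is the standard GPSInfo tag id
if gps_raw:
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
    print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
else:
    print("No geotag found in this photo.")
```

Disabling the camera's location feature, as the article advises, simply keeps that GPSInfo block from being written in the first place.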
http://www.ehackingnews.com/2012/03/us-army-warns-soldiers-about-risks-of.html
NASA, Google provide 3-D views of Mars
By Doug Beizer - Feb 02, 2009

A 3-D view of Mars is now available on Google Earth, NASA and Google announced today. The new Mars mode provides high-resolution views of the planet and enables users to virtually fly through canyons and scale mountains on Mars, according to NASA. Users can also explore the planet through the eyes of the Mars rovers. Satellite imagery from NASA's Mars Reconnaissance Orbiter and other probes orbiting the planet is also available via the new mode, which allows users to add their own 3-D content.

The project was developed under a Space Act Agreement that NASA's Ames Research Center signed with Google in November 2006, according to NASA. Under its terms, NASA and Google agreed to collaborate to make the data NASA collects available online.

Doug Beizer is a staff writer for Federal Computer Week.
https://fcw.com/articles/2009/02/02/nasa-google-provide-3d-views-of-mars.aspx
You've made it. You're in the first weeks of your first year as a classroom teacher. You've made your classroom cozy, you're learning your students' names and personalities, you're establishing routines and rituals, and you're building your lessons for the upcoming months. How equally exciting and terrifying!

One of the scarier tasks you'll take on this fall is writing a lesson plan that grabs students' attention and gets your objectives to stick in their minds. It's likely that many lesson plans were handed down to you from a previous teacher, or were pre-written as part of a curriculum from a textbook series, which can be less than thrilling for those eager kiddos to stay engaged with. One of the best ways to spice up those aged and canned lesson plans is to integrate an online software application, or "app," used on a computer, laptop or handheld device. Whether you're creating a new plan or adding to an existing one, here are some helpful tips for making the task of adding apps to lessons a bit less daunting and a lot more fun. These tried and true principles are used for lesson planning in general and can also be applied to many aspects of teaching.

1. Begin with the end
Just as with creating lesson plans in general, when adding the use of an app to a lesson, think about the outcome of the learning experience first. Look at what the students need to know and how they will be assessed. "If students know what they are to learn, you greatly increase the chances that the students will learn." – Harry Wong, The First Days of School
If you are teaching a lesson on the elements of design, the outcome would be that students know the elements and their characteristics, and can show examples of them. The app you choose should allow the students to learn those things. An example would be to have students use an app with drawing and type tools to create a document with examples and descriptions of the elements.

2. Don't reinvent the wheel
This is the age-old mantra of the seasoned teacher. Teachers stick together and share. When adding apps to lessons, don't take a ton of time designing a lesson before you've checked around for already-written ideas. If your classroom has iPads or your students use BYOD handheld digital devices, you can't go wrong with Kathy Schrock's iPads4Teaching website. The section "Classroom uses of the iPad" covers just about anything you can think of to incorporate iPads into lessons, and gives multiple lessons using apps available on iPads.

3. Keep it simple
The acronym KISS, meaning Keep It Simple Stupid, has been used in the information technology industry for decades. Adding app use to your lesson plans should be kept simple so as not to be overwhelming to you as a new teacher or to your students.

4. Have a backup plan
Whether a lesson plan includes an app or not, it is always wise to have a backup plan for new lesson activities. Any time students are going to be online doing an activity, you should have a plan B for when the application, internet, or computer has technical difficulties.

5. Take time to reflect
One of the greatest gifts to your future teacher self is to take time to reflect after you've finished teaching a lesson. Especially important for lessons that involve online applications, make yourself sit down and type up your reflection on what worked and what didn't.

To conclude, new teachers, don't be afraid to try adding activities using online applications to your lesson plans. Remember, your students don't know what you don't know.
As long as you begin with the end, don't reinvent the wheel, keep it simple, have a backup plan, and take time to reflect, you will be well on your way to creating and teaching lessons that will rock your first year.

Here are some great articles we found to help with lesson planning:
- The New Teacher's Guide to Creating Lesson Plans – K-5, from Scholastic.com
- New-Teacher Academy: Lesson Planning – All Grades, from Edutopia
- Back-to-School Guide For Beginning Teachers (And Not-So-New Teachers Too)! – All Grades, from Education World
- Learning Objectives: The Heart of Every Lesson – All Grades, Harry & Rosemary Wong: Teachers.net

Here are a couple of resources from our blog to help with using technology in your classroom:

Need help managing apps, computers, laptops and iPads in the classroom? Impero has solutions that allow teachers to fully use technology to enhance learning while keeping kids on task and safe online. To learn more about Impero Education Pro classroom and network management software, sign up for a webinar, email us at firstname.lastname@example.org or call 877.883.4370 today.
https://www.imperosoftware.com/5-tips-for-new-teachers-when-adding-apps-to-lesson-plans/
As sheriff of New Hanover County, N.C., Joseph McQueen Jr. has seen his share of hurricanes. Commenting on the devastation from last fall's storms, McQueen said, philosophically, "we're known as 'Hurricane Alley.'" But this year was particularly rough; Hurricanes Bertha and Fran, scarcely a month apart, bowled through the county, sending tens of thousands of residents and vacationers fleeing inland. Winds gusting up to 115 miles per hour ripped boats from their moorings and washed them ashore, mangled telephone and power lines, tore away roofs, drove uprooted trees into houses, and sent ashcans flying through the streets like shrapnel.

It was a time when the Command and Control Center in Wilmington needed a dispatch management system to expedite the flood of 911 calls with maximum speed and accuracy. Fortunately, the center had recently installed a new computer-aided dispatch (CAD) system that instantly mapped caller locations and automated much of the time-consuming detail involved in calltaking and dispatching. As a result, calltakers were able to direct appropriate agencies to addresses and scenes throughout the county with greater efficiency than the county's enhanced 911 system allowed.

New Hanover County is situated between Cape Hatteras and Cape Fear, a region of the Atlantic coast that lies directly in the path of hurricanes. One would think with the number of storms plowing through annually, the area would be sparsely populated. Not so. The county's 185 square miles, which include two heavily developed barrier islands, has a population of 141,000. Under normal conditions, the center handles 1,100 to 1,200 calls a day for the County Sheriff, the Wilmington Police, emergency medical services (EMS) and the Fire Department. During the recent hurricanes, 911 calls flooded into the center at the rate of about 2,000 a day -- a load communications personnel would have been hard-pressed to handle in previous years, not only because of power failures, but because the existing 911 system lacked mapping capabilities and many of the automated features found in advanced CAD systems.

NEW CAD SYSTEM
The Intergraph I/CAD system, installed shortly before Hurricane Bertha hit in July, is configured for seven dispatchers, one supervisor, and three calltakers, with a 21-inch monitor at each workstation. Redundancy on the network is provided by two servers. When a 911 call comes into the center, the calltaker's monitor displays a split screen of an event window and a map with an arrow at the caller's location. The operator confirms the name, address and telephone number of the caller appearing in the event window, and enters the pertinent information and remarks. After determining the nature of the emergency, the operator -- if he or she is a dispatcher -- radios the appropriate response agency in the county where the call originated.

The I/CAD system can be configured to generate color-coded windows, enabling operators to see at a glance the status of an event. For example, default colors are gray for incoming calls until they are answered, at which time the event window shifts to blue, indicating a new call. A red window indicates a unit responding to that event. When the responding unit arrives, the window shifts to green. In addition, events are prioritized by number, beginning with "0" as the highest priority. If three new events are on the screen, the windows will be blue, with the stack position of each determined by its priority number.
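The status colors and priority stacking described above map naturally onto a small data model. The sketch below is illustrative only; the class and field names are ours, not Intergraph's.

```python
# Hedged sketch of the dispatch-queue behaviour described above: events are
# colour-coded by status and stacked by priority, with 0 as the highest priority.
from dataclasses import dataclass

STATUS_COLORS = {
    "incoming": "gray",    # call ringing, not yet answered
    "new": "blue",         # answered, awaiting dispatch
    "responding": "red",   # unit en route
    "on_scene": "green",   # unit arrived
}

@dataclass
class Event:
    description: str
    priority: int          # 0 is the highest priority
    status: str = "incoming"

    @property
    def color(self) -> str:
        return STATUS_COLORS[self.status]

calls = [
    Event("tree through roof", priority=1, status="new"),
    Event("downed power line", priority=0, status="new"),
    Event("flooded street", priority=3, status="new"),
]

# Stack position is determined by priority: lowest number first.
for event in sorted(calls, key=lambda e: e.priority):
    print(event.priority, event.color, event.description)
```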
If a caller has a previous event history -- domestic violence, unlawful use of a weapon, or a medical condition -- a Location of Interest (LOI) feature indicates the presence of that information by lighting a special button on the screen. The dispatcher clicks on the button, brings up all the recorded incidents associated with that caller, and advises the responding agency accordingly. Special Situation is a feature associated with LOI that allows the operator to create a buffer zone around a sensitive address, such as a chemical storage facility, refinery, a suspected drug house, or -- in the case of an apartment complex -- a seriously disabled person living next to someone who has a history of trouble with the law. If a call comes in from any location within the buffer zone, the system automatically retrieves information relating to the sensitive address.

Other mapping features enable the operator to zoom in, zoom out, and temporarily "rope off" sections of streets and highways where major events -- football games, conventions, or road construction -- generate heavy traffic. To determine the fastest route around such closures for emergency response vehicles, the operator has only to enter the location of the vehicle and the desired destination; the system instantly provides directions.

WIDE RANGE OF CALLS
Most 911 calls during hurricanes Bertha and Fran were for injuries and situations directly caused by high winds and flooding, such as downed electrical wires, people trapped by rising waters or in houses partially crushed by felled trees, and people needing to be rescued from homes that had lost roofs. In the aftermath of the storms, 911 calls continued to pour in; people needed to be rescued from collapsed homes, or from trailers piled on top of each other. Some were in shock, wandering around looking for houses that had completely disappeared. One woman was reported floating on a mattress in the middle of a marsh with no idea how she got there.

FASTER RESPONSE TIME
According to Communications Supervisor Mary Antly, the main advantage of the CAD system is speed. "We purchased the original 911 system years ago. It gave you a number when the phone rang. For us, that was fantastic. Before then, we didn't even have that. Later, we bought the enhanced 911 system, and that enabled us to get the name and address of the caller. The mapping capability in the CAD system has shortened our response time."

"The map automatically shows us where the call is coming from," added Brenda Hewlett, 911 coordinator. "That enables us to direct any agency we dispatch -- no more guessing. We can tell them exactly how to get there."

As with all 911 CAD response systems, map maintenance is an ongoing requirement. Some changes are permanent, others temporary. Population growth, street closures, felled trees, newly created one-way streets or dead ends, holiday traffic routing -- all require continual updating of the base map.

Auxiliary power systems with backups played an equally important role in keeping the Wilmington Command and Control Center online, said Sheriff McQueen. "In the past, when we had something like this, 90 percent of the time we got knocked off the air. This is the first time in my 27 years in law enforcement that the communication system didn't go down during a hurricane."
Reflecting on how the new CAD system performed under countywide emergency conditions, McQueen added, "if we hadn't had this system, we might not have been able to handle the number of calls that did come in during the storms."

Bill McGarigle is a freelance science and technology writer, e-mail: <bmcgari@cruzio.com>.
http://www.govtech.com/magazines/gt/CAD-Provides-Hurricane-Relief.html
Beyond a pure passion for technology and the thrill of turning ideas into reality, there is a hugely practical basis for investment in advanced computing. Supercomputers and other computational technologies bolster economic competitiveness, a notion that nearly all academic, industry and government leaders have embraced. As supercomputers become more powerful, manufacturers can run bigger and more complex models, saving time and money in the process.

In Japan, manufacturers are increasingly turning to the nation's fastest supercomputers -- such as the 10-petaflop K supercomputer, installed at Japan's RIKEN research institute -- to get a leg up on the competition. As an article in Nikkei Asian Review details, Japanese business and research organizations are exploring how best to leverage the potential of the K computer and similarly powerful computing machines. There are several projects in place now, which are expected to yield results within a couple of years.

Software developed for K is being used by carmakers Toyota Motor and Suzuki Motor, and Bridgestone, the tire maker, to help them design their next generation of products. The hardware-software combination is making it possible for the manufacturers to meet their prototyping needs without having to build full-scale physical designs. Not only is the digital approach less costly and time-consuming, it enables greater innovation, as new ideas can be tried out with a few clicks of the keyboard. Testing a large number of design parameters in a physical format just wouldn't be feasible from an economic or time standpoint.

Developed by a team of specialists from 13 companies with the cooperation of Hokkaido University, the software simulates the air resistance created by a car by interpreting the space around the car as a grid of 2.3 billion segments. The computer simulation reflects how the air movement is affected by different driving conditions, for example a passing vehicle or a strong crosswind. Digital modeling enables engineers to determine the most aerodynamic shapes. Lower wind resistance enables vehicles to be more fuel-efficient and increases steerability. Previously, automakers had to construct large wind tunnels and run tests using full-scale models. The supercomputer helps minimize the need for expensive physical testing. It can even find flaws that would previously have gone undetected in a physical mockup, according to K engineers.

In addition to the auto industry, Japan is also expanding its supercomputing efforts into the shipbuilding field. Instead of simulating air flow, design software developed by the Shipbuilding Research Center of Japan shows how a ship's movement creates turbulence in water at scales as small as 1mm. By enabling shipbuilders to forgo testing of real-life floating models in enormous tanks, the design costs for such vessels can be reduced by up to 50 percent.

The Fujitsu-RIKEN K supercomputer is also being used to enable a faster pace of discovery in materials science and pharmaceutical research. While the main user base for the K system is universities and labs, the Research Organization for Information Science and Technology (RIST), which manages the allocation process, also maintains a number of industry relationships. RIST accepted 42 applications for projects using the K in fiscal 2014, up from 27 in 2012. Going forward, Japan's science ministry is working to develop an exascale supercomputer, 100 times faster than the K, by 2020.
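A quick calculation shows why shaving even a little off a car's drag coefficient is worth petaflops of simulation. The figures below (air density, frontal area, speed and the two drag coefficients) are generic illustrative values, not numbers from the article or from any carmaker.

```python
# Back-of-envelope sketch of the fuel-economy payoff from lower aerodynamic drag.
# All input values are assumed, illustrative figures.
AIR_DENSITY = 1.2      # kg/m^3
FRONTAL_AREA = 2.2     # m^2, typical passenger car
SPEED = 27.8           # m/s, roughly 100 km/h

def drag_power_kw(drag_coefficient: float) -> float:
    # Power spent pushing air aside: P = 0.5 * rho * Cd * A * v^3
    return 0.5 * AIR_DENSITY * drag_coefficient * FRONTAL_AREA * SPEED ** 3 / 1000.0

baseline, improved = drag_power_kw(0.32), drag_power_kw(0.29)
print(f"{baseline:.1f} kW vs {improved:.1f} kW "
      f"({100 * (baseline - improved) / baseline:.0f}% less power lost to drag)")
```

At highway speed, a roughly ten percent cut in drag coefficient translates almost directly into a ten percent cut in the power spent fighting the air, which is the kind of gain the virtual wind tunnel is chasing.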
A new geospatial data center at the U.S. Agency for International Development aims to mash together satellite imagery and on-the-ground surveys and reports to cut down on field-based work and give the agency a better sense of where development dollars can do the most good. Agency officials are in the early stages of planning the GeoCenter, as they have dubbed it, and will officially launch the center sometime before the end of the year, Shadrock Roberts, one of its designers, said at a USAID seminar Wednesday.

GIS data can be combined with a range of other data collected by USAID and nongovernment aid organizations to make development work more efficient and productive, panelists at the event said. Project workers focused on food security, for example, can map data on conflict, economic development and population movements against satellite-based maps of agricultural production, roads and weather patterns to predict where food shortages are most likely to occur and focus resources there. In other cases, rapidly gathered satellite imagery can save workers on the ground minutes or hours of busy work during humanitarian emergencies.

As a research fellow with the Centers for Disease Control and Prevention, Roberts used satellite imagery and data sets from the CDC and other organizations to map the Kakuma refugee camp in northern Kenya, home to more than 60,000 East Africans fleeing violence in neighboring Somalia and Sudan. CDC workers told Roberts that maps of other camps could drastically cut down the amount of time they spend monitoring the spread of disease, he said. Currently, CDC workers who go to newly formed refugee camps spend a great deal of time establishing the camp's contours and sometimes get lost in unfamiliar terrain, he said.

GIS information also is helpful for monitoring and evaluating existing programs, Karl Wurster, a geographer who worked in the USAID mission in Rabat, Morocco, said during Wednesday's panel. Wurster worked mainly on mapping attendance data from training programs and workshops that USAID and other agencies conducted. That data ultimately could be overlaid against the training objectives, such as lower HIV transmission rates, higher female literacy or greater crop yields, to measure its effectiveness, he said.

Many USAID missions such as Wurster's already are working with GIS, Roberts said. The GeoCenter's goal will be to collect best practices from those missions and to establish common data standards, both among different missions and between USAID and other aid organizations working in a single country or region, so data can be more easily shared. The group also plans to standardize, as much as possible, the metadata different missions and agencies use, said Carrie Stokes, another GeoCenter organizer, so different groups can confidently use the same maps and datasets without duplicating work. The center will likely contract out some mapping work from USAID missions that lack the capacity to do it locally, Roberts said, and create standard mapping products to be used across multiple missions.

"The key part about thinking spatially isn't about computer programs," he said. "If your thing is food security or health or agriculture or economics or governance or internally displaced persons or whatever, it always starts with a question and we're here to help people sharpen those questions and to start thinking about the spatial components to them."

The description of USAID's GeoCenter was clarified to avoid confusing the center with intelligence community assets.
There is no relationship between the GeoCenter and the National Geospatial-Intelligence Agency.
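The food-security example earlier in the piece boils down to layering several datasets over a common geography and flagging the places where risk indicators stack up. The sketch below illustrates that idea in plain Python; the file names, column names, and thresholds are hypothetical and do not describe any actual USAID system.

import csv
from collections import defaultdict

# Hypothetical per-district indicator files; each has "district" plus one value column.
LAYERS = {
    "conflict": "conflict_events.csv",          # e.g., events per month
    "crop_index": "satellite_crop_index.csv",   # e.g., vegetation index
    "rainfall": "rainfall_forecast.csv",        # e.g., millimeters expected
}

def load_layer(path, value_column):
    # Read one indicator layer keyed by district name.
    values = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            values[row["district"]] = float(row[value_column])
    return values

# Overlay the layers on the shared "district" key.
districts = defaultdict(dict)
for name, path in LAYERS.items():
    for district, value in load_layer(path, name).items():
        districts[district][name] = value

# Flag districts where risk indicators stack up (thresholds are illustrative).
for district, d in sorted(districts.items()):
    if (d.get("conflict", 0) > 10
            and d.get("crop_index", 1.0) < 0.4
            and d.get("rainfall", 100) < 50):
        print(f"{district}: high food-insecurity risk")

A real GeoCenter workflow would use actual GIS layers and spatial joins rather than a shared district key, but the overlay-and-threshold logic is the same.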
Have you seen buildings in motion that actually change their shape? This Dynamic Architecture building, called the 'Rotating Tower,' by Italian-Israeli architect Dr. David Fisher will be constantly in motion, changing its shape. It will also generate electric energy for itself.

The skyscraper will be the world's first building in motion. Uniquely, each floor will be able to rotate independently, resulting in a constantly changing shape for the tower. Each floor will rotate a maximum of 6 meters (20 ft) per minute, or one full rotation in 90 minutes. The building will adjust itself to the sun, wind, weather and views by rotating each floor separately, so it will never appear exactly the same twice. You will have the choice of waking up to sunrise in your bedroom and enjoying sunsets over the ocean at dinner.

In addition to being an incredible feat of engineering, it will produce energy for itself and even for other buildings, because wind turbines will be fitted between each rotating floor. An 80-story building will thus have up to 79 wind turbines, making it a true green power plant.

The 'Rotating Tower' in Dubai will be 1,380 feet (420 meters) tall with 80 floors. Apartments will range in size from 1,330 square feet (124 square meters) to villas of 12,900 square feet (1,200 square meters), complete with a parking space inside the apartment. Located in a prime area of Dubai, the tower will consist of offices, a luxury hotel and residential apartments, with the top 10 floors reserved for luxury villas. It will also be the first skyscraper to be entirely constructed in a factory from prefabricated parts, so instead of some 2,000 workers, only about 680 will be needed. Construction is scheduled to be completed by 2010.
Reaching Students Through the Web

Ease of use, convenience, unconstrained accessibility and increased opportunities to connect with students have made online learning environments more popular than ever before. But IT trainers, many of whom also teach in a more traditional classroom setting, have had to rethink their training methods to accommodate the Web platform. Two tips emerged as standout methods to help you get the most out of your students online.

Learn to Use the Equipment

Teaching online versus training in a traditional classroom requires considerable interaction with technology. That interaction is critical because without it there can be no communication between instructor and student. Online tools like emoticons take the place of a raised hand with a question. They become a mouth to frown with when students are confused, or they take the place of a smile to let the instructor know when the information has sunk in and made the right impact. Online trainers must first teach their students how to manipulate the emoticons and other equipment before they can begin the lesson.

"When you enter a classroom as an instructor, you look at glazed eyes and you see people shaking their heads so you know that somebody is not following," said Clement Dupuis, president and chief learning officer, CCCure Enterprise Security & Training Inc. "But you can train them right at the beginning of the course how to interact with the instructor. Usually there is something called emoticons, which are small graphics they can use to indicate to the instructor that something is wrong—the instructor is going too slow, too fast, the sound is not high enough. You've got to make them use this right off the bat. Usually when I start my course, I'll introduce them to the interface and work them through it. It's a completely different way of interacting."

Once students are adept at using the technology, Dupuis said, the online training environment is effective because it removes many of the social barriers that exist in a regular classroom. Online, he's found that people are not as shy about asking questions, because there's no face-to-face contact and they don't risk looking stupid. Online interaction also preserves a student's anonymity. "I get more questions from online training. More people send me questions one-on-one, and I can pass the answers along on the fly without mentioning who it came from."

Not only do the students have to use emoticons, microphones and other equipment, the trainer has to be clever with them as well. "You have to know how to use the tools," said Ann Beheler, dean of engineering and emerging technology, Collin County Community College District. "It's not something where you'll say, 'Oh, yeah. I'm going to teach on this tool right now.' You have to go through some training to know how to use it effectively. It's kind of like being a one-armed paper hanger. You're manipulating the tool, lecturing and managing the class simultaneously. It's not totally intuitive, and it takes some practice until you're comfortable doing it. It's not that hard, but you do need to practice."

Engage, Engage, Engage

Once you and your students can work the equipment like pros, the next item on the IT trainer agenda is engagement. Engaging your students is a given no matter what environment you're teaching in. But online, engagement means something a little different. On the Web, engagement means encouraging a higher level of interactivity and using a more conversational tone.

Teaching in a regular classroom allows a trainer to use hands, gestures, voice inflections, even the body to help engage students. New Horizons Computer Learning Centers Inc. teaches its trainers how to use online tools to be effective without seeing or being seen, because online the IT trainer must probe a little deeper for the responses that come naturally in a classroom or in the labs that frequently accompany traditional and online lectures. David Sundstrom, vice president of business development at New Horizons, said that making their Online LIVE virtual classrooms sizzle means training instructors to not only know the material, but to have applications of that material on hand to use—either from their own personal experiences or from those of people they've worked with.

"Some of that involves what they talk about. They have to be much more conversational about certain subjects," Sundstrom said. "We have a train-the-trainer session that every trainer needs to go through, where we teach them the new way to talk, to have a subliminal conversation with their students and get their students to learn that well enough that their students can have the conversation back to them."

Further, in the online lab environment, which typically occurs directly after the lecture, it's important to virtually engage or "walk" around the classroom and figure out what students are doing. "That's actually more difficult to do in a classroom environment than an online environment because in the online environment, we get the opportunity to see what the student is doing," Sundstrom said. "Our instructors can actually track the progress of each student as they're working through the labs. They can also take what the students are doing in a lab and show it to other students in the class. It's easier to see where they've run into problems, so when you step in to engage directly with the students, you're not trying to figure out how they got to where they are. You can already see the history of everything they've done prior to picking up with them wherever they are now."

"The key differentiator is the instructor," Dupuis said. "The instructor can make a whole world of difference. You need somebody that knows the material well, who can answer a bit outside the scope of the class. Students don't want someone who reads slides. They can read the slides themselves. They want somebody who will add to the slides and bring real-life experience. That's what they want to hear about. They want to hear a person who has 15 years of experience tell them how this relates to what they are about to learn."

As an IT trainer in an online environment, it's essential that you make yourself available to your students. Labs and lessons often are available as recorded archives so students can review material they missed or weren't 100 percent sure about. However, trainer participation outside the lecture and the lab can be the turning point for students who have an OK handle on the material rather than a thorough understanding.

"You have to engage your students," Beheler said. "That's why the synchronous online is advantageous compared to asynchronous instruction. It's much more difficult to engage students and let them know that you're there and that you're really interested and there to help asynchronously. It can be done, but it's lots and lots of work. I've had classes where I can immediately tell whether the instructor is going to be responsive and interested from the get-go. They respond to their e-mails quickly, they participate in the discussion boards. In other classes I've had instructors who take four or five days to respond to e-mail, and they're never there in the discussion boards. People do better when they think the instructor cares—that's basic learning theory."

Kellye Whitney, email@example.com
A chain is only as strong as its weakest link. In the security world, that weak link is the human element, and it manifests in the poor management of user passwords.

As our society becomes increasingly wired, we need to remember an increasingly large number of accounts, PINs, and passwords. I have at least 7 different email accounts, multiple network account/password pairs, building access codes, and bank PINs. Then there are my MANY various web access accounts. Everything we do needs a special code so that we alone can access our personal information. Passwords, PINs, access codes… Information overload!

In a perfect world none of this would be necessary, because we could trust each other not to break into each other's houses, telephones, or bank accounts, or to send the boss offensive e-mails from each other's accounts. Unfortunately, this is not a perfect world. Passwords are necessary to protect the security of our personal information, our business and our day-to-day transactions and communications.

The standard "memory" tricks and techniques of using post-it notes, birthdays, a wife's name, or stock words and phrases are not recommended. I remember one end-user who complained about the need to remember so many passwords and change them at regular intervals. His solution was to use his wife's name for three months, then his anniversary date for the next three, and then revert to his wife's name. It's no wonder our secrets aren't safe!

When creating new passwords, remember two main issues: security and efficiency. Passwords should be difficult to crack, but still easy to create and remember. There are some simple tricks that make this task easier. One simple trick is to use two words together. This confounds most simple brute-force attacks that simply run through a dictionary of words. Another method is to purposely misspell a word in some manner that is easily remembered. Use both upper and lower case characters in unusual places (unUSual cAPitaLIZation). Many people swap numbers for similar letters, such as replacing the letter "O" with a zero.

Passwords alone don't offer sufficient protection, even when following these recommendations. The proper use of passwords must be combined with strict security policies and an overall positive security posture or climate. Security will only work when implemented from the top down. Proper policies must be established outlining mandatory security procedures, and this must be reinforced by effective network administration. Consideration must be given to password length, expiration and lockout thresholds. Additionally, passwords should be required to include upper and lower case, numeric, and special characters. Combining all these techniques forces a would-be hacker to use a brute-force technique that is extremely time-consuming. Generally, if it takes too long, they just won't bother! And that's just what we want. After all, if your information is worth having, it's worth protecting.
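The tricks described above (joining two words, mixing case, and swapping in digits) are easy to automate. The following is a minimal sketch, not a vetted password policy; the word list and substitution map are assumptions for illustration, and a real generator should draw from a much larger dictionary.

import secrets

# Illustrative word list only; a real tool would use a large dictionary.
WORDS = ["harbor", "violet", "granite", "meadow", "copper", "lantern"]
SUBSTITUTIONS = {"o": "0", "i": "1", "e": "3", "s": "5"}

def make_password():
    # Join two random words, swap in digits, and randomize case.
    base = secrets.choice(WORDS) + secrets.choice(WORDS)
    chars = []
    for ch in base:
        ch = SUBSTITUTIONS.get(ch, ch)
        # Randomly upper-case letters to get the "unUSual" mix of case.
        chars.append(ch.upper() if secrets.randbelow(2) else ch)
    return "".join(chars)

print(make_password())   # e.g., "C0pP3rV10L3t"; output varies on each run

Two common words with substitutions and mixed case defeat a plain dictionary run, but length still matters more than cleverness; the policy controls mentioned above (length, expiration, lockout thresholds) are what make brute force impractical.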
Virtualization and Cloud Computing: Does One Require the Other?

Many people believe that cloud computing requires server (or desktop) virtualization. But does it? This white paper examines the relationship between the two. We will look at using virtualization without cloud computing, cloud computing without virtualization, and then at using both together. In each case, we'll look at where each deployment might be most useful, some use cases for it, and some limitations.

Virtualization without Cloud Computing

Most organizations are virtualized without cloud computing. According to recent surveys, approximately 60 percent of all servers today are virtualized. Virtualization is deployed in businesses of all sizes and affects all industries, organizations, governments, and so forth. Virtualization projects typically start with compute (i.e., server) virtualization, as it is usually the easiest to virtualize and provides the greatest return on investment. This is what is most commonly thought of as "virtualization." However, more can be virtualized: both networking and storage.

Network Functions Virtualization (NFV) refers to the virtualization of traditional networking functions such as switching, routing, and load balancing. It can include firewalls, Intrusion Detection or Prevention Systems (IDS/IPS), antivirus management and more. Often, NFV is combined with Software-Defined Networking (SDN) to automate management of the various physical and virtual network components.

Many vendors also offer Software-Defined Storage (SDS), including traditional vendors such as EMC as well as companies that have specialized in SDS for years, such as DataCore. The idea is to use commodity storage devices, often installed in servers, and virtualize access to them so that the local storage inside each server gets pooled together and becomes visible as shared network storage. When virtualized compute, networking, and storage are combined, the result is the Software-Defined Data Center (SDDC), which promises a great deal of automation and scalability.

Many companies will go to this point and stop. What is left undone if cloud computing is not also introduced? The self-service provisioning of the VMs necessary for the business workloads to run. It often takes days or even weeks for a VM to go through the approval processes at an organization and for a virtualization administrator to get the necessary VMs created and made available to users. This decreases a company's agility and often leads users to find a cloud platform on their own, outside the control of IT. That can create security issues for the organization, as well as reduced demand for IT resources, which, if taken to the extreme, would drastically reduce or eliminate the need for IT at the company.

So what are some good use cases for using virtualization without cloud computing? Small businesses that don't have an extended VM provisioning process are one. Medium-sized businesses may also be fine with virtualization only, especially if they don't have developers or others who need VMs provisioned quickly.
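The self-service provisioning gap described above is usually closed by putting an API in front of the virtualized pool so that a user can request a VM without waiting on an administrator. The snippet below is a hypothetical illustration of that pattern: the endpoint, token, and request fields are invented for the example and do not correspond to any particular cloud product.

import requests

# Hypothetical self-service provisioning endpoint and credentials.
API = "https://cloud.example.com/api/v1"
TOKEN = "REPLACE_WITH_API_TOKEN"

def request_vm(name, cpus, memory_gb, image):
    # Ask the (hypothetical) cloud layer for a VM instead of filing a ticket.
    response = requests.post(
        f"{API}/vms",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "cpus": cpus, "memory_gb": memory_gb, "image": image},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()          # e.g., {"id": "...", "status": "building"}

vm = request_vm("dev-web-01", cpus=2, memory_gb=4, image="ubuntu-22.04")
print(vm)

With virtualization alone, the same request typically routes through approvals and a manual build; the cloud layer's contribution is turning that workflow into a call that completes in minutes.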
In what has become a data-driven world, your organization's data is valuable. It has become the "keys to the kingdom," so to speak. Very few companies today could function without data, especially good data. However, I would suggest that more important than data is information. Data provides the building blocks, but information is the consumable outcome that can be used as a competitive edge.

In recent years, big data and predictive analytics have made great strides in helping organizations mine their data. We look for patterns in people's behavior, hoping to understand or predict what our customers will purchase in the future. We are now at the stage where we want to anticipate what people will do next before they even know!

While big data and predictive analytics have made significant contributions to the technology field and many organizations are now seeing data's value, there is a dark side to all of this that we rarely discuss. Patterns in human behavior do not, by themselves, establish correlation with events or activities, let alone causation. Yet organizations "tinkering" with big data talk about information as if they did. This is where the role of the data scientist—someone who has a diverse background in computer science, data architecture, and, especially, social science—becomes so important. Human behavior is far from predictable, and making decisions based on patterns discovered in data can be disastrous or even harm an organization's reputation. Take the story about Target from 2012, for example, about the pregnant teenager who was sent coupons for baby supplies before all of her family knew she was pregnant. No one at Target meant for this to disrupt a family or cause any harm. Its data indicated a common pattern in consumer behavior, and the automated processes in place did what they were created to do.

As technology professionals, herein lies the question: what are our ethical and social responsibilities with respect to the impact of new and emerging technologies and processes that we invent? We live in a global environment, where ripples in one country's economy can be felt all over the world. The same holds true for technology. With the emergence of artificial intelligence and augmented reality, not to mention the Internet of Things, we are enabling technology to do amazing things in fields such as education and healthcare, reaching into remote and rural areas to help people all over the world. However, within the past few weeks, it has come to light how vulnerable some of our networked devices are, leaving us open to cyberattacks. Today, cars send data to manufacturers, and medical devices such as infusion pumps and pacemakers connect to healthcare centers, and these are all good things. But, at the same time, we have opened a Pandora's box of unintended consequences that we, as professionals, may not have completely thought through.

I was reading a recent article in Harvard Business Review about how predictive analytics is now being used to detect patterns in employee behavior to reduce turnover. The process alerts managers about who these employees are so they can intervene and attempt to retain them. As a technologist, I see the benefit of doing this, but, at the same time, it makes me sad. We are relying on data to do what good leaders should be doing—having meaningful conversations with their employees and team members to better understand their goals and aspirations so they do not become flight risks.
So, I am circling back to the question I asked earlier: As professionals, what is our ethical and social responsibility when introducing these technologies and processes into the world? Technology has become integrated with our lives, families, work, and health. We need to start thinking about whether a new technology or process has unintended consequences for us and our families. Are we utilizing customer data appropriately? Do our customers know how their data is being analyzed and for what purposes? How vulnerable are devices that are now networked to other systems and devices? Thinking about the ethically and socially responsible thing to do will serve the technology profession in good stead and help us continue to build trust among our peers and customers.
Gen Y'ers are also much more at ease with diversity, across economic, ethnic, gender, and cognitive-style strata. They easily connect with people in other parts of the planet, and anyone with a common interest or goal becomes a friend in the online world. They also like exploring and learning new things with an open mind, and are even cultivating their value systems through online interaction.

From a content perspective, such a shift in behavior is creating more comfort with unstructured or loosely structured content generation, and a receptiveness to collaboration. In this brave new world the separate roles of producers and consumers of knowledge are merging, and the age of the prosumer is at hand, where people collectively generate and consume knowledge and everyone eats their own dog food.

Collaborative technologies are allowing people not only to discuss issues and fuel the decision-making process through collective gathering of views and opinions, but also to create an explicit form of social memory. Social memory can be defined as what is known and understood in a social network. For example, people who are socially involved in the deliberations for making a decision often have deep insight into the reasons for making that decision. They have an understanding of each other's viewpoints and feelings as well. However, others not directly involved in the decision-making process often only receive communication of the outcome and don't have an understanding of what went into the decision. There is no evidence available to them of what transpired and why. And in today's empowered world, people spring into action only when they truly understand the reasoning underlying decisions. By creating an explicit form of social memory online, people can understand the why of certain decisions, and even get more involved in the decision-making process itself. This applies not only to tactical issues at a project-team level, but to strategic issues at an organizational level as well. It significantly improves the quality of information available to people and clarifies the context within which people are expected to act, which enhances buy-in and creates a common alignment in purpose and direction. Social media then not only allows for conversations, but creates more inclusion overall. This is part of the reason that the open source movement has become so widespread and powerful.

Innovation also is fueled by social networking. There is a growing opinion that innovation is fueled by an open and collaborative process which taps into the collective know-how of a community not only to come up with ideas, but to collectively transform those ideas into inventions and innovations. The success of the open source movement is a good example of this. Different people pitch in at different times in the process of creating open source software, balancing diverse collective thinking with collective action. And studies have shown that by creating a community of lead users and allowing them to interact, new demand itself is generated, tapping into potential blue oceans. Customer communities are allowing companies to simply listen, observe and sense trends ahead of time. Innovation is largely a social phenomenon, and the flattening of the world is accelerating its pace through an increased capacity for connecting people.

So what does this all mean to the enterprise, whose ability to harness knowledge and build capability is a core strategic issue? It must reconsider and redraft its knowledge strategy for this "2.0" world. From a nuts-and-bolts perspective, the traditional deployment of centralized content management systems will have to be merged with the newer phenomenon of community-driven content generation. The process-driven approach has to be merged with a people-driven approach and strategically backed by technology to make it a widespread reality.
If you ask anyone on the street what a smartphone is, most people will simply start listing devices or manufacturers, such as the Apple iPhone or Samsung Galaxy. Although these devices are indeed smartphones, they merely serve as examples rather than any sort of useful definition. So what is a smartphone? Although there is some ambiguity in where to draw the line when classifying these devices, a few things generally define a device as a smartphone rather than simply a 'feature phone,' the term used to describe cheaper, low-functionality phones like flip phones. Smartphones include advanced capabilities beyond a typical mobile phone, such as a complete operating system on the device as well as support for application programming interfaces (APIs) that allow third-party applications to be integrated with the OS and the hardware of the phone. With around 3.4 billion smartphone subscriptions in Q1 2016 (Ericsson mobility report), it is no surprise that the demand for smartphone forensics has grown so much in recent years.

Variation in Smartphones

Although there are a few big players in the smartphone market, the market as a whole is extremely diverse. According to IDC Research, the top five smartphone vendors by market share in Q4 2015 were Samsung (21.4%), Apple (18.7%), Huawei (8.1%), Lenovo (5.1%), and Xiaomi (4.6%). Beyond that, all other smartphone vendors combined compose 42.1% of the market. Consider that each of these vendors has a number of different phone models, a variety of operating systems with legacy versions running on many devices, and new models and updates each year, and it becomes apparent just how varied smartphone forensics work can be. One silver lining in the jumble of smartphone forensics work is that there is at least a measure of consistency in operating system market share. As of Q2 2016, Android held a staggering 87.6% of the market, leading iOS (11.7%), Windows Phone (0.4%) and all others (0.3%) by a huge margin (IDC Research). Experienced examiners have to know how each of these devices and operating systems functions, the many locations where data can live on each different device and OS, and how to access and work with all of that information in a forensically sound manner. It's this knowledge, combined with significant experience, that separates average mobile forensics examiners from the experts.

What Services Does Gillware Digital Forensics Offer for Smartphone Forensics?

Our digital forensics experts have experience with a wide range of digital forensics cases, many of them involving smartphones. Sometimes the issue lies with the device itself, such as cases involving broken or damaged smartphones. Other times the data has been hidden or is otherwise inaccessible, such as smartphones with passcode locks or other device issues. Whatever the problem is, Gillware Digital Forensics is glad to help where we are able.

Smartphone Forensics on Damaged Phones

Not every device arrives at our lab in pristine condition. Many of the devices we work on have been either accidentally or intentionally damaged, including several devices from criminal cases where the suspect did not want the data to be seen and so attempted to physically destroy the smartphone. Our president, Cindy Murphy, has significant experience performing forensics work on these types of devices, and although they involve some degree of physical repair work to access the data, this added work can often pay off in successful acquisition of the needed data.
Our data recovery lab has also had plenty of damaged smartphones come through its doors, including cases involving water, fire, and physical trauma such as smashed phones.

Smartphone Forensics on Locked Phones

One basic aspect of security on modern smartphones is the application of user-created passcodes and passwords to get into the device. There are a number of different locking mechanisms, including simple 4-digit codes, 6-digit codes, fingerprint scanners, pattern-based passcodes, and complex passcodes with any combination of numbers, letters, and symbols. Depending on the device model and operating system, passcodes can often be one of the biggest barriers to performing forensics work on a smartphone. Even so, there are still plenty of situations where our experienced forensic examiners can work around passcode difficulties on a locked smartphone.

Smartphone Forensics and Deleted Data

Another popular application of smartphone forensics work is in cases where data has been deleted from the smartphone. Sometimes this is accidental, but more often than not it is intentional. Locard's Exchange Principle, which posits that every contact leaves a trace, is often cited in reference to forensics work such as this. In these cases, it means that depending on the level of acquisition examiners are able to get from the phone, forensic examiners might be able to see if and when data was deleted from the smartphone. For example, in the case of iOS devices, examiners can check timestamps as well as file system artifacts and SQLite database entries to attempt to determine whether there was a deletion, and whether that data might be fully or partially recovered. Since smartphones store much of their data in NAND flash memory, there are some situations where data has been deleted but is still physically present in the memory and has simply been marked as unallocated space by the controller. Sometimes, examiners may be able to read the NAND directly and locate this data, though there are plenty of factors that affect the success of this, including automatic processes such as garbage collection, wear-leveling, and other operations that solid-state technologies use. Sometimes a forensic examiner can simply look for backups of the device to find legacy copies of data that has since been deleted off the smartphone, but this too is contingent on a number of factors, such as when the data was deleted, whether the backups are accessible, and how recent the backups are. Even if the deleted data is truly unrecoverable and an experienced forensic examiner has tried unsuccessfully to recover it or find a copy of it, simply proving that data was deleted can sometimes be useful in a digital forensics case. With this in mind, extremely difficult cases involving data deletion need not be written off as total losses when it comes to smartphone forensics work.
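To make the SQLite point above concrete, here is a minimal sketch of the kind of inspection an examiner might run against a database pulled from an extraction. The file name is a placeholder, the 'messages' table is hypothetical, and the timestamp conversion assumes Apple's epoch of January 1, 2001 in whole seconds; real schemas and epoch units vary by OS and app version, so none of this should be read as a description of Gillware's actual tooling.

import sqlite3
from datetime import datetime, timedelta, timezone

DB_PATH = "extracted_database.sqlite"   # placeholder for a DB pulled from a phone
APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def apple_seconds_to_utc(value):
    # Convert an Apple-epoch timestamp (assumed to be in seconds) to UTC.
    return APPLE_EPOCH + timedelta(seconds=value)

conn = sqlite3.connect(DB_PATH)

# Inventory the tables and row counts: a quick first look at what the
# database holds and where record gaps might suggest deletions.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    count = conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]
    print(f"{table}: {count} rows")

# Example of the timestamp check described above, assuming a hypothetical
# 'messages' table with a 'date' column stored as Apple-epoch seconds.
if "messages" in tables:
    for rowid, date in conn.execute("SELECT rowid, date FROM messages ORDER BY rowid"):
        print(rowid, apple_seconds_to_utc(date).isoformat())

conn.close()

Gaps in the rowid sequence, or rows present in an older backup but missing here, are the sort of artifacts that can support a finding that data was deleted even when the content itself cannot be recovered.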
One aspect of smartphone forensics that is distinct from many other forms of digital forensics work is the possibility of hidden applications and data-hiding applications being present on the device. There are essentially three main ways to hide apps: using device manipulation to hide an app where it doesn't typically belong, using an app to hide other apps inside it, or using a "decoy app," which appears to do one thing but is actually designed to do something else. Even though experienced examiners can't be expected to stay up to date on each and every new hidden application, merely knowing they exist can be helpful in an inquiry. As GDF President Cindy Murphy puts it, "in the world of forensics, just because you don't see something initially, doesn't mean nothing is there." While understanding how to use forensics tools is useful in many ways, certain hidden applications can obscure the storage location or meaning of the hidden data. This is where experienced examiners are needed, as knowledge and techniques beyond the scope of commercial forensics tools can be required in order to locate, manually parse, and in some cases decrypt the hidden data. In cases where the use of hidden applications is suspected, Gillware Digital Forensics has the necessary experience and techniques to take on these challenges.

Mobile malware is another, less common type of smartphone forensics work and involves examining mobile devices that are, or are suspected to have been, infected by some form of malware. It's an unfortunate consequence of our increasingly smartphone-connected world that there has been a growing number of mobile malware infections to match, with an estimated 387 new threats every minute (IBM Security). There are plenty of different varieties too, each with varying levels of harm, including Trojans, worms, ransomware, adware, spyware, and more. If there is potential for exploitation in an OS or application, there is a good chance that criminals have already come up with some malware to take advantage of it. And with new forms of mobile malware coming out all the time, forensic examiners and security experts have to stay ahead of the curve in order to work on each new type as it is discovered. If you suspect a smartphone has been infected with mobile malware or spyware, Gillware Digital Forensics would be happy to assist you.

Gillware Digital Forensics Can Help with Your Smartphone Forensics Case

With world-class digital forensics experts, knowledgeable data recovery engineers, and the right tools to work on each of these issues, Gillware Digital Forensics is the right choice when it comes to smartphone forensics cases. Follow the link below to get started with your smartphone forensics case:
The survey questioned 2,000 families about what technology they and their family possessed, with 1 in 10 parents feeling that it is "appropriate for children as young as four years old" to have access to mobile phone services. The growing reach of smartphones was also confirmed, with 10% of children under the age of ten already possessing an iPhone or a similar internet-capable handset.

According to the BBC, most parents believe that 10 is a suitable age for children to have their own phone, with the majority (69%) saying they allowed their child to have a mobile phone to keep in touch with them when they are out. In terms of secure internet usage, just under half (49%) said they blocked access to certain sites, while the remainder admitted they do not control their child's access to the web. This comes at a time when the government is considering mandatory internet filters for children, with the Bailey review due in 2015.

Social networking habits were also covered. Alarmingly, almost one in ten children of primary school age had a social networking account, even though the minimum age for a Facebook or MySpace account is thirteen. A quarter of parents added that their child had an active email account.

However, the survey did shed some positive light on computer literacy, with a large percentage of under-10s able to make calls, while one in five can competently text, one in twenty can draft and send an email, and 10 per cent can easily go online. More than a quarter of youngsters can take photos or videos and play with applications.

If you require more guidance on how children use technology, you can visit the Child Exploitation and Online Protection Centre (CEOP) website, Thinkuknow. (Image by Bex Ross)
Cloud computing isn’t shaped like—or used like—the traditional computing model of the past. Cloud architectures allow users to access virtual pools of IT resources—from compute to network to storage—when they need them and, thus, achieve shared efficiency and agility. Made possible by the advent of sophisticated automation, provisioning, and virtualization technologies, the cloud computing model breaks the ties between the user’s application and the need to maintain physical servers and storage system on which it is run. Instead, users tap into aggregated resources as they need them. Cloud infrastructure can be provided as a public cloud (IT resources shared by multiple clients) or private cloud (IT resources, whether external or internal, controlled and managed by the IT organization). In this section, we will explore this dynamic new model of IT as a service and its impact on enterprises and the IT industry. SEEDING THE CLOUD: ENTERPRISES SET THEIR STRATEGIES FOR CLOUD COMPUTING Read what IT executives at leading U.S. companies are saying about cloud computing in a new EMC-sponsored study by Forbes Insights.
Counting down the days until Christmas or the beginning of Kwanzaa? Your Unix system can help with that. Let's look at the commands that you would use to get a talking cow reminding you about an upcoming holiday or an important event.

$ countdown
 ___________________
< 3 days until Xmas >
 -------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

And, for the technique we're going to be looking at, your choice of holiday or special event is up to you.

$ countdown2
 ______________________
< 4 days until Kwanzaa >
 ----------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

The first thing you need is a very unusual Unix tool that allows you to, well, put words into a cow's mouth. That tool is called "cowsay" and it's one that you can install on many Linux systems with this yum command:

$ sudo yum install cowsay

Once cowsay is installed on your system, you can have the cow say things to you in much the same way as you would use the echo command.

$ cowsay What am I thinking?
 _____________________
< What am I thinking? >
 ---------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

OK, so this isn't going to be your favorite Unix tool, but it might break the monotony on some days when you're working on a pile of very tedious tasks.

Next comes the counting down part. For that, we're going to use a feature of the date command that allows you to represent a date in the "day of year" format. For example, in this format, January 1st would be day 001.

$ date -d 1-Jan +%j
001

Today's date would be translated to this format using this command:

$ date +%j
356

By specifying our date of interest in the "day of year" format, we can easily calculate the number of days we have left to go before the special day that we're so busy anticipating and have the reminder spring forth from the mouth of our little cow.

$ alias countdown='cowsay $(($(date -d 25-Dec +%j) - $(date +%j))) days until Xmas'
$ countdown
 ___________________
< 3 days until Xmas >
 -------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Doing the same thing for the start of Kwanzaa requires only a tiny change in our alias.

$ alias countdown2='cowsay $(($(date -d 26-Dec +%j) - $(date +%j))) days until Kwanzaa'

This little calculation won't work for just any holiday or special event, however, and the reason why requires a little more insight into how the calculation is being performed. Say what you really want to know is how many days until April Fools Day. Using the same sort of command, we run into two issues. First, April Fools Day isn't until next year, so we have to make sure we don't compute the difference between 091 and today (356). That would give us a fairly large negative number!

$ date +%j
356
$ date -d 1-Apr +%j
091

That's not too hard to fix. We add 365 to our calculation -- 366 if the target date falls after February 29th in a leap year like 2016.

$ alias countdown3='cowsay $(($(date -d 1-Apr +%j) + 365 - $(date +%j))) days until April Fools Day'

But, depending on the target date you're working with, you might just see something like this:

$ countdown3
-bash: 091: value too great for base (error token is "091")

"What's going on here?" you may ask. And, after a little experimentation, you might notice that you get the same error when you do this:

$ echo $((091))
-bash: 091: value too great for base (error token is "091")

As it turns out, the 0 at the beginning of the number 091 is interpreted as meaning that the number is octal. Do the same thing for a number like 77 and you'll see how the number is converted to decimal.

$ echo $((077))
63

So, to ensure that our calculation is going to work for April Fools Day or any arbitrary day, we need to make sure that our "day of year" value doesn't begin with a 0. If we add a sed command to strip off any leading zeroes, we should be able to count down to any special day next year.

$ alias countdown3='cowsay $(($(date -d 1-Apr +%j | sed "s/^0*//" ) + 365 - $(date +%j))) days until April Fools Day'
$ countdown3
 ________________________________
< 100 days until April Fools Day >
 --------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Using a tool like this, you won't be likely to forget Talk Like a Pirate Day, Pi Day, Sysadmin Day, or your wedding anniversary. And, hey, we only have 100 days until April Fools Day. It's time to start planning your Unix pranks!
Western Digital's HGST subsidiary may not have picked the best time to have a breakthrough in hard-disk drive innovation. After a decade of trying, HGST recently perfected a method to seal helium gas inside drives. The company is preparing to launch a line of hard drives filled with the gas, which it says will drastically reduce internal friction and thus lower power consumption by 23% while increasing capacity by 40%.

Currently, however, helium reserves in the U.S., which supply 75% of the world's annual demand of 6.2 billion cubic feet, are at an all-time low. Under current conditions, the largest U.S. reserve will only last another five to six years unless additional supplies are brought on line. HGST told Computerworld that the helium it would be using would cost mere pennies per drive. "As we will not be a big consumer of helium, our requirements will have little incremental impact on the overall worldwide demand for helium. The incremental parts costs and capital costs of assembly tooling for the helium-filled drives are more significant but manageable, enabling us to deliver a compelling $/GB and watts/GB advantage over air," wrote Brendan Collins, vice president of Enterprise Storage at HGST.

While helium is the second most abundant element in the universe, here on Earth most of it bleeds right through the atmosphere and into space. Helium is a byproduct of natural gas production, and it is a non-renewable resource. In the simplest terms, however, the current shortage is not necessarily critical. Current demand is simply outstripping supply, said Donna Hummel, a spokeswoman for the U.S. Bureau of Land Management. Worldwide demand for helium is expected to rise by between 7% and 10% over the next year, primarily due to demand from the Asian market, Hummel said.

The U.S. Bureau of Land Management (BLM) oversees the Federal Helium Reserve in Amarillo, Texas, which stores about 30% of the world's supply. "Just like everything else, supply and demand wrestle back and forth," Hummel said. "There's no reason new markets that use helium could not be managed, but at what cost? No one in the BLM is saying we can't support the new industries. But, we can't vouch for how the market will respond."

The federal helium reserves are kept 3,000 feet below ground in a natural geologic gas storage formation called the Bush Dome Reservoir. The major issue confronting worldwide helium supplies is the wholesale sell-off of the reserves in the Bush Dome Reservoir that has been going on for the past 15 years. Walter Nelson, director of helium sourcing for Air Products and Chemicals, told the U.S. Senate Energy and Natural Resources Committee in May that at current production rates of about two billion cubic feet per year, the federal reservoir could continue to supply helium for another "five to six more years." Computer modeling, he told the committee, shows reservoir production will decline to approximately 1 billion cubic feet per year after 2014.

The Federal Helium Reserve was established in 1925 as a strategic supply of gas for dirigibles. During World War I, America was able to build only a few airships because it lacked a non-flammable gas. Later, in the 1950s, those reserves were used as a coolant by NASA during the race to be first to put a man into space.
I recently visited Conwy Castle, one of King Edward I's great Welsh castles. Conwy Castle has stood for over 700 years, and it makes you reflect on what we will leave behind that still stands in 700 years' time. The last time I visited the castle was 15 years ago, which of course to the castle is just a tiny drop in what is an ocean of time. In those 15 years, though, technology has evolved very quickly, leaving us all in a constant state of retraining.

Today you can buy a mobile phone that is a computer, with a GPS chip for location, a high-speed connection to the Internet, and mp3 playback, and it fits into a shirt pocket. Back in the 1990s, GSM was still in its infancy (never mind 3G); computers sat firmly on desks; GPS units were large and primitive; the Internet was just being born; the cost of flash memory was in the thousands of dollars; and in music, anyway, a lot of people had not yet made the transition from records to CDs.

Looking at software rather than hardware, you could argue that the whole of software engineering history fits into the last 60 years, with the first general-purpose electronic computers appearing in the 1950s (although historians cite Ada Lovelace as the first programmer, having written a 'program' for Charles Babbage's Analytical Engine in the 1840s, which would make the history of software another 100 years older). This makes the "C" language an "old timer," since it was invented in the early 1970s, grew in popularity through the 80s and 90s, and of course is still going today. It is probably the most successful programming language ever, having been adopted for embedded applications (even "C" on a chip) as well as general-purpose programming. But will programmers still be programming in "C" in 700 years? Will there actually still be "programming" as a human activity? It's hard to imagine that span of time, since so much of the technology that we know today was created during our lifetimes.
Spock may be exceedingly happy today since the "Vulcan mind meld" is now a reality for humans, thanks to University of Washington researchers who achieved the first noninvasive human brain-to-human brain interface. One researcher sent a brain signal via the Internet, and his thoughts controlled the hand movement of a fellow researcher across campus.

"The Internet was a way to connect computers, and now it can be a way to connect brains. We want to take the knowledge of a brain and transmit it directly from brain to brain," said Andrea Stocco, whose finger moved on a keyboard in response to his colleague Rajesh Rao's thoughts. "It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain," Rao added. "This was basically a one-way flow of information from my brain to his. The next step is having a more equitable two-way conversation directly between the two brains."

On Aug. 12, Rao sat in his lab wearing a cap with electrodes hooked up to an electroencephalography machine, which reads electrical activity in the brain. Stocco was in his lab across campus wearing a purple swim cap marked with the stimulation site for the transcranial magnetic stimulation coil that was placed directly over his left motor cortex, which controls hand movement. The team had a Skype connection set up so the two labs could coordinate, though neither Rao nor Stocco could see the Skype screens.

Rao looked at a computer screen and played a simple video game with his mind. When he was supposed to fire a cannon at a target, he imagined moving his right hand (being careful not to actually move his hand), causing a cursor to hit the "fire" button. Almost instantaneously, Stocco, who wore noise-canceling earbuds and wasn't looking at a computer screen, involuntarily moved his right index finger to push the space bar on the keyboard in front of him, as if firing the cannon. Stocco compared the feeling of his hand moving involuntarily to that of a nervous tic.

"We plugged a brain into the most complex computer anyone has ever studied, and that is another brain," stated Chantel Prat, assistant professor in psychology at the UW's Institute for Learning & Brain Sciences. She doesn't want people to freak out and overestimate the technology since, "There's no possible way the technology that we have could be used on a person unknowingly or without their willing participation."

Although Stocco jokingly called the human brain-to-brain interface a "Vulcan mind meld," Rao said the technology cannot read a person's thoughts. It also doesn't give another person the ability to control your actions against your will; it can only read certain types of simple brain signals. The next experiment will involve sending more complex thoughts to another brain. If successful, then they plan to conduct experiments "on a larger pool of subjects."

Before this successful human-to-human brain interfacing demonstration, a first of its kind, Duke University researchers established a "brain-to-brain communication between two rats" and Harvard researchers were able to show brain-to-brain communication between a human and a rat. Examples of how direct brain-to-brain communication in humans might be used in the future include helping a person with disabilities "communicate his or her wish, say, for food or water. 
The brain signals from one person to another would work even if they didn’t speak the same language.” Or if a pilot were to become incapacitated, then someone on the ground could send human brain-to-brain signals to assist a flight attendant or passenger in landing an airplane.
<urn:uuid:d8f2513d-ee4a-4969-b129-1bf503d76677>
CC-MAIN-2017-04
http://www.computerworld.com/article/2474639/emerging-technology/researcher-sends-thoughts-over-internet--moves-colleague-s-hand--human-to-human-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965107
805
2.859375
3
The Math Behind Estimations to Break a 2048-bit Certificate The Math: So you’re interested in the math/science behind our claims in the SSL cracking video...? This is the basis of our assumptions... In order to "break" an RSA key-based certificate like those provided by DigiCert, one must factor the very large numbers that make up the RSA modulus. A certificate is considered "cracked" when the computer used reaches the average expected time to factor the RSA modulus associated with the key in the certificate (in other words, it could happen in year 1 or it could happen in year 6 quadrillion, and the average would be half the time it eventually takes to efficiently try all possibilities). In December 2009, Lenstra et al. announced the factorization of a 768-bit RSA modulus (see: http://eprint.iacr.org/2010/006.pdf) - this is a 232-digit number, and was at the time (and potentially still is) the record for factoring the largest general integer. The most efficient method known to factor large integers, and the method used in the record factorization above, is the number field sieve (NFS) - which is much faster than a brute-force attack (where every combination is tried), so a brute-force attack would have taken much longer still. The Lenstra group estimated that factoring a 1024-bit RSA modulus would be about 1,000 times harder than their record effort with the 768-bit modulus; in other words, on the same hardware, under the same conditions, it would take about 1,000 times as long. They also estimated that their record achievement would have taken 1,500 years if they normalized processing power to that of the standard desktop machine of the time - this assumption is based on a 2.2 GHz AMD Opteron processor with 2GB RAM. In other words, repeating the equivalent effort for a 1024-bit modulus on the standard desktop machine of the time would take about 1,500 x 1,000 = 1.5 million years. DigiCert's base standard is to use 2048-bit keys in secure SSL certificates - that is enormously stronger than anything Lenstra et al. attempted; in fact, it would require factoring a 617-digit number. RSA Labs claim (see: http://www.rsa.com/rsalabs/node.asp?id=2004) that 2048-bit keys are 2^32 (2 to the power of 32) times harder to break using NFS than 1024-bit keys. 2^32 = 4,294,967,296, or almost 4.3 billion, therefore breaking a DigiCert 2048-bit SSL certificate would take about 4.3 billion times longer (using the same standard desktop processing) than doing it for a 1024-bit key. It is therefore estimated that standard desktop computing power would take 4,294,967,296 x 1.5 million years to break a DigiCert 2048-bit SSL certificate. Or, in other words, a little over 6.4 quadrillion years. In putting together our video, we estimated the age of the Universe to be 13,751,783,021 years, or a little over 13.75 billion years*, therefore if you tried to break a DigiCert 2048-bit SSL certificate using a standard modern desktop computer, and you started at the beginning of time, you would have expended 13.75 billion years of processing by the time you got back to today, and you would still have to repeat that entire process 468,481 times one after the other into our far, far distant future before there was a good probability of breaking the certificate. In fact the Universe itself would grow dark before you even got close. (A short numerical sanity check of this arithmetic appears after the reference list below.) The Art: A few concessions were made in the creation and visualization of these materials. The Big Bang shown is simply an artistic interpretation of the event.
Most experts agree that there was no giant “explosion” at the start of time. Rather, the Big Bang is simply the expansion of the universe from an infinitely small source. Since space and time didn’t exist before the Big Bang, there would be no possible way to witness the start of the universe from outside the singularity. This also assumes that there was a Big Bang; other models of the universe are equally valid, if not as popular. Other astronomical events shown are also, obviously, artistic interpretations. Furthermore, the exact year each event took place is only as accurate as the generally accepted timeline of the universe, taking into account the liberty that the margin of error within that model allows. For more information see: Big Bang Age: Komatsu, E.; Dunkley, J.; Nolta, M. R.; Bennett, C. L.; Gold, B.; Hinshaw, G.; Jarosik, N.; Larson, D. et al. (2009). "Five-Year Wilkinson Microwave Anisotropy Probe Observations: Cosmological Interpretation". Astrophysical Journal Supplement 180 (2): 330. First Stars: Ferreting Out The First Stars; http://www.physorg.com/news6689.html Formation of Galaxies: "New Scientist", 14th July 2007. Age of our Solar System: A. Bouvier and M. Wadhwa. "The age of the solar system redefined by the oldest Pb-Pb age of a meteoritic inclusion." Nature Geoscience, in press, 2010. DOI: 10.1038/NGEO941. Age of the Earth: Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–221. Multi-cellular life (Proterozoic life): El Albani, Abderrazak; Bengtson, Stefan; Canfield, Donald E.; Bekker, Andrey; Macchiarelli, Roberto (July 2010). "Large colonial organisms with coordinated growth in oxygenated environments 2.1 Gyr ago". Nature 466 (7302): 100–104. Fate of the Sun: Schröder, K.-P.; Smith, R.C. (2008). "Distant future of the Sun and Earth revisited". Monthly Notices of the Royal Astronomical Society 386 (1): 155. End of the universe: Adams, Fred C.; Laughlin, Gregory (April 1997). "A dying universe: the long-term fate and evolution of astrophysical objects". Reviews of Modern Physics 69 (2): 337–372.
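As a rough sanity check of the arithmetic in "The Math" section above, the headline figures can be reproduced in a few lines of Python. This is only an illustration of the stated assumptions (about 1.5 million desktop-years for a 1024-bit modulus, a 2^32 NFS scaling factor from 1024-bit to 2048-bit keys, and an age of the Universe of 13,751,783,021 years); it says nothing about real-world attack feasibility.

```python
# Sanity check of the figures quoted above (assumptions taken from the text, not measurements).
DESKTOP_YEARS_1024_BIT = 1.5e6            # ~1,500 years for RSA-768, times 1,000 for a 1024-bit modulus
NFS_SCALING_1024_TO_2048 = 2 ** 32        # RSA Labs' estimated hardness ratio under NFS
AGE_OF_UNIVERSE_YEARS = 13_751_783_021    # age of the Universe used in the video

desktop_years_2048_bit = DESKTOP_YEARS_1024_BIT * NFS_SCALING_1024_TO_2048
print(f"Desktop-years to break a 2048-bit key: {desktop_years_2048_bit:.3e}")  # ~6.442e+15, i.e. 6.4 quadrillion

universe_lifetimes = desktop_years_2048_bit / AGE_OF_UNIVERSE_YEARS
print(f"Ages of the Universe required: {universe_lifetimes:,.0f}")             # ~468,481
```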
<urn:uuid:3a5e74e7-04f2-4ecf-a967-11ad86e4806c>
CC-MAIN-2017-04
https://www.digicert.com/TimeTravel/math.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00567-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910491
1,378
2.71875
3
From the macro to the micro: how the new microchips would make the world smaller by changing the communications game. This is the life story of a technology: the integrated circuit, or silicon chip. In 1970, there were only around 130,000 machines that incorporated these chips, primarily in the workplace. The chips were about the size of a few grains of salt. At the time, this was cutting-edge computing, and microchips contained approximately 450 transistors. To put this into a more modern perspective, integrated circuits circa 2006 contain more than a million transistors per square millimeter. In 1971, circuits were designed with the help of early computers, but were mapped and hand-drawn using huge, 20' by 20' photographic masks, then shrunk to a microscale for use in manufacturing. After manufacture, the chips were cut apart with lasers and used in picturephones (among other applications). Writer/Director: Henry R. Feinberg Produced by the Audio/Visual Media Dept. at Bell Laboratories Artwork by Ken Knowlton Footage courtesy of AT&T Archives and History Center, Warren, NJ
<urn:uuid:b12af165-3dca-4b4b-95aa-517649360880>
CC-MAIN-2017-04
http://techchannel.att.com/play-video.cfm/2011/2/18/AT&T-Archives-IC-A-Shrinking-World
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00411-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959471
234
3.21875
3
What is Aes256 Ransomware? Aes256 Ransomware is a dangerous infection that has been named after the encryption algorithm it uses to encrypt the files it finds on the computer it infects. In most cases, these files are various documents, pictures, music, and video files, but, of course, it might encrypt other files it finds too. Even though this ransomware infection does not say how much money users have to pay to get their files back, there is no doubt that it has been created to extort money from users. Since you might be left without both your files and your money, paying cyber criminals is a bad idea. What you should do instead of trying to find out how to pay them for the decryption tool is to delete Aes256 Ransomware fully from the computer. This is the step you have to take to protect your other files from being encrypted and to be able to use the computer without fear once again. What does Aes256 Ransomware do? Aes256 Ransomware does not ask permission to enter computers, and when users find out that it is inside the system, it is usually too late because it encrypts files the second it successfully enters the computer. Aes256 Ransomware adds the extension .aes256, so you can see which of your files have been locked and which are left as they are. You will not only find a bunch of encrypted files on your system. On top of that, you will discover a file !!! READ THIS – IMPORTANT !!!.txt. It is a document containing a ransom note. This document usually tells users what they need to do to get their files back. Unlike other ransomware infections, Aes256 Ransomware does not provide any information about the amount of money that has to be paid to get the decryptor. Instead, users are told to write an email to get the private key. Do not expect to get this key for free, so do not even bother writing an email if you are not going to spend money on it. No matter what you decide, do not forget to fully uninstall Aes256 Ransomware from your PC so that you do not find new files encrypted again. Where does Aes256 Ransomware come from? Ransomware is one of the sneakiest types of malicious software. It silently enters computers and then starts encrypting users’ files unnoticed. When users find it, files are usually already encrypted. There are several different ways in which ransomware infections are spread, but the most common method of distribution is known to be spam email. The malicious file of the ransomware infection is spread inside these spam emails. Do not pay attention to them, and definitely do not open them. Also, do not forget to be more careful with applications on third-party websites. How can I uninstall Aes256 Ransomware? It is very important to remove Aes256 Ransomware as soon as possible because it might download additional malware and encrypt new files. Also, it is known to be able to connect to the Internet from time to time. Unfortunately, it is extremely hard to remove Aes256 Ransomware manually, so we suggest deleting Aes256 Ransomware automatically. Use a reputable tool if you wish to carry out the full Aes256 Ransomware removal. Automated Removal Tools: Download a removal tool to remove Aes256 Ransomware. Use our recommended removal tool to uninstall Aes256 Ransomware. The trial version of WiperSoft provides detection of computer threats like Aes256 Ransomware and assists in its removal for free. You can delete detected registry entries, files and processes yourself or purchase the full version.
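Since the infection marks every locked file with the .aes256 extension, a quick inventory of affected files can help you gauge the damage before cleanup. The short sketch below is illustrative only (the extension and starting folder are assumptions based on the description above); it does not remove the ransomware or decrypt anything.

```python
# Illustrative sketch: list files carrying the .aes256 extension described above.
# It only inventories renamed files; it does not decrypt or disinfect anything.
import os

def find_encrypted_files(root_folder, extension=".aes256"):
    """Walk root_folder and collect paths that end with the given extension."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_folder):
        for name in filenames:
            if name.lower().endswith(extension):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    encrypted = find_encrypted_files(os.path.expanduser("~"))  # start from the user profile
    print(f"Found {len(encrypted)} encrypted files")
    for path in encrypted[:20]:   # show a sample
        print(path)
```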
Step 1. Delete Aes256 Ransomware using Safe Mode with Networking. Remove Aes256 Ransomware from Windows 7/Windows Vista/Windows XP - Click on Start and select Shutdown. - Choose Restart and click OK. - Start tapping F8 when your PC starts loading. - Under Advanced Boot Options, choose Safe Mode with Networking. - Open your browser and download the anti-malware utility. - Use the utility to remove Aes256 Ransomware. Remove Aes256 Ransomware from Windows 8/Windows 10 - On the Windows login screen, press the Power button. - Tap and hold Shift and select Restart. - Go to Troubleshoot → Advanced options → Startup Settings. - Choose Enable Safe Mode or Safe Mode with Networking under Startup Settings. - Click Restart. - Open your web browser and download the malware remover. - Use the software to delete Aes256 Ransomware. Step 2. Restore Your Files using System Restore Delete Aes256 Ransomware from Windows 7/Windows Vista/Windows XP - Click Start and choose Shutdown. - Select Restart and click OK. - When your PC starts loading, press F8 repeatedly to open Advanced Boot Options. - Choose Command Prompt from the list. - Type in cd restore and tap Enter. - Type in rstrui.exe and press Enter. - Click Next in the new window and select the restore point prior to the infection. - Click Next again and click Yes to begin the system restore. Delete Aes256 Ransomware from Windows 8/Windows 10 - Click the Power button on the Windows login screen. - Press and hold Shift and click Restart. - Choose Troubleshoot and go to Advanced options. - Select Command Prompt and click Restart. - In Command Prompt, input cd restore and tap Enter. - Type in rstrui.exe and tap Enter again. - Click Next in the new System Restore window. - Choose the restore point prior to the infection. - Click Next and then click Yes to restore your system. 2-remove-virus.com is not sponsored, owned, affiliated, or linked to the malware developers or distributors referenced in this article. The article does not promote or endorse any type of malware. We aim to provide useful information that will help computer users detect and eliminate unwanted malicious programs from their computers. This can be done manually by following the instructions presented in the article or automatically by implementing the suggested anti-malware tools. The article is only meant to be used for educational purposes. If you follow the instructions given in the article, you agree to be bound by this disclaimer. We do not guarantee that the article will present you with a solution that removes the malicious threats completely. Malware changes constantly, which is why, in some cases, it may be difficult to clean the computer fully by using only the manual removal instructions.
<urn:uuid:8b0673e9-e3b2-4799-b0b6-da7534c4ab37>
CC-MAIN-2017-04
http://www.2-remove-virus.com/remove-aes256-ransomware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00411-ip-10-171-10-70.ec2.internal.warc.gz
en
0.889965
1,527
2.546875
3
A battle is brewing between two standards bodies - the Internet Engineering Task Force (IETF) and the International Telecommunication Union (ITU) - over which group will be the primary source of the underlying communications protocols that allow the Internet to operate in the future. Since 1986, the IETF has been the Internet's primary standards body. The all-volunteer group of network engineers has developed many of the most popular Internet standards including the Internet Protocol and the next-generation IPv6 through a process committed to "rough consensus and running code." Among the IETF's hundreds of innovations are standards for e-mail, domain names, network management and VoIP. The ITU, on the other hand, was established in 1865 to ease connectivity of the first telegraph and then telephone networks. Today it oversees global radio spectrum, satellite orbits and other carrier-centric technologies through formal development and review processes. The ITU's members include countries and private companies, rather than individuals. It has created popular standards for video compression, broadband and wave division multiplexing. Until now, the ITU has mostly taken a hands-off approach to the Internet. But that could change in December, when the World Conference on International Telecommunications (WCIT) is held in Dubai. This two-week conference will be the first major revision of the international treaties that define the ITU's role since 1988. The WCIT will make changes to the International Telecommunication Regulations - or ITR - which facilitate global interconnection and interoperability of telecommunications traffic. The ITR sets rules for traffic flows, quality of service as well as routing and billing between network operators. The Internet Society, which is the umbrella organization for the IETF as well as a member of the ITU, is concerned about the ITU taking a harder line on Internet governance. ISOC argues that it is critical that the Internet retain key principles in the future, specifically that it allow for open access, permission-less innovation and collaboration in order for it to continue to be an engine of economic growth. ISOC is concerned about proposals from ITU member states that deal with such issues as peering arrangements because this could impact the cost of international Internet traffic and how users pay for Internet services. Other proposals could give governments more leeway with regard to censorship and content control or could limit data privacy. Governments could get involved in Internet address allocation, which is currently handled by the regional Internet registries. Another worry is that more regulation by the ITU will result in a slowing of innovation on the Internet. "The Internet Society believes that decisions made by governments at WCIT could redefine the international regulatory environment for the Internet and telecoms in the 21st century and beyond, impacting how people around the world are able to use the Internet," the group states on its Web site. ISOC isn't the only group that's concerned about WCIT. The U.S. House of Representatives voted unanimously in August to send a message to the ITU that the Internet doesn't need additional regulation. The proposal that sparked the ire of the Congress would allow countries to tax incoming and outbound telecommunications traffic and impose Internet traffic termination fees. 
Internet pioneers such as Scott Bradner are worried about a proposal that would apply the telephone-oriented concept of "sending party network pays" to the Internet. Bradner argues that this principle would threaten free content on the Internet by requiring content providers to pay ISPs to deliver data to customers. With WCIT opening on Dec. 3, the debate surrounding the role the ITU should play in Internet standardization and regulation will likely reach fever pitch in the days ahead. For example, Google made a public announcement in favor of a continued free and open Internet and against what it calls "closed-door meetings" by government regulators at the ITU. Google launched an online pledge that it calls "Take Action" for Internet users to sign in protest of WCIT. In response, the ITU issued a blog post criticizing Google for erroneously saying WCIT will be a forum for increased censorship and regulation of the Internet. The ITU argues that Google should have joined the group as a member if it wanted more access to its proceedings. "ITU's goal is to continue enabling the Internet as it has done since the Internet's inception," the ITU assured. To keep an eye on the fireworks between the IETF and the ITU during December, visit the compendium of WCIT-related news being compiled by the Internet Society.
<urn:uuid:3226d4ba-e1ed-426f-a4ea-41d8e0682b92>
CC-MAIN-2017-04
http://www.networkworld.com/article/2161733/lan-wan/ietf-vs--itu--internet-standards-face-off.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00319-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950627
922
2.9375
3
At a glance, there's not a lot there to see. Despite the absence of an outsized cooling fan, the Raspberry Pi looks a little like an older-model graphics card, with the usual tangle of wires and plastic mounted on a green circuit board. On closer inspection, however, the true nature of the credit card-size device becomes clear -- the Raspberry Pi is actually a fully functional Linux computer, complete with an Ethernet port, USB and an HDMI output. According to the British nonprofit that administers the project, owners just need to plug in a keyboard and attach the device to their TV to start using it. The Raspberry Pi Foundation said that the original idea of the device was to improve computer science education by offering a cheap, flexible platform to budding programmers. On an "about us" page, the group said that present-day applicants to university comp-sci programs have less experience than they used to. In part, they added, this is due to a lack of the kind of highly programmable devices -- like Commodore 64s and Amigas -- that the previous generation cut its teeth on. However, the Raspberry Pi seems destined to have an impact far beyond the educational sector. One of the first production runs of the device in the U.K. reportedly sold out after a single day on the market, with a distributor saying that orders reached 700 per second at one point. The economics of the $25 computer are compelling enough, but its use of open-source technology adds even more potential applications. A developer of an encrypted communication app designed to sidestep online censorship told the BBC that he can use Raspberry Pi for tiny, cheap servers meant for activists in countries that restrict freedom of speech. Gizmodo UK lists several clever consumer uses, including smart TV and network storage. Regardless of the exact use to which Raspberry Pi is put, it's clear that the tiny Linux computer will have an impact far beyond its size in diverse parts of the computing world. Jon Gold wonders if you could turn one of these into an NES if you wanted to. Email him at email@example.com and follow him on Twitter at @NWWJonGold. Read more about software in Network World's Software section. This story, "Tiny Linux Computer Punches Above Its Weight" was originally published by Network World.
<urn:uuid:c147b423-5db2-4e6e-8ff0-28b6fe1b7b46>
CC-MAIN-2017-04
http://www.cio.com/article/2398592/hardware/tiny-linux-computer-punches-above-its-weight.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00045-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943681
479
2.546875
3
We all spend a good portion of our time these days touching, tapping and swiping our smartphone screens but sometimes when you need to access your phone your hands aren’t free to do so. The obvious example is when you’re driving but there are other times too, like if you’re preparing food or washing dishes and the phone rings. Voice commands can help, but can also be somewhat limited. Luckily, researchers at the University of Washington may have a new solution: controlling your phone with hand gestures. SideSwipe is an effort led by UW professors Shwetak Patel and Matthew Reynolds to enable control of a smartphone with nearby hand gestures without the need for a custom transmitter or external signal source or even a camera. They’ve developed a method that detects distortions in the phone’s own wireless GSM signals created by hand gestures. An algorithm they’ve created then interprets the gesture and performs a predefined action. More specifically, they’ve created a prototype which consists of a circuit board with four small receiving antennas that connects to the back of a phone. When a hand moves near the phone, for example pointing at it or hovering over it, skin, muscles and bones either absorb or reflect the signal. Absorption reduces the signal intensity while reflection generates a Doppler effect. SideSwipe recognizes particular gestures based on this modulation in the signal. Using this system, you could, in theory, scroll through a recipe just by swiping your hand over the screen. Also, the system doesn’t require being able to see the display. This means that nearby gestures could work even if the phone is in your pocket or in a purse. So, for example, if your phone starts to ring in your pocket during a meeting, you could quickly silence it by swiping your hand near your pocket. Using a modified Samsung Nexus S, the researchers ran a 10-person study of SideSwipe’s effectiveness. Each participant performed 14 different hand gestures, based on taps, hovers and swipes, about 30cm away from the phone. SideSwipe proved quite effective, achieving an 87% accuracy rate. These basic gestures were considered simple building blocks that could be combined to create a more complicated vocabulary. SideSwipe's creators told UW Today that they've applied for patents on the technology, and that it could be implemented on existing phones with little modification. They also said it would, in theory, have little impact on battery life since it’s based on low powered receivers and simple signal processing. Finally, though it depends on GSM signals, there would be no privacy concerns since it just needs to detect changes in the amplitude of the signal; it wouldn’t need or have access to the contents of the transmission. Pretty neat. If they could integrate the circuit board and antennas into a not-bulky case, I could see using SideSwipe on my own phone. It seems like I’m constantly drying my hands or wiping some gunk off of them before touching my phone so this would help. In addition to the use cases described, I could see this sort of technology also being useful to those with disabilities that affect motor control. We’ll see if it comes to market and catches on. Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
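As a rough illustration of the idea described above — recognizing nearby gestures from changes in received signal amplitude — here is a toy sketch. The feature (short-window amplitude variation) and the threshold are invented for illustration; SideSwipe's actual algorithm uses four receiving antennas and far more sophisticated signal processing.

```python
# Toy illustration: flag "motion near the phone" from a stream of received-signal amplitudes.
# The window size and threshold are made-up values, not SideSwipe's real parameters.
from statistics import pstdev

def detect_motion(amplitudes, window=16, threshold=0.05):
    """Return start indices of windows whose amplitude variation suggests a nearby hand gesture."""
    events = []
    for start in range(0, len(amplitudes) - window, window):
        chunk = amplitudes[start:start + window]
        if pstdev(chunk) > threshold:   # a still environment keeps the amplitude nearly constant
            events.append(start)
    return events

# Example: a mostly steady signal with a burst of fluctuation (a "swipe") in the middle.
signal = [1.00] * 32 + [1.00, 0.80, 1.15, 0.85, 1.20, 0.90, 1.10, 0.95] * 2 + [1.00] * 32
print(detect_motion(signal))
```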
<urn:uuid:51ca08bb-d4a0-434d-9075-206d079f55bc>
CC-MAIN-2017-04
http://www.itworld.com/article/2694807/mobile/hands-off--gesture-based-smartphone-control-is-coming.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937764
729
3.1875
3
Researchers say their "molecular self-assembly" technique will use nanotechnology to create smaller, more powerful processors. IBM Corp. researchers say they have made a breakthrough in chip development that could lead to processors that are smaller but more powerful than current offerings. In a paper scheduled to be presented on Monday at the IEEE International Electron Devices Meeting in Washington, D.C., IBM researchers will say they have used a technique called "molecular self-assembly" to create important parts of a semiconductor memory device. According to the researchers, from IBM's Yorktown Heights, N.Y., research lab, the self-assembly technique takes advantage of a reliable way that certain types of polymer molecules come together and organize themselves. The result of that tendency is a set of patterns that can be used to create device features that are smaller, denser and more uniform than those produced by techniques currently in use, such as lithography, according to IBM. Chip makers will still be able to use lithography for many more years to create smaller and faster chips, but that will also increase the cost and complexity of the technique, according to an IBM spokesman. Molecular self-assembly, an approach based on nanotechnology, will give processor manufacturers another method to shrink chips while boosting performance. According to IBM, it also is compatible with existing chip-making tools, enabling manufacturers to implement the technique without greatly increasing costs by having to retool machines and assume the risks that come with major changes in processes. The result could be more powerful processors for everything from computers to wireless devices, the spokesman said. IBM researchers expect molecular self-assembly to be used in pilot programs within three to five years. In creating the crucial parts of the semiconductor memory device using the technique, researchers were able to create a dense silicon nanocrystal array, the basis for a variant of conventional flash memory. Nanocrystal memories are difficult to make via traditional methods, according to IBM. By using the molecular self-assembly technique, researchers have found an easier way to create the semiconductor device. The work was performed on 200-mm silicon wafers, IBM said. The paper to be presented on Monday is titled "Low Voltage, Scalable Nanocrystal FLASH Memory Fabricated by Templated Self Assembly." Nanotechnology is a burgeoning field in which researchers work on materials at the atomic or molecular level. Self-assembly is a subset of nanotechnology.
<urn:uuid:f08e70e1-e89e-46c1-b572-9567bf421181>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/IBM-to-Announce-MicroChip-Breakthrough
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937163
498
3.78125
4
Across the country, states are turning to technology to save lives and save money on equipment to keep roads clear during the winter. States including Nevada, Michigan, Minnesota, Alaska and Colorado are utilizing new digital “smart-plow” systems to increase the efficiency of snowplows and other vehicles on the roads. The technology uses sensors to provide more accurate information on the conditions of roads ahead of the snowplows, giving drivers detailed information about what to expect. The push for this technology comes after much of the U.S. was mired in a polar vortex that caused inclement weather conditions and shut down large portions of the country. The Associated Press reported that a 1-year-old boy was killed in Missouri after a car the child was in struck a snowplow – the type of road safety accident the smart-plow technology hopes to prevent. While outfitting snowplows with technology like advanced GPS and radar to prevent collisions isn't new, the system being developed by the federal government outfits plows with custom sensors that continuously feed important information regarding road and weather conditions to transportation officials and the vehicles' drivers. The technology was developed by the U.S. Department of Transportation and built by the National Center for Atmospheric Research (NCAR), a Boulder, Colo.-based research and development center. Called the Pikalert Enhanced Maintenance Decision Support System, or EMDSS, it uses satellite and computer weather models to more accurately predict the conditions on roads in near real time and spot potential problem areas. On the Road in Colorado The EMDSS technology was developed in parallel with private-sector companies like Iteris, a development company based in Grand Forks, N.D. The company’s products are being utilized by the Colorado Department of Transportation, according to agency spokeswoman Amy Ford. “This system lets us better pinpoint weather systems effectively,” she said. In 2004, Colorado joined a 16-state consortium, known as a “pooled-fund study,” to look into improving snowplow and emergency vehicle systems to help save lives and reduce costs. The technology used by the Colorado Transportation Department and the system being developed by NCAR do “comparable things,” Ford said. “As the system continues to become more robust and defined, I suspect the data collection and weather collection will be more reflective of what’s outside,” she said. “We’re noticing more effective use of our time and resources – in other words, our money.” She said Colorado's system is working pretty well, but suspects that with further development, additional improvements will be made. EMDSS originally lacked the "E" and began as MDSS back in the early 2000s, said Mike Chapman, the NCAR scientific project manager overseeing the program. “The [transportation departments] and federal government figured out there was a disconnect between weather forecasts and what people needed,” he said. The federal government wanted to find a way to increase the accuracy of snowplows and other equipment using more precise sensors. “The federal government decided to leave it semi-open source, so a market would be formed,” Chapman said.
“Almost immediately there were some private companies that took what NCAR developed and commercialized it.” After six to eight years, the original MDSS prototype was transferred to the states through programs like the consortium Colorado participated in. NCAR is now on the seventh version of the original MDSS prototype, which is why it's known as the “enhanced” prototype. Chapman said the federal government no longer manages or funds the old prototype, which made it possible to transfer it to the states. “The state of Alaska was nice enough to contract with us to see if our system would be useful." This winter EMDSS is being tested in Michigan, Minnesota and Nevada, and it will be available to other states and vendors for the next snow season if it passes key tests, according to NCAR. While the current version is far more advanced than the old system, Chapman said more work needs to be done to increase its accuracy. “I think the system development was pretty successful, but in any type of weather application like this, it is only as good as any type of weather forecast pushed into it,” he said. The use of satellite imagery, sensors and GPS allows the system to work well on smooth surfaces such as roads and bridges. However, using it on complex terrain remains a work in progress. Chapman said that in the past, weather stations could only predict the weather for every 30 to 40 miles. There are weather systems that change in between those readings that have to be taken into account – and that’s what the EMDSS system hopes to eventually be able to predict. With the new system, sensors are attached to the vehicles, allowing information like weather data to be collected on every road traveled. “When we do need higher density situations, the mobile platform will be able to give us high-definition readings of roads," Chapman said. Although MDSS was created to improve road safety in the winter, Chapman said there might be applications of the technology for other weather situations. For instance, areas in Texas that experience high winds that affect traffic could use it to help mitigate safety issues. In addition, railroads and even school buses could benefit from this program, he said. “It’s going to be nice for [buses] to know when flooded roadways or tornado warnings are approaching." Chapman said that with federal funding, more lives could be saved as the technology improves. He said that more than 7,000 people die on roads every year due to inclement weather. “The federal government is funding something that makes roads safer,” he said. “We’re going to very quickly lower those numbers and make the roads safer for everybody.” Eventually, Chapman expects that the EMDSS software will be released to states for development purposes, but he didn't have a time frame for when that may happen.
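To make the idea of fusing forecast data with per-vehicle sensor readings more concrete, here is a deliberately simplified sketch. The field names, thresholds, and alert levels are invented for illustration and are not taken from Pikalert/EMDSS itself.

```python
# Toy illustration of combining forecast data with mobile sensor readings for one road segment.
# All thresholds and field names are hypothetical; real systems use far richer models.
from dataclasses import dataclass

@dataclass
class SegmentObservation:
    air_temp_c: float        # from the plow's onboard sensor
    road_temp_c: float       # pavement temperature probe
    precip_rate_mm_h: float  # from the weather model for this segment

def road_alert(obs: SegmentObservation) -> str:
    """Return a coarse alert level for a road segment."""
    freezing = obs.road_temp_c <= 0.0
    if freezing and obs.precip_rate_mm_h > 1.0:
        return "WARNING: likely icing, prioritize plowing/treatment"
    if freezing or obs.precip_rate_mm_h > 2.5:
        return "ADVISORY: deteriorating conditions"
    return "OK: no action needed"

print(road_alert(SegmentObservation(air_temp_c=-3.0, road_temp_c=-1.5, precip_rate_mm_h=2.0)))
```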
<urn:uuid:3f7429e8-c755-4f51-9089-cbac143955fa>
CC-MAIN-2017-04
http://www.govtech.com/Smart-Snowplow-Tech-Turns-Every-Vehicle-into-Weather-Sensor.html?flipboard=yes
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00008-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968587
1,290
2.84375
3
I have had an interest in imaginary worlds since I was in Standard 8; that interest grew with time and led me to take up a Multimedia course during my graduation. This course included a thorough understanding of a combination of 2D and 3D. In this blog, I’m going to share a little about 3D art. Let’s first understand what 3D is. In layman’s language, 3D refers to three-dimensional space, and in technical terms, 3D means “any method accomplished for recording three-dimensional illustration or information”. 3D Art Process Modeling is the process of creating a representation of any three-dimensional surface by using software tools such as Autodesk Maya, Max, etc. There are two types of modeling - Organic and Non-organic. Organic modeling covers naturally existing things, such as human beings, animals, and conceptual characters. Non-organic modeling covers man-made things like buildings, cars, and conceptual designs. Texturing is the process of making the 3D model look more realistic. This process includes various stages like unwrapping, UV Snapshot, etc. Lighting is the process of achieving a real-world light effect. This involves a blend of artificial light sources, such as lamps and bulbs, as well as natural lighting through daylight settings. After this come stages such as animation, visual effects, compositing, and rendering. Figure 1: 3D Art Process Figures 2 & 3: My 3D work samples 3D Art Technologies utilized in various fields Television advertisements for companies such as Pepsi, Sony, LG and several others have used 3D techniques in their creations. Action, war, and drama movies such as Avatar, Harry Potter, and 300 use this technology extensively. One of the major fields using 3D is gaming. Games like GTA and Call of Duty achieve amazing effects through expert work and motion capture. Motion capture is another remarkable technique in 3D art. It is a process in which recordings of an individual's movements, taken by sensors at each position, are translated into digital form. This performance is then converted into 3D and further effects are added. There are many kinds of capture techniques, such as performance capture, mechanical motion capture, electromagnetic motion capture, and optical motion capture. After television, gaming, and movies, one of the most revolutionary fields is 3D printing. It is a process of creating solid objects from digital designs. 3D printing fundamentally changes the way we make things. Now, the question is: how does this process work? The entire process commences with the creation of a design in 3D software. The design is then exported to a 3D print pipeline, and the printer builds the object by adding material layer by layer along its cross-sections. Figure 4: 3D printing process These objects can be printed with different kinds of processes and materials, such as rubber, wax, ceramics, and even chocolate! Today, this technology is used in various fields, such as architecture and industry, and by industrial printing companies. For example, Nike uses 3D printing processes for its shoes. With this success, local printing - in which an organization provides printing services to nearby clients - is growing rapidly day by day. Figure 5: 3D printed food Figure 6: 3D printed mobile stand Figure 7: 3D Printer Figure 8: Shoes printed by Nike I think 3D printing will change the standards of the industrialised world.
This technology is able to print different colours and materials that already exist and will continue to expand with time. People will be able to print more and more products according to their choices. 3D is bound to have an impact in many areas where high-end technology plays an important role, like product design, movies, entertainment, and more. In the coming years, this field will be closely followed, with young talent ready to provide these services. India can play an important role in this revolution. But like any technology, it needs to be handled carefully as even weapons can be produced using the skills and technology related to 3D.
<urn:uuid:7ae67eb9-bb27-4289-8ffd-84c585a39953>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/maya-illusion-3d-art
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00494-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952348
869
3.015625
3
In the process of detailing its $17.7 billion 2014 budget this week, NASA highlighted a mission to snag a 500-ton asteroid, bring it back, stash it near the moon and study it. It also took the time to put in a plug for an ongoing research project called Solar Electric Propulsion (SEP) that NASA says could be the key technology it needs to pull off the asteroid plan. [RELATED: The sizzling world of asteroids] Both plans are audacious in this age of budget austerity, and it will be interesting to see if NASA gets the funding to do either project. But I digress. As for the Solar Electric Propulsion technology, NASA says studies have shown that such a system would be a great way to transport heavy payloads from low Earth orbit to higher orbits. The idea is that traditional chemical rockets could deliver payloads to low Earth orbit, and solar electric propulsion could then power a spacecraft to higher-energy orbits, including Lagrange points or a potential assembly point in space between Earth and the moon. This approach could facilitate missions to near-Earth asteroids and other destinations in deep space, NASA said. In 2011, NASA split $6 million amongst Analytical Mechanics Associates; Ball Aerospace & Technologies; Boeing; Lockheed Martin Space Systems; and Northrop Grumman Systems to begin studying the feasibility of solar electric propulsion, work that might ultimately lead to the development of some variation of a test spacecraft. At that time NASA said: "Flying a demonstration mission on a representative trajectory through the Van Allen radiation belts and operating in actual space environments could reveal unknown systems-level and operational issues. Mission data will lower the technical and cost risk associated with future solar electric propulsion spacecraft. The flight demonstration mission would test and validate key capabilities and technologies required for future exploration elements such as a 300 kilowatt solar electric transfer vehicle." As for the asteroid-snagging plan, NASA would need a robotic spacecraft capable of getting to the object, capturing it and transporting it back into our space realm. When that might happen is anyone's guess, but the agency had in place a plan to at least visit an asteroid by 2025, and this new plan would fit in with that idea, NASA said.
<urn:uuid:99885ad7-323e-4d81-89ef-a4d256b7e2e2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224468/security/solar-electric-spacecraft-propulsion-could-get-nasa-to-an-asteroid--beyond.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00128-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931749
458
3.59375
4
The Net neutrality rules released in December by the Federal Communications Commission ended years of speculation over what the agency would do to ensure that U.S. broadband providers keep the Internet open. Now, U.S. fixed and mobile broadband providers must prepare to comply with the rules. Below are answers to frequently asked questions about what the rules mean, what services are included and excluded, when the order will likely go into effect and whether the FCC has had the last word. 1. What is the crux of the FCC’s Open Internet Order? The FCC issued four general rules. - First, fixed and mobile “broadband Internet access service" providers like Comcast, Sprint Nextel and Verizon must make certain public disclosures concerning their network management practices, performance, and commercial terms. - Second, the order prohibits fixed broadband providers from blocking lawful content, applications, services or non-harmful devices. However, the prohibition is subject to “reasonable network management" practices. - Third, mobile broadband providers cannot block lawful Web sites or applications that compete with their voice or video telephone services. - Fourth, fixed broadband providers are barred from “unreasonably" discriminating “in transmitting lawful network traffic" over a consumer’s service. The FCC added a caveat: “Reasonable network management shall not constitute unreasonable discrimination." 2. What is a “broadband Internet access service?" The FCC defines it as a “mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all Internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up Internet access service." The FCC also has discretion to define “broadband Internet access service" as any service the agency finds to be equivalent to the service described above or “that is used to evade" the protections in the order. 3. What do “mass market" services include? The FCC defines “mass market" as “a service marketed and sold on a standardized basis to residential customers, small businesses, and other end-user customers such as schools and libraries." 4. Why are enterprise services excluded from the Open Internet Order? The FCC rationalized these services “are typically offered to larger organizations through customized or individually negotiated arrangements." 5. Are facilities-based VoIP and IP-based TV offerings subject to the order? No. The FCC defines these offerings as “specialized services." 6. When do the rules go into effect?
<urn:uuid:7a6fefa6-b642-4f52-9a29-8f68e83b315c>
CC-MAIN-2017-04
http://www.channelpartnersonline.com/articles/2011/03/faqs-about-the-fcc-s-net-neutrality-rules.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944014
532
2.59375
3
Last week's newsletter "RSA: Lessons learned," which was about taking another look at biometrics, drew some interesting responses. I especially liked the anonymous poster who wrote "No one can surreptitiously pick your pocket and steal your finger. Agreed. But any one can chop off your finger." Well, maybe not anyone, and you'd probably notice that it was happening (try looking up the definition of "surreptitious"!). In looking at some older newsletters, though, I did come across another suggestion for replacing SecurID tokens -- SMS messages. Back in the spring of 2009 I spoke with the folks from Sweden's Nordic Edge about their use of cellphones, SMS messages and One Time Passwords (OTP). The idea is that someone logs in with a username/password combination, then the OTP server sends an SMS message to their cellphone. Only by entering the code received in the SMS does the user gain access. But how do you protect the phone? Typically, phones are protected with PINs, usually a four-digit number. Well, if passwords are easily broken, how hard is it to use brute force to guess a PIN? There are only 10,000 possible combinations. (Even a four-character, alpha-only password has more than 450,000 possibilities. Make it alphanumeric and there are more than a million and a half.) What to do, what to do. Maybe it's time to take another look at cellphone biometrics. A few years ago that meant adding a fingerprint reader to the phone. But now that almost all smartphones are equipped with cameras, facial scan and iris scan are possible. In fact, there are at least four biometric measurements we can implement on phones: • Fingerprint recognition • Face recognition • Iris pattern recognition • Voice recognition As a recent article in TechBiometric noted: "Use and implementation of biometrics in cell phones is further enhanced by combining the technology with existing cell phone security arrangements. For instance, a cell phone user may have to authorize his mobile banking transactions through biometric recognition as well as using passwords and SMS codes." So now the authentication ceremony becomes: 1. Person logs in with username/password 2. Server sends SMS message with code to user's phone 3. User activates phone with biometric and reads text 4. User inputs code to authentication app on PC 5. User is granted access Is it 100% infallible? No, no method is. But it is better than either username/password alone or SecurID. And that's what we're aiming for right now.
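A quick back-of-the-envelope check of the keyspace figures above — treating "alpha only" as a single-case 26-letter alphabet and "alphanumeric" as 36 characters, which appears to be the assumption behind the numbers quoted:

```python
# Keyspace sizes for a 4-character secret under different alphabets (illustrative arithmetic only).
LENGTH = 4

pin_digits = 10 ** LENGTH          # 10,000 four-digit PINs
alpha_single_case = 26 ** LENGTH   # 456,976 (assuming single-case letters only)
alphanumeric = 36 ** LENGTH        # 1,679,616 (letters plus digits, single case)

print(f"4-digit PIN:         {pin_digits:>9,}")
print(f"4-char alpha:        {alpha_single_case:>9,}")
print(f"4-char alphanumeric: {alphanumeric:>9,}")
```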
<urn:uuid:f197f5c7-43d1-4d55-81ff-47307dade63d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2178627/security/more-on-biometrics.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943692
541
2.671875
3
An x-by-wire, or drive-by-wire, system is the use of electromechanical technology in cars to replace mechanical linkages. There are different types of x-by-wire technologies in the marketplace, like throttle-by-wire systems, brake-by-wire systems, steer-by-wire systems and park-by-wire systems. At present, these technologies are implemented separately in the engine control, braking or steering system. X-by-wire technology has been in use in aerospace and is now finding application in the automotive industry. One of the major factors helping automotive manufacturers incorporate this feature in vehicles is the availability of a variety of semiconductor ICs that are cheap and can meet the cost targets to provide the control, power and communications required for these systems. The x-by-wire market in Japan is estimated to be worth USD XX billion in 2015 and is expected to reach USD XX billion by 2020, at a CAGR of X.X%. Stringent fuel-efficiency laws, advancements in research maximizing the marginal time savings by substituting mechanical linkages with electrical ones, low market penetration, a decrease in cost as the systems gradually become mainstream and a relatively high purchasing power are some of the factors that drive the x-by-wire market in Japan. Reduction of mechanical parts on account of replacement by electrical systems also contributes to making vehicles lightweight, thereby improving the fuel efficiency of cars - a condition that has been made mandatory in most developed and several developing countries in the world. It is clear that going forward, as these technologies become more efficient and affordable and fuel-efficiency laws become more comprehensive and stringent globally, the market for x-by-wire systems is bound to grow. The car manufacturers who implement this would also enjoy substantial per-vehicle cost savings to boot. In Japan, fuel-efficiency standards have recently been updated. In March 2015, the advisory committee of Japan’s Ministry of Land, Infrastructure, Transport and Tourism (MLIT) mandated a 26% increase in fuel economy for light and medium commercial vehicles. This indicates that drive-by-wire systems have a large market scope in the country. In this report, the Japanese market for x-by-wire systems has been segmented by type, vehicle type, geography, and vendor. The high cost of systems and associated components, ongoing research leading to the inadequate speed of updating existing research, and subpar implementation and enforcement of the fuel-efficiency laws are some of the bottlenecks to the adoption and proliferation of x-by-wire systems in Japan. Some of the key players in Japan’s x-by-wire market mentioned in the report are: What the Report Offers Who should be Interested in this Report?
<urn:uuid:5e605539-6fb6-4b98-bbe8-9ef5132f64d2>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/japan-x-by-wire-systems-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00522-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951146
566
2.75
3
From my experiences of dealing with global clients in banks and other sectors, the practice of checking OS controls is either just not covered, or at least the benefits of this are not seen. The area is typically treated in a sort of parrot-fashion checklist way, with no involvement with IT ops folk (who know the complexities and inter-dependencies of internal systems). After 10 years and more I find it surprising that this area hasn't really been targeted effectively, because I think many in the field do actually appreciate its benefits. Some of the more recent versions of so-called Vulnerability Management products are getting closer to being able to monitor OS and database configurations with authenticated sessions and "dissolvable" agents, but we're still a long way short... and anyway, even if the tools offer such capabilities, many clients aren't using these modules. Operating system security is radically under-appreciated in security, and this has been the case since the big bang of business security practices in the mid-90s. OS security, along with application security, is now the front line in the battle against hackers, but this point has not been widely realized... We have a lot of terms in security that have no commonly understood definition, and it seems everybody’s version of the definition is the correct one. First up, what is meant by “Operating System?” The usual meaning in most businesses’ operational sense is a bare install – in other words, the box in its shipped state, with no in-house or any other applications installed – there will [hopefully!] be nothing installed on the computer other than what is made available from the vendor’s install media. This is different from most university Computer Science course syllabus definitions of the phrase, but we will go with the former definition. So what of “operating system security” then? Computer operating systems are designed with configuration options, such as file system permissions, that allow administrators to apply some level of protection to information (files, databases, etc.) hosted by the computer. Operating system security relates to the degree to which available controls have been applied, in the face of remote and local risks. Vulnerability Assessment: Current Perceptions and Misconceptions Mostly, when people think of vulnerability assessment, the first thing that comes to mind is network penetration testing or the use of automated scanning tools. However, cutting a long story short, neither of these two approaches gives us a useful or efficient way of assessing vulnerability for our critical infrastructure. Penetration testing these days is mostly just performed as a requirement of auditors, and to be too analytical means to be slower, and most businesses will not tolerate this. The quality of delivery is poor, and furthermore the tests are so restricted as to make them close to useless. Usually the only useful item of information to come out of these engagements is the port scan results. The whole story on automated scanners is a long one (for a longer discussion on this matter refer to Chapter 5 of Security De-engineering), but just to summarize: unauthenticated scanning of critical infrastructure with no further analysis is a recipe for disaster. The marketing engine behind such products claims they can replace manual efforts. Such a claim suits the agenda of many in the security industry, but overall it will be the security industry’s customers who will suffer – and are suffering.
The expectations were set way too high with these tools. Again, the most valuable output from these tools will be the port scan results. Further developments have been made in the way of products bearing the misnomer of the Vulnerability Management genre (“management”? – vulnerability is not managed, it is only enumerated), and some of these offer authenticated scanning. While there have been some recent improvements in this area, the most important items of OS security remain unchecked by these tools. Furthermore, databases such as Oracle are given scant coverage. Why Analyze OS Security Controls? With regard to the subsection heading “Why Analyze OS Security Controls”, there are two categories of answer to this question. The first goes something like “because we have perfectly fine security standards, signed off by our CEO, that tell us we need to analyze security controls”. The second type of answer is related to technical risk and the efficiency of our vulnerability management programs – and this is the subject of the remainder of this article. I mentioned the limitations of penetration testing previously, and it should not be seen as a panacea. In a scenario where internal IT staff, including security personnel, do not have intimate knowledge of the IT landscape, the two-week penetration test costing 40K USD (this is an example from some years ago from the APAC region) will barely touch the surface. The gap in knowledge will be filled slightly by a penetration test, but the only scenario where a penetration test can be valuable is one where both target staff and penetration testers are highly experienced in their field. In this case the 40K USD, 40 man-day test is used to try to spot misconfigurations that the internal staff may have missed. This is a good use of funds in most cases. Any other scenario is unlikely to provide much value for businesses. In summary, penetration testing is not the answer for businesses in most cases. We have heard a great deal about APT in recent months. APT has been blamed for many of the recent high-profile attacks, although it's a term that gets bounced around willy-nilly. Malware is released at rates faster than the anti-virus software vendors can release pattern updates. Overall… the perimeter has shifted. The perimeter is no longer the perimeter, if you see what I mean, in that it is no longer the border or choke-point external firewalls. Business workstation subnets are owned by Botnetz R Us. Many of the attacks are carried out with undisclosed vulnerabilities, and because they are undisclosed to the public, there is no patch available to mitigate the software vulnerability. This is the point where many analysts raise a white flag. But this is also the point where OS controls can save the day in many cases. At least we can say that thoughtful use of OS security controls can prevent a business from becoming low-hanging fruit. Taking a Unix system as an example: an attacker may have remotely compromised a listening service, but they only gain the privileges of the listening service process owner. In most cases this is not a root compromise. The attacker will need to elevate their local privileges in most cases. How do they do this? They look for bad file system permissions, root setuids, and anything running under root privileges. They look for anything related to the root account, such as cron jobs owned by lower-privileged users that were configured to run under root’s cron. There are many local attack vectors.
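To make the kind of local checks described above more concrete, here is a minimal sketch of the sort of thing an assessor (or an attacker) looks for on a Unix host: setuid-root binaries and world-writable files. It is illustrative only — the paths scanned and the checks performed are assumptions, and a real OS controls review covers far more (cron jobs, service accounts, kernel hardening, and so on).

```python
# Minimal illustration of two classic local privilege-escalation checks on a Unix host.
# This is a sketch, not a complete OS controls review.
import os
import stat

def scan(root_dirs=("/usr", "/etc", "/var")):
    setuid_root, world_writable = [], []
    for root in root_dirs:
        for dirpath, _dirs, files in os.walk(root, onerror=lambda e: None):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue
                if not stat.S_ISREG(st.st_mode):
                    continue
                if (st.st_mode & stat.S_ISUID) and st.st_uid == 0:
                    setuid_root.append(path)       # runs with root privileges when executed
                if st.st_mode & stat.S_IWOTH:
                    world_writable.append(path)    # any local user can modify this file
    return setuid_root, world_writable

if __name__ == "__main__":
    suid, ww = scan()
    print(f"setuid-root binaries: {len(suid)}")
    print(f"world-writable files: {len(ww)}")
```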
With effective controls on server operating systems we have the possibility to severely restrict privilege escalation opportunities, even in cases where zero-day/undisclosed vulnerabilities are used by attackers. Some services can even be configured in a "chroot jail", but this requires knowledge of "under the hood" operating system controls.

So think of the analysis of operating system controls as a twist on a penetration test – a penetration test turned inside out. The approach is: "I have compromised the target; now, from here, how would I compromise it further?" Working from a root/administrative account is vastly more efficient – a great deal more can be learned about a system in a short period of time than with a remote penetration test.

I mentioned previously the new perimeter in corporate networks – the perimeter is now the critical infrastructure itself. Many businesses do not have internal segmentation: there is only a DMZ subnet and then a flat internal private network with no further network access control. But assuming they do have internal network access control, we can imagine that the new perimeter is the firewall between workstation subnets and critical servers such as database servers. The internal firewall(s) make up the perimeter along with... you got it... the operating systems of the database, LDAP, AD, and other critical application hosts [penny drops].

Businesses can apportion the resources deployed in operating system control assessment according to the criticality of the device. That criticality will depend on a number of factors, not least network architecture. In the case of a flat private network with no internal segmentation, effectively every device is a critical device, and the budget required for security is going to be much higher than where segments exist at differing levels of business criticality.

Hopefully the importance of operating system and database security controls has been made clearer. Thoughtful deployment of automated and manual analysis in this area is an efficient use of corporate resources with huge returns in risk mitigation. The longer-term costs of information risk management will be lower where resources are better targeted – and certainly, some of the suggestions for vulnerability assessment outlined in this article can help a great deal in reducing the costs of vulnerability management for businesses.

Cross-posted from Security Macromorphosis
<urn:uuid:d7e945ce-fb1b-4e86-aa6d-dbfe0d2e26ac>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/20603-Out-With-the-New-In-With-the-Old-OS-Security-Revisited.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948599
1,874
2.578125
3
Kappenman cites the impact of a 1989 event in which a geomagnetic storm brought down the power grid for the province of Quebec, leaving 6 million customers in the dark. Kappenman insists it could have been worse. "We came very close to a blackout that could have extended from the Northeast coast to the Pacific northwest," he said. "The North American continent had a near miss."

Kappenman has advised the President's Commission on Critical Infrastructure Protection on the risks from geomagnetic storms, and has testified before Congress on the issue. The disastrous scenarios Kappenman describes are based on events much larger than the 1989 incident. Solar storms occur in 11-year cycles, but they're not easy to predict. A story this week in the New York Times noted that although this year represents the peak of solar activity, no major events have been seen. "The truth of it is there isn't a lot going on," Joseph Kunches, a space scientist at the Space Weather Prediction Center, told The Times. "It's been a bit of a dud."

EMPs and Radio Frequency Weapons

The U.S. power grid also faces a range of risks associated with electromagnetic pulses (EMPs) and radio frequency weapons (RFWs) that use intense pulses of RF energy to destroy electronics, according to George Baker, professor emeritus at James Madison University and a veteran of the Defense Nuclear Agency. The most dramatic risk is presented by a nuclear EMP, in which a nuclear device is detonated high in the atmosphere above North America. Such an attack could have a profound impact on power infrastructure. "The big characteristic of (nuclear) EMP for electrical grids and data centers is the wide affected region," said Baker. "This is where you worry about the whole North American grid coming down."

Perhaps a more likely scenario is the use of an RF weapon on a data center. Baker cited an incident in the Netherlands in which a disgruntled former employee used a small RF weapon to damage data in a bank's data center. Baker says working RF weapons have been created that can fit in everything from a briefcase to a truck.

The good news? It's a threat you can defend against. "The protection is very straightforward," said Baker. "The Department of Defense has been doing it since the 1960s. We know from the DoD that the protection is affordable." Special shielding can be incorporated into walls and enclosures to protect from EMPs and RF weapons, Baker said. Walls and enclosures can be shielded, as can ventilation shafts, penetrations in walls and ceilings, and doors.

Popik, Baker and Kappenman all urged data center professionals to learn more about these issues and press for action from utilities and the government to investigate sensible defensive measures. Politicians may find the potential risks alarming, but that doesn't always lead to action. "It's easy to become distracted about things that may not have happened yet," said Popik.
<urn:uuid:773ac628-5e96-42ca-b39a-3b134776a6df>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2013/09/30/is-the-power-grid-ready-for-worst-case-scenarios/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00248-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957724
664
2.78125
3
After reading Jon Taluba’s article, Extinguish or Evacuate, in the January/February edition of Emergency Management, I felt obligated to write a response to a number of the claims he made regarding the hazards of fire extinguisher usage. As a firefighter with an extensive scientific background and understanding of toxicology, I felt that the message of the article was unfairly biased against their use and portrayed a great deal of information inaccurately. When researching the medical effects of a dry chemical fire extinguisher exposure on a patient, critical analysis must be performed before stating with any degree of confidence that it is dangerous. First, the chemical composition must be known. Most current fire extinguishers are composed of inert chemicals like sodium bicarbonate, potassium bicarbonate or monoammonium phosphate. Although any good toxicologist will state that anything is toxic at a high-enough dosage (including water) these chemicals are easily handled by the body and are relatively nontoxic, even in the case of an acute high-dose exposure. The inert substances used in most fire extinguishers and the relatively nontoxic nature of these compounds is likely why the Journal of Toxicology does not report many deaths. And the nature of the injuries requiring hospitalization was not described by Taluba, so it is possible that these victims had other complicating injuries or pre-existing medical conditions. Any substances that are designed to be aerosolized are, by nature, a concern when it comes to respiratory exposure, including fire extinguishers. Dry chemical extinguishing agents aren’t easily water soluble and are also relatively large in size. Any substance, regardless of its composition, will cause a respiratory complication if inhaled into the airways, including fire extinguishing agents. The sole patient that was cited for Taluba’s argument was a trapped, traumatically injured man, who was unable to move while having the dry chemical extinguisher used on him by untrained professionals. He may or may not have been able to protect his airway or even stay conscious. This situation would rarely happen during a normal use of a dry chemical extinguisher by a health-care provider, trained or untrained. Taluba correctly lists the steps to deal with a small fire, beginning with evacuation of the immediate area and the PASS acronym (Pull out the pin, Aim at the base standing away from the fire, Squeeze the handle and Sweep at the base of the fire). The person operating the extinguisher also should consider the size of the fire and if he can effectively extinguish or contain it with a single extinguisher. While it is true that a health-care provider may not be able to deduce if a fire is still smoldering, he may be able to extinguish or control a fire prior to a fire department response. Writing this article using extenuating case studies and misleading statements to convey an opinion as factual was irresponsible and inflammatory in order to create Taluba’s “extensive controversy.” Instead of creating a blanket statement that dry chemical extinguishers are hazardous, there should be recommendations to increase training for the proper use of fire extinguishers and how to size up a fire to determine if it can be handled by a dry chemical fire extinguisher. The threat from smoke and fire if left unchecked is significantly higher than the minimal hazards presented by using dry chemical extinguishers, which is supported by the lack of cases cited in journal articles. 
The combination of effective fire detection and protection systems, fire extinguishers and proper training is more than sufficient to reduce any risks associated with both fire and dry chemical extinguishers. Patrick Jessee is a firefighter with the Chicago Fire Department. His opinion is his own and does not reflect that of the Chicago Fire Department.
<urn:uuid:62f34918-177d-47fd-ad06-4f7ad22fa7cd>
CC-MAIN-2017-04
http://www.govtech.com/em/health/Readdressing-Hazards-Fire-Extinguisher-Opinion.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00000-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946217
764
3.125
3
If you've spent much time around data centers, you're likely familiar with load balancing. But if you haven't, it's a vital concept for understanding how to keep your infrastructure available to all of your users while also maximizing the efficient use of your computing resources. Even if you're familiar with load balancing, there are some recently released tools and related concepts that we'll cover in order to share your workloads across your servers – and even across the country.

Load balancing is the distribution of computing workloads across multiple resources, whether that means computers, clusters, servers, network links, or storage drives. The goal is to maximize the use of available resources, avoid overloading any one node with more work than it can handle, add redundant components, and encourage faster response times. In other words, load balancing shares loads around in order to achieve higher performance. Hardware or software methods can be used to achieve these goals. Load balancing is similar to another concept, channel bonding, but it operates at the network layer rather than bonding physical interfaces at the data link layer (see the OSI layer model for more information).

For a common use scenario, imagine we are delivering a web-based application from virtualized cloud servers. This application includes a database, a web site, and File Transfer Protocol. With software load balancing, the program monitors the external network ports for incoming traffic and forwards requests on to the backend servers running the workload. If a backend server does not reply, additional instances are used to meet the demand. Backup nodes might also be used, which are kept inactive except in the case of failure. Before cloud resources, each of these instances ran on its own physical server. With virtual servers, it became much easier to scale out the solution, with the possibility of having dozens of instances on a single server.

Load balancing virtual environments did come with its own share of problems to address, however. There are occasions when it is important to send the same client to the same server in order to maintain their current state within the application. The most common example is a shopping cart in eCommerce, where sending a user to a new server may cause them to lose their saved information. Load balancers can be set up with session persistence in order to keep requests from a single client on the same server.

Fault isolation is another of those problems. If a single node or instance fails, the shared network, storage, or compute resources could be compromised. That means all the virtual machines, each with their own instances, could also be compromised. Performance can also drop simply because the load on a single server has reached its limit. Redundancy is therefore key for all infrastructure components, and you should also be running several virtualized instances across different physical servers so your load balancer can move to a new physical host if necessary.

Finally, when configuring your load balancing rules to autoscale, you must consider how to control where and when new instances are placed in order to maximize efficiency and avoid overloading a single server or storage unit. It's unlikely, but left to defaults, a load balancer could place each instance on its own machine, wasting your resources. Or it could do the opposite, placing instances so that a single point of failure results in downtime.
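As a toy illustration of the monitor-and-fail-over behaviour described above, the sketch below keeps a tiny health check in front of a hypothetical backend pool and only hands new requests to nodes that respond. Real load balancers do this far more robustly; the backend addresses and the /healthz path are invented for the example.

```python
import itertools
import urllib.request

# Hypothetical backend pool; the addresses are examples only.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

def is_healthy(backend, timeout=1.0):
    """Very naive health check: does the backend answer an HTTP request at all?"""
    try:
        urllib.request.urlopen(f"http://{backend}/healthz", timeout=timeout)
        return True
    except OSError:
        return False

_cycle = itertools.cycle(BACKENDS)

def pick_backend():
    """Rotate over the pool, skipping nodes that fail the health check."""
    for _ in range(len(BACKENDS)):
        candidate = next(_cycle)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backends available")

# A real balancer would now proxy the client connection to pick_backend().
```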
One similar concept in VMware vSphere virtualization is vMotion, which allows the live migration of virtual machines between cloud resource pools and even between data centers. The VMs move with their configuration settings intact, so you don't need to reconfigure network or storage. vMotion in fact contains some load balancing technology itself, sharing the network traffic caused by pushing VMs around between different network adaptors. This helps reduce the time it takes to move a VM, especially one with a large memory configuration.

There are three main types of load balancing (a brief code sketch of these three strategies follows at the end of this article):

Round robin: this method simply assigns incoming requests in sequential order, where request 1 goes to server 1 and so on down the line.

Least connections: this configuration sends incoming requests to the server that currently has the fewest connections and the lowest compute load.

IP Hash: this transforms the IP address of incoming traffic into a hashed code, which is then examined algorithmically to determine which server will receive the request.

While load balancing helps keep your applications available to users and scales out additional resources automatically to meet demand, it must be set up carefully in order to avoid failure. Green House Data offers managed load balancing services for all cloud environments and can also assist in setting them up for colocated infrastructure.
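As promised above, here is a minimal sketch of the three selection strategies. It is illustrative rather than production logic: the backend names are invented, and a real balancer would track live connection counts and use a better-distributed hash.

```python
import hashlib
import itertools

BACKENDS = ["web-1", "web-2", "web-3"]           # hypothetical server names
_rr = itertools.cycle(BACKENDS)
active_connections = {b: 0 for b in BACKENDS}    # updated as connections open and close

def round_robin():
    """Hand out backends in strict rotation."""
    return next(_rr)

def least_connections():
    """Pick the backend currently serving the fewest connections."""
    return min(active_connections, key=active_connections.get)

def ip_hash(client_ip: str):
    """Hash the client IP so the same client keeps landing on the same backend
    (which also gives a crude form of session persistence)."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```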
<urn:uuid:755862a9-f7ff-48de-b924-29bf4c961d4a>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/the-lowdown-on-load-balancing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92369
944
3.078125
3
User management is an important part of distributed computing environments. It provides the consistent authentication and authorization services necessary for universal access. For centralized security, many customers use the IBM Directory Server, a centralized security mechanism supported on AIX®. To achieve a foolproof IBM Directory Server configuration and ready it for use, you need a good understanding of Lightweight Directory Access Protocol (LDAP) concepts and configuration management. This article provides an overview of LDAP and its architecture. It also discusses LDAP configuration and management on AIX. The article focuses on troubleshooting different types of problems while configuring the LDAP server and client. The suggestions in the troubleshooting section should be helpful to AIX administrators, technical support, and the development community.

LDAP overview and architecture

LDAP is an industry-standard protocol for accessing directory servers. IBM Directory Server needs to be configured to support user authentication through LDAP with both the AIX-specific schema and the RFC 2307 schema on AIX. LDAP is optimized for reading, browsing, and searching directories and specialized databases storing ordered information. Many computing environments are designed to make network resources available to users from any location, such as workstations, public workstations, and the Web. IBM Directory Server can be used for user management to achieve this objective. Figure 1 shows an overview of an LDAP configuration.

Figure 1. LDAP configuration

LDAP is a standardized protocol and specialized database for storing ordered information. When users log in, the LDAP client sends a query to the LDAP server to get the user and group information from the centralized database. DB2® is the database used for storing the user and group information. The LDAP database stores and retrieves information based on a hierarchical structure of entries, each with its own distinguished name, type, and attributes. The attributes (properties) define acceptable values for the entry. An LDAP database can store and maintain entries for many users.

An LDAP security load module was introduced in AIX Version 4.3. This load module provides user authentication and centralized user and group management functions through the IBM SecureWay® Directory. A user defined on an LDAP server can be configured to log in to an LDAP client even if that user is not defined locally. The AIX LDAP load module is fully integrated with the AIX operating system.

Configuration of IBM Directory Server

IBM Directory Server on AIX can be configured with either:
- the ldapcfg command line tool
- the graphical version of the ldapcfg tool

The following file sets are required to configure IBM Directory Server:
- Install the DB2 file set db2_09_01.rte.
- Install the following file sets:

The following file sets are required for configuring the LDAP client. Note: 61 represents the version of the file set. It will vary depending upon the version you are installing.

- The system should run in 64-bit kernel mode. Use the bootinfo -K command to determine the kernel mode.
- AIX requires 64-bit hardware. Use the bootinfo -y command to determine the hardware.
- A minimum of 512MB RAM is required. (For better results, use 1GB or more.)
- IBM Directory Server requires 80MB of free space in the file system where the DB2 database is to be created.
- If you plan to use the InstallShield GUI to install, be sure that you have at least 100MB of free space in the /var directory and at least 400MB in the /tmp directory.

AIX provides the mksecldap command to set up IBM Directory servers and the clients that exploit those servers. The mksecldap command performs the following tasks for a new server setup:
- Creates the ldapdb2 default DB2 instance.
- Creates the ldapdb2 default DB2 database.
- Creates the AIX tree DN (suffix) under which AIX users and groups are stored.
- Exports users and groups from the security database files of the local host into the LDAP database.
- Sets the LDAP server administrator DN and password.
- Optionally sets the server to use Secure Sockets Layer (SSL) communication.
- Installs /usr/ccs/lib/libsecldapaudit, an AIX audit plug-in for the LDAP server.
- Starts the LDAP server after all the above is done.
- Adds the LDAP server entry (slapd) to /etc/inittab for automatic restart after reboot.

For example:
mksecldap -s -a cn=admin -p passwd -S rfc2307aix

All setup information is stored in the /etc/ibmslapd.conf file.

Configuration of an AIX client system for the IBM Directory Server

The ldap.client file set contains the IBM Directory client libraries, header files, and utilities. You can use the mksecldap command to configure the AIX client against the IBM Directory Server, as follows:
mksecldap -c -h <LDAP Server name> -a cn=admin -p adminpwd -S rfc2307aix

You must have the IBM Directory Server administrator DN and password to configure the AIX client. Once the AIX client is configured, the secldapclntd daemon starts running.

Once the AIX client is configured against the IBM Directory Server, change the SYSTEM attribute in the /etc/security/user file to "LDAP OR compat" (or "compat OR LDAP") to authenticate users against the AIX client.

The /usr/lib/security/methods.cfg file contains the load module definition. The mksecldap command adds the following stanza to enable the LDAP load module during the client setup:
LDAP:
program = /usr/lib/security/LDAP
program_64 = /usr/lib/security/LDAP64

The /etc/security/ldap/ldap.cfg file on the client machine holds configuration information for the secldapclntd client daemon, including the IBM Directory Server binddn and password information. The file is automatically updated by the mksecldap command during AIX client setup. The auth_type attribute in the /etc/security/ldap/ldap.cfg file specifies where the user needs to be authenticated. If the auth_type attribute is UNIX_AUTH, then the user is authenticated at the client system. If it is LDAP_AUTH, then the user is authenticated on the IBM Directory Server.

Configuration of IBM Directory Server with SSL

The IBM Directory Server and client can be configured with SSL. This avoids the transfer of data in clear-text format over the network: the information is encrypted and then sent over the network. When SSL is configured, IBM Directory Server encrypts the user's password information before sending it over the network. The following file sets are required to enable the server and client encryption support:

For initial server setup, run the following command:
mksecldap -s -a cn=admin -p pwd -S rfc2307aix -k /usr/ldap/etc/mykey.kdb -w keypwd
where mykey.kdb is the key database, and keypwd is the password to the key database.
For servers that are already configured and running, run:
mksecldap -s -a cn=admin -p pwd -S rfc2307aix -n NONE -k /usr/ldap/etc/mykey.kdb -w keypwd

For initial client setup, run:
mksecldap -c -h <ldapserver name> -a cn=admin -p adminpwd -k /usr/ldap/key.kdb -w keypwd

Frequently used commands on the AIX LDAP client system are listed in Table 1 below.

Table 1. Frequently used commands
| Command | Description |
| /usr/sbin/start-secldapclntd | Starts the secldapclntd daemon |
| /usr/sbin/stop-secldapclntd | Stops the secldapclntd daemon |
| /usr/sbin/restart-secldapclntd | Stops the currently running secldapclntd daemon and restarts it |
| /usr/sbin/ls-secldapclntd | Lists the secldapclntd daemon status |
| /usr/sbin/flush-secldapclntd | Clears the cache of the secldapclntd daemon |
| mkuser -R LDAP <username> | Creates users from the LDAP client. |

Troubleshooting

This section includes several typical problems, followed by suggested solutions.

Problem: LDAP server starts in configuration only mode

The LDAP server starts in configuration only mode while restarting the LDAP server, or the LDAP server configuration returns the following error: "Failed to initialize be_config. Error encountered. Server starting in configuration only mode."

- Confirm whether the server started in configuration only mode by using the following command, or look at /var/ldap/ibmslapd.log for this:
# ldapsearch -h teak01.upt -b "" -s base objectclass=* | grep config
ibm-slapdisconfigurationmode=TRUE
- Sometimes the DB2 license key was not registered properly. This is one of the main reasons for this problem. The license key has to be registered, as follows, to resolve the problem:
- Log in as a user with root authority.
- Register the DB2 product license key:
# /usr/opt/db2_08_01/adm/db2licm -a /usr/ldap/etc/ldap-custom-db2ese.lic
# /usr/opt/db2_08_01/adm/db2licm -a /usr/ldap/etc/db2wsue.lic
- If the above step doesn't resolve the problem, clean up the LDAP server configuration and export LDAP_DBG=1 before doing the LDAP server configuration again. The /var/ldap/dbg.out, /var/ldap/dbg.log, and /var/ldap/ibmslapd.log files should have the diagnostic information required to debug this problem further.

Problem: Cannot log in to the system with LDAP user

Cannot log in to the system with an LDAP user after successful Directory Server configuration. Make sure there are no errors in the following areas, which can lead to a false impression about the existence of a particular LDAP user.

- During client configuration, mksecldap -u <userlist> specifies a comma-separated list of usernames, or ALL to enable all users on the client. This means the SYSTEM and registry attributes of those users are set to LDAP. For example:
mksecldap -c -h monster -a cn=admin -p adminpwd -u user1,user2
The -u flag ensures that user1 and user2 can be used as LDAP users on the client machine, but this flag does not add any users to the LDAP server database. Login is successful for these users if they are added to LDAP using mkuser -R LDAP <user name>, or while doing server configuration, as follows:
mksecldap -s -a cn=admin -p adminpwd -S rfc2307aix
All the local users will be added to LDAP in this case. As user1 and user2 are local users, they will be automatically added into the LDAP database.
- Verify that Directory Server is up and running. The ibmslapd processes should be running:
# ps -eaf |grep ibm
ldap 278760 1 0 Jan 14 - 0:08 /usr/ldap//bin/ibmdiradm -l
ldap 434392 1 2 Jan 14 - 339:44 ibmslapd -f/etc/ibmslapd.conf
- Verify whether the LDAP client is up and running.
The secldapclntdprocess should be running: # ps -eaf |grep -i secldap root 393408 1 0 Jan 14 - 0:15 /usr/sbin/secldapclntd root 725062 692358 0 03:20:38 pts/0 0:00 grep -i secldap - Verify whether that user exists on the server: # lsuser -R LDAP usr_3112 usr_3112 id=3112 pgrp=gp_3112 groups=gp_3112,gp_3118,gp_3124 home=/tmp shell=/usr/bin/ksh login=true su=true rlogin=true daemon=true admin=false sugroups=ALL admgroups= tpath=nosak ttys=ALL expires=0 auth1=SYSTEM auth2=NONE umask=22 registry=LDAP SYSTEM=KRB5LDAP OR compat logintimes= loginretries=0 pwdwarntime=0 account_locked=false minage=0 maxage=0 maxexpired=-1 minalpha=0 minother=0 mindiff=0 maxrepeats=8 minlen=0 histexpire=0 histsize=0 pwdchecks= dictionlist= fsize=-1 cpu=-1 data=262144 stack=65536 core=2097151 rss=65536 nofiles=2000 roles= - Verify the user's registry and SYSTEM attributes. Both of them should be set to LDAP. lsuser -a registry SYSTEM username - Verify whether the LDAP stanza is added into # grep -p LDAP /usr/lib/security/methods.cfg LDAP: program = /usr/lib/security/LDAP program_64 =/usr/lib/security/LDAP64 Problem: What is required to migrate all the AIX users as LDAP authenticated users? What is required to migrate all the AIX users as LDAP authenticated users? mksecldap allow a user to migrate a specific set of AIX users while doing server configuration? No. By default, migrates all AIX users as LDAP authenticated users while doing server If you do not want to migrate any AIX users as LDAP users, run the mksecldap command with #mksecldap -s -a cn=admin -p adminpwd -s rfc2307aix -u NONE Problem: mkuser might return an error message mkuser command might return the following error # mkuser -R LDAP test 3004-686 Group "staff" does not exist. 3004-703 Check "/usr/lib/security/mkuser.default" file. If the LDAP client and NIS client are configured on the same machine, then users are not able to create users from the AIX LDAP client. They get the above error message. You can rectify this problem by installing APAR IY90556. Problem: Does mksecldap allow a user to migrate a specific set of AIX users? mksecldap allow a user to migrate a specific set of AIX users while doing server configuration? mksecldap does not support migrating a specific set of users as LDAP users while doing server configuration. To handle this requirement, run the command so that no AIX user is migrated, and create the required users mkuser -R LDAP later. It's important to note that the -u flag, while doing server configuration, only accepts NONE as an argument and any other argument is mksecldap -s -a cn=admin -p adminpwd -S rfc2307aix -u user1,user2 All local users are exported in this case. Problem: Client configuration problems if server configuration is done with -u NONE This is broken down into three problems. /usr/sbin/mksecldap -c -h batonrouge05.upt.austin.ibm.com -a cn=admin -p passw0rd "Cannot find users from all base DN client setup failed." The client setup basically does the ldapsearch to see if there are any users added to the LDAP server already. The configuration fails if it does not find any users in LDAP. At least one user should be added to LDAP to overcome this problem. 
The following ldif file should be added to LDAP DIT using the dn: ou=People,cn=admin ou: People objectClass: organizationalUnit dn: uid=testuser,ou=People,cn=admin uid: testuser objectClass: aixauxaccount objectClass: shadowaccount objectClass: posixaccount objectClass: account objectClass: ibm-securityidentities objectclass: top cn: testuser passwordchar: * uidnumber: 203 gidnumber: 203 homedirectory: /home/testuser loginshell: /usr/bin/ksh isadministrator: false mksecldap -c -h batonrouge05.upt.austin.ibm.com -a cn=admin -p passw0rd "Cannot find the group base DN from the LDAP server. Client setup failed." The group base DN should be present in the LDAP DIT before configuring the client. The above failure is due to non-existence of a group base DN. A group needs to be added to resolve this problem. The following ldif file should be added to the LDAP DIT using the dn: ou=Groups,cn=admin ou: Groups objectClass: organizationalUnit dn: cn=testgrp,ou=Groups,cn=admin cn: testgrp objectclass: aixauxgroup objectclass: posixgroup objectclass: top gidnumber: 203 memberuid: testuser isadministrator: false Creating a user with mkuser when the server is -u NONE and the client has been successfully # mkuser -R LDAP id=1000 pgrp=grp_2000 groups="grp_2006,grp_2012" usr_1000 Group "staff" does not exist. Check "/usr/lib/security/mkuser.default" file. mkuser command has a legacy behavior of checking the defaults first, even if it is not going to use them. It fails, since a group called staff does not exist. All the problems in this section will be resolved in one shot if you add the following ldif file to LDAP. dn: ou=Groups,cn=admin ou: Groups objectClass: organizationalUnit dn: cn=staff,ou=Groups,cn=admin cn: staff objectclass: aixauxgroup objectclass: posixgroup objectclass: top gidnumber: 203 memberuid: testuser isadministrator: false dn: ou=People,cn=admin ou: People objectClass: organizationalUnit dn: uid=testuser,ou=People,cn=admin uid: testuser objectClass: aixauxaccount objectClass: shadowaccount objectClass: posixaccount objectClass: account objectClass: ibm-securityidentities objectclass: top cn: testuser passwordchar: * uidnumber: 203 gidnumber: 203 homedirectory: /home/testuser loginshell: /usr/bin/ksh isadministrator: false The ldif file can be added to the LDAP server, as follows: #/usr/bin/ldapadd -D $ADMIN_DN -w $ADMIN_DN_PASSWD -f <ldif file> The base DN of a configured LDAP server must be used in the ldif file. Otherwise, this ldif file cannot be successfully added. - Understanding LDAP - Design and Implementation: This IBM Redbooks publication will help you create a foundation of LDAP skills, as well as install and configure the IBM Directory Server. - IBM Tivoli® Directory Server Administration Guide: This guide contains the information that you need to administer the IBM Tivoli Directory Server. - Read the following IBM Redbooks: - Check out other articles and tutorials written by Uma Chandolu: - AIX and UNIX®: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills. - New to AIX and UNIX?: Visit the New to AIX and UNIX page to learn more about AIX and UNIX. - AIX 5L™ Wiki: A collaborative environment for technical information related to AIX. - Search the AIX and UNIX library by topic: - Safari bookstore: Visit this e-reference library to find specific technical resources. - developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts. 
- Podcasts: Tune in and catch up with IBM technical experts. Get products and technologies - IBM trial software: Build your next development project with software for download directly from developerWorks. - Participate in the developerWorks blogs and get involved in the developerWorks community. - Participate in the AIX and UNIX forums:
<urn:uuid:8f51474b-33f5-4f06-8fde-b964f478e32e>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/aix/library/au-ldapconfg/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00146-ip-10-171-10-70.ec2.internal.warc.gz
en
0.711325
4,610
2.625
3
Following the Yahoo breach, many users are concerned about their online safety and what to do next. As with all data breaches, although a lot of the issues are beyond the control of the user, there are some straightforward measures they can take to be as safe as possible online.

Passwords should be long, strong and unique. Worryingly, the most common passwords in 2016 are "123456", "password" and "12345678". If you are using a simple or common password you are at risk with or without a breach. Attackers maintain lists of common passwords and simply cycle through them to gain access to accounts.

If you struggle to remember long, complex passwords, try using more memorable pass phrases (a small passphrase-generator sketch appears at the end of this piece). A phrase is easier for your brain to remember, so you can create a much longer password that is harder for an attacker to brute force. You should also not reuse passwords across different websites, especially those that contain sensitive information such as email and banking websites. If you really want to ensure strong, unique passwords, it is worth considering a password manager app. These apps can generate and store unique passwords for every site you use and keep them in a secure "vault" that you unlock with a fingerprint or one master password.

Almost all the major platforms offer the option to enable multifactor authentication. In simple terms this means that when you log in using a new device, they send a code to your phone to verify it is really you. In the event your password is stolen, the attacker is unable to log in without this code. This is a really quick win in terms of online safety.

Users should be aware of their digital footprint; many users are unaware of quite how much personal information they share online. Look at privacy settings on social media and Google yourself to see what can be found. haveibeenpwned.com is a website which can notify users if their details have appeared in any past or, inevitably, future data breaches. Think of it as credit monitoring for your online identity.

Attackers often use events in the news, such as this breach, as a catalyst to trick users by sending out spam emails pretending to be associated with the breached company. These emails often carry malware attachments they want the user to open, or they try to get the user to fill in valuable personal details and passwords. If in doubt, go to the website directly and contact the company to verify whether any contact is genuine. Avoid clicking links or opening attached files.

Taking these measures should help you stay ahead of the attackers as much as possible by restricting their ability to reuse and abuse stolen information.
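As mentioned above, here is a small sketch of the passphrase idea: it draws several random words from a word list using a cryptographically secure generator. The word-list path is an assumption (many Unix-like systems ship one at /usr/share/dict/words); any large word list will do, and for most people a password manager remains the more convenient route.

```python
import secrets

def make_passphrase(words_file="/usr/share/dict/words", n_words=5):
    """Build a memorable passphrase from n_words randomly chosen dictionary words."""
    with open(words_file, encoding="utf-8") as fh:
        words = [w.strip().lower() for w in fh
                 if 3 <= len(w.strip()) <= 8 and w.strip().isalpha()]
    return "-".join(secrets.choice(words) for _ in range(n_words))

print(make_passphrase())   # e.g. "copper-violin-mango-drift-saddle" (output varies)
```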
<urn:uuid:b64ef5a5-47fa-4df0-bb0b-ff1969767f67>
CC-MAIN-2017-04
https://blog.avecto.com/2016/09/yahoo-what/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00146-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94823
527
2.515625
3
Definition: A string v is a substring of a string u if u=u′ vu″ for some prefix u′ and suffix u″. Also known as factor. See also subsequence. Note: A substring is contiguous. A subsequence need not be. From Algorithms and Theory of Computation Handbook, pages 11-26 and 12-21, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "substring", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/substring.html
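A short illustration of the contiguity note above, in Python: a substring must occur as one contiguous run, while a subsequence only needs to preserve order.

```python
def is_substring(v: str, u: str) -> bool:
    """v is a substring of u if u = u' + v + u'' for some prefix u' and suffix u''."""
    return v in u          # Python's `in` performs exactly this contiguous check

def is_subsequence(v: str, u: str) -> bool:
    """v is a subsequence of u if its characters appear in u in order, not necessarily contiguously."""
    it = iter(u)
    return all(ch in it for ch in v)

assert is_substring("ana", "banana") and is_subsequence("ana", "banana")
assert not is_substring("bnn", "banana") and is_subsequence("bnn", "banana")
```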
<urn:uuid:9be90506-1161-4620-84c2-57716c0b233c>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/substring.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.804555
233
3.453125
3
With Google reporting recently that 35 hours of video are uploaded to YouTube every minute, it’s clear that online video is impacting everything from entertainment and education to customer service and a number of other areas of daily life. And after the U.S. elections on November 2nd, it’s clear you can add politics to that list. In past elections, the poor candidates spent endless days knocking on doors and standing outside grocery stores or other high traffic areas trying to meet as many people as possible to get their message out. Now, the combination of social media and online video enables them to quickly deliver a message to many times the number of people they could reach in person in a month, let alone a single day. That’s probably why YouTube had more than 450 registered political accounts ahead of the recent elections with some of the most popular videos getting viewed nearly three million times! And it’s not just candidates in races that use video, everyone from President Obama to news outlets and special interest groups all are increasingly using online video to inform and influence the public. Individuals can get into the debate, too – CNN’s iReporters and response videos on YouTube are examples of how people can use video to express their own thoughts and views. With politics increasingly using online video, what does this mean for data centers and networks? With an hour of video generating 100MB, for low-end quality, that means the 35 hours of video being uploaded in the time it took you to read this post will require 3.5GB. HD would require approximately 100 GB. In other words, more video means more of everything – storage, servers, bandwidth and a need to better manage network performance. Just imagine if Abraham Lincoln could have stepped down off a tree stump and instead recorded his campaign speeches with a Flip HD video camera, loaded them online, distributed them with a Content Delivery Network (CDN) and made them available to everyone on the Internet – now that would be a must see ‘server speech’!
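As a rough check of the storage arithmetic quoted above (the 100 MB/hour figure is the article's own low-end approximation, and the HD rate below is inferred from its roughly 100 GB estimate), the numbers scale like this:

```python
HOURS_UPLOADED_PER_MINUTE = 35

def storage_per_minute_gb(mb_per_hour):
    """GB of new storage needed for one minute's worth of uploads at a given bitrate."""
    return HOURS_UPLOADED_PER_MINUTE * mb_per_hour / 1000

print(storage_per_minute_gb(100))     # low-end quality: 3.5 GB per minute of uploads
print(storage_per_minute_gb(3000))    # ~HD (inferred ~3 GB/hour): ~105 GB per minute
print(storage_per_minute_gb(100) * 60 * 24 / 1000)  # ~5 TB per day even at low quality
```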
<urn:uuid:6386a495-09d0-48e0-9f46-9dfed6ed8de4>
CC-MAIN-2017-04
http://www.internap.com/2010/11/23/political-speeches-move-from-the-stump-to-the-server/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957851
419
2.578125
3
Tee L.H.,University of Selangor | Yang B.,CAS South China Botanical Garden | Nagendra K.P.,University of Selangor | Ramanan R.N.,University of Selangor | And 7 more authors. Food Chemistry | Year: 2014 Dacryodes species are evergreen, perennial trees with fleshy fruits and belong to the family Buseraseae. Many Dacryodes species are underutilized but are widely applied in traditional folk medicine to treat malaria, fever and skin diseases. The nutritional compositions, phytochemicals and biological activities of Dacryodes edulis, Dacryodes rostrata, Dacryodes buettneri, Dacryodes klaineana and Dacryodes hexandra are presented. The edible fruits of D. edulis are rich in lipids, proteins, vitamins, fatty acids and amino acids. Its extracts (leaf, fruit and resin) exhibit antioxidant, anti-microbial, anti-carcinogenic and other bioactivities. D. rostrata fruit has significant nutrient content, and is rich in proteins, lipids and minerals. These fruits are also highly rich in polyphenols, anthocyanins and antioxidant activities. This comprehensive review will assist the reader in understanding the nutritional benefits of Dacryodes species and in identifying current research needs. © 2014 Elsevier Ltd. All rights reserved. Source Kong K.W.,University Putra Malaysia | Chew L.Y.,University Putra Malaysia | Prasad K.N.,University Putra Malaysia | Lau C.Y.,Semongok Agriculture Research Center | And 3 more authors. Food Research International | Year: 2011 The nutritional and antioxidant properties of peels, pulp and seeds of kembayau (Dacryodes rostrata) fruits were evaluated. Kembayau seeds and pulp were rich in fat, while peels had the highest ash contents. Potassium was the most prevalent mineral in peels (380.72-1112.00mg/100g). In kembayau fruits, total flavonoid content (1012.74-28,022.28mg rutin equivalent/100g) was higher than total phenolic and total monomeric anthocyanin contents. Kembayau seeds exhibited high flavonoid and phenolic contents compared to the contents in peels and pulp. Antioxidant capacities were also higher in seeds as typified by trolox equivalent antioxidant capacity assay (51.39-74.59mmol TE/100g), ferric reducing antioxidant power assay (530.05-556.98mmol Fe2+/100g) and by 1,1-diphenyl-2-picryl hydrazyl radical scavenging activity (92.18-92.19%) when compared to peels and pulp. Pulp and peels of kembayau fruit may be an important source of energy and minerals for human consumption, while seeds have a good potential as antioxidants. © 2010 Elsevier Ltd. Source
<urn:uuid:dd5f5323-593e-4e47-84ec-427959787ea5>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/agriculture-research-center-semongok-1086336/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00495-ip-10-171-10-70.ec2.internal.warc.gz
en
0.874945
654
2.828125
3
Given the risks we run in not securing our phones, you’d be forgiven for thinking it must be a task requiring a doctorate in computer science. In reality, however, securing a smart phone can take only a few simple steps. It’s not hard to render your phone and its data of little value to a criminal, and by doing so, protect your data and identity, and increase your chances of getting your phone back as well. How do you protect your phone and your data? 1. Don’t leave your phone unattended on the counter or table whilst paying for shopping or enjoying a meal out. 2. Always use a PIN to lock your phone and your SIM card. Locking the phone prevents someone from using the handset without resetting it (which normally wipes your data). Locking the SIM stops someone from removing your card and using it in another phone. 3.Use a different PIN to your bank accounts. A six-digit PIN is much more secure than a four-digit PIN, as long as it’s not your date of birth! 4. Keep your software up to date. Smart phones have an operating system just like a computer; and from time to time, security issues are identified. Some networks delay software upgrades but security updates should be available immediately. Install them right away. 5. Back up your data to the cloud or your home PC. The best way to do this is using software like iTunes that can do this automatically, but you still need to check on a regular basis. 6. Only install the apps you really need. Not all app stores check the applications they contain, and it’s easy for apps to extract your personal data. Check the authenticity of applications and what data they will access, and only install applications from developers you trust. 7. Only let apps access data they really need. Many smart phones allow you to set whether applications such as Facebook or Twitter can access your photos or contacts or track your location. If you don’t need it, turn it off. 8. Use different passwords for different websites so that a hacker who gets one password can’t take over your life. This is particularly important for online and mobile banking. Consider also using device specific passwords for email. Many services like Google Mail allow you to set a separate password for your phone, so even if your phone is compromised, your main password isn’t. 9. Store passwords securely. Most people now have 50 or more passwords for websites, applications, phones and computers. If you need to write these down, never add them to your contacts, store them in a web browser or on your phone. As convenient as it is, just think of how quickly a thief could wreak damage with this information. Instead, use an application such as lastpass or keepass to store them securely on your phone and PC. 10. Track it. Most major smartphone platforms, including Apple’s iOS and Google’s Android, offer tracking apps that can track your phone, lock it and wipe the data remotely. This may not recover the phone, but it will stop someone else from using it and gaining access to your personal data. Taking these simple steps won’t prevent you from leaving your phone in the shopping centre. It won’t make your phone look any less attractive to a thief. What it will do is make it much less valuable, even rendering it almost worthless. It will also mean that even without the phone, your memories and records will be safely in your possession. If you’re lucky, it might even mean you will get it back.
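To put tip 3 in perspective, the difference between a four-digit and a six-digit PIN is simply the size of the guessing space; the quick calculation below ignores lockout and wipe-after-N-attempts features, which matter even more in practice.

```python
for digits in (4, 6):
    keyspace = 10 ** digits
    print(f"{digits}-digit PIN: {keyspace:,} possible codes")
# 4-digit PIN: 10,000 possible codes
# 6-digit PIN: 1,000,000 possible codes  (100x more guesses for an attacker)
```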
<urn:uuid:b82cc1ea-dba2-43ef-b14b-b6d264403f85>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/01/30/how-do-you-protect-your-phone-and-your-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00313-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926534
751
2.859375
3
Disaster Recovery Tip #34: El Niño's Impact on the Year Ahead

NOAA officials recently announced that this year's El Niño is expected to be among the top 3 strongest on record. In addition, they claim its impacts can be expected through spring 2016. "We're predicting that this El Niño could be among the strongest El Niños on record dating back to 1950," said Mike Halpert, deputy director of the Climate Prediction Center in Maryland.

To help you prepare, below are general predictions by region for the months ahead and into winter.
- Wetter: Southern U.S. from California to the Carolinas, then up parts of the East Coast
- Drier: Parts of the Ohio Valley, Great Lakes, Northwest and Northern Rockies
- Cooler: Desert Southwest, Southern Plains, northern Gulf Coast
- Warmer: Northern tier of states from the Pacific Northwest to the Northern Plains, Great Lakes, and Northeast

Learn more about this recent announcement by NOAA and more in-depth predictions by clicking here.
<urn:uuid:8fda8b55-9f40-4487-9eb5-8878c3059c1e>
CC-MAIN-2017-04
http://www.agilityrecovery.com/business-continuity-el-ninos-impact/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00551-ip-10-171-10-70.ec2.internal.warc.gz
en
0.879005
223
2.796875
3
You've just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what? Don't get hacked!

An often overlooked, but very important, process in the development of any Internet-facing service is testing it for vulnerabilities, knowing if those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what the risks of those vulnerabilities are to your firm or product launch. These three different processes are known as a vulnerability assessment, a penetration test and a risk analysis. Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network. Let's examine the differences in depth and see how they complement each other.

Vulnerability assessments are most often confused with penetration tests and the terms are often used interchangeably, but they are worlds apart. Vulnerability assessments are performed using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if these are found. The software then produces a report that lists the vulnerabilities found and (depending on the software and options selected) gives an indication of the severity of each vulnerability and basic remediation steps. It's important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large, and these scanners will not find them.

Many "professional penetration testers" will actually just run a vulnerability scan, package up the report with a nice, pretty bow and call it a day. Nope – this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 – they probe an open port and see what can be exploited. For example, let's say a website is vulnerable to Heartbleed. Many websites still are. It's one thing to run a scan and say "you are vulnerable to Heartbleed" and a completely different thing to exploit the bug, discover the depth of the problem and find out exactly what type of information could be revealed if it were exploited. This is the main difference – the website or service is actually being penetrated, just as a hacker would do it. Similar to a vulnerability scan, the results are usually ranked by severity and exploitability, with remediation steps provided. Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.

A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn't require any scanning tools or applications – it's a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk – financial, reputational, business continuity, regulatory and so on – to the company if the vulnerability were to be exploited. Many factors are considered when performing a risk analysis: the asset, the vulnerability, the threat and the impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.
The analyst would first look at the vulnerable server, where it sits on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed, has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines the threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained – specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?

A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.

The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step – they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is, or a risk analysis to ascertain the cost/benefit of fixing the vulnerability. Of course, you don't need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset are present: it could be a data center in a hurricane zone or confidential papers sitting in a wastebasket.

It's important to know the difference – each is significant in its own way, and they have vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.
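To illustrate how a risk analysis layers business context on top of a raw scanner or pen-test finding, here is a deliberately oversimplified scoring sketch. The categories, weights and the Heartbleed example are invented for illustration; real methodologies (FAIR, NIST SP 800-30 and the like) are far more nuanced.

```python
# Simplified qualitative risk scoring: likelihood x impact, informed by asset context.
LIKELIHOOD = {"internal-only": 1, "partner-facing": 2, "internet-facing": 3}
IMPACT     = {"no data": 1, "internal data": 2, "regulated data (e.g. cardholder)": 3}

def risk_rating(exposure: str, data_class: str) -> str:
    score = LIKELIHOOD[exposure] * IMPACT[data_class]
    return "low" if score <= 2 else "medium" if score <= 4 else "high"

# The same vulnerability (Heartbleed), two very different risk ratings:
print(risk_rating("internal-only", "no data"))                              # low
print(risk_rating("internet-facing", "regulated data (e.g. cardholder)"))   # high
```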
<urn:uuid:938efc48-4280-4ff4-aba0-6991d00fbca5>
CC-MAIN-2017-04
http://www.csoonline.com/article/2921148/network-security/whats-the-difference-between-a-vulnerability-scan-penetration-test-and-a-risk-analysis.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00267-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9412
1,091
2.734375
3
A disc brake is a wheel brake that slows or stops the rotation of a wheel using brake pads, a brake disc, and brake calipers. The most prominent difference between a disc brake and a drum brake is that in a drum brake, friction is created using a brake shoe, whereas in a disc brake it is created using brake pads. In a disc brake, the pads are pressed against the disc using calipers to stop the vehicle, whereas in a drum brake the shoes are pushed against the drum. The disc is usually made of materials such as cast iron, reinforced carbon, or ceramic composites.

Currently, disc brakes are used on the front two wheels of almost all modern vehicles. The Asia-Pacific region is the largest producer of automobiles across all regions and is hence the largest market for disc brakes. Europe is the second-largest market for disc brakes, as it is also the second-largest market for automobile production globally, and the use of disc brakes in this sector has increased rapidly in recent times. Road safety norms that have reduced stopping distances, together with the enhanced performance of vehicles, are the main reasons for the growth of the disc brake market in this region. The North American region is projected to surpass Europe in terms of vehicle production in the coming years. The growing demand for luxury and performance vehicles, along with safety norms governing the stopping of a vehicle, is projected to boost the disc brake market in the North American region. Other factors driving the growth of the disc brake market on a global scale are the growing demand for higher-performing vehicles and longer service intervals. However, low-end vehicle manufacturers avoiding the use of disc brakes due to the high cost involved are expected to restrain the growth of the market.

The global disc brake market was valued at $9.78 billion in 2013, and is anticipated to grow at a CAGR of 7.9%, to reach $14.3 billion by 2018.

1.1 Analyst Insights
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
2 Executive Summary
3 Market Overview
4 Disc Brake by Applications
4.1 Passenger Cars
5 Disc Brake by Geographies
5.3 North America
5.4 Rest of World
6 Disc Brake by Companies
6.1 Aisin Seiki Co Ltd
6.2 Kiriu Corporation
6.3 Nissin Kogyo Co. Ltd
6.4 Sundaram Brake Linings Limited
6.5 TMD Friction Group S.A.
6.6 Zhejiang Asia-Pacific Mechanical & Electronic Co. Ltd
6.7 Mando Corp.
6.8 Accuride Gunite
6.9 Haldex Foundation Brakes
6.10 Hyundai Mobis Module & Parts Mfg
6.11 Knorr-Bremse Commercial vehicle systems
6.12 Meritor Commercial Truck
6.13 TRW Chassis Systems
6.14 Robert Bosch Gmbh Automotive Technology
6.15 Akebono Brake Industry Co. Ltd
6.16 Automotive Components Europe S.A. (ACE)
6.17 Brembo S.P.A.
6.18 Continental Automotive Group
6.19 Federal-Mogul Vehicle Components Solutions
6.20 Nisshinbo Brake Inc.

Asia-Pacific Disc Brake

The Asia-Pacific disc brake market is projected to reach $7.9 billion by 2018 from $5.1 billion in 2013, growing at 9.2% annually.
The growing demand for higher performing vehicles and longer service intervals are some of the factors driving the disc brake market for this region, while concentration of the market with smaller and cheaper vehicles using drum brakes acts as a restraining factor. Europe Disc Brake The European disc brake market was valued at $2.78 billion in 2013, and is expected to grow at a CAGR of 5.8%. It is projected to reach $3.68 billion by 2018. North America Disc Brake The North American disc brake market is driven by factors such as fast growth in the luxury and sports vehicle segment and stringent emission norms. The low-end vehicles in the region refrain from using disc brakes due to its initial cost. The North American disc brake market is projected to reach $1.7 billion by 2018 from an estimated value of $1.2 billion in 2013, growing at 6.9% annually.
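As a quick sanity check on the growth figures quoted in this summary, the compound annual growth rate can be recomputed from the report's own 2013 and 2018 values; small differences from the stated percentages come from the rounded dollar figures.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"{cagr(9.78, 14.3, 5):.1%}")   # global market:  ~7.9% (matches the report)
print(f"{cagr(5.1, 7.9, 5):.1%}")     # Asia-Pacific:   ~9.1% (report states 9.2%)
print(f"{cagr(1.2, 1.7, 5):.1%}")     # North America:  ~7.2% (report states 6.9%)
```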
<urn:uuid:8c24ca9f-5d5e-4a08-a90c-9744f9d03dbd>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/disc-brake-reports-3391408288.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00019-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913215
985
3.03125
3
“If you have a spam in your inbox, there’s an almost one in ten chance that it was relayed from an Indian computer,” says Sophos’ Graham Cluley. According to the latest stats from the security company, India has overtaken the US to become the top spam-relaying country in the world. With 9.3 percent of all spam coming from computers located within its borders, India is followed by the US (8.3 percent), South Korea (5.7 percent), and Indonesia and Russia (both 5 percent). The shift is thought to be due to the fact that more first-time Internet users are getting online in growing economies, but are unaware of the need to keep their machines clean with antivirus solutions and unfamiliar with many of the online scams that lead to malware infection and to a computer becoming a spam-relaying machine. Sophos’ researchers say that, all in all, the global volume of spam email sent every day is in decline because spammers have discovered better platforms for targeting users: Facebook, Twitter and (lately) Pinterest. Also, ISPs around the world are becoming better at detecting and blocking regular email spam. Unfortunately, however welcome the decline in spam may be, the number of email phishing attempts and malware-laden emails is rising, so there is a whole new set of attack approaches that users must learn to detect.
<urn:uuid:9333ca8b-83ad-4f5b-b02f-000b7cc57787>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/04/23/india-becomes-top-spamming-country/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00073-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959395
295
2.53125
3
7 Simple Tips to Prevent Malware Infections
There are some simple, common-sense things you can do that can vastly improve your security posture and lessen the chances of a major malware infection on your system.
Antivirus Software – If you don’t have some kind of reliable antivirus software always running in the background, you should consider yourself already compromised. In fact, chances are very good that an intruder has access to your system and/or data right now. There are even free solutions like Avast and AVG which prevent many common threats, so there is absolutely no excuse not to have at least minimal protection. Whatever AV solution you use, set it up to accept automatic updates (very important) and schedule it to run scans daily. This step alone will protect you from over 90% of the threats out there.
Beware of Phishing and Spear Phishing Emails – A phishing email looks like it comes from a well-known organization, like PayPal, Amazon, or a national bank, containing a malicious attachment or a link for you to click which will open the door to an infection, or worse. Hackers blast phishing emails to thousands or millions of email addresses hoping someone clicks. Spear phishing campaigns, on the other hand, are targeted and designed to be much more effective against a specific organization, or even an individual. Hackers will often do extensive research to make their email very convincing, using personal or business information acquired from social networking sites like Facebook, LinkedIn and Twitter, or other publicly available information. They will usually make them look like they are coming from a trusted source, like family, friends, or internal personnel or departments. Sometimes they’ll be disguised as a notification from within the organization for an incoming fax, a scanned document, or a voicemail message, all designed to look “trustworthy” enough to entice the target to open an infected attachment or follow a link to a malicious site. The primary rule concerning email is to question everything. Don’t follow links in any email to check an account or verify the “problem” you are being notified about, and don’t download and open attachments you are not absolutely sure about. And don’t be afraid to make a phone call to whoever just sent you an unexpected email to verify it came from them. But don’t use the phone number included in the email – hackers set up boiler rooms to receive those calls! Look for bad English and grammar, as many of these campaigns originate in foreign countries where prosecuting offenders is much more difficult.
Pop-ups – Whenever you’re browsing the web and see a pop-up message appear, exercise extreme caution: pop-ups are a favorite means of delivering viruses. Even clicking the close button or the “x” may be enough to get you into hot water. A favorite tactic of hackers is generating messages that pop up and look legitimate, such as a warning that your Flash player is out of date and needs an update, prompting you to click for the update. DON’T DO IT. And never trust a pop-up that says you’ve been infected with something and to “click here” to get rid of it. Go to the source yourself with valid URLs you are sure about. Here are some useful links to check your Flash and Java versions:
Keep all software and applications up to date – A favorite exploit vector for hackers is out-of-date software.
Operating systems like Windows, and popular software like Flash and Java, are in use every day on billions of systems and devices worldwide, and that’s a numbers game hackers just love to take advantage of. So when your system or software tells you an update is available, take care of it right away. And remember to beware of pop-ups informing you about updates, as discussed above!
USB Sticks/Flash Drives – Small, convenient and with tremendous storage capacity, USB storage devices are a great way for hackers to get their foot in the door and even move past air gaps to more sensitive areas within an organization. It’s a common practice for hackers to load them with a virus and then leave them lying around in smoking areas, waiting rooms and break rooms, or even on subway or park benches, hoping someone will pick one up, plug it in and deliver the malicious payload. With a little thought and ingenuity in selecting “drop-spots,” it’s also a favorite method for hackers to target specific organizations or individuals. Never plug in any USB storage devices from questionable or unknown sources. You may just want to see what it is so you can return it, but the “reward” you may get is not anything you want. And beware of freebies and gifts. It was widely reported that at the 2013 G-20 Summit in Russia, USB devices like memory sticks and specially modified mobile phone chargers containing spyware, emblazoned with Russia’s G20 summit logo, were included in gift bags passed out to high-ranking delegates. Gifts like these can keep on giving – your data to hackers.
Web Habits – Some common sense goes a long way. Illegal download sites for software, games, music and movies are notorious conduits for hackers to deliver viruses and other dangerous malware, so always consider the source. If you have doubts, run a check on the URL to see a little of its history. We’ve even made a handy tool for you to use.
Passwords – Using the same password for everything is a very dangerous habit, and unfortunately, an all too common practice that hackers rely upon. Create strong passwords (a mix of letters, different case, numbers and special characters) and change them regularly. Using the same password(s) for many things makes it easy for a hacker to turn one stolen password into a skeleton key of sorts, allowing them to compromise a target on multiple fronts.
Following these seven simple tips will get you off to a great start to protecting your network and your valuable data, but it still won’t mean you are safe from every kind of threat out there. If you suspect your system is compromised, or if you’d like more information, contact Global Digital Forensics at 1 (800) 868-8189.
copyright 2013 by Global Digital Forensics. All rights reserved.
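For the password tip in particular, length and randomness matter more than clever substitutions, and a password manager or a few lines of code will produce something far stronger than anything memorable. A minimal sketch in Python (the character set and length are arbitrary illustrative choices, not a recommendation from the article's author):

```python
import secrets
import string

def strong_password(length=16):
    """Generate a random password containing all four character classes."""
    symbols = "!@#$%^&*()-_=+"
    alphabet = string.ascii_letters + string.digits + symbols
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in symbols for c in candidate)):
            return candidate

print(strong_password())
```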
<urn:uuid:f54ff502-88de-4ffd-b85b-249680474442>
CC-MAIN-2017-04
https://evestigate.com/7-tips-to-help-prevent-malware-infections/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00470-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928244
1,309
2.640625
3
What do field sales people, home teleworkers, medical personnel, and anyone working remotely from a central site have in common? A need for up-to-the-minute information. One of the most successful models for using the Internet for business is the information dissemination model. One of the most common methods for business communication today is email. Email can be sent and received in many ways: on pagers, cell phones, and the like. However, one email communication option that holds promise for increased and more timely information flow is the web-based email system. However, many businesses choose not to deploy web mail due to the perceived security risk of web-based applications in general. More specifically, they do not want to increase the risk of exposing corporate mail systems to external threats. Viruses, spam, worms, and other malicious attacks and non-malicious events can bring email infrastructures to their knees. With recent government legislation in countries such as the U.S., email confidentiality has become a growing concern. So, what approaches are there for deploying web mail systems in a secure manner? What are the options for web mail deployment? Understanding how web mail systems work can help in deciding if web mail can be securely deployed.
Web Mail Security Goals
Most web mail systems are designed using a multi-tiered architecture. Usually, a web server serves as a reverse proxy to a backend email server that actually services the user's mail requests. Most web mail systems use a separate database to store the mail versus the user authentication information. The main security issues for web mail are identity management, privacy, data integrity and availability. Part of identity management is user authentication. User identity verification is important because without verifying the identity of the sender or receiver, identity theft can occur. Fortunately, many web mail systems support a wide range of authentication schemes. For example, web mail user authentication can be done using authentication protocols native to the mail server O/S or 3rd-party authentication methods such as RADIUS, LDAP or SecurID. Privacy has to do with keeping information from unauthorized exposure. The primary method for ensuring privacy is the use of cryptography. Various cryptographic schemes are in use today. PGP and S/MIME, both widely implemented in the form of browser plug-ins and/or integration APIs, are widely used and well understood. Both PGP and S/MIME encrypt the message itself. SSL and IPSec encrypt at the lower session and network layers. SSL is the more widely used security protocol for basic web mail. Data integrity has to do with protection from unauthorized modification of email. Data integrity can be preserved by cryptographic techniques such as hashing and signing of messages. PGP and S/MIME provide the facility of digitally signing messages in such a way that tampering with the data will result in mismatched message hash results. Availability involves ensuring that the web mail system is as accessible as possible. The use of redundant servers, load balancing and failover, and server clustering are all common ways to increase the probability that the web mail system will be available at the right time. An added plus to redundancy is continuous availability even during maintenance windows. After a web mail user is positively identified and authorized, the next step is to initiate retrieval of that user's email.
Using a set of stored procedures and scripts, the web server formats the user's HTML requests so that the backend email server can serve up mail. The usual backend mail servers include Microsoft Exchange, Netware Mail or Lotus Notes. Each of these systems includes a web mail service that uses default ports of 80 for HTTP and 443 for HTTP/SSL. Most web mail policies require the use of HTTP over an encrypted channel such as Secure Sockets Layer (SSL) or the Secure Shell protocol (SSH). In rare cases, IP Security (IPSec) is used as the secure communication channel for web mail systems. After the user has finished sending, receiving and viewing mail, the user will either log out or simply close the web browser. What happens next depends on the specific session management design of the web mail solution.
The Cookie Problem
The issue with web mail session management centers on how session cookies are managed. Session cookies are files containing information about the state of the session. The web mail server records this information in a text file and stores this file on the web mail user's hard drive (web browser). The session cookie sometimes contains authentication information along with the usual information about such things as the last URL (page) that the user viewed. By design this makes it easier for the user to move from one page of mail to the next without having to re-authenticate for each page change. The problem comes, though, when the user "logs off". If the web mail system does not erase the session cookie stored on the user's computer, and if the user does not close their browser, an attacker can easily log back in to the web mail system while impersonating the authorized user. Why does this happen? Because the session cookie, which in some cases contains the authentication information, is still cached in the browser. This is a major security flaw in the design of several web mail systems. How does this happen?
1. The attacker presses the "back" browser button,
2. The attacker is presented with the web mail logon dialog screen (if using standard HTTP authentication),
3. The attacker simply presses the "OK" button – voila! The attacker is now logged in as the authorized user.
This vulnerability alone is enough for many security-conscious organizations not to allow web mail access unless some countermeasure to the "log off" problem is deployed. Small wonder that web mail access requests are greeted with suspicion. Fortunately, there are countermeasures available to reduce the risk of such attacks on web mail systems.
Web Mail Security Approaches
There are three ways that web mail security can be done:
1. Develop a solution in-house
2. Deploy a web mail security technology/product
3. Outsource to a 3rd party
Many businesses refuse to deploy web mail due to concerns over security issues inherent to web-based access to mail. Figure 1 highlights some of the issues that are, in fact, valid concerns. However, there are countermeasures that can be applied to mitigate most of the security issues. One such countermeasure is application knowledge. Having security-minded development staff who are properly trained in secure software development principles can minimize poor programming habits that introduce vulnerabilities into the web mail application. Resources for organizations that are establishing secure programming standards include Foundstone and online training available from the International Webmasters Association (IWA-HWG).
Also, a well-written guide to secure application development can be found here. These resources can be used to establish a baseline of secure programming ideas within an organization. The second approach is the use of security technology. Technology is available now that can be immediately deployed as a protective layer around a web mail infrastructure. Most of these products are based on the idea of a reverse proxy. The difference between products is the technology used to implement the reverse proxy functionality. For example, the IronMail email security appliance from CipherTrust uses a hardened version of Apache as the reverse proxy. The IronMail appliance features a protocol anomaly-based intrusion detection system built into the secure web mail application on the appliance. The IDS can detect several hundred known exploits unique to web mail. In addition, it can detect classes of exploits such as buffer overflow, directory traversal, path obfuscation, and malformed HTTP requests. As an all-in-one approach to web mail security, there are few products that do the job as well.
Outsourced Web Mail Service
A third approach to web mail security is via an outsourced or hosted web mail service. Yahoo and MSN provide webmail access. However, very few people using their services would rate such services as 'secure'. Hence the need for business-class secure web mail access provided by managed security service providers such as Co-Mail. The Co-Mail secure mail service, offered by Ireland-based NR Lab Ltd, provides a web-based secure email service with a user interface that can be used by anyone. The Co-Mail security architecture makes this service a good choice for any size of organization. Co-Mail allows a company to use its own or a Co-Mail-registered domain for mail routing. This mail service provides mail confidentiality using cryptography based on OpenPGP and SSL. Other security features of this online email service include rudimentary anti-spam, file encryption, and strong user authentication via (optional) Rainbow iKey support. Through an administrative web interface, an admin can register for the service and set up new users, among other housekeeping tasks. From the admin interface, organizational email statistics such as near-immediate or historical user account activity can be viewed. The administrator can customize the look and feel for end users by uploading company logos, modifying the background header, and selecting the header text color. In addition, a company can use its own domain name or become a subdomain of the Co-Mail service. Co-Mail can integrate into the end user's current email environment via downloadable proxy software called Co-Mail Express. Co-Mail Express is a lightweight software application that resides in the end user's desktop tray. Its job is to intercept mail directed to port 25 in order to encrypt/decrypt a mail message. Although this feature is not mandatory, some may find it helpful if web-based mail interfaces are not their cup of tea. Once an end user logs into the service, the user can perform the usual email tasks such as sending and receiving mail. In addition, the user can encrypt/decrypt files for secure storage using the Encrypt/Decrypt option within the Co-Mail web interface or the Co-Mail Express interface. The user can also manage the address book, export the address book, turn anti-spam on or off, set up auto-reply texts and so on.
Although it is very easy to use for small to medium user communities, traditional large enterprises may be hesitant to outsource their entire email service to a third party. ISPs in particular may want to think seriously about this service's value to their customers. This service is worth a look due to potential cost savings in up-front setup and ongoing maintenance. Lower cost and implementation speed are two reasons a large organization may want to outsource its email system to Co-Mail. However, the strength of the security employed by the service provider is also a central concern. Technical details for Co-Mail are available here. Web mail is becoming more acceptable as security awareness increases. While security knowledge helps, management commitment is key to the development of in-house web mail solutions. There is a trend in the secure web mail technology sector toward the use of appliances that provide web mail protection as well as meeting other email infrastructure security objectives. The appliance approach simplifies management and requires internal knowledge of how to handle web mail security. Service-based web mail reduces the up-front cost of self-deployment and ongoing management. Prefer service-based web mail providers that understand the threat environment of web mail and provide security and scalability that can respond to your business environment.
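The "log off" weakness described above is ultimately a server-side session management problem: if the server keeps honoring a session token after logout, cached cookies and the browser's back button will let whoever sits down next back in. A minimal, framework-agnostic sketch in Python (the storage and function names are illustrative only, not taken from any particular web mail product) shows the idea of invalidating the session on the server at logout:

```python
import secrets
import time

SESSIONS = {}  # server-side store: token -> session data

def login(username):
    """Create a server-side session and return the token to set as a cookie."""
    token = secrets.token_urlsafe(32)          # unguessable session identifier
    SESSIONS[token] = {"user": username, "started": time.time()}
    # The cookie carrying this token should also be flagged Secure and HttpOnly.
    return token

def logout(token):
    """Destroy the session on the server, not just in the browser."""
    SESSIONS.pop(token, None)

def is_authenticated(token):
    """Called on every request; a logged-out token is simply unknown."""
    return token in SESSIONS
```

With this arrangement, pressing the back button after logout replays a token the server no longer recognizes, so the attack sequence described above fails at step 3.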
<urn:uuid:9645fffe-a0a6-430a-ba9c-d5f352b667b0>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2004/01/27/secure-web-based-mail-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921085
2,282
2.578125
3
Glibc flaw leaves Linux open to hackers – including thousands of IoT devices. A major vulnerability in the GNU C Library could result in Linux-based IoT devices being hacked, according to security researchers. The flaw affects all versions of the library, known as glibc, since version 2.9. According to Fermin J. Serna, staff security engineer, and Kevin Stadmeyer, technical program manager at Google, a fully working exploit has been discovered, but a patch has also been made available. In a blog post, the engineers said the flaw could result in remote code execution on the target device. “We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit,” the engineers said. “The glibc DNS client side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack,” they added. While the patch is now available, the problem could be exacerbated as Linux forms the core operating system in many IoT devices, which are difficult to update in the field. The engineers said that the flaw was found ages ago but not fixed. “To our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July 2015,” the engineers said.
IoT security needs work
Ross Brewer, vice president and managing director of international markets at LogRhythm, said unless the new patch is installed quickly, hackers are going to have a field day accessing confidential information via computers, mobile phones or internet routers. “What’s worrying is that the bug has been around since 2008 and was identified last year, but overlooked as a low priority. In all honesty, it’s baffling that nothing was done about it sooner,” he said. “Mobile and internet-connected devices are now an essential part of business life, but there’s no doubt that they have opened up new ways for hackers to get their hands on company data.” Mark James, security specialist at ESET, told Internet of Business that hackers could implant code into the device’s memory when domain look-ups are performed. “Once compromised, remote code could be executed, thus taking complete control of the device; once this happens, realistically anything could happen at their command,” he said. Meanwhile, in related news, Forbes reports that Samsung’s SmartThings devices have a number of security vulnerabilities that remain unpatched, potentially allowing criminals to enter connected homes undetected.
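The quoted advisory points at getaddrinfo(), the standard name-resolution entry point, which means ordinary application code reaches the vulnerable path simply by looking up a hostname. As a rough illustration (a hedged sketch, not an exploit): on a typical Linux system, CPython's socket module resolves names through the C library, so even a few lines like the following exercise the resolver that the patch fixes.

```python
import socket

def resolve(hostname):
    # Resolving a name an attacker controls -- e.g. a hostname embedded in a
    # link, a config file, or a device's update URL -- is enough to reach the
    # DNS client-side resolver described in the advisory on unpatched systems.
    return socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)

if __name__ == "__main__":
    for family, _, _, _, sockaddr in resolve("example.com"):
        print(family, sockaddr)
```

The practical takeaway for IoT fleets is that updating glibc and restarting (or rebooting) long-running services is the fix; there is no safe way to keep running the old resolver.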
<urn:uuid:3a3d941b-edb2-490a-b9e2-5b33820d99c5>
CC-MAIN-2017-04
https://internetofbusiness.com/major-security-bug-affects-thousands-of-iot-devices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959496
592
2.59375
3
Enhancing Your Reading Skills In this week’s Study Guide, we’re going to talk about how to read. Now some of you may be thinking, “Dude, I already know how to read. I’m reading this right now!” I hear ya. What makes a reader a good one is not the ability to merely discern words on a page, though, but rather the ability to extract central meanings from them. This isn’t always easy. As esteemed reader and writer Thoreau said, “To read well, that is, to read true books in a true spirit, is a noble exercise, and one that will task the reader more than any other exercise which the customs of the day esteem. It requires a training such as the athletes underwent, the steady intention almost of the whole life to this object.” So, presuming a certain grasp of the mechanics of reading on your part, we can proceed into the methods that make for a good reader. To be sure, this isn’t necessarily someone who has a voluminous vocabulary or can take in more than a thousand words per minute. And you certainly don’t have to be able to make it through James Joyce’s book “Ulysses” — renowned for its difficult passages and cumbersome style — to consider yourself a good reader. If after reading, say, this article, you can immediately boil down its theme down to one or two short, relatively simple sentences, then you probably have the makings of a good reader already. This ability to wring out the essential significance of vast collections of words does not usually come from some innate understanding of how language works and how it’s used to express ideas effectively (although it can come from that). Good readers typically arrive at their status through tried-and-true methods of gleaning key points and lessons from an overall corpus. One of the ways in which they do this is by marking major parts of a printed text. Speaking of which, don’t ever loan a book to a good reader, unless you want your copy returned with dog-eared pages, highlighted paragraphs, and sentences and phrases underlined in pen. Also, most good readers will tackle a book or manual or guide or whatever with a dictionary and other references. Again, good readers are obsessed with meaning, and they don’t want their understanding of something to be incomplete just because they skipped over a word they didn’t understand. Go, and do thou likewise. At the very least, keep a dictionary on hand. If you’re laboring through an especially technical text, perhaps you should have a Web browser open to www.whatis.com at a computer nearby. Additionally, good readers are typically good writers (and vice versa). I’m not saying you’ve got to try to write a novel or anything like that. But if you concentrate on writing well in e-mail messages, forums, blogs and other forms of media — with proper consideration of correct grammar, clarity of communication, and solid organization of points and facts — then you’ll also find yourself absorbing information in print as never before. You’ll also find yourself realizing how many bad writers are out there, which brings me to my last point: Good readers usually don’t waste much time with poorly written text. Life’s too short to try to toil through shoddily composed and organized materials. Be sure to skim through a book before paying money for it if you can. And if you should find that you actually paid money for a horribly written book, contact the company and ask for your money back. 
You might not get your money back — caveat emptor, always — but your complaint will send a clear message: Bad writing will not be tolerated by good readers.
<urn:uuid:5f1dc6ac-aa73-4520-9c0f-1e05ecf1d9b5>
CC-MAIN-2017-04
http://certmag.com/enhancing-your-reading-skills/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00524-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947558
795
3.453125
3
Ultimately supercomputing is a visual endeavor. Turning the so-called “data deluge” into pretty pictures and animations has always been the most straightforward way to extract insight from HPC simulations. But with the size of simulation datasets growing in tandem with the size of supercomputers, visualization has never been more challenging. Visualization at scale is a problem peculiar to HPC, and therefore solutions are sometimes hard to come by. Generally, users have a choice of buying (or building) a domain-specific solution, purchasing a proprietary general-purpose product, or opting for an open-source solution. It’s in the latter category that Kitware has made its mark. Founded in 1998, the company built a business around supporting the Visualization Toolkit (VTK), an open-source software library designed for computer graphics, image processing and visualization. VTK was born in 1993 at the GE Research Center in Schenectady, New York, as a software demonstration package that accompanied a visualization textbook, titled “The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics.” Both the software and the book were developed by GE employees Will Schroeder, Ken Martin and Bill Lorensen. After the book was published, interest in the software took off, and Schroeder and Martin went on to start up Kitware to support the burgeoning VTK community. Although Kitware’s original business model centered on visualization consulting services, the company has since expanded, adding a group that specializes in computer vision, one that does scientific data publishing, and another that focuses on software process and development. But it’s the scientific computing group at Kitware that is aimed squarely at the HPC space. After Kitware was founded, Sandia and Los Alamos National labs contracted the company to help develop visualization tools for the simulations being run on the supercomputers at that time. A lot of the existing tools back then were either completely serial or used the shared-memory model of parallel computing and could not handle the data being produced by these large-scale supercomputer clusters. To fill that need, Kitware extended VTK to support distributed-memory parallel architectures. As part of that work, they developed ParaView, an open-source visualization app that could be applied across a range of HPC-type applications. The initial funding for the effort was provided by a three-year contract from the DOE’s Accelerated Strategic Computing (ASCI) program, and in 2002, the first release of ParaView was made public. DoD labs, including the Army’s Engineer Research and Development Center (ERDC) and the Army Research Lab (ARL), subsequently kicked in additional money to expand the scope and functionality of the software. ParaView, which itself is based on VTK, has become one of the premier general-purpose applications for HPC visualization. Today, ParaView can scale up and down across HPC infrastructure, from workstations to the largest supercomputers. When running on a supercomputer or (more commonly) a visualization cluster connected to one, ParaView runs in the client-server mode. In this case, the backend visualization runs in as a distributed parallel application with just user interface on the client side. ParaView has been ported to the Blue Gene architecture, the Cray XT line of supercomputers, Lawrence Livermore’s ASC Purple, as well as all commodity-based Linux and Windows clusters. 
On the client side, the software supports the usual suspects: Windows, Linux and Mac OS. The largest users of VTK and ParaView are still in the DoD and DOE communities, which is not surprising when you consider how much of the big supercomputing hardware these two agencies own. Kitware has also worked with a few organizations in Europe, including the French electricity provider, Électricité de France (EDF), considered to be the world’s largest utility company. EDF also contributes to the VTK code base and has built its own tools on top of the library. ParaView is especially good at working with Finite Element Analysis (FEA) codes, making it well-suited to a variety of applications at DOE and DoD computing centers. At Sandia, it’s being used to visualize the results of CTH, a shock physics code, as well as a number of material sciences applications used for nuclear weapons research. At Los Alamos, ParaView is being used across the open science domains, including cosmology, magnetohydrodynamics, and wind turbine modeling, to name a few. ParaView is also being applied to some of the climate simulations run at the lab, and the visualization results may wind up in the next Intergovernmental Panel on Climate Change (IPCC) report. Besides the government space, Kitware also targets academia and commercial markets. Industry customers include oil & gas, pharmaceutical, and medical companies. However, because of the economic downturn, the majority of the company’s revenue is currently derived from the federal sector. Even in boom times, though, more than half of Kitware’s revenues come from government contracts. According to Dr. Berk Geveci, who leads Kitware’s scientific computing group, approximately a quarter of the company’s revenue comes from small business grants issues by government agencies. These fall under the Small Business Innovation Research (SBIR) and the Small Business Technology Transfer (STTR) programs. Because a lot of these grants involve development of the open VTK and ParaView source code base, the business model becomes a virtuous circle for both Kitware and its customers. “We use those [grants] to develop our tools and toolkit, to add more functionality and make them available to the wider community,” explains Geveci. “We’ve been lucky that a lot of our collaborators understand the value of open source.” One of Kitware’s more recent efforts is to provide an API for ParaView so that it can be directly coupled to a simulation code and run in the same process space. The goal in coupling is to avoid I/O as much as possible, keeping what would have been post-processing inside the simulation itself. At the same time, doing the visualization in-situ makes better use of the available computational resources. According to Geveci, the nice thing about the ParaView API is that you don’t have to change the internal data structures. You just add ParaView calls in the application to do the initialization, visualization functions and then finalization. The ParaView library is currently being used with PHASTA, a CFD simulation code that can scale extremely well. Early testing has been performed on an IBM Blue Gene/L supercomputer at the Rensselaer Polytechnic Institute. At some point they would like to run ParaView in this coprocessing model on the Jaguar supercomputer at Oak Ridge National Lab. “Our goal is to scale that functionality all the way to petascale and beyond,” says Geveci. ParaView is not alone in the open source arena. 
VisIt was originally developed at Lawrence Livermore National Laboratory under ASCI and is now supported by the DOE’s Scientific Discovery Through Advanced Computing (SciDAC) program. Like ParaView, VisIt is based on the VTK library. VisTrails, 3DSlicer, MayaVi, and OsiriX are other visualization apps developed with VTK, but they tend to be more specialized and are not targeted at large-scale HPC. CEI’s EnSight is the big competitor to ParaView in the commercial arena, especially in verticals like aerospace, where Kitware is trying to make inroads. Compared to open source visualization, EnSight has been around much longer and is more fully featured, but is less common at the big government labs. “Government and academic supercomputing sites tend to prefer open source,” notes Geveci. “So you won’t necessarily see EnSight in many of them. In industry, EnSight may be on more machines than VisIt and ParaView. Hopefully, in time, we’ll change that too.” A future area of interest for Kitware is support for distance visualization. Being able to view the results of a simulation without having to move the data off the supercomputing site is becoming more necessary as datasets grow in size. Along those same lines is the concept of collaborative visualization, enabling multiple researchers at different sites to share results and look at the data together. A lot of this will be enabled by Web-based interfaces, which are slowly edging out the traditional desktop GUI. “The idea is to share data and visualization of data as a larger community,” explains Geveci. “Enabling sharing of data and results through distance visualization and collaboration is very important to us and I think is going to be important to the community at large.”
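For readers who want to try the client-server workflow described earlier, ParaView ships with a Python scripting layer (paraview.simple). The sketch below is a rough illustration only: the hostname, port, and file path are placeholders, and it assumes a pvserver is already running on the remote visualization resource.

```python
from paraview.simple import (Connect, OpenDataFile, Show, Render,
                             SaveScreenshot)

# Attach the local client to a remote pvserver (client-server mode).
Connect("viz-cluster.example.org", 11111)   # hostname and port are illustrative

# Load a dataset that lives on the server side and render it there;
# only the rendered image comes back to the client.
reader = OpenDataFile("/scratch/run42/output.vtu")  # hypothetical file path
Show(reader)
Render()
SaveScreenshot("run42_overview.png")
```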
<urn:uuid:22e139cc-35b1-4ad4-8608-6ebd23d9c766>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/06/22/high-end_visualization_the_open_source_way/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00340-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945271
1,890
2.65625
3
Defining the Issue
The Internet privacy debate is concerned with how companies use consumer data for the purpose of improving their customers' online experience. To ensure consumer confidence and expand e-commerce, businesses must be committed to protecting their customers' privacy. Privacy is a global issue without national borders, and policymakers around the world are struggling to determine the best way to protect consumers' online privacy. Sensitive and confidential data is routinely transmitted across computer networks. Whether it is through e-commerce transactions or simple online polling, businesses collect and use data to maximize consumers' online experience. For example, online booksellers collect data to make reading recommendations; advertisers profile users based on Internet-surfing habits; and media sites collect data that allow visitors to customize the news they receive. Businesses must address the integrity of consumer data to ensure consumer confidence, or they will not fully realize the benefits of e-commerce. At the same time, they must provide consumers with the best possible online experience and allow them to reap the benefits of online interactions. The global nature of the Internet complicates privacy issues because of cultural and geographic differences related to privacy, security, and the role of the government. Cross-border approaches to dealing with privacy include voluntary consumer protection cooperation, multilateral treaties for criminal law enforcement, cyber incident response teams, and consumer education and awareness. To address the privacy issue, state, provincial, and national governments must develop policies that meet the needs of a global economy. Overly burdensome privacy policies can become barriers to trade, preventing the free flow of information across borders. Stringent rules impeding the cross-border flow of data may hinder new technology development as well as educational, commercial, and entertainment applications.
- Where governments do regulate to address privacy concerns, regulation should be based on internationally recognized principles and should not mandate the use of specific technologies or business models.
- Cisco believes that industry self-regulation can be effective in protecting privacy, strengthened by innovative tools that provide consumers with choices to protect their personal data and understand how it is collected and used.
- Several ambitious and successful industry-led initiatives, such as the Online Privacy Alliance and TRUSTe, have achieved a reasonable balance between consumer protection and business requirements.
Online Privacy Alliance
Electronic Privacy Information Center
Cisco Privacy Statement
<urn:uuid:7e79a4da-884d-4f97-a5ab-0e32b3df1f10>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/government-affairs/government-policy-issues/privacy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894985
477
3.203125
3
The first Mac malware of 2017 was discovered by an IT admin, who spotted some strange outgoing network traffic from a particular Mac. This led to the discovery of a new piece of malware unlike anything I’ve seen before and the first new piece of malware for the Mac in 2017. If you follow cybersecurity news, you may have heard of the latest Linux exploit referenced under CVE-2016-5195, which has been dubbed Dirty COW. The name is derived based on the exploitation of the copy-on-write (COW) mechanism in Linux. What is UEFI and its predecessor the MBR? What challenges are there to get a dual boot environment with an alternative operating system? We often hear about botnets (networks of infected computers) being used to send out spam, perform Distributed Denial of Service attacks or other nefarious activities by the bad guys. Well, an unidentified researcher thought there was much more that could be done with a botnet and took on an unprecedented mission to map out the…
<urn:uuid:8e4bc757-ca5d-48df-b48f-6e80bd4c799f>
CC-MAIN-2017-04
https://blog.malwarebytes.com/tag/linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00460-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957505
208
2.765625
3
Math Comprehension Made Easy June 16, 2012 Want to dive into analytics as a data scientist? Get started with Stonehill College‘s “How to Read Mathematics.” The well-structured article by Shai Simonson and Fernando Gouvea details the reading protocol that will allow anyone to get the most out of reading mathematical explanations as opposed to, say, reading poetry or fiction. The authors explain: “Students need to learn how to read mathematics, in the same way they learn how to read a novel or a poem, listen to music, or view a painting. . . . Mathematical ideas are by nature precise and well defined, so that a precise description is possible in a very short space. Both a mathematics article and a novel are telling a story and developing complex ideas, but a math article does the job with a tiny fraction of the words and symbols of those used in a novel. “ The article goes on to explain common mistakes math readers make, such as missing the big picture for the details, reading passively, and reading too fast. A wealth of tips for understanding math texts follows, including examples. Much of this is information I knew, but had trouble articulating when my son was in pre-calc. How I wish I had had this piece then! For anyone looking at a math-heavy field like data analytics, this article is a must-read. Cynthia Murrell, June 16, 2012 Sponsored by PolySpot
<urn:uuid:1f3765be-6686-410b-82e1-2bdf77ec9f0f>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2012/06/16/math-comprehension-made-easy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955154
308
3.359375
3
Student Loans: An Overview If you’ve read the news lately, you know money is tight. The economy’s not looking good, a lot of people have lost their savings and, not surprisingly, debt is rising. Further, banks are reluctant to loan out money. If you’re currently in school or looking to matriculate, you may be at a loss as to how to pay for your education. But fear not: You still have many resources at your fingertips. Read on for details about everything from what kinds of loans are available to what you should do if you have existing payments. If You Need a Loan According to FinAid.org, a public service site, there are three main types of academic loans: federal student loans, parent loans and private student loans. Federal student loans typically have low interest rates and don’t involve credit checks. There are two types of federal loans: Stafford and Perkins. Stafford Loans can be either subsidized, meaning the government pays the interest while the student is in school, or unsubsidized, meaning the student pays the interest, which may or may not be deferrable. Interest in 2008-09 is set at 6 percent, but it gets progressively lower through 2012. (For example, rates for 2009-2010 are set at 5.6 percent.) According to FinAid.org, about two-thirds of subsidized Stafford Loans are given to students from families with household incomes of less than $50,000. You can read more about them here. Perkins Loans are similar to Stafford Loans but are awarded to undergraduate and graduate students with “exceptional financial need,” according to FinAid.org. All Perkins Loans are subsidized with an interest rate of 5 percent. Next, parents can request funds either from the federal government or via a private source such as a bank. These are officially known as Parent Loans for Undergraduate Students (PLUS). PLUS Loans from the government — referred to as Direct PLUS — offer an interest rate of 7.9 percent, while loans from private sources (FFEL PLUS) have a fixed rate of 8.5 percent. Interest is not subsidized, and PLUS loans carry an origination fee of 4 percent. Learn more about PLUS loans here. The final option is for the student to take out a loan from a private lender. These are known as private education loans, or alternative education loans. Families typically turn to these loans when the federal variety do not provide enough funding. Private loans cost more, however. The interest rates and fees depend on the student’s credit score, and if your score is less than 650 — using the FICO (Fair Isaac Corp.) standard — you’re unlikely to be approved. However, you can boost your chances by including a co-signer, as this person’s credit score is factored into the decision, as well. Finally, if you need funding, don’t forget to look at scholarships and federal grants. Scholarships typically are based on academic prowess, athletic achievement or financial need. Grants typically come from the government and can be based on a variety of factors, similar to scholarships. Relevant grants include the Federal Pell Grant, Federal Supplemental Educational Opportunity Grant (FSEOG), Academic Competitiveness Grant (ACG) and the National SMART Grant. Private loan giant Sallie Mae includes more information about grants and scholarships, including how to apply, here on its Web site. And don’t hesitate to try, even if you think your chances are slim. Since scholarships and grants are basically free money — they don’t have to be repaid — it’s worth a shot. 
If You Have an Existing Loan
If you’re struggling to repay an existing loan, the first step is to understand your options. There are different kinds of repayment plans, such as standard, graduated or income-based. You also can look into loan consolidation to lump any and all existing payments together. Or you might want to prepay some of the loan to avoid extended interest. Study up on your rights and responsibilities as a borrower. You can check out this page on the Sallie Mae Web site to get started. If you have additional questions, the Student Loan Borrower Assistance Project offers advice and gives step-by-step instructions on how to solve loan problems. Now is the time to take a cold, hard look at your student loans. If you apply the same dedication and attention to detail to researching your funding as you do to your schoolwork, you can really maximize your savings. – Agatha Gilmore, email@example.com
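To put the subsidized/unsubsidized distinction in dollar terms, here is a rough back-of-the-envelope calculation in Python (the $5,000 balance and four-year in-school period are hypothetical, and simple-interest accrual is a simplification of actual loan terms):

```python
def in_school_interest(principal, annual_rate, years):
    """Interest that accrues (simple, non-compounding) while the student is enrolled."""
    return principal * annual_rate * years

# On a subsidized Stafford loan the government covers this amount while
# the student is in school; on an unsubsidized loan it falls on the borrower.
print(in_school_interest(5000, 0.06, 4))   # 1200.0 dollars at the 2008-09 rate of 6%
```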
<urn:uuid:f16798af-4612-4e04-949e-20315105f2ec>
CC-MAIN-2017-04
http://certmag.com/student-loans-an-overview/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00212-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959237
958
3.21875
3
Physicists at The University of Texas at Dallas built an invisibility cloak they can use to hide things by superheating a sheet of carbon nanotubes, which changes in appearance from a transparent sheet of incredibly expensive plastic into what looks like a heat-shimmer mirage in the desert, but is as convenient and portable as a sheet of superheated carbon nanotubes. You can see the video below, or read all the details of the process and principles at Nanotechnology, which is published by the Institute of Physics, a professional association for physicists and invisibility cloak makers. The mirage effect is an optical illusion that happens when light waves bend, as they do while passing from cool air high above the ground to hotter air just above it. Because human brains prefer that light would always behave the way they expect it to, we don't interpret the image as reality bending. Instead, our optical-processing centers pick the image it already understands that most closely resembles what we see in the bent light – usually water -- and just decides that's what it is. Our brains do this because our eyes and the portion of our brains that drives them evolved from the simple ability to detect light into sophisticated image processors during millions of years of evolution in hundreds of thousands of species that progressed from the sea to the land and, ultimately, to the asphalt, without ever learning how to look at bent light or superheated carbon nanotubes the right way. Carbon nanotubes, btw, are long single molecules of carbon wrapped in a clear material that keeps each strand of carbon in place and insulates it from the air. The atomic bonds holding each strand of carbon to another hold together so tightly that a sheet of them only one nanotube (one molecule) thick is stronger than steel would be if you could slice it that thin, but has the density of air. The heat-shimmer-mirage invisibility effect is created by running a current through a stack of sheets carefully arranged so the carbon nanotubes line up neatly. Under current the sheets heat up and cool down so quickly the effect appears and disappears as quickly as turning out a light. The weird thing – weirder even than the IOP referring to carbon nanotube sheets as CNTs, as if carbon nanotubery were common enough for routine TLA-ification – is that the invisibility effect works better under water than in the air, because the water helps the heat dissipate more quickly. While the invisibility thing is cool, it's not really invisibility except to people stumbling through the desert dying of thirst or stumbling for other reasons that would cause them not to question the appearance of a desert heat mirage right where something valuable – an enemy tank or adolescent wizard, say – would most likely be found. In addition to the proto-invisibility, according to The University of Texas at Dallas research team leader Ali Aliev, the experiment probes the behavior of carbon nanotubes in ways that will make it easier to develop them into thermoacoustic projectors for loud speakers and for sonar devices that produce sound by electrically stimulating nanotubes rather than by having sailors yell into huge loudspeakers, as they do now. Which is kind of a disappointing way to use even a bad invisibility cloak, if you ask me. But covert creeping around castles at night is not everyone's idea of fun. Not even when there are millions to be made from the merchandising rights. 
<urn:uuid:f71e7589-2031-40e2-a73a-a76e44d56ebc>
CC-MAIN-2017-04
http://www.itworld.com/article/2737842/consumer-tech-science/researchers-use-mirage-effect-for-invisibility-cloak--prefer-it-as-a-noisemaker.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943145
771
3.375
3
In the previous installment, we looked at and discussed strategies for business simulation and the infrastructure needed to make such initiatives successful. Now, we’re ready to discuss some practical examples of business simulation. Imagine a mail order company selling products together with the necessary financing. Assume they’re considering replacing one of their credit risk models while at the same time trying to boost sales of a certain widget. Let’s further assume that their overall decision strategy to determine the best product to offer to a customer may be overruled in circumstances where the risk model determines that additional selling is not desirable because the underlying loan is too likely to default. In the example, this company makes two changes to its in-production decision strategy. First, they replace the existing credit risk model with a new version. Second, they multiply the outcome of the propensity model for the widget by some factor greater than 1 to make it more likely to be prioritized as the product to offer. As the next step, they want to apply this revised strategy to a selection of the recorded data. As stated above, every single product recommendation and every credit risk evaluation has been recorded. Because their mail order business sells fashion in addition to other products, it is sensitive to seasons. So to understand the new strategy’s effect during the summer, they decide to apply the new strategy to last year’s interactions over the same period and study the deltas.
Slice and dice data
Once that slice of recorded data has been loaded, the company may take a sample from it. With so many millions of interactions recorded, a large enough sample will be representative of all of them. They will then proceed to apply the revised strategy, with all its predictive propensity models, risk models, and rules, and look at the distribution of the results. How many more widgets will be sold? It’s possible to simulate this because the company is using propensity models to predict the likelihood of a customer accepting an offer for a widget. Thus, the change they made to boost the offer rate of the widget should see more (simulated) interactions where the widget is being offered and accepted by the customer. Unless, that is, widgets are expensive and it turns out the new risk model will reject more widget offers in favor of lower-prioritized products (per the new strategy) that keep the company’s exposure within the desired bandwidth. The company can thus study both metrics. How many widgets would we have sold if this had been the marketing strategy used during last year’s summer season? And how many write-offs on the financing would have been the result of using the new risk strategy alongside the new, Go Widget, sales strategy? If the metrics show favorable improvements, the new strategy can be taken into production. If not, the marketing and sales teams and their colleagues from the risk department can tweak their strategies and see if it makes the desired difference when applied to last year’s interactions.
Cause and effect
This simulation is not perfect. For instance, last year’s economy may have been worse than this year’s, allowing more customers to pay back their loans now. Unless some economic data is part of the credit risk strategy, the overall strategy may not be sensitive to it and the simulation will therefore miss it. And a causal chain of events will also be increasingly hard to predict.
If the revised strategy would have offered product X instead of Y to a customer, the actual service interaction about a problem with product Y which is part of the recorded data didn’t actually happen. So while it’s quite possible to predict the one-time effects of a strategy change, simulating the downstream effects of those new outcomes quickly becomes less useful conjecture. There are other caveats as well, a bit too detailed to cover here. However, don’t compare this with a hypothetical oracle that can tell you exactly how your strategy will fare, compare it to the common practice of making changes and hope for the best. The more explicit a company is around the decision strategies that govern its processes – customer processes or otherwise – the fewer surprises. And when those decisions are based on predictive analytics and carefully recorded data it becomes possible to simulate future business outcomes by replaying the past, and making the effect of changes, even in complex strategies, more predictable.
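To make the replay idea concrete, here is a deliberately simplified sketch (the field names, boost factor and risk threshold are hypothetical; a real decision-management platform would supply the models and the recorded interactions) of scoring last summer's recorded interactions under a revised strategy that boosts the widget's propensity score, subject to a risk-model veto:

```python
import random

def best_offer(interaction, widget_boost=1.2, risk_cutoff=0.05):
    """Pick the top-scoring product whose predicted default risk is acceptable."""
    scores = dict(interaction["propensities"])       # recorded model outputs per product
    scores["widget"] = scores.get("widget", 0.0) * widget_boost
    for product in sorted(scores, key=scores.get, reverse=True):
        if interaction["default_risk"].get(product, 1.0) <= risk_cutoff:
            return product                            # risk model did not overrule it
    return None                                       # nothing safe enough to offer

def replay(recorded_interactions, sample_size=100_000, **strategy):
    """Re-run the revised strategy over a sample of last summer's interactions."""
    sample = random.sample(recorded_interactions,
                           min(sample_size, len(recorded_interactions)))
    offers = [best_offer(i, **strategy) for i in sample]
    return offers.count("widget") / len(offers)       # simulated widget offer rate
```

Comparing this simulated offer rate (and the corresponding simulated write-offs) against the same sample scored with the unmodified strategy yields exactly the kind of delta described above.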
<urn:uuid:fb3bef04-41a2-4e3b-a9b4-49ffeafd7f97>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475992/business-intelligence/replay--the-value-of-business-simulation--part-2-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95663
880
2.859375
3
If you needed any more evidence of how important unmanned aircraft have become to US military operations, the US Air Force today said drones have amassed over one million combat hours flown. While that number is impressive, it took the planes known as Global Hawk, Predator and Reaper almost 14 years to do it, but it could take only a little over another two years to cross the two-million mark, according to Air Force officials. More on unmanned aircraft: DARPA challenge offers public $100,000 for small unmanned aircraft "Global Hawk, Reaper, and Predator are worth 1.2 million and change, in combat hours. A weapons system that has revolutionized the course of this war," said James G. Clark, the deputy chief of staff for Intelligence, Surveillance and Reconnaissance at the Pentagon, in a statement. "Predator, Reaper and Global Hawk are now the most requested capability of today's warfighters in conflict. Where a normal weapons system would take years and months, the track record for these aircraft can be measured in hours and days." The Air Force Times recently noted: In March, Air Force unmanned aircraft surpassed 1 million combat operations. Its Predator and Reaper aircraft have tracked 19,000 targets. Even fighting wars in Afghanistan and Iraq, the Air Force has had to send unmanned aircraft to aid disaster relief in Haiti and Japan and help allies over Libya. Demand will only grow. In five years, the amount of data transmitted by Air Force unmanned aircraft is projected to reach an exabyte a day. That's 1.1 billion gigabytes, equivalent to 228.5 million DVDs. The U.S. Army said it went past the one million unmanned-hour mark in April of 2010. At the time, the DefenseTalk.com site noted that the growth in the use of Army drones was staggering -- the Army inventory jumped from a handful of systems in 2001 to roughly 1,000 aircraft by 2010 and is now logging up to 25,000 UAV flight hours per month in support of combat operations in Iraq and Afghanistan.
Now that we know the basics of standard IP access lists from previous posts, let’s learn some more about them. As our first example, we’ll write an ACL 6 that permits packets sourced by the host with IP address 192.168.100.123, thus: - Router(config)#access-list 6 permit 192.168.100.123 We could also do this using a wildcard mask: - Router(config)#access-list 6 permit 192.168.100.123 0.0.0.0 Remember that a zero in a wildcard mask bit position specifies a match in that bit of the address. Thus, a mask of all zeros in dotted-decimal (which represents 32 binary zeros) means match all bits of the address exactly. If you omit the wildcard mask (as in the first example), a WCM of all zeros is assumed, thus the two versions of ACL 6 are functionally equivalent. Interestingly enough, we can also write this ACL line a third way, by using the keyword “host”: - Router(config)#access-list 6 permit host 192.168.100.123 Note that when using this method, the keyword “host” is placed before the address, and that no wildcard mask is used. Thus there are three functionally equivalent methods for specifying a single host address in an ACL, and the router doesn’t care which one you use: - Specify the address, without a WCM - Specify the address, followed by a WCM of all zeros - Precede the address by the keyword “host”, with no WCM used In a standard ACL (the type we’ve examined so far), I generally use the first option, because it’s brief, concise and specific (in other words, easy to type and read). The second option gains us nothing, so I never use it. The third option is also commonly used. Now, let’s put our ACL to work. This time, though, instead of using it to control user data flowing through a router’s interfaces, we’ll use it to enforce security on a router (or an IOS-based switch). To do this, instead of placing the ACL in service by using the “ip access-group” command on an interface, we’ll use the “access-class” command on the vty lines, like this: - Router(config)#line vty 0 4 - Router(config-line)#access-class 6 in Remember that inbound Telnet sessions are via the vty (virtual terminal) lines. What the above commands do is place ACL 6 in use inbound on the vty lines, which has the effect of constraining inbound Telnet traffic to hosts permitted by ACL 6 (in this case, the host with address 192.168.100.123 only). Note that this ACL only affects Telnet traffic targeted to this router. It has no effect on traffic flowing through the router. Of course, you can also build more sophisticated ACLs using wildcard masks, and use them to control vty access. An example would be: - Router(config)#access-list 7 deny 10.0.0.0 0.255.255.255 - Router(config)#access-list 7 deny 172.16.0.0 0.0.15.255 - Router(config)#access-list 7 deny 192.168.0.0 0.0.255.255 - Router(config)#access-list 7 permit any - Router(config)#line vty 0 4 - Router(config-line)#access-class 7 in ACL 7 would permit any public address to Telnet to this router, but block attempts at Telnet from any private address. Note that we are placing the ACL inbound on the vty lines, which controls Telnet access to the router. If you place the ACL in service outbound on the vty lines, it will affect the router’s being used as the “middleman” in a string of Telnet sessions. For example, let’s say that R1 wants to Telnet to R2. The ability of R1 to do this is controlled by R2’s inbound vty ACL. If there is no inbound vty ACL on R2, then any host can freely Telnet into R2 (assuming that R2’s vty password is known, of course). 
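As a quick aside before continuing with the R1/R2 example, here is a small sketch of how the wildcard-mask matching described earlier can be modeled; it is my illustration, not part of the original lesson. A zero bit in the mask means "must match", and the implicit deny sits at the bottom of every ACL.

import ipaddress

def matches(source_ip, entry_ip, wildcard="0.0.0.0"):
    # bits that are 0 in the wildcard mask must match exactly
    src = int(ipaddress.IPv4Address(source_ip))
    ent = int(ipaddress.IPv4Address(entry_ip))
    wcm = int(ipaddress.IPv4Address(wildcard))
    care = 0xFFFFFFFF ^ wcm
    return (src & care) == (ent & care)

def evaluate(acl, source_ip):
    """acl is a list of (action, entry_ip, wildcard) tuples, checked top-down."""
    for action, entry_ip, wildcard in acl:
        if matches(source_ip, entry_ip, wildcard):
            return action
    return "deny"                        # the implicit deny at the end of every ACL

acl7 = [("deny", "10.0.0.0", "0.255.255.255"),
        ("deny", "172.16.0.0", "0.0.15.255"),
        ("deny", "192.168.0.0", "0.0.255.255"),
        ("permit", "0.0.0.0", "255.255.255.255")]    # "permit any"

print(evaluate(acl7, "192.168.100.123"))   # deny   (private address)
print(evaluate(acl7, "203.0.113.9"))       # permit (public address)

With that matching logic in mind, back to R1 and R2.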
Now, assuming that R1 has used Telnet to access R2, the ability of R1 to then Telnet onward from R2 to another host would be controlled by R2’s outbound vty ACL. If there is no outbound vty ACL on R2, then R1 could freely Telnet to any other host via R2 (assuming that the target host’s Telnet password is known). Note that when the ACL is used inbound on the vty lines, the ACL specifies source addresses (from which hosts are inbound Telnet into our router allowed). When the ACL is used outbound on the vty lines, the ACL specifies destination addresses (to which hosts are outbound Telnet sessions allowed). The latter is an unusual usage of a standard IP ACL, which normally specifies source addresses only. Okay, now it’s Quiz Time: Let’s suppose that the following commands are placed on our router. What effect do they have? - Router(config)#access-list 8 permit 172.16.1.1 - Router(config)#access-list 9 permit 10.1.2.3 - Router(config)#line vty 0 4 - Router(config-line)#access-class 8 in - Router(config-line)#access-class 9 out Since ACL 8 is placed inbound on the vty lines, it controls which hosts can Telnet into our router. In this case, only the host with address 172.16.1.1 will succeed (don’t forget about the implicit “deny any” at the bottom of the ACL). Now, assuming that it has established a Telnet session with our router, to where could it Telnet from our router? That’s controlled by ACL 9, which is in effect outbound on the vty lines. Because of ACL 9, if host 172.16.1.1 accesses our router by Telnet, it can only start Telnet sessions with host 10.1.2.3 while using our router as the “middleman” (again, don’t forget the “implicit deny” at the end). Note that host 172.16.1.1 (or any other host) can still Telnet through our router to anywhere. The ACLs placed on our router’s vty lines are only controlling Telnet sessions for which our router is an endpoint. In other words, the “access-class” statements on the vty lines have absolutely no effect on data passing through our router, but only on Telnet sessions terminating at (or starting from) our router (or switch). In addition to controlling Telnet access (TCP port 23), “access-class” statements on vty lines also affect SSH sessions (SSH is the encrypted version of Telnet, and it uses TCP port 22). Finally, remember that ACLs can be used to control Telnet or SSH access to and from IOS-based switches, as well. So why use “access-class” on the vty lines? - It allows you to easily control Telnet and/or SSH sessions to a router or switch. - It covers all of the data interfaces (and a large switch could have hundreds of data interfaces). - It affects only Telnet and SSH traffic targeting our router or switch, not traffic traversing our router or switch. - It uses standard ACLs, which are easier to write than extended ACLs. That makes the vty “access-class” statement a slick solution. Next time, we’ll do more (and there’s still lots more to do) with access-lists. Author: Al Friebe
Condensing Data: Summary Sheets and Flash Cards When confronted with the vast amount of information you’ll have to have down prior to taking a certification exam, your first instinct might be to freak out. Fortunately, there are ways to organize and abridge all of that data to manageable sizes that won’t tax your sanity. Two of the more effective ways in which to do this are with those study-aid mainstays that most of you probably used in high school: flash cards and summary sheets. Flash cards, of course, are those familiar two-sided pieces of paper with a question or blurb on the front and an answer or explanation on the back. They’re an especially good tool for memorization, because they condense facts and figures down to easy-to-understand, quick hits of data. Here are a few examples of flash cards, courtesy of Kaplan IT, that cover subjects in Cisco exam #640-801 – NetCert: Certified Network Associate: - Q: Can full-duplex Ethernet be used on a connection that connects a hub to a switch? - Q: Which Ethernet transmission mode is more susceptible to collisions: full-duplex or half-duplex? - Q: How many broadcast domains exist in a network that was migrated from a single Ethernet segment to a network with three segments separated by a router? - Q: Which device could you use to reduce the number of collisions on an Ethernet network while keeping a single broadcast domain throughout the network? Note how the questions presented are very simple and straightforward. They ask “What?” instead of “Why?” The answers are pithy as well, containing no more than a few words. Often, it’s just one word—a yes or no, a number or an object. On the other hand, summary sheets usually contain more information than flash cards and are better suited for organization of data, usually in the form of an outline. As such, they place the various data points of flash cards into a context. Still, summary sheets are fairly stripped down and don’t go into a great deal of detail. One of the main shared advantages of flash cards and summary sheets is that the very act of creating them is a means of study. As you compile and record the information contained therein, you begin to retain it. This knowledge is reinforced as you use these exam preparation tools time after time—they’re gifts that keep on giving! Here are a few ways in which you can optimize your studying efforts with flash cards and summary sheets: - Make a Game of It: Because the way the data is presented in these study aids, they’re perfect for a game-style format. Invite a few tech buddies over, get some chips and salsa, and come up with a game to help you learn the topics. (The game show “Jeopardy!”—with its reverse Q&A set-up—comes to mind as a possible format for this.) - Take It with You: Both flash cards and summary sheets are portable and easy to go over in virtually any setting. If you’ve got a long morning commute on a bus or train, have a cross-country flight or have to spend an idle hour or two at the DMV or doctor’s office, bring these with you. You’ll learn more and pass the time. - Get Help on the Web: The Internet has several resources for flash cards, which include sites that can help you figure out how to design them and archived offerings that cover many different subjects. You can start your search online here, here or here.
Social and collaborative technology has potentially deep ramifications for health care. So far there's limited data on how effective social media is at improving health quality. But the fundamental qualities that social technologies reinforce -- immediacy, transparency, openness, and connectedness -- could amplify the impact that environments and behavior have been observed to have on health and wellness.

The hypothesis is promising. As health care becomes increasingly participatory and collaborative, digital-social health has the potential to transform the patient populace from being mere passengers to responsible drivers of their health. An empowered, educated, and responsible patient, family and community is then more motivated and better positioned to access information and understand the implications of lifestyle and health care options. Once they have more support and knowledge, they are better positioned to leverage social health platforms to spread what they learn, create a larger social health support team, and eventually make choices that improve individual health as well as the health of their children, families and communities.

The health care industry is working to figure out the role that social technology can play. A quick scan of the recent news indicates increasing acceptance and adoption of digital-social health:

- The U.S. Department of Veterans Affairs (VA) has outlined a social media policy, which encourages veterans to use social media to seek information from the VA. “Veterans should have consistent and convenient access to reliable VA information real time using social media -- whether on a smartphone or a computer,” according to Secretary of Veterans Affairs Eric Shinseki.
- Over 1,200 hospitals now have a social media presence to share news and announcements, showcase awards and engage patients in an ongoing dialogue. Industry-leading providers, including Kaiser Permanente, the Mayo Clinic and the Cleveland Clinic, use social media as a direct and immediate channel to share their latest research and developments and to connect with patients and industry influencers. Mayo Clinic has also created a Center for Social Media to accelerate effective application of social media tools to improve health globally.
- Many innovative data-driven and social health companies now enable their customers to share treatment, condition, symptom, expertise and even genetic profile information on their social-health platforms.
- According to a recent PricewaterhouseCoopers survey, about 33 percent of U.S. consumers use social media websites like Facebook and Twitter to obtain health information and track/share symptoms. Seventy-two percent of respondents said they would use social media sites for scheduling physician appointments, and 42 percent of respondents reported they have used social media to look up consumer reviews of health treatments or physicians.

As IT groups explore social technologies, there are a few factors to consider. Align strategies and explore new ways to deliver on organizational values. Centralize the research and pilot the acquisition and management of social technology initiatives, including uniform taxonomy, vendor engagement, and content strategy, to avoid fragmented and ad hoc implementations. By getting ahead of the curve and starting from the idea of alignment early, you can tie new policies and best practices to the critical, organizational-level thinking that has already been done on basic values and procedures.
This keeps you aligned with organization’s existing values and strategies (e.g. communications, care delivery, information security) and reduces policy debates and the chances that implementing new innovation will have unintended consequences. Also, carefully examine the integration risks and benefits of combining social platform and virtual care strategies. Currently most providers use social platforms primarily for brand building and education. Savvy and empowered social media users are likely to appreciate (or even demand) the convenience and efficiency of virtual care via social health platforms. IT departments must ensure the appropriate infrastructures and policies are in place to support care delivery (i.e. diagnosis, consultation, treatment, and transfer of medical data) using secure digital communications including various social platforms. - Transform customer experiences and stories into insights. IT departments should also develop the appropriate information architecture to effectively integrate information harvested from social platforms to analytical systems. This helps ascertain that the effective presence in the social world and appropriate business and care delivery values are achieved. Many users of social media and networks also share their recent experiences or seek out social-health groups for treatment information. If providers can successfully leverage social platforms/data to better understand and engage/educate their patients, this may eventually lead to early detection of potential problems, which translate into fewer complications, readmissions and a step closer to personalized and preventive care. Personally, I’m hopeful that digital social health innovation will help achieve personalized, participatory, and preventive medicine. What do you think?
“Twisted Pair” is another way to identify a network cabling solution that’s also called Unshielded Twisted Pair (UTP) and was invented by Alexander Graham Bell in 1881. Indoor business telephone applications use them in 25-pair bundles. In homes, they were down to four wires, but in networking we use them in 8-wire cables. By twisting the pairs at different rates (twists per foot), cable manufacturers can reduce the electromagnetic pulses coming from the cable while improving the cable’s ability to reject common electronic noise from the environment. Each pair carrying data has one wire that is the positive lead, while the other acts as the negative lead. Comparing the data received on both wires, the receiver eliminates common-mode noise picked up along the cable and gets an accurate value for the data. When we use twisted pair cabling for networking, it follows the standards set by the Electronics Industry Alliance and the Telecommunications Industry Association (EIA/TIA). Since EIA stopped operations in 2011, the standards now fall under TIA. The latest standard is TIA-568-C. The shades on the wires may vary from pastel to bright, but they follow the same colors: white with an orange stripe, all orange, white with a green stripe, all green, white with a blue stripe, all blue, white with a brown stripe, and all brown. The white with a color stripe and the matching color are twisted together in a pair, which gives us the name. There are two standard ways of connecting ends to the cables: T568A and T568B. As you can see, they are the same except that pairs 2 and 3 are swapped. TIA recommends T568A for new installations though T568B is more compatible with previous cabling solutions. The physical connectors are 8P8C (8–position, 8-conductor) though many people refer to them as RJ45. RJ45 is a keyed, single data line connector that’s rarely used. UTP is capable of carrying two data lines at less than 1 billion bits per second each. If the cable connects different kinds of equipment (i.e. from your computer to the switch built into your home router), you will want to use a straight-through cable with the exact same wire or pin layout on both ends. If you have a large multi-floor house or building to wire and you need to connect a switch on one floor to the main switch built into your home router (connecting the same kind of equipment at both ends), a crossover cable will be the right cable. Depending on how far you need it to go, you can get speeds of 10 million bits per second (10 Mbps) and up to 100 billion bits per second (100 Gbps). Most equipment connections run at 100 Mbps or 1 Gbps. Standards set the distance limit to 100 meters for all speeds. Though most people think of data over network cables, UTP can carry data, voice, and video equally well. At data speeds below 1 Gbps, the unused wires also have the ability to carry electricity to power remotely mounted equipment.
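For reference, the two termination standards mentioned above can be written out pin by pin. This is a small illustrative sketch rather than part of the original article; the pin order reflects the usual T568A/T568B assignments, but verify against the TIA-568 documents before terminating any cable.

# Pin 1 through pin 8 for each termination standard.
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

# The only pins that differ carry the orange and green pairs, which is why a cable
# terminated T568A on one end and T568B on the other acts as a crossover cable.
differing = [pin + 1 for pin, (a, b) in enumerate(zip(T568A, T568B)) if a != b]
print(differing)   # [1, 2, 3, 6]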
Developing SQL Databases (M20762) Learn to design and develop a Microsoft SQL Server 2016 database. In this course, you will learn the major tools of the SQL Server 2016 platform, logical table design, indexing, query plans, and data and domain integrity. You will focus on creating database objects, including views, stored procedures, parameters and functions. You will also learn procedure coding, such as indexes, triggers, SQL Common Language Runtime (CLR), SQL Server spatial data, and considerations for binary large object (BLOB) data. This course uses Microsoft SQL Server 2016 and incorporates material from the Official Microsoft Learning Product 20762: Developing SQL Databases. - Redeem four SATVs* for a Classroom, Virtual Classroom, or Virtual Classroom Fit session - Redeem four SATVs for an individual GK Digital Learning course plus digital Microsoft Official Courseware (dMOC) - Redeem two SATVs for an individual GK Digital Learning course *For more information on Microsoft SATVs, click here. GK Digital Learning is also available with digital Microsoft Official Courseware (dMOC). Click here to purchase.
Overview of 3GPP options for Wi-Fi access The 3GPP standard defines two types of access; trusted and untrusted non-3GPP access. Non-3GPP access includes access from for instance Wi-Fi, WiMAX, fixed and CDMA networks. Trusted 3GPP Wi-Fi access Trusted non-3GPP Wi-Fi access was first introduced with the LTE standard in 3GPP Release 8 (2008). Trusted access is often assumed to be an operator-built Wi-Fi access with encryption in the Wi-Fi radio access network (RAN) and a secure authentication method. However, it is always up to the home operator to decide what is to be considered trusted. In practice the Wi-Fi access network must support the following features to be considered trusted: - 802.1x-based authentication which in turn also requires encryption of the RAN - 3GPP-based network access using EAP method for authentication - IPv4 and/or IPv6 In a trusted access, the device (UE) is connected through a TWAG (Trusted Wireless Access Gateway) in the Wi-Fi core. The TWAG is in turn connected directly with the P-GW (Packet Gateway) in the Evolved Packet Core (EPC) through a secure tunnel (GTP, MIP or PMIP). A similar concept is also used in non-EPC 3G networks where a WAG (Wireless Access Gateway) is connected with the GGSN through a secure GTP tunnel. Parameters in the subscriber profile are needed in order to setup the GTP tunnel. This will normally in turn require knowledge about the user’s IMSI (unique SIM card identifier). Therefore trusted 3GPP Wi-Fi access is not possible for devices without SIM cards. However, Aptilo’s innovative features make the impossible possible providing trusted 3GPP access for all kinds of devices. Untrusted 3GPP Wi-Fi access The untrusted model was first introduced in the Wi-Fi specification in 3GPP Release 6 (2005). At that time it was rare with Wi-Fi access points with advanced security features. Hence Wi-Fi was considered open and unsecured by default. Untrusted access includes any type of Wi-Fi access that the operator has no control over such as public hotspots, subscribers’ home Wi-Fi and Corporate Wi-Fi. It also includes Wi-Fi access that does not provide sufficient security mechanisms such as authentication and radio link encryption. The untrusted model requires no changes to the Wi-Fi RAN (Radio Access Network) but has an impact on the device side which requires an IPSec client in the device. The device is connected directly to the ePDG (Evolved Packet Data Gateway) in the EPC through a secure IPSec tunnel. The ePDG is connected to the P-GW where each user session is transported through a secure tunnel (GTP or PMIP). A similar concept is also used in non-EPC 3G networks where the device is connected to a TTG (Tunnel Termination Gateway) through a secure IPSec tunnel. The TTG is in turn connected to the GGSN via GTP. Because the communication is secured end-to-end between the device and EPC, this option can be used with any Wi-Fi network. The untrusted 3GPP Wi-Fi access model is used for Wi-Fi Calling. This means that smartphone voice (VoWiFi) calls will work over any Wi-Fi connection, even the subscriber’s own network at home. Learn more about Aptilo’s Wi-Fi Calling solution. IP mobility with session continuity in 3GPP Wi-Fi access Dual-radio device will require a client based solution on the end-user device to provide full IP mobility between the networks. IP mobility within the same radio network can be provided without a client. 
Many popular applications on the smartphones are today designed in a way that make them resilient for network changes such as change of IP-address. This allows for an seamless end-user experience even while moving between for instance the 3G or LTE network over to Wi-Fi. Different options for 3GPP Wi-Fi access The 3GPP AAA server is located within the 3GPP HPLMN. For 3GPP Wi-Fi access, it provides authentication, authorization, policy enforcement and routing information to the packet gateways in the Wi-Fi core and mobile core. It can perform EAP-SIM/AKA authentication, via the SIM-card, for an automatic and secure authentication of Wi-Fi enabled devices. In order to create a good business case for Wi-Fi offloading, all types of devices must be supported. Devices with no support for the EAP-SIM/AKA method or even with no SIM-card at all. Hence there is a need for alternative authentication methods. Read more about how Aptilo’s innovative Wi-Fi offload features enable 3GPP Wi-Fi access for devices lacking support EAP-SIM/AKA. Furthermore, the operator may want to monetize their Wi-Fi network by opening it for public use. We have created the Aptilo SMP 3GPP AAA+™ for this purpose with added critical functionality to the 3GPP AAA in the Aptilo SMP SIM Authentication™ . With this added support with portals, Wi-Fi AAA, Wi-Fi Policy & Charging and Wi-Fi subscriber management, the mobile operator can add additional revenue by allowing paying ad-hoc users as well as supporting all type of terminals for offload. Below we will discuss the role of the Aptilo SMP 3GPP AAA+ in different Wi-Fi access scenarios including all the 3GPP specified options for 3GPP Wi-Fi access. 1. Wi-Fi access with 3G core and local WLAN break-out This option is currently the most deployed solution by operators doing EAP-SIM/AKA authentication. The option provides local traffic breakout for all clients at the Wi-Fi access gateway (such as the Aptilo Access Controller) and is based on standard RADIUS and EAP methods for authentication with HLR. The Wi-Fi access point requires support for 802.1x authentication with EAP-SIM/AKA. No additional 3GPP interfaces are required. 2. Wi-Fi access with 3G core (DPI) All traffic from smartphones/tablets with EAP-SIM/AKA support is terminated at the Deep Packet Inspection (DPI) node in the 3G core network while traffic from non-SIM devices are directed to the Internet locally. This option uses standard RADIUS and EAP methods for authentication with HLR. The Wi-Fi access point requires support for 802.1x authentication with EAP-SIM/AKA. In this case the DPI is typically used by the operator also to inspect and enforce policies for 3G data services. No additional 3GPP interfaces are required. 3. Wi-Fi access with 3G core and WAG (GTP) This option is partly aligned with 3GPP TS23.234 specifications with the introduction of the Wireless Access Gateway (WAG) node in the Wi-Fi core for access to the 3G core. The WAG, emulating an SGSN, establishes GTP tunnels for client traffic for EAP capable clients that are terminated in the GGSN. The 3GPP Wm interface is used for EAP client authentication with HLR and tunnel establishment. The Wi-Fi access point requires support for 802.1x authentication with EAP-SIM/AKA. A DPI can potentially also be used after the GGSN. 4. Wi-Fi access with 3G core (I-WLAN) This option is aligned with 3GPP TS23.234 specs for “untrusted” access with 3G core. This option requires an EAP client in the device with IPSec support. 
No impact on the Wi-Fi core or Wi-Fi RAN, legacy Wi-Fi hotspot networks will work. IPSec tunnels will be terminated in the Tunnel Terminating Gateway (TTG) node – a new mobile core node introduced for this purpose. The TTG maps the IPSec tunnels into GTP tunnels terminated in the GGSN (GGSN can typically not terminate IPSec). The 3GPP Wa interface is used for EAP client authentication with HLR and the Wm interface is used for tunnel mapping in the TTG. This option will most likely be replaced by the “untrusted EPC” option in most practical implementations. 5. Trusted Wi-Fi access in EPC This option is based on 3GPP specification TS23.402 with the introduction of the Trusted Wireless Access Gateway (TWAG) node. The TWAG establishes GTPv2, PMIP or MIP tunnel (the S2a interface) to the P-GW in the EPC core for all trusted traffic. “Trusted” traffic will most likely mean an operator controlled Wi-Fi environment based on a Hotspot 2.0 compatible Wi-Fi Core with 802.1x and EAP authentication support to the HSS/HLR. The Wi-Fi access point requires support for 802.1x authentication and EAP-SIM/AKA methods. This option also requires support for EAP-SIM/AKA in the device. The STa interface is mainly used for EAP client authentication with HSS and S2a option selection (which tunnel type to use). The S6b interface between 3GPP AAA and P-GW is used for tunnel authentication, static QoS and mobility (if applicable), etc. The 3GPP specification allow also for full or partial local breakout of Wi-Fi traffic at the TWAG in the Wi-Fi core. 6. Untrusted Wi-Fi access in EPC This option is based on 3GPP spec TS23.402 with the introduction of the evolved Packet Data Gateway (ePDG) node. This option requires an EAP client in the device with IPSec support. No impact on the Wi-Fi core or Wi-Fi RAN, legacy Wi-Fi hotspot networks will work. IPSec tunnels will be terminated in the ePDG – a new mobile core node introduced for this purpose. The ePDG maps the IPSec tunnels into GTP or PMIP tunnels terminated in the Packet Gateway P-GW. “Untrusted” will most likely mean a non-operator controlled network or partner network with a legacy Wi-Fi hotspot networks not supporting 802.1x. The 3GPP SWa interface is mainly used for EAP client authentication with HSS. The SWm interface is used for additional authentication parameters including subscription profiles and S2b option selection (which tunnel type to use). The S6b interface is used between Wi-Fi AAA and P-GW for tunnel authentication, static QoS and mobility (if applicable), etc.
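As a rough summary of the options above, the sketch below shows the kind of decision logic involved in choosing an access model. It is purely illustrative: the function and labels are invented, and in a real network the choice is governed by operator policy, device provisioning, and subscriber profiles rather than a simple function.

def select_access(device_has_sim, device_has_ipsec_client,
                  wifi_supports_8021x_eap, operator_controlled):
    if device_has_sim and wifi_supports_8021x_eap and operator_controlled:
        # "Trusted" Wi-Fi: EAP-SIM/AKA in the RAN, TWAG to P-GW over S2a (GTP/PMIP)
        return "trusted (TWAG/S2a)"
    if device_has_sim and device_has_ipsec_client:
        # "Untrusted" Wi-Fi: IPSec from the device to the ePDG, then S2b to the P-GW
        return "untrusted (ePDG/S2b)"
    # Non-SIM devices, or legacy hotspots without a client, fall back to
    # portal/AAA-based access with local breakout.
    return "local breakout with portal/AAA"

print(select_access(True, False, True, True))     # trusted (TWAG/S2a)
print(select_access(True, True, False, False))    # untrusted (ePDG/S2b)
print(select_access(False, False, False, False))  # local breakout with portal/AAA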
Low-power communications transmitters can be operated in the United States without radio licenses, provided that their power is low enough that they will not cause interference. Garage door openers, wireless LANs, wireless headphones and motion sensors are all examples of unlicensed low-power transmitters. Part 15 of the FCC Rules provides the details on allowable power levels and permissible uses.

Part 15 lists a wide range of frequency ranges and power limits. However, for the most part, the frequencies allocated for TV broadcasting may not be used for unlicensed, low-power transmitters. This is because the FCC decided that widespread deployment of consumer products like garage door openers and wireless headsets would eventually pollute the radio spectrum and cause interference to TV reception. This would be particularly likely at the edge of a TV station's coverage area, where the signals are very weak. There are two major exceptions: biomedical telemetry devices may operate under Part 15 on TV channel frequencies, and wireless microphones can also operate on these frequencies.

Biomedical telemetry

WFAA, which operates on channel 8, has been assigned channel 9 for digital broadcasting. When it began channel 9 transmissions on February 27, biomedical telemetry transmissions in the 186–192 MHz band occupied by channel 9 began to receive interference. The telemetry receivers began receiving signals at much higher power than expected, with an unknown signal format, so they basically stopped working.

These telemetry systems are allowed under FCC rules to operate only indoors, and only in health care facilities such as hospitals. They are used to monitor blood pressure, respiration and other patient data. They allow monitoring while patients walk around, and they allow one health care worker to monitor several patients remotely, which decreases health care costs. Using them only indoors should decrease interference. And the transmitters are usually tunable, so that in case of interference, they can be tuned to a different frequency. At any rate, that was the FCC's logic in allowing these products to use TV frequencies, even though you might be able to conceive of a scenario where interference of this sort could cause loss of life. But the FCC figured that hospitals (and their engineers) would know which frequencies were free of interference. That used to be the case, but with digital TV stations coming on the air over the next few years, that's just too optimistic.

The Dallas TV station shut down its transmissions temporarily, once the hospitals figured out where the interference was coming from and notified the station. The TV station had no obligation to do this. One of the key requirements of FCC Part 15 is that unlicensed transmitters have no interference protection rights: licensed stations can continue operating, and unlicensed devices must accept whatever interference comes along. Most business users of Part 15 devices, such as wireless LANs, are unaware of this limitation.

At any rate, what finally happened in Dallas was that one hospital retuned its transmitters, and the other hospital decided to buy new equipment and now also operates on different frequencies (in this case, channel 12 at 204–210 MHz). But it took a while, after the displays went dead, to track down the interference. The problem arose in Dallas because the hospitals didn't know that a TV station would start broadcasting on a channel that had always been unused.
And the TV station didn't know that any hospitals were using transmitters on its new frequency. Even if it had known, it had no obligation to protect them. The obligation was, and is, on the Part 15 user. The American Hospital Association is notifying its members of this problem. But will it provide hospitals with a listing of what TV channels the new digital television stations will be using, and when? Wireless mics also operate on TV broadcast frequencies. These devices aren't supposed to be unlicensed, but they are widely sold and used illegally, without a license. Legal users include broadcasters, cable TV operators, TV program producers and movie producers. Illegal users include most rock concerts and live theater performances. (Some wireless mics are perfectly legal for use by anyone, but not those that operate on TV frequencies.) Wireless mics operate at higher power levels than the medical telemetry transmitters, so maybe they will be less susceptible to interference. And they are often itinerant. But many are fixed and used night after night at particular theaters. I've got tickets to see "Showboat" in July. I'll let you know if I hear anything.
The Long and the Short of It

We, the people, deserve compatible wireless access. And when it comes to the 802.11b Wi-Fi standard, we usually get it, with a niggling caveat. This minor headache centers on the preamble, which is the part of the 802.11 specification that deals with how packets are sent and received over the airwaves. The IEEE 802.11 committee specified a long preamble so that 802.11b wireless LANs could interoperate with 802.11 DSSS networks that run at 1M bps to 2M bps. According to Al Petrick, vice president of the 802.11 committee, a short preamble was also specified, but it was intended to be a "turbo" mode "for those devices and applications requiring higher throughput in a network." Of course, the two are incompatible, and that's why 802.11 specified the default to be the long preamble. The vendors of 802.11 equipment, meanwhile, probably wanting that equipment to look speedier than it actually was, began defaulting to short preambles.
Some notes from Chapter 11. More to follow. Cryptography is the science of hiding information.

Cryptography Through the Ages:
• Substitution Cipher: Substitutes one character for another according to a formula.
• Vigenere Cipher: Encrypts text using different substitution ciphers that are determined by the plain text that is to be encrypted. Susceptible to frequency analysis.
• Transposition Ciphers: Transposes clear text according to a set of characters arranged in a rail.
• One-Time Pads: Cryptography using a random one-time key that is combined with the plain text using XOR operations. RC4 is one implementation of this general concept.

Encryption Process and Application at Different Layers: In encryption, an algorithm is applied to plain text according to a key. At different layers:
• Application: Encrypted email, secure storage, and messaging.
• Sessions: Secure sessions using SSL or TLS.
• Network: Encrypted packets using the IPSec security suite.

Cryptanalytic Attacks:
• Chosen Plain Text: Observes the cipher text output from chosen plain text.
• Chosen Cipher Text: Observes what chosen cipher text decrypts into as plain text.
• Birthday: Hash-focused brute-force attack.
• Meet-in-the-Middle: Knows part of the plain text and corresponding cipher text.
• Brute-Force: Every possible key combination is tried.
• Cipher Text-Only: Looks for patterns in collections of cipher text encrypted with the same algorithm and key.
• Known Plain-Text: Has some plain text and some cipher text. Analyzes for patterns.

Features of Good Encryption Algorithms:
• Resistant to attacks
• Support variable and long key lengths
• Create an avalanche effect in which small changes in plain text result in radically different cipher text
• No import or export restrictions

Classes of Encryption Algorithms:
• Symmetric – Same key encrypts and decrypts
• Asymmetric – Public/private key pair encrypts/decrypts

Popular Symmetric Encryption Algorithms:
• DES – 56 bit
• 3DES – 112 and 168 bit
• AES – 128, 192, and 256 bit
• RC2 – 40 and 64 bit
• RC4 – 1 to 256 bit
• RC5 – 0 to 2040 bit
• RC6 – 128, 192, and 256 bit
• IDEA – 128 bit
• Blowfish – 32 to 448 bit

Symmetric Encryption Techniques: Block, Stream, and Message Authentication Codes (MAC).
• DES and 3DES running ECB or CBC
• RSA (asymmetric)
• DES and 3DES running OFB or CFB

Block and Stream Operation: In a block cipher implementation, a fixed group of bits called a block is used statically for the transformation. DES uses two standardized modes for block ciphering: Electronic Code Book (ECB) and Cipher Block Chaining (CBC). ECB is considered insecure because it encrypts each block independently: the result is that two identical plain text blocks are transformed into two identical cipher text blocks if the same key is used. Therefore CBC, which uses bitwise scrambling in which each block depends on the previous cipher text block, is considered more secure. Stream ciphers are similar in that they have two modes: Cipher Feedback (CFB), which is similar to CBC, and Output Feedback (OFB), which XORs a generated keystream with the data to produce the cipher text.

Increasing DES Security:
• Frequently change and securely exchange keys
• Use CBC or OFB mode
• Avoid weak keys

3DES Encryption Process: In the 3DES encryption process, plain text is processed three times (encrypt-decrypt-encrypt) with up to three different 56-bit keys.

AES: AES uses the Rijndael cipher, a block cipher with variable key and block lengths, to transform plain text in multiple rounds. AES is younger, faster, and stronger than DES.
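To make the ECB weakness noted above concrete, here is a toy demonstration in Python. The "block cipher" is just a keyed hash stand-in (emphatically not a real cipher), but it shows why two identical plain text blocks leak through ECB while CBC chaining hides the repetition.

import hashlib

KEY = b"sixteen-byte-key"
BLOCK = 16

def toy_block_encrypt(key, block):
    # stand-in for a real block cipher: deterministic, keyed transformation
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb(key, blocks):
    # each block is encrypted independently
    return [toy_block_encrypt(key, b) for b in blocks]

def cbc(key, blocks, iv=b"\x00" * BLOCK):
    # each plain text block is XORed with the previous cipher text block first
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(x ^ y for x, y in zip(b, prev))
        prev = toy_block_encrypt(key, mixed)
        out.append(prev)
    return out

plain = [b"ATTACK AT DAWN!!", b"ATTACK AT DAWN!!"]   # two identical 16-byte blocks

print(ecb(KEY, plain)[0] == ecb(KEY, plain)[1])   # True  -- identical cipher text blocks
print(cbc(KEY, plain)[0] == cbc(KEY, plain)[1])   # False -- chaining hides the repeat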
AES Availability on Cisco Products:
• PIX 6.3 and later
• ASA version 7.0 and later
• VPN 3000 Software 3.6 and later
• Cisco IOS Release 12.2(13)T and later

SEAL: SEAL has lower performance requirements, and this 160-bit symmetric encryption algorithm is available in Cisco IOS Release 12.3(7)T and later. However:
• Only Cisco routers, on both ends, running IPSec with the k9 subsystem and IOS Release 12.3(7)T or later may run SEAL.

Rivest Ciphers:
• RC2: Variable-length replacement for DES. 40 to 64 bits.
• RC4: Variable-length stream cipher used in SSL. 1 to 256 bits.
• RC5: Fast block cipher. 0 to 2040 bits.
• RC6: Similar to AES. 128, 192, and 256 bits.

Weak Keys: Keys are considered weak when they show regularities.

SSL VPNs: An SSL VPN utilizes symmetric encryption for bulk data encryption and asymmetric encryption for key exchange. The steps to establish a tunnel are:
• Client initiates an outbound connection to the gateway on port 443.
• Gateway responds with a trusted digital signature and public key.
• Client generates the symmetric encryption key that will be used by both parties.
• Gateway's public key is used to encrypt the symmetric key.
• The symmetric key encrypts the SSL tunnel.
A majority (56%) of people in emerging economies thinks climate change will certainly have a direct impact on their life against only 28% in developed economies. Yet, 70% of people in emerging economies are optimistic that climate change can be solved, against less than half (48%) in developed economies. The polarization of concern and confidence is reflected in the difference in the willingness to act. Over half (53%) of people in emerging markets said they would certainly switch to a new product if it was certified to minimize damage to the climate, versus a mere 24% in developed economies. And 61% said they would certainly switch to an energy provider offering lower carbon products and services if this was an option, versus only 30% in developed economies “Governments in North America and Europe cannot assume their countries will lead climate change solutions or policy,” said Sander van ’t Noordende, group chief executive of Accenture’s Resources operating group. “Low carbon investments will be drawn to the most concerned and active consumers and to those economies that can leapfrog to new technologies and implement cutting edge policies. There is a small window of opportunity for western governments to act before a global climate change policy agreement gives emerging economies the incentive to attract investment away from developed markets.” Consumers need more help to reduce carbon emissions Accenture’s research indicates that disparities have opened up in all countries between intentions and actions related to climate change. In 2007, 89% of people contacted said they would be willing to switch to energy companies offering low carbon emission products and services. But in 2008, only 12% of those in countries where switching one’s gas or electricity provider was an option actually took that step. Differentiation between energy providers is a major obstacle to consumer action. Three-quarters say that their current electricity/natural gas provider’s climate friendly products and services are no different from those of competitor providers, against only 18% who say they are better. “Consumer power can compel companies to deliver products and services that address climate change,” said Luca Cesari, Global Managing Director of Accenture’s Utility Industries Group. “Energy providers must provide a thriving market for low-carbon services, and governments must enable this transformation with clear policy and properly aligned incentives. Utility companies are the linchpin and must see the commercial opportunities of delivering affordable and innovative low-carbon services.” Cost is the largest inhibitor to buying services that help address climate change. Of those interviewed, 46% consider cost as a very important factor. Consumers also want energy providers to improve financial arrangements. For instance, 80% of respondents would consider installing a domestic electricity generator if they could pay a monthly fee instead of an upfront cost. A lack of information was cited by 36% of respondents, with nearly half (49%) saying they do not understand enough about how they can personally act to combat the effects of climate change. “Energy providers can learn from manufacturers of consumer goods how to differentiate themselves through further product and service innovation,” said Sander van ’t Noordende. 
“Governments and businesses must work together to deploy new technologies that stimulate the shift to a low carbon economy.”

The Accenture End Consumer Observatory on Climate Change study is based on an online survey conducted in native languages with 10,733 consumers in 22 countries worldwide during September and October 2008. Consumers were interviewed in North America (1,732 interviewees), Western Europe (4,244 interviewees), and Japan and Australia (1,100 interviewees), as well as in the emerging-economy countries of Brazil, Russia, India, China, Argentina, Chile and South Africa (3,657 interviewees). The sample was representative of the general population in the different countries except in the emerging-economy countries, where a sample representative of each country’s urban population was interviewed.

Accenture is a global management consulting, technology services and outsourcing company. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. With more than 186,000 people serving clients in over 120 countries, the company generated net revenues of US$23.39 billion for the fiscal year ended Aug. 31, 2008. Its home page is www.accenture.com.
Phidgets are hardware interfaces that let your computer interact with the environment. In this first blogpost of a new series, I explain how to automatically power-cycle a crashed embedded device.

I've been playing with Phidgets over the holiday season. Phidgets are inexpensive hardware interfaces for your computer. You connect them via USB, thus extending your machine with digital inputs/outputs and analogue inputs. There are several aspects I like about the API software:

- it's available for Windows, Linux and Mac
- the Linux version is open-source (in a next post, I'll show it running on my nslu2)
- there's support for many programming languages, even Python
- input changes can trigger events (avoids polling loops)

One problem with automated fuzzing of embedded devices (for example a WiFi AP) is that you have to power-cycle the device when it crashes. And that's a problem when you let it run unattended (i.e. overnight). So it would be handy to have your fuzzer power-cycle the device each time it detects that the device became unresponsive. This Phidget Interface Kit with 4 relays lets you do this.

Connect the power supply of the embedded device to the NC (Normally Closed) connector of the relay. This way, the un-powered relay will let the current flow through the power supply and feed the embedded device. To power-cycle the device, activate the relay for a second or two. This will open the circuit and shut down the embedded device.

Activating a relay for a second is very easy with the Phidgets software; here is a Python example for an Interface Kit (with the imports the snippet needs):

import time
import Phidgets.Devices.InterfaceKit

# open the Interface Kit and wait up to 10 seconds for it to attach
oInterfaceKit = Phidgets.Devices.InterfaceKit.InterfaceKit()
oInterfaceKit.openPhidget()
oInterfaceKit.waitForAttach(10000)

# energize relay 0 for one second to break the NC circuit, then release it
oInterfaceKit.setOutputState(0, True)
time.sleep(1)
oInterfaceKit.setOutputState(0, False)

oInterfaceKit.closePhidget()

setOutputState is the actual command used to control the relay on output 0. The other statements are necessary to set up the interface.

Before OSes took full control over the input and output ports, a popular solution was to connect a relay to a Centronics printer port and control the output of the port directly from your program. But nowadays, OSes like Windows take full control over the Centronics port (if your machine still has one…), making it much harder to control from user software.

Phidgets were used (but not hurt) for my TweetXmasTree.
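Tying the relay example back to the fuzzing scenario described earlier, here is a hedged sketch of what an unattended fuzzing loop with a power-cycle watchdog might look like. The device address, port, delays, and helper names are all invented for illustration; power_cycle() simply wraps the Interface Kit calls shown above.

import socket
import time

def device_alive(ip, port=80, timeout=3):
    # crude liveness check: can we still open a TCP connection to the device?
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def power_cycle(kit, output=0, off_seconds=2):
    kit.setOutputState(output, True)    # open the NC contact: device loses power
    time.sleep(off_seconds)
    kit.setOutputState(output, False)   # close it again: device boots back up

def fuzz_forever(kit, ip, send_test_case, boot_delay=60):
    while True:
        send_test_case()                # your fuzzer's work goes here
        if not device_alive(ip):
            power_cycle(kit)
            time.sleep(boot_delay)      # give the embedded device time to reboot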
Tech Glossary – P to R A partition is a section of a hard disk. When you format a hard disk, you can usually choose the number of partitions you want. The computer will recognize each partition as a separate disk, and each will show up under “My Computer” (Windows) or on the desktop (Macintosh). First came PCI, then PCI-X, then PCI Express. PCI Express can be abbreviated as PCIe or, less commonly and more confusingly, PCX. Unlike earlier PCI standards, PCI Express does not use a parallel bus structure, but instead is a network of serial connections controlled by a hub on the computer’s motherboard. This enables PCI Express cards to run significantly faster than previous PCI cards. A software plug-in is an add-on for a program that adds functionality to it. A quad-core CPU has four processing cores in a single chip. It is similar to a dual-core CPU, but has four separate processors (rather than two), which can process instructions at the same time. RAM is made up of small memory chips that form a memory module. These modules are installed in the RAM slots on the motherboard of your computer. When you reimage a hard disk, you restore the entire disk from a disk image file. Since this restore process involves erasing all the current data on the hard disk, it is typically used a last resort system recovery option.
Details Emerge On “Summit” Power Tesla AI Supercomputer

November 20, 2016 - Timothy Prickett Morgan

The future “Summit” pre-exascale supercomputer that is being built out in late 2017 and early 2018 for the US Department of Energy at its Oak Ridge National Laboratory looks like a giant cluster of systems that might be used for training neural networks. And that is an extremely convenient development.

More than once during the SC16 supercomputing conference this week in Salt Lake City, the Summit system and its companion “Sierra” system, which will be deployed at Lawrence Livermore National Laboratory, were referred to as “AI supercomputers.” This is a reflection of the fact that the national labs around the world are being asked to do machine learning on the same machines that would normally just do simulation and modeling to advance science, not just to advance the art of computation but to make these systems all look more cool and more useful. With pre-exascale machines costing hundreds of millions of dollars, it is important to get as much use out of them as possible. Considering how long ago the architectures for Summit and Sierra were done, it is perhaps a happy coincidence for IBM, Nvidia, and Mellanox Technologies that a hybrid and hefty combination of CPUs and GPUs has become the preferred node type for training the neural networks that drive machine learning, as well as being the system architecture of choice for boosting the speed of GPU-accelerated database management systems.

Details of the final Summit and Sierra nodes were divulged at the SC16 conference, and they are interesting in a few ways. They also reflect the fact that any procurement of a supercomputer is done so far in advance that it is hard to predict exactly what configuration of system will come to market years after the funds are allocated in budgets to build machines like Summit and Sierra.

Way back in November 2014, when the CORAL procurements were announced to put three capability-class, pre-exascale systems into Oak Ridge, Lawrence Livermore, and Argonne National Laboratory, the total bill for these three machines was set at $325 million. This included the 180 petaflops “Aurora” supercomputer at Argonne National Laboratory, which is based on Intel’s future “Knights Hill” Xeon Phi processors and its 200 Gb/sec Omni-Path 2 interconnect and storage based on the Lustre parallel file system (also from Intel), all in a projected 13 megawatts of power consumption. Intel is the primary contractor (a first) on Aurora and Cray is the subcontractor actually doing the system integration, with the system being installed in 2018. For the Summit and Sierra systems, which were originally slated to weigh in at 150 petaflops each as a minimum peak performance, the machines are based on a mix of Power9 processors from IBM and Tesla “Volta” GV100 GPU accelerators with 100 Gb/sec InfiniBand EDR networking and storage based on IBM’s GPFS parallel file system, all crammed into a 10 megawatt power envelope.

If you allocated the money for the CORAL contract linearly (which is not necessarily how it was done), then the Aurora machine costs $122 million and the Summit and Sierra machines cost $101.5 million each. And again assuming linear pricing across architectures, that means Aurora would have a bang for the buck of $677 per teraflops of peak theoretical performance, exactly the same price/performance as the Summit and Sierra machines.
The Aurora machine has 20 percent more performance as specified, but consumes 30 percent more watts as estimated in the original RFP, so in a sense it is less power efficient than either Summit or Sierra. It is hard to predict the future, maybe especially with a supercomputer. You might need one to make such simulations. . . .

Pushing The Envelope

The original Summit specification called for the system to be composed of around 3,400 nodes, each with more than 40 teraflops of performance using an unspecified combination of Power9 CPUs and Volta GPUs. Some presentations, such as the one by Nvidia chief technology officer Steve Oberlin from ISC15 comparing the two architectures that we detailed, had Summit at 3,500 nodes to deliver that performance. It looks like it is going to take more nodes to get Summit to its performance level, but nowhere near the more than 50,000 “Knights Hill” Xeon Phi nodes that are expected to be ganged up to get Aurora to 180 petaflops. And it looks like Summit will be getting a networking upgrade, too, with Mellanox confirming last week that it would be able to get 200 Gb/sec High Data Rate (HDR) InfiniBand into the field in the middle of next year and shipping in both the Summit and Sierra machines.

We caught wind of the final Summit node configurations at SC16 this week, and here is what they look like: the node count for Summit has been boosted to 4,600 total, an increase of 35 percent from the original discussion two years ago about the configuration of the machine. Oak Ridge confirmed in presentations that each of the Summit nodes will have two Power9 chips and six of the Tesla Volta coprocessors. The Power9 chips have 24 cores each and have a midpoint design clock speed of 4 GHz, just like the Power8 chips; clock speeds can be anywhere from a low of 2.5 GHz to as high as 5 GHz, we reckon, depending on the thermal envelope customers want. Oak Ridge has not talked about what core counts and clock speeds it will use for the Power9 processors in the Summit machines.

The Power9 chips will have 48 lanes of PCI-Express 4.0 peripheral I/O per socket, for an aggregate of 192 GB/sec of duplex bandwidth, as well as 48 lanes of 25 Gb/sec “Bluelink” connectivity, with an aggregate bandwidth of 300 GB/sec for linking various kinds of accelerators. These Bluelink ports are used to run the NVLink 2.0 protocol that will be supported on the Volta GPUs from Nvidia, and they have about 56 percent more bandwidth than the PCI-Express ports. IBM could support a lot of the SXM2-style, on-motherboard Tesla cards in a system, given all of these Bluelink ports, but remember it needs to allow the Volta accelerators to link to each other over NVLink so they can share memory, as well as using NVLink to share memory back with the two Power9 chips. It will be interesting to see what topology the NVLink network has on the Summit systems and whether Volta will support four or eight ports on the die. Obviously, you want as few hops as possible across those six GPUs and between the GPU cluster and the CPU pair.

Each Pascal GP100 GPU has four NVLink 1.0 ports on it, and that is not enough to use NVLink to connect four GPUs to each other and to both of the pair of Power8 CPUs at the same time. But if Volta has eight NVLink 2.0 ports, as we have been led to believe it probably will, that really opens up the connectivity for NVLink. You can directly connect each GPU accelerator with each Power9 chip, and still have six ports left over to cross link all of the GPUs to each other.
You only need five more NVLink ports to link all of the GPUs to each other in a mesh without having to resort to other kinds of switching, and that still leaves an eighth NVLink port left over. It might look something like this: Yeah, we still like paper.

Given that IBM has said it believes a ratio of two processors to four GPU accelerators is the sweet spot in the enterprise, and that it is trying to keep the form factor of the Power Systems LC HPC-style nodes at a consistent 2U of rack space, it is interesting that IBM is delivering a "Witherspoon" Power Systems LC node that has six Tesla Volta V100 accelerators in it. We think that IBM and Nvidia are under pressure to make the Summit machine more powerful, and are cramming more GPUs into the nodes and scaling up the number of nodes to make it happen. At 43.5 teraflops peak, the Summit machine at 4,600 nodes would break through 200 petaflops of peak theoretical performance, which would probably put IBM at the top of the Top 500 supercomputer rankings in November 2017 if the machine can be up and running with Linpack by then. This 200 petaflops number is a psychological barrier, not a technical one, but in a Trump Administration, it might be a very important psychological barrier indeed. (China is winning the Petaflops War.)

The original plan called for Summit to be in a 10 megawatt thermal envelope, but that has been boosted by 30 percent to 13 megawatts. Some of that increased power budget could be due to the extra compute capacity, some to extra storage capacity. The original Summit specs called for a 120 PB GPFS file system with 1 TB/sec of bandwidth, but as you can see from the chart above, that file system has grown to 250 PB of capacity with a whopping 2.5 TB/sec of bandwidth. That is 2.1X more capacity and 2.5X more bandwidth, and that is very likely some of the extra power consumption. Ditto for the boosted compute capacity on the larger Summit cluster, and we think IBM and Nvidia might have geared down the Volta GPUs to get better performance per watt in the Summit nodes and boosted the GPU count in the box by 50 percent from four accelerators to six accelerators to scale the performance a bit. (We are admittedly guessing on that. But if IBM and Nvidia are not doing this, the obvious question is: Why not?) The upshot is that we think Oak Ridge is going to be getting a more capacious Summit machine than was originally thought possible, and that is a very good thing. IBM, Nvidia, Mellanox, and the Department of Energy might have been hedging their bets on the initial specs, under-promising so they could over-deliver later, and this is understandable given the number of technology transitions that were happening at the same time in the Summit (and therefore Sierra) machines.

The Summit configuration also tells us, perhaps, something about the Volta GPUs, or at least the ones being used inside of Summit. Way back when, in early 2015, Nvidia said that it would be able to deliver Pascal GPUs with 32 GB of HBM memory on the package that delivered 1 TB/sec of bandwidth into and out of that GPU memory. What really happened was that Nvidia was only able to get 16 GB of memory on the package and only delivered 720 GB/sec of bandwidth with that on-package HBM with the Tesla P100 card. No one is making promises about the amount of GPU memory or bandwidth coming with Volta, as you can see above.
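Before getting further into the Volta speculation, a few of the numbers above are easy to sanity check. The short Python sketch below reworks the per-socket I/O bandwidth comparison, the NVLink port budget for a six-GPU node, and the peak flops implied by the node count; the eight NVLink 2.0 ports per Volta are the assumption discussed above, not a confirmed spec, and the bandwidth figures are peak numbers that ignore protocol and encoding overheads.

# Quick checks on figures quoted in the node description above.
pcie_duplex = 48 * 2.0 * 2           # 48 PCI-Express 4.0 lanes at ~2 GB/s each way -> ~192 GB/s
bluelink_duplex = 48 * 25 / 8 * 2    # 48 Bluelink lanes at 25 Gb/s each way        -> ~300 GB/s
print(f"Bluelink vs PCI-Express 4.0: {bluelink_duplex / pcie_duplex - 1:.0%} more bandwidth")

ports, gpus, cpus = 8, 6, 2          # assumed NVLink 2.0 ports per GPU; GPUs and CPUs per node
spare = ports - (gpus - 1) - cpus    # full GPU-to-GPU mesh plus one link to each Power9
print(f"Spare NVLink ports per GPU: {spare}")

nodes, node_teraflops = 4600, 43.5   # per the Summit chart shown at SC16
print(f"Cluster peak: ~{nodes * node_teraflops / 1000:.0f} petaflops")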
What Nvidia has said, way back in 2015, is that the Volta GV100 GPUs would deliver relative single precision general matrix multiply (SGEMM) performance of 72 gigaflops per watt, compared to 40 SGEMM gigaflops per watt for the Pascal GP100. If you use that ratio, and then cut it in half for double precision, then a Volta GPU held at a constant 300 watts (the same as the Pascal package) would have a little over 9.5 teraflops of double precision performance, and four of them would deliver 38.2 teraflops of oomph, the vast majority of the more than 40 teraflops of performance expected in the Summit node. Six GPUs at this performance level would deliver a total of 57.2 teraflops just from the GPUs alone, which no one has promised, and that is why we think Nvidia is gearing these Volta GPUs down to maybe 200 watts. If you cut the clocks and therefore the thermals down by 100 watts on each card, you can stay in the same 1,200 watt GPU envelope as four Pascal P100 cards but maybe only cut performance by 20 percent to 25 percent against that 33 percent wattage drop. By moving from four Voltas to six Voltas per node, the HBM memory per node could increase by a lot (100 percent in capacity per card and another 50 percent from having more cards) and the performance per watt and the aggregate performance could be pushed a little further, too. With a geared down Volta card running at 200 watts, you could have a V100 card that delivers 7.6 teraflops at double precision and 38.2 gigaflops per watt, compared to something like 31.8 gigaflops per watt for the faster V100 card we theorized above. For fun, let's call it 50 teraflops per node in Summit.

Each Summit node also has a total of 512 GB of main memory (with 120 GB/sec of bandwidth, according to specs provided by IBM earlier this year), and if Nvidia can reach on the Voltas its original goal of 32 GB of HBM memory per GPU accelerator (the goal it hoped to hit with Pascal), that works out to 192 GB of HBM memory with 6 TB/sec of bandwidth. That is a lot more than the 64 GB of HBM memory and aggregate 2.8 TB/sec of GPU memory bandwidth in the current "Minsky" Power Systems LC precursor to the Summit's Witherspoon node. There is another 800 GB of non-volatile memory in the Summit node, and we are pretty sure it is not Intel's 3D XPoint memory; we would guess it is flash capacity (probably NVM-Express drives) from Seagate Technology, but Oak Ridge has not said. The math works with this scenario: with 512 GB of DDR4 main memory, a total of 192 GB of HBM memory on the GPUs, and 800 GB of flash per node, across 4,600 nodes that is a total of roughly 6.9 PB of aggregate memory. (By the way, that chart has an error. The "Titan" supercomputer has 32 GB of DDR3 memory plus 6 GB of GDDR5 memory per node to reach a total of 693 TB of aggregate memory.)

At that 50 teraflops of performance per node, which we think is doable if the feeds and speeds for Volta work out, that is a 230 petaflops cluster peak, and if the performance of the Volta GPUs can be pushed to an aggregate of 54.5 teraflops per node, then we are talking about crossing through 250 petaflops – a quarter of the way to exascale. And this is also a massive machine that could, in theory, run 4,600 neural network training runs side-by-side for machine learning workloads (we are not saying it will), but at the half precision math used in machine learning, that is above an exaflops of aggregate compute capacity. Maybe Summit is not exactly pre-exascale after all.
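To make that back-of-envelope math explicit, here it is as a small Python sketch. Every input is an estimate or assumption carried over from the discussion above (the SGEMM efficiency ratio, the 300 watt and 200 watt operating points, 32 GB of HBM per GPU), so treat the outputs as rough guesses rather than specs.

# Reproducing the Volta guesswork above. All inputs are estimates, not specs.
pascal_dp_tf = 5.3                     # Tesla P100 double precision at ~300 watts
sgemm_ratio = 72 / 40                  # Volta vs Pascal gigaflops-per-watt claim from 2015
volta_300w_tf = pascal_dp_tf * sgemm_ratio
volta_200w_tf = 7.6                    # the geared-down guess used above

gpus, nodes = 6, 4600
print(f"Volta estimate: ~{volta_300w_tf:.2f} TF at 300 W, {volta_200w_tf} TF geared down to 200 W")
print(f"GPU flops per node: ~{gpus * volta_200w_tf:.1f} TF; "
      f"cluster (GPUs only): ~{gpus * volta_200w_tf * nodes / 1000:.0f} PF")

node_mem_gb = 512 + gpus * 32 + 800    # DDR4 + HBM + non-volatile memory per node
print(f"Memory per node: {node_mem_gb} GB; cluster: ~{node_mem_gb * nodes / 1e6:.1f} PB")

Rounding the GPU-only figure up to the 50 teraflops per node used above is what gets the cluster to the 230 petaflops peak.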
<urn:uuid:9b1b4c30-7bb3-433d-946d-1afb1e42bcf8>
CC-MAIN-2017-04
https://www.nextplatform.com/2016/11/20/details-emerge-summit-power-tesla-ai-supercomputer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95131
3,139
2.671875
3
One of the best things about modern cryptography is the beautiful terminology. You could start any number of punk bands (or Tumblrs) named after cryptography terms like ‘hard-core predicate’, ‘trapdoor function’, ‘ or ‘impossible differential cryptanalysis’. And of course, I haven’t even mentioned the one term that surpasses all of these. That term is ‘zero knowledge‘. In fact, the term ‘zero knowledge’ is so appealing that it leads to problems. People misuse it, assuming that zero knowledge must be synonymous with ‘really, really secure‘. Hence it gets tacked onto all kinds of stuff — like encryption systems and anonymity networks — that really have nothing to do with true zero knowledge protocols. This all serves to underscore a point: zero-knowledge proofs are one of the most powerful tools cryptographers have ever devised. But unfortunately they’re also relatively poorly understood. In this series of posts I’m going try to give a (mostly) non–mathematical description of what ZK proofs are, and what makes them so special. In this post and the next I’ll talk about some of the ZK protocols we actually use. Origins of Zero Knowledge The notion of ‘zero knowledge’ was first proposed in the 1980s by MIT researchers Shafi Goldwasser, Silvio Micali and Charles Rackoff. These researchers were working on problems related to interactive proof systems, theoretical systems where a first party (called a ‘Prover’) exchanges messages with a second party (‘Verifier’) to convince the Verifier that some mathematical statement is true.* Prior to Goldwasser et al., most work in this area focused the soundness of the proof system. That is, it considered the case where a malicious Prover attempts to ‘trick’ a Verifier into believing a false statement. What Goldwasser, Micali and Rackoff did was to turn this problem on its head. Instead of worrying only about the Prover, they asked: what happens if you don’t trust the Verifier? The specific concern they raised was information leakage. Concretely, they asked, how much extra information is the Verifier going to learn during the course of this proof, beyond the mere fact that the statement is true? It’s important to note that this is not simply of theoretical interest. There are real, practical applications where this kind of thing matters. Here’s one: imagine that a real-world client wishes to log into a web server using a password. The standard ‘real world’ approach to this problem involves storing a hashed version of the password on the server. The login can thus be viewed as a sort of ‘proof’ that a given password hash is the output of a hash function on some password — and more to the point, that the client actually knows the password. Most real systems implement this ‘proof’ in the absolute worst possible way. The client simply transmits the original password to the server, which re-computes the password hash and compares it to the stored value. The problem here is obvious: at the conclusion of the protocol, the server has learned my cleartext password. Modern password hygiene therefore involves a good deal of praying that servers aren’t compromised. What Goldwasser, Micali and Rackoff proposed was a new hope for conducting such proofs. If fully realized, zero knowledge proofs would allow us to prove statements like the one above, while provably revealing no information beyond the single bit of information corresponding to ‘this statement is true’. A ‘real world’ example So far this discussion has been pretty abstract. 
To make things a bit more concrete, let's go ahead and give a 'real' example of a (slightly insane) zero knowledge protocol. For the purposes of this example, I'd like you to imagine that I'm a telecom magnate in the process of deploying a new cellular communications network. My network structure is represented by the graph below. Each vertex in this graph represents a cellular radio tower, and the connecting lines (edges) indicate locations where two cells overlap, meaning that their transmissions are likely to interfere with each other. This overlap is problematic, since it means that signals from adjacent towers are likely to scramble reception. Fortunately my network design allows me to configure each tower to one of three different frequency bands to avoid such interference. Thus the challenge in deploying my network is to assign frequency bands to the towers such that no two overlapping cells share the same frequencies. If we use colors to represent the frequency bands, we can quickly work out one solution to the problem:

Of course, many of you will notice that what I'm describing here is simply an instance of the famous theory problem called the graph three-coloring problem. You might also know that what makes this problem interesting is that, for some graphs, it can be quite hard to find a solution, or even to determine if a solution exists. In fact, graph three-coloring — specifically, the decision problem of whether a given graph supports a solution with three colors — is known to be in the complexity class NP-complete. It goes without saying that the toy example above is easy to solve by hand. But what if it wasn't? For example, imagine that my cellular network was very large and complex, so much so that the computing power at my disposal was not sufficient to find a solution. In this instance, it would be desirable to outsource the problem to someone else who has plenty of computing power. For example, I might hire my friends at Google to solve it for me on spec.

But this leads to a problem. Suppose that Google devotes a large percentage of their computing infrastructure to searching for a valid coloring for my graph. I'm certainly not going to pay them until I know that they really have such a coloring. At the same time, Google isn't going to give me a copy of their solution until I've paid up. We'll wind up at an impasse. In real life there's probably a common-sense answer to this dilemma, one that involves lawyers and escrow accounts. But this is not a blog about real life, it's a blog about cryptography. And if you've ever read a crypto paper, you'll understand that the right way to solve this problem is to dream up an absolutely crazy technical solution.

A crazy technical solution (with hats!) The engineers at Google consult with Silvio Micali at MIT, who, in consultation with his colleagues Oded Goldreich and Avi Wigderson, comes up with the following clever protocol — one so elegant that it doesn't even require any computers. All it requires is a large warehouse, lots of crayons, and plenty of paper. Oh yes, and a whole bunch of hats.** Here's how it works. First I will enter the warehouse, cover the floor with paper, and draw a blank representation of my cell network graph. Then I'll exit the warehouse. Google can now enter, shuffle a collection of three crayons to pick a random assignment of the three agreed-upon crayon colors (red/blue/purple, as in the example above), and color in the graph with their solution.
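(For concreteness, "valid" here just means that no edge joins two vertices of the same color, and checking that takes only a couple of lines of Python; the toy graph below is made up for illustration and is not the network in the figures.)

def valid_coloring(edges, coloring):
    # A coloring (frequency assignment) is valid when no edge joins two
    # vertices (towers) that were given the same color (band).
    return all(coloring[a] != coloring[b] for a, b in edges)

edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
print(valid_coloring(edges, {"A": "red", "B": "blue", "C": "purple", "D": "red"}))  # True
print(valid_coloring(edges, {"A": "red", "B": "blue", "C": "red", "D": "blue"}))    # False: C clashes with A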
Note that it doesn’t matter which specific crayons they use, only that the coloring is valid. Before leaving the warehouse, Google covers up each of the vertices with a hat. When I come back in, this is what I’ll see: Obviously this approach protects Google’s secret coloring perfectly. But it doesn’t help me at all. For all I know, Google might have filled in the graph with a random, invalid solution. They might not even have colored the graph at all. To address my valid concerns, Google now gives me an opportunity to ‘challenge’ their solution to the graph coloring. I’m allowed to pick — at random — a single ‘edge’ of this graph (that is, one line between two adjacent hats). Google will then remove the two corresponding hats, revealing a small portion of their solution: Notice that there are two outcomes to my experiment: - If the two revealed vertices are the same color (or aren’t colored in at all!) then I definitely know that Google is lying to me. Clearly I’m not going to pay Google a cent. - If the two revealed vertices are different colors, then Google might not be lying to me. Fortunately Google has an answer to this. We’ll just run the protocol again! We put down fresh paper with a new, blank copy of the graph. Google now picks a new (random) shuffle of the three crayons. Next they fill in the graph with a valid solution, but using the new random ordering of the three colors. The hats go back on. I come back in and repeat the challenge process, picking a new random edge. Once again the logic above applies. Only this time if all goes well, I should now be slightly more confident that Google is telling me the truth. That’s because in order to cheat me, Google would have had to get lucky twice in a row. That can happen — but it happens with relatively lower probability. The chance that Google fools me twice in a row is now (E-1)/E * (E-1)/E (or about 99.8% probability for our 1,000 edge example above). Fortunately we don’t have to stop at two challenges. In fact, we can keep trying this over and over again until I’m confident that Google is probably telling me the truth. Note that I’ll never be perfectly certain that Google is being honest — there’s always going to be a tiny probability that they’re cheating me. But after a large number of iterations (E^2, as it happens) I can eventually raise my confidence to the point where Google can only cheat me with negligible probability — low enough that for all practical purposes it’s not worth worrying about. And then I’ll be able to safely hand Google my money. What you need to believe is that Google is also protected. Even if I try to learn something about their solution by keeping notes between protocol runs, it shouldn’t matter. I’m foiled by Google’s decision to randomize their color choices between each iteration. The limited information I obtain does me no good, and there’s no way for me to link the data I learn between interactions. What makes it ‘zero knowledge’? I’ve claimed to you that this protocol leaks no information about Google’s solution. But don’t let me get away with this! The first rule of modern cryptography is never to trust people who claim such things without proof. Goldwasser, Micali and Rackoff proposed three following properties that every zero-knowledge protocol must satisfy. Stated informally, they are: - Completeness. If Google is telling the truth, then they will eventually convince me (at least with high probability). - Soundness. Google can only convince me if they’re actually telling the truth. 
- Zero-knowledgeness. (Yes it’s really called this.) I don’t learn anything else about Google’s solution. The hard part here is the ‘zero knowledgeness’ property. To do this, we need to conduct a very strange thought experiment. A thought experiment (with time machines) First, let’s start with a crazy hypothetical. Imagine that Google’s engineers aren’t quite as capable as people make them out to be. They work on this problem for weeks and weeks, but they never manage to come up with a solution. With twelve hours to go until showtime, the Googlers get desperate. They decide to trick me into thinking they have a coloring for the graph, even though they don’t. Their idea is to sneak into the GoogleX workshop and borrow Google’s prototype time machine. Initially the plan is to travel backwards a few years and use the extra working time to take another crack at solving the problem. Unfortunately it turns out that, like most Google prototypes, the time machine has some limitations. Most critically: it’s only capable of going backwards in time four and a half minutes. So using the time machine to manufacture more working time is out. But still, it turns out that even this very limited technology can still be used to trick me. |I don’t really know what’s going on here but it seemed apropos. The plan is diabolically simple. Since Google doesn’t actually know a valid coloring for the graph, they’ll simply color the paper with a bunch of random colors, then put the hats on. If by sheer luck, I challenge them on a pair of vertices that happen to be different colors, everyone will heave a sigh of relief and we’ll continue with the protocol. So far so good. Inevitably, though, I’m going to pull off a pair of hats and discover two vertices of the same color. In the normal protocol, Google would now be totally busted. And this is where the time machine comes in. Whenever Google finds themselves in this awkward situation, they simply fix it. That is, a designated Googler pulls a switch, ‘rewinds’ time about four minutes, and the Google team recolors the graph with a completely new random solution. Now they let time roll forward and try again. In effect, the time machine allows Google to ‘repair’ any accidents that happen during their bogus protocol execution, which makes the experience look totally legitimate to me. Since bad challenge results will occur only 1/3 of the time, the expected runtime of the protocol (from Google’s perspective) is only moderately greater than the time it takes to run the honest protocol. From my perspective I don’t even know that the extra time machine trips are happening. This last point is the most important. In fact, from my perspective, being unaware that the time machine is in the picture, the resulting interaction is exactly the same as the real thing. It’s statistically identical. And yet it’s worth pointing out again that in the time machine version, Google has absolutely no information about how to color the graph. What the hell is the point of this? What we’ve just shown is an example of a simulation. Note that in a world where time runs only forward and nobody can trick me with a time machine, the hat-based protocol is correct and sound, meaning that after E^2 rounds I should be convinced (with all but negligible probability) that the graph really is colorable and that Google is putting valid inputs into the protocol. 
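To put numbers on that soundness claim: if Google is cheating, each challenge catches them with probability at least 1/E, so the chance they survive k independent challenges shrinks geometrically. A quick Python check, using the 1,000-edge figure from the example above:

import math

E = 1000                                    # edges, as in the example above

def survive(k: int) -> float:
    # Probability a cheating prover is never caught across k independent challenges.
    return ((E - 1) / E) ** k

print(f"after 2 rounds:   {survive(2):.3%}")     # ~99.8%, the figure quoted above
print(f"after E rounds:   {survive(E):.1%}")     # ~36.8%
print(f"after E*E rounds: about 10^{E * E * math.log10((E - 1) / E):.0f}")   # vanishingly small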
What we’ve just shown is that if time doesn’t run only forward — specifically, if Google can ‘rewind’ my view of time — then they can fake a valid protocol run even if they have no information at all about the actual graph coloring. From my perspective, what’s the difference between the two protocol transcripts? When we consider the statistical distribution of the two, there’s no difference at all. Both convey exactly the same amount of useful information. Believe it or not, this proves something very important. Specifically, assume that I (the Verifier) have some strategy that ‘extracts’ useful information about Google’s coloring after observing an execution of the honest protocol. Then my strategy should work equally well in the case where I’m being fooled with a time machine. The protocol runs are, from my perspective, statistically identical. I physically cannot tell the difference. Thus if the amount of information I can extract is identical in the ‘real experiment’ and the ‘time machine experiment’, yet the amount of information Google puts into the ‘time machine’ experiment is exactly zero — then this implies that even in the real world the protocol must not leak any useful information. Thus it remains only to show that computer scientists have time machines. We do! (It’s a well-kept secret.) Getting rid of the hats (and time machines) Of course we don’t actually want to run a protocol with hats. And even Google (probably?) doesn’t have a literal time machine. To tie things together, we first need to bring our protocol into the digital world. This requires that we construct the digital equivalent of a ‘hat’: something that both hides a digital value, while simultaneously ‘binding’ (or ‘committing’) the maker to it, so she can’t change her mind after the fact. Fortunately we have a perfect tool for this application. It’s called a digital commitment scheme. A commitment scheme allows one party to ‘commit’ to a given message while keeping it secret, and then later ‘open’ the resulting commitment to reveal what’s inside. They can be built out of various ingredients, including (strong) cryptographic hash functions.****** Given a commitment scheme, we now have all the ingredients we need to run the zero knowledge protocol electronically. The Prover first encodes its vertex colorings as a set of digital messages (for example, the numbers 0, 1, 2), then generates digital commitments to each one. These commitments get sent over to the Verifier. When the Verifier challenges on an edge, the Prover simply reveals the opening values for the commitments corresponding to the two vertices. So we’ve managed to eliminate the hats. But how do we prove that this protocol is zero knowledge? Fortunately now that we’re in the digital world, we no longer need a real time machine to prove things about this protocol. A key trick is to specify in our setting that the protocol is not going to be run between two people, but rather between two different computer programs (or, to be more formal, probabilistic Turing machines.) What we can now prove is the following theorem: if you could ever come up with a computer program (for the Verifier) that extracts useful information after participating in a run of the protocol, then it would be possible to use a ‘time machine’ on that program in order to make it extract the same amount of useful information from a ‘fake’ run of the protocol where the Prover doesn’t put in any information to begin with. 
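Here is a sketch of what one round of that electronic protocol might look like, using salted-hash commitments as the digital hats. The graph, the coloring and all of the names here are made up for illustration, and a real implementation would need many rounds plus far more care with encoding and randomness.

import hashlib, os, random

EDGES = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
SOLUTION = {"A": 0, "B": 1, "C": 2, "D": 0}           # a valid three-coloring of the toy graph

def commit(value: int):
    salt = os.urandom(16)
    return hashlib.sha256(salt + bytes([value])).hexdigest(), salt

def prover_commitments():
    shuffle = random.sample(range(3), 3)              # fresh permutation of the three colors
    colored = {v: shuffle[c] for v, c in SOLUTION.items()}
    commitments, openings = {}, {}
    for v, c in colored.items():
        commitments[v], openings[v] = commit(c)
    return commitments, openings, colored

def run_round() -> bool:
    commitments, openings, colored = prover_commitments()
    a, b = random.choice(EDGES)                       # the verifier's random edge challenge
    for v in (a, b):                                  # the prover opens just those two "hats"
        digest = hashlib.sha256(openings[v] + bytes([colored[v]])).hexdigest()
        assert digest == commitments[v], "commitment failed to open"
    return colored[a] != colored[b]                   # the verifier's acceptance check

print(all(run_round() for _ in range(20)))            # True for an honest prover

Each round uses a fresh random permutation of the colors, which is the digital analogue of Google reshuffling the crayons between challenges.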
And since we’re now talking about computer programs, it should be obvious that rewinding time isn’t such an extraordinary feat at all. In fact, we rewind computer programs all the time. For example, consider using virtual machine software with a snapshot capability. |Example of rewinding through VM snapshots. An initial VM is played forward, rewound to an initial snapshot, then execution is forked to a new path. Even if you don’t have fancy virtual machine software, any computer program can be ‘rewound’ to an earlier state, simply by starting the program over again from the beginning and feeding it exactly the same inputs. Provided that the inputs — including all random numbers — are fixed, the program will always follow the same execution path. Thus you can rewind a program just by running it from the start and ‘forking’ its execution when it reaches some desired point. Ultimately what we get is the following theorem. If there exists any Verifier computer program that successfully extracts information by interactively running this protocol with some Prover, then we can simply use the rewinding trick on that program to commit to a random solution, then ‘trick’ the Verifier by rewinding its execution whenever we can’t answer its challenge correctly. The same logic holds as we gave above: if such a Verifier succeeds in extracting information after running the real protocol, then it should be able to extract the same amount of information from the simulated, rewinding-based protocol. But since there’s no information going into the simulated protocol, there’s no information to extract. Thus the information the Verifier can extract must always be zero. Ok, so what does this all mean? So let’s recap. We know that the protocol is complete and sound, based on our analysis above. The soundness argument holds in any situation where we know that nobody is fiddling with time — that is, the Verifier is running normally and nobody is rewinding its execution. At the same time, the protocol is also zero knowledge. To prove this, we showed that any Verifier program that succeeds in extracting information must also be able to extract information from a protocol run where rewinding is used and no information is available in the first place. Which leads to an obvious contradiction, and tells us that the protocol can’t leak information in either situation. There’s an important benefit to all this. Since it’s trivial for anyone to ‘fake’ a protocol transcript, even after Google proves to me that they have a solution, I can’t re-play a recording of the protocol transcript to prove anything to anyone else (say, a judge). That’s because the judge would have no guarantee that the video was recorded honestly, and that I didn’t simply edit in the same way Google might have done using the time machine. This means that protocol transcripts themselves contain no information. The protocol is only meaningful if I myself participated, and I can be sure that it happened in real time. Proofs for all of NP! If you’ve made it this far, I’m pretty sure you’re ready for the big news. Which is that 3-coloring cellphone networks isn’t all that interesting of a problem — at least, not in and of itself. 
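The rewinding argument itself can be acted out in a few lines. The toy simulator below knows no valid coloring at all; it colors at random and simply re-runs (rewinds) any round in which the verifier's random edge challenge would expose it. This is only a cartoon of the real simulation argument, with an invented graph and no commitments.

import random

EDGES = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
VERTICES = sorted({v for e in EDGES for v in e})
COLORS = ["red", "blue", "purple"]

def simulated_round():
    rewinds = 0
    while True:
        coloring = {v: random.choice(COLORS) for v in VERTICES}   # no real solution behind this
        a, b = random.choice(EDGES)                               # the verifier's challenge
        if coloring[a] != coloring[b]:
            return (a, coloring[a], b, coloring[b]), rewinds      # a convincing-looking transcript
        rewinds += 1                                              # caught out: rewind and try again

transcript, rewinds = simulated_round()
print(transcript, f"(after {rewinds} rewind(s))")

The transcripts it produces look just like the real thing, even though nothing about a genuine coloring ever went in.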
In summary, and next time Of course, actually running this protocol for interesting statements would be an insanely silly thing for anyone to do, since the cost of doing so would include the total size of the original statement and witness, plus the reduction cost to convert it into a graph, plus the E^2 protocol rounds you’d have to conduct in order to convince someone that the proof is valid. Theoretically this is ‘efficient’, since the total cost of the proof would be polynomial in the input size, but in practice it would be anything but. So what we’ve shown so far is that such proofs are possible. It remains for us to actually find proofs that are practical enough for real-world use. In the next post I’ll talk about some of those — specifically, the efficient proofs that we use for various useful statements. I’ll give some examples (from real applications) where these things have been used. Also at reader request: I’ll also talk about why I dislike SRP so much. * Formally, the goal of an interactive proof is to convince the Verifier that a particular string belongs to some language. Typically the Prover is very powerful (unbounded), but the Verifier is limited in computation. ** This example is based on the original solution of Goldwasser, Micali and Rackoff, and the teaching example using hats is based on an explanation by Silvio Micali. I take credit only for the silly mistakes. ****** A simple example of a commitment can be built using a hash function. To commit to the value “x” simply generate some (suitably long) string of random numbers, which we’ll call ‘salt’, and output the commitment C = Hash(salt || x). To open the commitment, you simply reveal ‘x’ and ‘salt’. Anyone can check that the original commitment is valid by recomputing the hash. This is secure under some (moderately strong) assumptions about the function itself.
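The footnote's commitment construction is short enough to write out. A minimal sketch, with SHA-256 standing in for the suitably strong hash function and a 32-byte salt; this illustrates the idea and is not vetted cryptographic code.

import hashlib, os

def commit(message: bytes):
    salt = os.urandom(32)                              # the "suitably long" random salt
    return hashlib.sha256(salt + message).hexdigest(), salt

def open_commitment(commitment: str, salt: bytes, message: bytes) -> bool:
    return hashlib.sha256(salt + message).hexdigest() == commitment

c, salt = commit(b"x")
print(open_commitment(c, salt, b"x"))    # True: the commitment opens to the committed value
print(open_commitment(c, salt, b"y"))    # False: it cannot be opened to anything else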
<urn:uuid:57f90573-1cd6-4f32-8c3c-079aeeca2a62>
CC-MAIN-2017-04
https://blog.cryptographyengineering.com/2014/11/27/zero-knowledge-proofs-illustrated-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932168
5,124
2.53125
3
http://news.com.com/2100-1002_3-5055759.html By Robert Lemos Staff Writer, CNET News.com July 25, 2003 A hacker group released code designed to exploit a widespread Windows flaw, paving the way for a major worm attack as soon as this weekend, security researchers warned. The warning came Friday, after hackers from the Chinese X Focus security group forwarded source code to several public security lists. The code is for a program designed to allow an intruder to enter Windows computers. The X Focus program takes advantage of a hole in the Microsoft operating system that lets attackers break in remotely. The flaw has been characterized by some security experts as the most widespread ever found in Windows. "An exploit (program) like this is very easy to turn into a worm," said Marc Maiffret, chief hacking officer for network protection firm eEye Digital Security. "I wouldn't be surprised if we see a worm sooner rather than later." While many security researchers believe the publication of such information can encourage security personnel in businesses to patch holes faster, the release of exploit code has typically preceded the largest worm attacks of the past few years. Maiffret and other security researchers worried that next week's Defcon hacker conference in Las Vegas will act as a catalyst and spur a malicious hacker to create and release such a worm. In January, the Slammer worm spread to corporate networks worldwide, causing databases to go down, bank teller machines to stop working and some airline flights to be canceled. Six months earlier, a researcher had released code that exploited the major Microsoft SQL vulnerability used by the worm to spread. Maiffret is quite familiar with how exploits and explicit details about vulnerabilities can be turned into malicious code. In June 2001, his company released details of another Microsoft flaw, in a component of Web server software. A month later, the flaw became the mechanism by which the Code Red worm spread. Release tension Maiffret, who doesn't support the release of exploit code, points to the X Focus notice as proof that exploits can be created without explicit details in the advisory. Few details were available to hackers and security researchers about the Windows flaw it was based on, but the exploit program was created quickly nevertheless. Jeff Jones, senior director for Microsoft's Trustworthy Computing initiative, took the creators of the code to task, saying that the release of a program to exploit a specific vulnerability doesn't help make companies more secure. "We believe publication of exploit code in cases like this is not good for customers," he said. Jones hinted that Microsoft may attempt to identify the issuer of a worm and to take legal action against the culprit. "While the release of exploits are protected in the United States under the First Amendment, intentional use of that code to cause damage is criminal." Microsoft released details of the exploited vulnerability on July 16. The flaw is in a component of the operating system that allows other computers to request the Windows system perform an action or service. The component, known as the remote procedure call (RPC) process, facilitates such activities such as sharing files and allowing others to use the computer's printer. By sending too much data to the RPC process, an attacker can cause the system to grant full access to the system. 
The Chinese code worked on only three variants of Windows, but could show knowledgeable hackers how to take advantage of the flaw. 'So I fixed it' HD Moore, a security researcher and the founder of the Metasploit Project, has done just that. A well-known hacker and programmer of security code, Moore has taken the Chinese code and improved it. Now the code works for at least seven versions of the operating system, including Windows 2000 Service Pack 0 to Service Pack 4 and Windows XP Service Pack 0 and Service Pack 1. "I don't like broken exploits, so I fixed it," he said. Moore posted his improved code for the program to a Web site hosted on his home network and found an unexpected amount of interest in the program. After other security researchers became aware of the code, Moore's site started receiving 300 to 400 download requests every second, taking down his cable modem connection. He planned to move the site to a hosting provider later this weekend. Moore also believed that the code could easily be turned into a worm. "This is probably the most widespread vulnerability that lets you get remote root," he said. "It's almost guaranteed to be turned into a worm." Remote root is a security term for the ability to take control of a computer over the Internet. The prospect has financial companies worried, said another security researcher, who asked not to be named. The companies have had only two weeks to evaluate the Microsoft patch and apply it--an impossible task for chronically overworked network administrators. "It's a huge problem, because they haven't had time," said the researcher. "It takes weeks to remediate a whole Class-B (about 65,000 addresses) network." And even companies that have patched all the flaws and taken prescriptive measures to harden their firewalls have to be sure they haven't missed anything, said eEye's Maiffret. "This is going to be something like the SQL Slammer worm," he said. "It won't affect the outside networks (such as the Internet); it's going to affect the inside networks. All it takes is one server to get infected. You think it (was) bad when your database servers went down. This will take those servers and every other computer down as well." He has advice: Patch quickly and disable the vulnerable service. - ISN is currently hosted by Attrition.org To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail. This archive was generated by hypermail 2b30 : Mon Jul 28 2003 - 06:27:22 PDT
<urn:uuid:67a741ea-507f-4ccf-9f26-8bc0c592bdd5>
CC-MAIN-2017-04
http://lists.jammed.com/ISN/2003/07/0123.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00040-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966851
1,187
2.6875
3
The Automation of Social Engineering We’ve talked extensively about the automation and industrialization of hacking has changed the face of cyber crime. With the advent of social networking, hackers turned to sites like Facebook to create another attack channel. However, these attacks were typically manual, such as uploading malware or creating fake pictures of a dead Osama Bin Laden. Social engineering may now be entering the next phase: automation. Recently, a new tool emerged which automates social engineering on Facebook. Unlike hacking software, this tool doesn’t demonstrate any new theoretical security vulnerability. However, the automation of the social engineering process may have significant practical security implications as it can be launched by every script kiddie. The attack package is hosted on code.google.com: http://code.google.com/p/fbpwn/ What does the software do? It sends friend requests to a list of Facebook profiles, and polls for the acceptance notification. Once the victim accepts the invitation, it dumps all their information, photos and friend list to a local folder. In other words, it automates the process of friending, sees who accepted and then collects all personal information in your profile as well as photos. How does it work? The guide explains (with spelling and grammar errors preserved): A typical scenario is to gather the information from a user profile. The plugins are just a series of normal operations on FB, automated to increase the chance of you getting the info. Typically, first you create a new blank account for the purpose of the test. Then, the friending plugin works first, by adding all the friends of the victim (to have some common friends). Then the clonning plugin asks you to choose one of the victims friends. The cloning plugin clones only the display picture and the display name of the chosen friend of victim and set it to the authenticated account. Afterwards, a friend request is sent to the victim's account. The dumper polls waiting for the friend to accept. As soon as the victim accepts the friend request, the dumper starts to save all accessable HTML pages (info, images, tags, ...etc) for offline examining. After a a few minutes, probably the victim will unfriend the fake account after he/she figures out it's a fake, but probably it's too late!" The cloning plug-in is a critical part how this works: It means that the victim may get a friend request from a real friend name and picture – so one would accept it with no hesitation. Cloning is virtual identity theft – if two profiles are exactly the same, which one is real? Who developed it? Employees of a security company from Egypt. They (uselessly) caution, “This project is a PoC. Use it on your own risk and please do not abuse!” Why did they develop it and then release it publicly? Here’s what they stated on their website (with grammar and spelling errors preserved): On behalf of Ahmed Saafan (project owner and administrator) I have taken a significant amount of time thinking about releasing the program or not for the same reasons that everybody is discussing, abuse. However, I came to the conclusion that we should release it in the old “Full disclosure” way. My main goals for the release are: User awareness for what is happening already in the wild but in a covert way: I already have seen many cases of innocent people being socially engineered and blackmailed because they do not know the implications of their actions online. 
This tool should make the people aware of the implications of their actions online. Accepting friend requests for even the smallest period of time without manually verifying that the friend is actually who he claims to be, is an example of wrong actions that we wanted to demonstrate. I have tried telling as many social media entities as possible about our PoC so that people get to know the risks as fast as possible and start being more careful about what they do online. Also, with the code being online, we tried to send a message of good intention; that we are not hiding anything within the binary code and that we don’t want any compensation. Facebook attention to their flawed user verification process: From Facebook’s perspective, I think Facebook should have a more strict policy for verifying that people are who they claim to be, and filter out fake or impersonating accounts. I know that this contradicts with usability in a great way, but Facebook should figure out a way to do it. The tool demonstrates the risks that are already out there for user impersonation. I believe without fake accounts on Facebook, people wouldn’t risk their own account to be used in cons, or at least it the numbers will be reduced significantly. Also, we have seen a very successful example of full disclosure, i.e. Firesheep. I think Firesheep has achieved in a very short time a significant amount of user awareness and got the people’s attention to the importance of SSL without being abused (to a great extent). However, now, non-technical users think as long as they have SSL enabled they are safe. So the tool is just another step into having –hopefully- a more secure cyber social network. In fairness, it was a matter of time before someone else developed a similar tool—but security professionals shouldn’t be facilitators. Not surprisingly, to date there have been around 5,000 downloads since it was made public a week ago: And here’s the GUI: What can an attacker gain from such attack? The attacker gains access to all data the victim exposes to the world, i.e., it steals a virtual identity. - The data itself may be valuable and have value on the black market. For example, there is an active market for suggestive photos of scantily clad females. - The attacker now can impersonate the victim. For example: - Give job recommendations over Linkedin. - Provide a bridgehead for further social engineering. - Ask your IT admin (over FB –since you are friends now!) “I can't login to something, can you reset may password?” - Defraud or relatives with money scams: "I'm stuck in Vegas with no money." - Facebook’s security team takes some action to make their platform immune to this attack by, for example, applying anti-automation measures. - Facebook will help make consumers aware as this problem as it will likely proliferate. - Consumers of social networks should: - Never approve friend request from people you don't know. - Be cautious when accepting friend requests: - Verify he/she is not already in your friends list, since if they were, your friend profile was probably cloned. - Look at the friendship applicant profile before accepting the request. Find out if he\she already is a friend with a mutual friend and be alert if they are not. - Look for "old" data – cloned profile cannot clone the history. So dated posts to the wall, pictures, etc may serve as evidence of fraud. - You may want to use another medium to verify that the request is genuine: email, phone, etc. 
Social engineering’s appeal is growing rapidly within the hacking community—which shouldn’t surprise anyone. While software vulnerabilities can be fixed or patched, human vulnerability is here to stay.
<urn:uuid:d4923719-7ecb-4e41-8665-2df0ae42180f>
CC-MAIN-2017-04
http://blog.imperva.com/2011/09/the-automation-of-social-engineering.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947192
1,533
2.765625
3
Scientists at Stellenbosch University (SU) in South Africa have completed the first sequencing of the human genome on the African continent, with the help of a 5500xl Next Generation Sequencer nicknamed MegaMind. This achievement was featured in a recent issue of Quest: Science for South Africa, a popular science magazine published each quarter by the Academy of Science for South Africa.

In 2011, Stellenbosch University acquired MegaMind with a grant from the National Research Foundation. The sequencer was deployed in the DNA Sequencing Unit, part of the university's Central Analytical Facilities (CAF). Earlier this year, scientists there sequenced a human genome, marking the first time that this has been done on the African continent. As the world celebrates the 60th anniversary of the discovery of the DNA double helix, the sequencing of large genomes, like that of humans, still requires "highly sophisticated instruments, technical expertise and supercomputing power," notes Stellenbosch University science writer Wiida Fourie-Basson. Until now, the African continent has not had access to all of the resources necessary to take on this challenge.

Sequencing of the first complete human genome under the Human Genome Project took place over a 13-year period at a cost of $3 billion. The ground-breaking effort, requiring the work of over one thousand scientists, culminated in 2001 with the first draft of the genome being published in two separate papers in Nature and Science. The project was declared complete in April 2003, and with that the world entered the genome era. In 2005, next-generation sequencing (aka next-gen) took sequencing to a new level. The pace of progress advanced to the point where it was possible to sequence the whole human genome in a matter of weeks, not years. Cost followed a similar trajectory, falling from over a billion dollars to less than $100,000. Now, genomes are routinely sequenced for around $6,000, and the "thousand-dollar genome" is rapidly approaching. In the last five to ten years, numerous countries have successfully sequenced the human genome, but Africa had only been involved in sequencing smaller genomes, such as those of bacteria and fungi. "Several resources had to be in place first," writes Fourie-Basson, returning to the themes of "technical expertise, sophisticated instruments and supercomputing power."

Once MegaMind was in place, the DNA Sequencing Unit at Stellenbosch University's Central Analytical Facilities was able to begin the complex process: preparing the samples, loading the DNA fragments onto a special glass flow cell, and then beginning the actual sequencing run. During sequencing, MegaMind deploys special primers containing fluorescent probes. These primers attach to the DNA fragments and emit a fluorescent signal when excited by a laser. "Millions of these fluorescent data points are collected by a microscope lens and interpreted by software, similar to the way the Hubble telescope will interpret data from the stars," explains the author. The points are eventually consolidated into a single data file used for downstream analysis. Analyzing this data requires some serious hardware: "For this purpose, Stellenbosch University acquired a high-performance computing cluster with over one terabyte of RAM, more than 200 processors and over 60 terabytes of storage." MegaMind completed the sequencing run in just two weeks.
According to Carel van Heerden, manager of the DNA Sequencing Unit, the sequencing work itself is pretty straightforward, but "it must just be done correctly." "It is like playing a piano concerto," says van Heerden. "You have to read the notes from the paper and do what it says. But just as it is with playing the piano, you have to practise until you get it right."
<urn:uuid:abe07211-e5d3-4cd9-a3ab-c1fa4c559552>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/12/12/first-human-dna-sequenced-africa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936617
811
3.34375
3
Aug 31, 2012 -- Global supercomputer leader Cray Inc. (NASDAQ: CRAY) today announced that the Consortium for Advanced Research on Transport of Hydrocarbon in the Environment (CARTHE), in collaboration with the University of Miami Center for Computational Science (CCS), will acquire a Cray XE6m supercomputer as part of the organization's goal to develop and improve computational tools to accurately predict the fate of hydrocarbons released into the environment during normal and hurricane weather conditions. "This supercomputer is more important than ever to our project in light of Hurricane Isaac cutting directly through our on-going experiment in the Gulf of Mexico. Data collected during the hurricane may help shed light on how pollutants behave should an oil spill occur before or during a major weather event like Hurricane Isaac," said Tamay Özgökmen, CARTHE director. An animated movie of Hurricane Isaac going through the experiment can be seen at http://laplace.ceoe.udel.edu/GLAD/DRIFTERS/GLAD_movie.gif. Özgökmen added, "We have some challenging goals ahead of us as we produce comprehensive modeling hierarchy that provides a four dimensional description of oil/dispersant fate and transport in the Gulf of Mexico and coastal environments across all relevant time and space scales, and in multiple weather conditions. High performance computing is a critical element of our research, and we needed a system that has the performance, usability and demonstrated capabilities that will allow us to start our work now. The Cray XE6m is a great fit for us." CARTHE is funded by the Gulf of Mexico Research Initiative (GoMRI), which is a 10-year, $500 million independent research program that was established to study the effects of the Deepwater Horizon incident. GoMRI investigates the impacts of oil, dispersed oil, and dispersant on the ecosystems of the Gulf of Mexico and affected coastal States in a broad context of improving fundamental understanding of the dynamics of such events and their environmental stresses and public health implications. The Cray XE6m supercomputer, which will be located at the University of Miami's Rosentiel School of Marine & Atmospheric Science (RSMAS), will be an important computational resource for a CARTHE program that is studying the surface ocean currents that transport pollutants in real time. "The Cray XE6m is quite unique and much like a very tightly knit computational ecosystem," said Nick Tsinoremas, CCS director. "It is likely the very best solution for problems of this type today." CCS staff oversaw real-time data management from the information collected from 300 drifting buoys this summer that occurred in five-minute intervals, and they will continue to provide logistical support to scientists as the data are analyzed throughout the project. "The Cray supercomputer not only provides impressive computing power, but it represents an entirely new form of computing for many principal investigators whose problems fit into the same sort of paradigm as the CARTHE project," said Joel Zysman, CCS director of high-performance computing. "With the system scheduled to be up and running in approximately nine weeks, we have a wonderful new tool for these researchers." "The scientists participating in the CARTHE program are performing some vitally important research, and we are honored that a Cray supercomputer will provide the high performance computing resources that are necessary for their studies," said Per Nyberg, Cray's director of business development. 
"Many of the world's leading weather, climate and oceanography centers run their simulations on Cray supercomputers, and we are pleased that CARTHE has joined our growing list of customers in this segment." The Cray XE6m system includes the same petascale technologies found in high-end Cray supercomputers, such as Cray's Gemini interconnect, the Cray Linux Environment and powerful AMD Opteron processors. The system is designed to maintain an attractive cost of ownership and extend Cray's presence in market segments that have needs for technical enterprise supercomputing systems, such as the university, manufacturing, weather and life sciences communities. Fully upgradeable from previous generations of Cray supercomputers, the Cray XE6m system is also designed to give customers the ability to upgrade to future Cray systems and technologies. CARTHE comprises 26 principal investigators from 12 universities and research institutions distributed across four Gulf of Mexico states and four other states. It fuses into one group of investigators with scientific and technical knowledge and publications related to oil fate/transport processes, oceanic and atmospheric turbulence, air-sea interactions, tropical cyclones and winter storms, and coastal and nearshore modeling and observations. Visit http://www.carthe.org/ for more information. The University of Miami Center for Computational Science (CCS) was created to catalyze transdisciplinary research in science and engineering with software, hardware and expertise to address complex problems of the 21st century and beyond. CCS provides a framework for promoting collaborative and multidisciplinary activities with partners within the university and around the world. With eight focus areas, it strives for excellence in research, teaching, and service covering the fundamental, as well as applied aspects, of computational science. About Cray Inc. As a global leader in supercomputing, Cray provides highly advanced supercomputers and world-class services and support to government, industry and academia. Cray technology is designed to enable scientists and engineers to achieve remarkable breakthroughs by accelerating performance, improving efficiency and extending the capabilities of their most demanding applications. Cray's Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to surpass today's limitations and meeting the market's continued demand for realized performance. Go to http://www.cray.com/ for more information. This press release contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934 and Section 27A of the Securities Act of 1933, including, but not limited to, statements related to Cray's ability to deliver the system required by CARTHE when required and that meets CARTHE's needs. These statements involve current expectations, forecasts of future events and other statements that are not historical facts. Inaccurate assumptions and known and unknown risks and uncertainties can affect the accuracy of forward-looking statements and cause actual results to differ materially from those anticipated by these forward-looking statements. 
Factors that could affect actual future events or results include, but are not limited to, the risk that the system required by CARTHE is not delivered in a timely fashion or does not perform as expected and such other risks as identified in the Company's quarterly report on Form 10-Q for the quarter ended June 30, 2012, and from time to time in other reports filed by Cray with the U.S. Securities and Exchange Commission. You should not rely unduly on these forward-looking statements, which apply only as of the date of this release. Cray undertakes no duty to publicly announce or report revisions to these statements as new information becomes available that may change the Company's expectations. Cray is a registered trademark of Cray Inc. in the United States and other countries, and Cray XE6m and Cray Linux Environment are trademarks of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners.
<urn:uuid:10603f77-945d-4ab7-9e66-e81412e2206d>
CC-MAIN-2017-04
http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=1730535
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00068-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938365
1,519
2.921875
3
Backup and Recovery Who uses backup and recovery, and why? Enterprises of all sizes rely on backup and recovery to maintain business continuity. In the event that their business data is lost, they can get back up and running by restoring data from a backup copy. Government regulations require companies to store and archive electronic records without alteration. A backup and recovery process also enables enterprises to maintain legal and regulatory compliance. How backup and recovery works Companies try to back up their data on a regular schedule such as once every 24 hours. At these times they create one or more duplicate or deduplicated copies of the primary data and write it to a new disk or to a tape. For disaster recovery purposes, a backup copy needs to be transported or replicated offsite to ensure the data is safe in the event of a disaster. Benefits of backup and recovery Backup and recovery enables companies to protect and preserve their information. Information protection is critical to a company's day-to-day operations. In the digital age, information is one of the most important assets a company owns, and having an efficient and manageable backup and recovery strategy has become an IT imperative.
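As a rough illustration of the backup step described above, a minimal script might make a timestamped copy of a directory and record a checksum so that a later restore can be verified. The paths below are placeholders, and a real deployment would add scheduling, offsite replication and regular restore testing.

import hashlib, shutil, time
from pathlib import Path

def back_up(source: str, backup_root: str) -> Path:
    """Create a timestamped compressed copy of `source` plus a checksum file."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(Path(backup_root) / f"backup-{stamp}"),
                                  "gztar", root_dir=source)
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    Path(archive + ".sha256").write_text(digest + "\n")
    return Path(archive)

# Hypothetical usage; point the paths at real locations before running:
# copy = back_up("/srv/app-data", "/mnt/offsite-backups")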
All of the Server Express SQL preprocessors make use of the SQL Communications Area (SQLCA) and the SQL Descriptor Area (SQLDA) data structures.

After each embedded SQL statement is executed, error and status information is returned in the SQL Communications Area (SQLCA). The SQLCA data structure is shown below:

01 SQLCA.
   03 SQLCAID  PIC X(8) VALUE "SQLCA".
   03 SQLCABC  PIC S9(9) COMP-5 VALUE 136.
   03 SQLCODE  PIC S9(9) COMP-5 VALUE 0.
   03 SQLERRM.
      49 SQLERRML PIC S9(4) COMP-5.
      49 SQLERRMC PIC X(70).
   03 SQLERRP  PIC X(8).
   03 SQLERRD  PIC S9(9) COMP-5 OCCURS 6 VALUE 0.
   03 SQLWARN.
      05 SQLWARN0 PIC X.
      05 SQLWARN1 PIC X.
      05 SQLWARN2 PIC X.
      05 SQLWARN3 PIC X.
      05 SQLWARN4 PIC X.
      05 SQLWARN5 PIC X.
      05 SQLWARN6 PIC X.
      05 SQLWARN7 PIC X.
   03 FILLER   PIC X(3).
   03 SQLSTATE PIC X(5).

Oracle, Sybase and Informix all have different versions of the SQLCA. The SQLCA presented above is for OpenESQL and the DB2 ECM. The Oracle, Sybase and Informix SQLCAs all have a SQLCODE, SQLERRML, SQLERRMC and a SQLWARN, but the sizes and positions of these fields can differ between the precompilers.

The fields of the SQLCA data structure are as follows:

SQLCAID - The text string "SQLCA".
SQLCABC - The length of the SQLCA data structure.
SQLCODE - The status code for the last-run SQL statement.
SQLERRML - The length of the error message in SQLERRMC (0 through 70).
SQLERRMC - Error message text. Error messages longer than 70 bytes are truncated.
SQLERRP - Reserved (diagnostic information).
SQLERRD - An array of six integer status codes (those not listed below are reserved):
   SQLERRD(1) - The native error code returned by the database.
   SQLERRD(2) - The severity of the error returned by the database. Severity levels are different on different database systems.
   SQLERRD(3) - The number of rows affected.
SQLWARN - Eight warning flags, each containing a blank or "W" (those not listed below are reserved). A warning flag will be set if SQLCODE contains a value of +1:
   SQLWARN0 - A summary of all warning fields. Blank means there are no warnings.
   SQLWARN1 - "W" indicates that data was truncated on output to a character host variable.
   SQLWARN2 - "W" indicates that a null value exists, but no indicator variable was provided.
   SQLWARN3 - "W" indicates that the number of columns is less than the number of host variables or that the number of host variables provided does not match the number of parameter markers in the statement. The lower of the two numbers is used.
   SQLWARN4 - "W" indicates a singleton select that returns more than one row (only the first row is returned).
SQLSTATE - Status indicator for the last-run SQL statement.

The SQLCA contains two status variables, plus a number of warning flags, which are used to indicate whether an error has occurred in the most recently executed SQL statement. SQLSTATE is a separate data item. For the currently supported versions of Oracle and Sybase, the SQLCA should be used in preference to the SQLSTATE variable. The SQLSTATE variable will eventually supersede the SQLCA as the preferred method of passing status information between the database and the client application, but this is not yet the case.

Testing the value of SQLCODE is the most common way of determining the success or failure of an embedded SQL statement. The possible values for SQLCODE are:

0 - The statement ran without error.
1 - The statement ran, but a warning was generated. The SQLWARN flags should be checked to determine the type of warning.
100 - Data matching the query was not found or the end of the results set has been reached. No rows were processed.
< 0 (negative) - The statement did not run due to an application, database, system, or network error.

If you are using OpenESQL, the following error codes are defined for SQLCODE.
- SQL(INIT) was used, and automatic CONNECT failed. Programs which use SQL(INIT) need to check SQLCODE immediately on startup.
- ODBC driver or database specific error message. Check the contents of SQLERRMC to determine what happened.
- Unable to retrieve ODBC error information.
- An ODBC error occurred, but no more details are available. This usually indicates a serious run-time condition.
- Invalid ODBC catalog query. This is caused by invalid parameters to a QUERY ODBC statement.
- Statement too long.
- ESQL keyword(s) detected in PREPARE/EXECUTE IMMEDIATE statement.
- Too few host variables.
- Data overflow occurred during decimal data conversion.
- NULL value returned but no indicator variable supplied.
- No cursor declared.
- Cursor is not prepared.
- -19701: NULL connection name; connection name not found.
- -19702: Connection name not found; attempt to close a non-existent connection.
  These two errors (-19701 and -19702) occur when a program refers to a connection which does not exist. The most likely cause is attempting to execute an Embedded SQL statement before a CONNECT has executed successfully, or after all connections have been closed.
- Could not make connection.
- Duplicate connection name.
- Improperly initialized user SQLDA.
- Statement text not found or empty.
- Unimplemented embedded SQL feature. The COBOL compiler may accept some Embedded SQL syntax which is not yet supported by the OpenESQL run-time module. If an attempt is made to execute such a statement, this condition will result.

COBSQL and DB2

For COBSQL and DB2, it is possible to get other positive values. This means that the SQL statement has executed but produced a warning. The SQLWARN flags should be checked to determine the type of warning. For Oracle, Sybase and Informix, SQLWARN0 will always be set when the database server has sent a warning back to the application.

The SQLSTATE variable was introduced in the SQL-92 standard and is the recommended mechanism for future applications. It is divided into two components: a class code and a subclass code. Any class code that begins with the letters A through H or the digits 0 through 4 indicates a SQLSTATE value that is defined by the SQL standard or another standard. A value of "00000" indicates that the previous embedded SQL statement executed successfully. For specific details of the values returned in SQLSTATE when using Oracle, Sybase or Informix, refer to the relevant Database Error Messages manual.
Full details of the conditions reported through SQLSTATE are given below:

- Privilege not revoked
- Invalid connection string attribute
- Error in row
- Option value changed
- No rows updated or deleted
- More than one row updated or deleted
- Cancel treated as SQLFreeStmt with the SQL_CLOSE option
- Attempt to fetch before the result set returned the first rowset
- Wrong number of parameters
- Restricted data type attribute violation
- Invalid use of default parameter
- Unable to connect to data source
- Connection in use
- Connection not open
- Data source rejected establishment of connection
- Connection failure during transaction
- Communication link failure
- Insert value list does not match column list
- Degree of derived table does not match column list
- String data right truncation
- Indicator variable required but not supplied
- Numeric value out of range
- Error in assignment
- Datetime field overflow
- Division by zero
- String data, length mismatch
- Integrity constraint violation
- Invalid cursor state
- Invalid transaction state
- Invalid authorization specification
- Invalid cursor name
- Syntax error or access violation
- Duplicate cursor name
- Syntax error or access violation
- Driver does not support this function
- Data source name not found and no default driver
- Specified driver could not be loaded
- Driver's SQLAllocEnv failed
- Driver's SQLAllocConnect failed
- Driver's SQLSetConnect-Option failed
- No data source or driver specified; dialog prohibited
- Unable to load translation .dll file
- Data source name too long
- Driver name too long
- DRIVER keyword syntax error
- Trace file error
- Base table or view already exists
- Base table not found
- Index already exists
- Index not found
- Column already exists
- Column not found
- No default for column
- Memory allocation failure
- Invalid column number
- Program type out of range
- SQL data type out of range
- Invalid argument value
- Function sequence error
- Operation invalid at this time
- Invalid transaction operation code specified
- No cursor name available
- Invalid string or buffer length
- Descriptor type out of range
- Option type out of range
- Invalid parameter number
- Function type out of range
- Information type out of range
- Column type out of range
- Scope type out of range
- Nullable type out of range
- Uniqueness option type out of range
- Accuracy option type out of range
- Direction option out of range
- Invalid parameter type
- Fetch type out of range
- Row value out of range
- Concurrency option out of range
- Invalid cursor position
- Invalid driver completion
- Invalid bookmark value
- Driver not capable

DB2 Universal Database returns SQL-92 compliant SQLSTATE values. DB2 Version 2.1 does not.

Some statements may cause warnings to be generated. To determine the type of warning, your application should examine the contents of the SQLWARN flags. Each flag contains either a blank or "W". Each SQLWARN flag has a specific meaning. For more information on the meaning of the SQLWARN flags, refer to the section SQL Communications Area.

To check explicitly the value of SQLCODE or SQLSTATE after each embedded SQL statement can involve writing a lot of code. As an alternative, check the status of the SQL statement by using a WHENEVER statement in your application. The WHENEVER statement is not an executable statement; it is a directive to the Compiler to generate automatically the code that handles errors after each executable embedded SQL statement.
The WHENEVER statement allows one of three default actions (CONTINUE, GOTO or PERFORM) to be registered for each of the following conditions:

Condition - Value of sqlcode
SQLERROR - < 0 (negative)
SQLWARNING - +1
NOT FOUND - 100

A WHENEVER statement for a particular condition replaces all previous WHENEVER statements for that condition. The scope of a WHENEVER statement is related to its physical position in the source program, not its logical position in the run sequence. For example, in the following code, if the first SELECT statement does not return anything, paragraph A is performed, not paragraph C:

EXEC SQL
    WHENEVER NOT FOUND PERFORM A
END-EXEC.
PERFORM B.
EXEC SQL
    SELECT col1 into :host-var1
    FROM table1
    WHERE col2 = :host-var2
END-EXEC.
A.
    DISPLAY "First item not found".
B.
    EXEC SQL
        WHENEVER NOT FOUND PERFORM C
    END-EXEC.
C.
    DISPLAY "Second item not found".

For Oracle, Sybase and Informix, setting SQLWARN0 to "W" triggers the SQLWARNING clause. When no data is returned from a SELECT or FETCH statement, the condition NOT FOUND is triggered, regardless of the setting of the Oracle precompiler directive MODE. Informix allows you to perform a STOP or a CALL from within a WHENEVER statement. These are additions to the ANSI standard and are documented in the Informix ESQL/COBOL programmer's manual.

The SQLERRM data area is used to pass error messages to the application from the database server. The SQLERRM data area is split into two parts: SQLERRML holds the length of the error message, and SQLERRMC holds the error text. Within an error routine, the following code can be used to display the SQL error message:

IF (SQLERRML > ZERO) and (SQLERRML < 80)
    DISPLAY 'Error Message: ', SQLERRMC(1:SQLERRML)
ELSE
    DISPLAY 'Error Message: ', SQLERRMC
END-IF.

The SQLERRD data area is an array of six integer status codes. Oracle, Sybase and Informix may set one (or more) of the six values within the SQLERRD array. These indicate how many rows were affected by the SQL statement just executed. For example, SQLERRD(3) holds the total number of rows returned by a SELECT or a series of FETCH statements. The third element of SQLERRD in the SQLCA records the number of rows processed for INSERT, UPDATE, DELETE and SELECT INTO statements; for FETCH statements, it records the cumulative sum of rows fetched. For DB2, SQLERRD(3), SQLERRD(4) and SQLERRD(5) contain additional DB2-specific information.

The SQLDA is unique to each precompiler: the Oracle SQLDA is not compatible with that used by Sybase, OpenESQL or DB2, and vice versa. When either the number of parameters to be passed or their data types are unknown at compilation time, you can use an SQL Descriptor Area (SQLDA) instead of host variables. An SQLDA contains descriptive information about each input parameter or output column: the column name, data type, length, and a pointer to the actual data buffer for each input or output parameter. An SQLDA is ordinarily used with parameter markers to specify input values for prepared SQL statements, but you can also use an SQLDA with the DESCRIBE statement (or the INTO option of a PREPARE statement) to receive data from a prepared SELECT statement. Although you cannot use an SQLDA with static SQL statements, you can use an SQLDA with a cursor FETCH statement.

The fields of the SQLDA data structure are as follows:

SQLDAID - The text string "SQLDA".
SQLDABC - Length of the SQLDA data structure (SQLN * 44 + 16).
SQLN - Total number of SQLVAR entries allocated, equal to the number of input parameters or output columns.
SQLD - Number of SQLVAR entries used.
SQLVAR - A group item, the number of occurrences of which depends on the value of SQLD.
SQLTYPE - A number representing the data type of the column or host variable and indicating whether null values are allowed (see the table below for the values).
SQLLEN - Length of a value from a column. If the data is decimal (including money), SQLLEN is split into two parts: the first byte contains the precision; the second byte contains the scale.
SQLDATA - For FETCH, OPEN, and EXECUTE, the address of the host variable (must be inserted by the application). For DESCRIBE and PREPARE, SQLDATA is not used.
SQLIND - For FETCH, OPEN, and EXECUTE, the address of an associated indicator variable, if one exists. If the column does not permit a null value, the field is undefined. If the column permits a null value, SQLIND is set to -1 if the data value is null or to 0 if the data value is not null. For DESCRIBE and PREPARE, SQLIND is not used.
SQLNAME - A group item containing the name and length of the column (not used for FETCH, OPEN or EXECUTE).
SQLNAMEL - Length of the column name.
SQLNAMEC - Name of the column. For a derived column, this field contains the ASCII numeric literal value that represents the derived column's original position within the select list.

For Oracle, Sybase and Informix, the SQLDA is only required if your program uses dynamic SQL. Oracle, Sybase and Informix do not allow the SQLDA to be included in your program using the following syntax statement:

EXEC SQL INCLUDE SQLDA END-EXEC

For Oracle, Sybase and Informix, the SQLDA must be defined as a standard COBOL copyfile.

Oracle provides an extra copyfile, ORACA, for use with dynamic SQL. This can be included in your program using the following syntax:

EXEC SQL INCLUDE ORACA END-EXEC

You must set the Oracle precompiler option ORACA=YES before you can use the ORACA copyfile. For more information on setting Oracle precompiler options, refer to the Programmer's Guide to the Oracle Precompilers. Oracle does not supply an SQLDA copyfile. For a clearer explanation of this and the ORACA copyfile, refer to the Programmer's Guide to the Oracle Precompilers.

Sybase does not supply an SQLDA copyfile. The Sybase precompiler documentation describes the layout of the SQLDA and how to assign values to the various items within it. The documentation also describes how to get Sybase to convert between COBOL and Sybase data types.

Informix does not supply an SQLDA copyfile. The Informix precompiler documentation describes the layout of the data items that need to be defined to be able to use dynamic SQL with Informix.

The SQLDA structure is supplied in the file sqlda.cpy in the source directory under your Server Express base installation directory. You can include it in your COBOL program by adding the following statement to your Data Division:

EXEC SQL INCLUDE SQLDA END-EXEC

The SQLDA data structure is shown below:

01 SQLDA sync.
   05 SQLDAID PIC X(8) VALUE "SQLDA ".
   05 SQLDABC PIC S9(9) COMP-5 value 0.
   05 SQLN    PIC S9(4) COMP-5 value 0.
   05 SQLD    PIC S9(9) COMP-5 value 0.
   05 SQLVAR OCCURS 0 to 1489 TIMES DEPENDING ON SQLN.
      10 SQLTYPE PIC S9(4) COMP-5.
      10 SQLLEN  PIC S9(4) COMP-5.
      10 SQLDATA USAGE POINTER.
      10 SQLIND  USAGE POINTER.
      10 SQLNAME.
         15 SQLNAMEL PIC S9(4) COMP-5.
         15 SQLNAMEC PIC X(30).

Odd-numbered code values indicate that null values are allowed. In the table below:

(1) These types can be returned in COBOL by a PREPARE INTO or DESCRIBE statement.
(2) These types can be set by an application using Dynamic SQL.
(3) These types are supported for COBOL host variables.
SQL Data Type - COBOL Data Type:
- 10-byte date string
- 8-byte time string
- 26-byte timestamp string
- Large variable length binary: 49 LEN PIC S9(9) COMP-5, 49 VAL PIC X(n)
- Large variable length character: 49 LEN PIC S9(9) COMP-5, 49 VAL PIC X(n)
- Variable length binary: 49 LEN PIC S9(4) COMP-5, 49 VAL PIC X(n)
- Variable length character: 49 LEN PIC S9(4) COMP-5, 49 VAL PIC X(n)
- 8-byte floating point
- float or double
- 4-byte floating point
- decimal, numeric or bigint
- PIC S9(9) COMP-5
- PIC S9(4) COMP-5
- PIC S9(4) COMP-5

Before an SQLDA structure is used, your application must initialize the following fields:

SQLN - This must be set to the maximum number of SQLVAR entries that the structure can hold.
SQLDABC - The maximum size of the SQLDA. This is calculated as SQLN * 44 + 16.

You can use the DESCRIBE statement (or the PREPARE statement with the INTO option) to enter the column name, data type, and other data into the appropriate fields of the SQLDA structure. Before the statement is executed, the SQLN and SQLDABC fields should be initialized as described above. After the statement has been executed, the SQLD field will contain the number of parameters in the prepared statement. A SQLVAR record is set up for each of the parameters, with the SQLTYPE and SQLLEN fields completed.

If you do not know how big the value of SQLN should be, you can issue a DESCRIBE statement with SQLN set to 1 and SQLD set to 0. No column detail information is moved into the SQLDA structure, but the number of columns in the results set is inserted into SQLD.

Before performing a FETCH statement using an SQLDA structure, note that the data type field (SQLTYPE) and length (SQLLEN) are filled with information from a PREPARE INTO or a DESCRIBE statement. These values can be overwritten by the application prior to a FETCH statement.

To use an SQLDA structure to specify input data to an OPEN or EXECUTE statement, your application must supply the data for the fields of the entire SQLDA structure, including the SQLN, SQLD, SQLDABC, and the SQLTYPE, SQLLEN, and SQLDATA fields for each variable. If the value of the SQLTYPE field is an odd number, the address of the indicator variable must also be supplied in SQLIND.

After a PREPARE statement, you can execute a DESCRIBE statement to retrieve information about the data type, length and column name of each column returned by the specified prepared statement. This information is returned in the SQL Descriptor Area (SQLDA):

EXEC SQL DESCRIBE stmt1 INTO :sqlda END-EXEC

If you want to execute a DESCRIBE statement immediately after a PREPARE statement, you can use the INTO option on the PREPARE statement to perform both steps at once:

EXEC SQL PREPARE stmt1 INTO :sqlda FROM :stmtbuf END-EXEC

Copyright © 2000 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
Cern, the world's largest particle physics laboratory, has built the first working intercontinental 10 Gigabit Ethernet wide area network. It will ultimately make up part of the fabric of the next-generation internet, dubbed Internet2.

Cern's Wan is based on the Terascale E-Series family of switches and routers, which underpin a high-performance grid computing farm with a data throughput of up to 2.4 terabits per second. The computing farm is based in Geneva and made up of between 6,000 and 8,000 Intel-based Linux servers and 2,000 storage devices, co-ordinated by technology from Force10 Networks.

Cern's computing system will be able to process large amounts of data from the Large Hadron Collider (LHC) particle accelerator, which is due to go online in 2007. The LHC will rely on data from the server farm, and use the transatlantic infrastructure to send out information to what Cern calls tier-one locations. The 11 tier-one locations comprise some of the world's largest and most complex research networks, including TeraGrid, the National Center for Supercomputing Applications, the California Institute of Technology and the Korea Institute of Science and Technology Information.

The Force10 Terascale E-Series will also provide 10 Gigabit Ethernet connections to Cern's multiple-campus-based experiments, which comprise individual computing clusters. Prior to 2007, Cern will be ramping up use of this infrastructure, increasing the load and usage to test and debug the system.

The whole system is a set of federated grids, said David Foster, communications systems and networking group leader at Cern. The individual grid initiatives form part of a larger picture - a high-speed internet capable of hosting high-quality graphics, streaming video and vast amounts of transactional data. Cern's project is part of the LHC Computing Grid Initiative, and EU projects such as Enabling Grids for E-Science.

Foster said, "This is a technology that is evolving, with the academic and scientific arenas pushing it forward and showing that it is really usable in a production environment. The industrial applications for this will come along later as a wave of production grids for many areas of science, though it is applicable in other areas, such as financial analysis. We can expect a ramp-up of uptake as the technology matures and as the business models become defined."

He said there had been a great increase in connectivity and bandwidth the world over, which was bringing in new ways of doing science and engaging more people in the process. "It is taking the science to the scientists because you can move data across the globe very cost effectively. Scientists can work more locally than ever before. The grid infrastructure is the tool that facilitates this," he said.

Cern employs 2,600 people. Its most famous alumnus is scientist Tim Berners-Lee, who in 1989 invented the world wide web to meet the demand for automatic information sharing between scientists working in different universities and institutes around the world.

What is Internet2?

Cern's 10 Gigabit Ethernet wide area network will link into the Internet2 initiative, which is based on other 10 Gigabit Ethernet networks. Internet2 is a consortium led by 207 universities working in partnership with industry and government to develop and deploy advanced network applications and technologies.
One of the projects to come out of Internet2 is Abilene, part of the Internet2 backbone network, and a system that enables US-wide testing of applications such as uncompressed high-definition TV-quality video; remote control of scientific instruments such as mountaintop telescopes and electron microscopes; collaboration using immersive virtual reality; and grid computing.
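The farm and link figures quoted earlier in this article (a farm throughput of up to 2.4 terabits per second, 6,000 to 8,000 servers, and a 10 Gigabit Ethernet transatlantic link) can be put in rough perspective with simple arithmetic. The Python snippet below is illustrative only; the 7,000-server midpoint and the decimal unit conversions are assumptions made for the example, not Cern figures.

# Illustrative arithmetic using the figures quoted in the article.
farm_throughput_tbps = 2.4          # aggregate data throughput, terabits per second
servers = 7000                      # assumed midpoint of the 6,000-8,000 Linux servers cited

per_server_mbps = farm_throughput_tbps * 1e6 / servers
print(f"Average per-server share: {per_server_mbps:.0f} Mbit/s")       # roughly 340 Mbit/s

wan_gbps = 10                       # the 10 Gigabit Ethernet transatlantic link
fraction = wan_gbps / (farm_throughput_tbps * 1000)
print(f"WAN capacity as a share of farm throughput: {fraction:.1%}")   # roughly 0.4%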
The open source R programming language is the most popular statistical software in use today. It's used by more than 2 million data scientists and statisticians worldwide, and usage continues to grow rapidly. Given that R programmers command premium salaries (according to surveys by Dice.com and O'Reilly), it's clear that much of that growth is coming from adoption of R for business applications.

Social media companies were among the first to recognize the value of mining their rich user behavior databases to understand the needs of their users and enhance their platforms with new data-driven features. Facebook, which processes more than 500 terabytes of data a day, uses R to understand how its users interact with the service. Exploratory data analysis helps Facebook understand what its users are doing throughout the day and how viral memes propagate through the social network. Data visualization is a big part of this work, and Facebook has shared its best practices in an online Udacity course, and even used a chart created with R in its IPO prospectus.

Data analysis has also become increasingly important in media, where the availability of public data sources has given rise to the practice of data journalism. The New York Times has been a pioneer in this area, using R as the basis for interactive data analysis features that forecast upcoming elections and that can even identify your birthplace based on your dialect. The Times regularly uses R to enhance its traditional reporting as well, in articles ranging from wealth distribution in the United States to baseball's greatest pitchers. R's rapid prototyping capabilities mean that data journalists can go from a concept, to a graphic, to a complete illustration in just hours — essential for rapid analysis of breaking news.

A fast-growing industry for R is marketing analytics. As retailers collect more detailed data about customer buying habits, preferences, and backgrounds, marketing analytics companies have sprung up to help companies make sense of these rich new sources of information. DataSong uses a statistical technique called time-to-event analysis to help retailers like Williams Sonoma understand the marketing events (like advertisements, catalogs, or emails) that led a customer to make a purchase. Similarly, X+1 analyzes terabytes of data to give companies like JP Morgan Chase and Verizon real-time analysis of customer behavior to optimize marketing efforts.

The finance and insurance industries have always been leading users of advanced statistical analysis, so it's no surprise that R is in widespread use to develop new trading, pricing, and optimization strategies to increase returns and minimize risk. American Century Investments uses R to analyze a "social network" of companies, in which financial relationships are used in place of friendships. (Understanding how the performance of suppliers ultimately affects those of downstream manufacturers allows them to optimize their financial investment portfolios.) On the retail banking side, ANZ Bank uses R to estimate the risk associated with home mortgages. Estimating risk is of critical importance in the insurance industry as well, and Lloyds of London uses R to model the potential costs associated with catastrophes like hurricanes and earthquakes.

It's not only big businesses that are using R. The programming language also is used to improve the lives of vulnerable people and for the general public good.
The National Weather Service uses R to predict river levels and issue flood alerts, and Realclimate.org uses R to visualize the effects of climate change, such as the recent declines in Arctic sea ice. And in volatile regions, like Syria, the Human Rights Data Analysis Group uses R to get better estimates of war casualties from incomplete information. These are just a few examples of the organizations that are using R on a daily basis, and the number grows daily. One consequence of the big data revolution is that companies in every industry now recognize that the key to success is being able to collect, analyze, and act on data better and faster than their competitors. This is now a strategic initiative within competitive organizations, and companies are rapidly hiring new data scientists. R enables these data scientists to analyze data more quickly and more powerfully than other software, which explains its rapid growth across industries. David Smith is Chief Community Officer at Revolution Analytics, the leading commercial provider of software and services based on the open source R project for statistical computing. With a background in data science, he writes daily about applications of R and predictive analytics at the Revolutions blog (blog.revolutionanalytics.com), and was named a top 10 influencer on the topic of Big Data by Forbes. Follow David on Twitter as @revodavid.
Explaining the bang: Stanford center defines new objectives - By Michael Hardy - Sep 08, 2003 Scientists at the Stanford Linear Accelerator Center are filling their databases mostly with information from BaBar, an experiment named for the fictional elephant of children's literature. They are creating subatomic particles called B-Mesons and their antimatter equivalents, and then colliding them into one another. When the universe began, matter and antimatter should have quickly eliminated each other, said Richard Mount, director of SLAC's Computing Services and assistant director of SLAC's Research Division. A matter-filled universe suggests that there was an asymmetry between matter and antimatter in the beginning. BaBar is intended to help explain why that imbalance existed. "This is an excellent system for studying the small asymmetries between matter and antimatter," he said. "When we create [the particles], we certainly believe they are created in equal numbers, but as they travel across space they can change into each other at slightly different rates until we see an asymmetry that we can measure here." The collisions generate great amounts of data, but much of it is expected and can be dismissed as noise, Mount said. The remaining data is about 25 kilobytes per explosion, which has added up to a petabyte since SLAC began using a database from Objectivity Inc. in 1999.
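The storage figures quoted above lend themselves to a quick sanity check. The Python snippet below is purely illustrative; it only rearranges the numbers in the article (about 25 kilobytes kept per collision, roughly a petabyte accumulated since 1999) and assumes binary, 1024-based units, which is an assumption rather than something the article states.

# Rough sanity check of the data volumes quoted above.
KB = 1024
PB = 1024 ** 5

event_size_bytes = 25 * KB   # ~25 kilobytes of useful data kept per collision
total_bytes = 1 * PB         # ~1 petabyte accumulated since 1999

events_stored = total_bytes / event_size_bytes
print(f"Collisions implied by 1 PB at 25 KB each: {events_stored:.2e}")  # roughly 4.4e10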
Molinari-Jobin A.,KORA | Kery M.,Swiss Ornithological Institute | Marboutin E.,ONCFS | Molinari P.,Italian Lynx Project | And 8 more authors. Animal Conservation | Year: 2012 Inferring the distribution and abundance of a species from field records must deal with false-negative and false-positive errors. False-negative errors occur if a species present goes undetected, while false-positive errors are typically a consequence of species misidentification. False-positive observations in studies of rare species may cause an overestimation of the distribution or abundance of the species and distort trend indices. We illustrate this issue with the monitoring of the Eurasian lynx in the Alps. We developed a three-level classification of field records according to their reliability as inferred from whether they were validated or not. The first category (C1) represents 'hard fact' data (e.g. dead lynx); the second category (C2) includes confirmed data (e.g. tracks verified by an expert); and the third category (C3) are unconfirmed data (e.g. any kind of direct visual observation). For lynx, which is a comparatively well-known species in the Alps, we use site-occupancy modelling to estimate its distribution and show that the inferred lynx distribution is highly sensitive to presence sign category: it is larger if based on C3 records compared with the more reliable C1 and C2 records. We believe that the reason for this is a fairly high frequency of false-positive errors among C3 records. This suggests that distribution records for many lesser-known species may be similarly unreliable, because they are mostly or exclusively based on unconfirmed and thus soft data. Nevertheless, such soft data form a considerable part of species assessments as presented, for example in the International Union for Conservation of Nature Red List. However, C3 records can often not be discarded because they may be the only information available. When inferring the distribution of rare carnivores, especially for species with an expanding or shrinking range, we recommend a rigorous discrimination between fully reliable and un- or only partly reliable data, in order to identify possible methodological problems in the distribution maps related to false-positive records. © 2011 The Authors. Animal Conservation © 2011 The Zoological Society of London. Source Chapron G.,Swedish University of Agricultural Sciences | Kaczensky P.,University of Veterinary Medicine Vienna | Linnell J.D.C.,Norwegian Institute for Nature Research | Von Arx M.,KORA | And 76 more authors. Science | Year: 2014 The conservation of large carnivores is a formidable challenge for biodiversity conservation. Using a data set on the past and current status of brown bears (Ursus arctos), Eurasian lynx (Lynx lynx), gray wolves (Canis lupus), and wolverines (Gulo gulo) in European countries, we show that roughly one-third of mainland Europe hosts at least one large carnivore species, with stable or increasing abundance in most cases in 21st-century records. The reasons for this overall conservation success include protective legislation, supportive public opinion, and a variety of practices making coexistence between large carnivores and people possible. The European situation reveals that large carnivores and people can share the same landscape. Source
Artisanal cheesemaking is an important industry in Mexico, but many varieties of artisanal Mexican cheeses are in danger of disappearing because they have not been adequately documented. A team of dairy science experts is working to prevent that loss by collecting the information needed to standardize, protect, and preserve traditional artisanal production processes and to seek protected designation of origin (PDO) status for those that qualify. Their review is published in Articles in Press and will appear in the May 2016 issue of the Journal of Dairy Science.

"Currently, cheesemaking is one of the most important industries in Mexico," explained lead investigators Aarón F. González-Córdova and Belinda Vallejo-Cordoba, of the Laboratorio de Química y Biotecnología de Productos Lácteos, Coordinación de Tecnología de Alimentos de Origen Animal, at the Centro de Investigación en Alimentación y Desarrollo, A.C., in Hermosillo, Mexico. "The importance of artisanal cheesemaking is reflected in the estimation that around 70% of all Mexican cheese comes from small-scale productions."

González-Córdova, Vallejo-Cordoba and colleagues examined the challenges facing artisanal cheesemaking in Mexico. In their review, the authors describe the production methods and characteristics of eight important artisanal cheese varieties produced in Mexico and discuss efforts that have been made to preserve these cheeses.

"Certain varieties of artisanal Mexican cheese, such as Bola de Ocosingo, Poro de Balancan, Crema de Chiapas, and regional Cotija cheeses, possess unique characteristics that make them potentially eligible to be protected as PDO products. This distinction could help to expand their frontiers and allow them to become better known and appreciated in other parts of the world," added González-Córdova and Vallejo-Cordoba. "With sufficient information, official Mexican regulations could be established that would encompass and regulate the manufacture of Mexican artisanal cheeses."

"There is a rich cultural heritage in the dairy foods that we eat. Artisanal Mexican cheeses are part of that heritage. Unfortunately, a lack of scientific information on manufacturing endangers the future of these unique foods. Preservation of these cheeses will depend, therefore, on dairy foods research," said Matt Lucy, PhD, Editor-in-Chief, Journal of Dairy Science, and Professor of Animal Science, University of Missouri, USA.

More information: "Invited review: Artisanal Mexican cheeses," by A. F. González-Córdova, C. Yescas, Á. M. Ortiz-Estrada, M. de los Ángeles De la Rosa-Alcaraz, A. Hernández-Mendoza, and B. Vallejo-Cordoba, Journal of Dairy Science, published online in advance of Volume 99, Issue 5 (May 2016)

News Article | December 18, 2015

Statoil ASA has submitted to Norwegian authorities the plan for development and operation (PDO) of Oseberg Vestflanken 2 in the North Sea.

It's official: 2015 was the hottest year on record. Global data show that a powerful El Niño system, marked by warmed waters in the tropical Pacific Ocean, helped to drive atmospheric temperatures well past 2014's record highs. Some researchers suggest that broader Pacific trends could spell even more dramatic temperature increases in years to come.
Released on 20 January, the global temperature data come from three independent records maintained by NASA, the US National Oceanic and Atmospheric Administration (NOAA) and the UK Met Office. All three data sets document unprecedented high temperatures in 2015, pushing the global average to at least 1 ºC above pre-industrial levels. Although El Niño boosted temperatures late in the year, US government scientists say that the steady increase in atmospheric concentrations of greenhouse gases continues to drive overall warming. “The reason why this is such a warm record year is because of the long-term trend,” says Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies in New York City. “And there is no evidence that this long-term trend has slowed.” Average global surface temperatures in 2015 were 0.16 °C higher than in 2014, the next-warmest year on record, says NOAA. Almost all areas of the globe, including both land and sea, experienced above-normal temperatures. Satellite and balloon records of temperatures in the upper atmosphere showed less warming owing to a delayed response to El Niño, but are expected to rise faster in 2016. Overall, global temperatures have increased by 0.1–0.2 ºC per decade since the 1970s, says Thomas Karl, director of NOAA’s National Centers for Environmental Information in Asheville, North Carolina. “Clearly the 2015 data continues the pattern,” Karl says. “This trend will continue.” The current El Niño is predicted to continue to boost the average global temperature over the next several months. This could translate into another year of record heat. But the question facing scientists is whether the near-record El Niño that developed in 2015 has helped to flip the Pacific Ocean into a warmer state that will favour the development of such systems in future, and will boost global surface temperatures. The Pacific Decadal Oscillation (PDO) is a 15- to 30-year cycle that increases sea surface temperatures across the eastern Pacific in its positive phase and produces cooler temperatures in its negative phase. Since 1998, after the last major El Niño and a subsequent La Niña cooling system, the PDO has been mostly negative. Some scientists say that the cooling helped to suppress the increase in global temperatures in the early part of the millennium. But since early 2014, the PDO has been largely positive. “It sure looks to me like we’ve changed phases in the PDO,” says Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. Some studies have tied the PDO to long-term temperature trends1. The PDO was largely negative in the mid-1970s, when global temperature increases slowed. It was mostly positive in the 1980s and 1990s, when temperatures registered faster increases. But scientists debate the climatic links between the PDO and both El Niño and global temperature. “If you try to look at PDO and global temperatures, you can come up with a variety of relationships,” says Karl, who questions whether the oscillation is an independent phenomenon or merely an extension of El Nino. Trenberth notes that the PDO is related to the El Niño–La Niña cycle in the tropical Pacific. That leaves open the possibility that it could fade with El Niño, which models predict will diminish over the next several months. But he says that the PDO is also a result of longer-term fluctuations in ocean currents that push warm water deep into the ocean or keep it closer to the surface. 
Jerry Meehl, a climate modeller at the NCAR, has a study under review that suggests that the PDO is likely to remain in a positive state over the coming decade. In that analysis, Meehl and his colleagues plugged actual atmospheric and ocean data from 2013 into a global climate model and then ran the model forward to simulate how the climate might change. The model evolved into a positive PDO and remained there. "Over the next ten years, we see higher rates of warming," says Meehl. He adds that this does not change the overall assessment that global warming has proceeded apace over the past century. Rather, he says that global temperatures vary and often increase in stepwise fashion over decades. Scientists are now trying to understand how ocean circulation works and how much changes in it can affect global temperatures. "We are still trying to figure that out," says Meehl. "It's really intriguing. That's why it's exciting for climate science."

The crimson stigma of the saffron flower (Crocus sativus) is one of the oldest and most expensive spices in the world, particularly those varieties which are internationally recognised for their quality, such as saffron grown in Spain. This has led to the fraudulent labelling of non-Spanish saffron. "Over the past few years the media have been reporting this fraudulent activity, but up until now there were barely any analytical tools that could be used to detect said fraud. So, we created a new strategy to determine the authenticity of saffron based on metabolomics or, in other words, the chemical fingerprints of foods," explains Josep Rubert, a researcher at the University of Chemistry and Technology (UCT Prague, Czech Republic) and the University of Valencia (Spain).

The new technique allows for three types of saffron to be defined: one which is certified with the Protected Designation of Origin (PDO) from La Mancha or Aragon, another which is grown and packaged in Spain (although it does not have the PDO certificate) and a third category which is packaged as 'Spanish saffron' but, despite its name, is of unknown origin (although most likely packaged in Spain). With these possibilities, scientists from UCT Prague, led by Prof. Jana Hajslova (where Rubert is also carrying out postdoctoral research, including this study), collected 44 commercial saffron samples in order to test the authenticity of what's stated in the product labels.

The findings, published this month by the journal Food Chemistry, revealed that more than 50% of the samples were fraudulent, as 26 samples labelled as 'Spanish saffron' were neither grown nor processed in Spain. "It is highly likely that lower quality saffron is purchased in other countries (such as Morocco, Iran and India, according to our data) at a much lower price than in Spain," the researcher indicates, "to later be packaged and sold as Spanish saffron despite being of unknown origin, a fraudulent activity that gambles with consumers' trust."

The technique developed by scientists from the Czech Republic and Spain has confirmed that the saffron samples labelled with the PDO certificate from La Mancha (and Aragon) were indeed grown and processed in Spain. "Here there was no fraudulent activity; the saffron perfectly matched up with our models," emphasises Rubert, "unlike the samples of 'Spanish saffron' that had either a completely different chemical fingerprint or a different collection of small molecules".
Chemistry and statistics to expose the fraud

The authors of this study combined chemistry with statistics in order to develop their methodology. The first phase of the study consisted of identifying the metabolites, or small molecules, characteristic of saffron. Afterwards, a method was created to detect these small molecules using liquid chromatography coupled with high-resolution mass spectrometry. The statistical analyses served both to detect the clear differences between the three types of saffron and to validate the technique. According to the authors, the result "is a top-quality model that correctly classified 100% of these samples in addition to having the capacity to correctly categorise others (even if they are unknown and do not have a label) more than 85% of the time".

The authors suggest that glycerophospholipids and their oxidised lipids are the best molecular markers for determining the origin of saffron. They have also observed that the saffron technology and processing play a crucial role, "specifically during the drying process, wherein transformation of the product is determined by the temperature at which the process is carried out. The place where the saffron originates also has an influence on the end product". For saffron originating from La Mancha, for example, the drying process involves laying out the fresh stigmas over sieves that are placed next to a heat source such as a fire, hot coal, a stove or a brazier. Saffron dehydration happens quickly, in half an hour, and is carried out at a temperature of 70 ºC, which accelerates lipid oxidation.

Over recent decades, saffron originating from Castile-La Mancha has represented over 97% of Spain's domestic production, a figure that stands in stark contrast to the country's export volumes. Between 1997 and 2013, an average of 2,813 kg of saffron was produced annually in Spain. However, Spain exported 35,978 kg of this product on average each year. Where did those remaining 33,165 kg come from? "They came from other countries, such as Iran or Morocco," mentions Pedro M. Pérez, manager of the Protected Designation of Origin Regulatory Body in La Mancha. He insists that: "That foreign saffron is brought to Spain and labelled as 'produced and packaged in Spain', which is true, but the label fails to indicate the saffron's true origin, meaning that the consumer does not have enough information to assess the product". The manager of the regulatory body reiterates that there is a Spanish national law dated 1999 in addition to a European law from 2011 regarding the proper labelling of foodstuffs, "but the competent authorities of Spain's Autonomous Communities are not successfully fulfilling their responsibilities with regard to saffron".

More information: Josep Rubert et al. Saffron authentication based on liquid chromatography high resolution tandem mass spectrometry and multivariate data analysis, Food Chemistry (2016). DOI: 10.1016/j.foodchem.2016.01.003

It's the phenomenon called El Nino, which is happening now as ocean water temperatures rise above normal across the central and eastern Pacific, near the equator. Its effects will leave the U.S. Northeast warmer than usual, the Midwest drier, and the West and the South wetter. And scientists have a message for everyone bracing for one of the strongest El Nino events on record: get used to it.
While El Nino oscillates on a more or less yearly cycle, another dynamic in Pacific Ocean water temperatures, known as the Pacific Decadal Oscillation (PDO), has the potential to accelerate global warming and increase the severity of El Nino episodes, scientists said. The last time the PDO was, as it may be now, in a prolonged positive, or "warm" phase, it corresponded with two of the strongest El Ninos on record.

"When you really have a monster El Nino, it could be enough to flip the PDO into a new phase for a decade or so," said William Patzert, a climatologist at NASA's Jet Propulsion Laboratory in California. "Keep your eyeballs peeled because maybe we're in for a decadal shift."

Previous warm phases have also coincided with increased precipitation on the U.S. West Coast, signaling potential relief for California from a severe drought.

Before January of 2014, the world experienced a 15-year period of mostly negative values for the Pacific oscillation, according to data maintained by Nathan Mantua, an atmospheric scientist at the National Oceanic and Atmospheric Administration's (NOAA) Joint Institute for the Study of the Atmosphere and Oceans. That period saw only weak or moderate El Nino events. During the 21 years before that, the Pacific oscillation values trended mostly positive, a period that coincided with the 1982-83 and 1997-98 El Nino events, two of the strongest on record.

Now, scientists are beginning to wonder if the 15-year period of relative El Nino calm is coming to a close, marking the start of a warmer, stormier era akin to the 1980s and 90s. The PDO index has been positive for 22 months through October, the longest such streak since a 26-month positive period between 2002 and 2004. Scientists are not sure if the current streak marks a longer-term turnaround or just a temporary blip like the 2002-2004 streak.

"It's more likely that we'll have a change in phase and we'll remain in positive territory," said Kevin Trenberth of the National Center for Atmospheric Research in Boulder, Colorado, noting that while a decadal shift was far from a guarantee, the odds in favor are approximately 2-to-1.

In many ways, the weather of the 15 years before 2014 has resembled that of the mid-1940s to mid-1970s, the last prolonged period of a negative Pacific oscillation cycle, with drought in the American West and very few El Ninos, Patzert said. The recent period saw several moderate La Nina events, a counterpart to El Nino defined as cooler than normal sea surface temperatures in the eastern and central tropical Pacific that dumps rain on Australia and Indonesia but leaves the Southwest United States dry, including episodes in 1998-99, 1999-2000, 2007-08 and 2010-11.

The warmer sea surface temperatures in the northern Pacific during the positive PDO phase tend to amplify El Nino's effects, Trenberth said. Several scientists said the current El Nino could contribute to more positive PDO conditions at the moment and in the future.

"The key ingredient is the strong El Nino," said NASA's Veronica Nieves, noting that strong episodes have historically triggered decadal shifts. She has submitted a paper to an academic journal arguing that the Pacific may be in store for another 20 years or more of warmer sea surface temperatures.
To be sure, the two-year period of positive Pacific oscillation values that happened from 2002 to 2004, which saw weak and moderate El Ninos, is still fresh in scientists' minds, preventing them from being certain that the world is truly on the cusp of a decadal shift. But so far in these past two years, the values have been more sharply positive than the 2002-04 streak. This has implications beyond El Nino: the recent decade has been referred to as a 'hiatus' in global warming, with the negative PDO value seen as limiting global temperature gains. "If [PDO] transitions back into positive, we'd see a resumption in these more rapid rates of global warming," said Gerald Meehl, a climate scientist at the National Center for Atmospheric Research in Boulder, Colorado. "Having that shift in the background base state means that the peaks of the El Nino are going to be higher."
Drone mania is starting to grip the area. New Jersey’s selection as one of six national testing sites for drone technology will allow a local college to capitalize on what is expected to be the next big step in the evolution of aviation. Atlantic Cape Community College, based in Mays Landing, plans to offer a new course in the spring on drone technology called “Introduction to Unmanned Aerial Systems.” The course is part of the college’s aviation studies program. Atlantic Cape spokeswoman Stacey Clapp said the college anticipated growing interest in the field of unmanned aircraft and began planning for the course well in advance of the Federal Aviation Administration’s announcement Dec. 30 that New Jersey would be a drone-testing site. Although usually associated with the military, drones also are used for civilian purposes in border patrol, law enforcement, farming and academic research. Amazon.com is testing delivering packages using drones. Atlantic Cape sees the emerging industry of unmanned aerial systems, or UAS, as a career opportunity for the college’s graduates. “Interest in UAS technology is expanding, and it is expected UAS operations and analytics personnel will be required by law enforcement, homeland security, fire safety, etc., in the future,” Clapp said. New Jersey and Virginia submitted a joint application that will combine the resources of Virginia Tech and Rutgers University as research centers for drone testing. The William J. Hughes Technical Center, the FAA’s national scientific facility in Egg Harbor Township, also will provide research support for the New Jersey-Virginia partnership. Atlantic Cape’s new course includes tie-ins with the tech center, including taking students for visits to the FAA facility’s research and development laboratories. The course will be taught by two of the tech center’s experts: Adam Greco, an air traffic domain director in the Technical Strategies and Integration Division, and Michael Konyak, an aeronautical engineer in the Laboratory Services Division. Dennis Filler, the tech center’s director, predicted that unmanned aerial vehicles, or UAVs, may revolutionize transportation much like trains, automobiles and airplanes did after their invention. “I can imagine UAV systems being used in agriculture, search and rescue, shore monitoring, lifesaving, firefighting and other constructive uses,” Filler said in a Facebook posting. “America’s willingness to embrace change, seizing the opportunity to improve our quality of life, has what has in the past propelled this nation to its leadership status in the world. We need to continue to manage the risks, explore this technology and continue to adopt new technologies and their peaceful applications.” Since 2007, the tech center has had a team of aerospace engineers, computer technicians and other researchers working on how to safely integrate unmanned aircraft into the national airspace system. Data collected from the national drone-testing sites will be fed to the tech center to support its UAS research, the agency said. Other states that were selected by the FAA for drone testing include Alaska, Nevada, New York, North Dakota and Texas. As many as 70,000 jobs and $13.6 billion in economic activity nationwide could be created by drone technology between 2015 and 2018, according to a study published in March by the Association for Unmanned Vehicle Systems. The study also predicts 1,353 jobs added in New Jersey by 2017, with $263 million in economic impact. 
New Jersey’s selection as a drone-testing site also is expected to provide a local boost for the Stockton Aviation Research and Technology Park, formerly known as the NextGen Aviation Research and Technology Park. The park is designed to capitalize on the FAA’s national NextGen program — which hopes to modernize the air traffic control system by using satellites instead of a radar-based network — by providing locally based research, jobs and consulting contracts. Richard Stockton College of New Jersey took over the aviation park in September to give it financial stability following a series of management troubles that delayed construction on the proposed seven-building complex.
<urn:uuid:dc033af2-df78-4a76-80d3-96fff63309ef>
CC-MAIN-2017-04
http://www.govtech.com/education/New-Jersey-Community-College-to-Offer-Drone-Technology-Course.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00362-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945136
840
2.625
3
3.6.11 What other hash functions are there?

The best review of hash function techniques is provided by Preneel [Pre93]. For a brief overview here, we note that hash functions are often divided into three classes:

- Hash functions built around block ciphers.
- Hash functions using modular arithmetic.
- Hash functions with what is termed a "dedicated" design.

By building a hash function around a block cipher, a designer aims to leverage the security of a well-trusted block cipher such as DES (see Section 3.2) to obtain a well-trusted hash function. The so-called Davies-Meyer hash function [Pre93] is an example of a hash function built around the use of DES.

The purpose of employing modular arithmetic in the second class of hash functions is to save on implementation costs. A hash function is generally used in conjunction with a digital signature algorithm which itself makes use of modular arithmetic. Unfortunately, the track record of such hash functions is not good from a security perspective, and there are no hash functions in this second class that can be recommended for use today.

The hash functions in the third class, with their so-called "dedicated" design, tend to be fast, achieving a considerable advantage over algorithms that are based around the use of a block cipher. MD4 is an early example of a popular hash function with such a design. Although MD4 is no longer considered secure for most cryptographic applications, most new dedicated hash functions make use of the same design principles as MD4 in a strengthened version. Their strength varies depending on the techniques, or combinations of techniques, employed in their design. Dedicated hash functions in current use include MD5 and SHA-1 (see Questions 3.6.5 and 3.6.6), as well as RIPEMD-160 [DBP96] and HAVAL [ZPS93].
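The block-cipher construction in the first class is easiest to see in code. Below is a minimal, illustrative sketch of a Davies-Meyer-style compression loop, H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}; it assumes the third-party pycryptodome package and uses AES purely as a stand-in for the block cipher (the construction described above used DES), and the padding is a simplification rather than proper Merkle-Damgard strengthening.

```python
# Illustrative Davies-Meyer-style hash: H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}.
# Assumes the third-party pycryptodome package; AES stands in for the block cipher.
from Crypto.Cipher import AES

BLOCK = 16  # AES block/key size in bytes used in this sketch


def davies_meyer(message: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # Naive padding to a multiple of the block size; real designs append the length too.
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
    state = iv
    for i in range(0, len(padded), BLOCK):
        block = padded[i:i + BLOCK]
        cipher = AES.new(block, AES.MODE_ECB)   # the message block becomes the cipher key
        encrypted = cipher.encrypt(state)        # encrypt the chaining value
        state = bytes(a ^ b for a, b in zip(encrypted, state))  # feed-forward XOR
    return state


if __name__ == "__main__":
    print(davies_meyer(b"hello world").hex())
```

The feed-forward XOR is what makes the compression function hard to invert even for someone who can decrypt with the block cipher, which is the point of the construction.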
<urn:uuid:3d6ef405-ac9d-4b10-876c-2f57c78c34f8>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/other-hash-functions.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930246
393
3.359375
3
When the big earthquake hit Haiti in 2010, the country had immediate needs, including food, water and shelter. There was also an urgent need for cops on the beat, yet the country had no cash reserves with which to pay them. Within just a few days an $8 million insurance payment arrived, ensuring there would be a police presence wherever it was needed.

This speedy payment came via a mechanism known as parametric insurance. Haiti had bought into a plan with 16 other Caribbean nations, and when the crisis came, it paid off. Though parametric insurance has been around for about a decade, it’s still a little-known vehicle. That’s changing, though, as government entities come to see its potential value.

Unlike traditional insurance, which pays on the actual value of a loss, parametric insurance pays for a predefined event. “It could be a hurricane, an earthquake, excess rainfall, drought, wildfire, tornado: any quantifiable, measurable natural event,” said Alex Kaplan, vice president of global partnerships at Swiss Re, which wrote the Caribbean policy. Insurer and insured agree in advance how much will be paid in case of a specific event, and the money goes out as soon as the event occurs. If coverage includes a Category 3 hurricane, for instance, money will be on the move as soon as the winds hit those speeds.

“This is the true benefit of the parametric insurance,” Kaplan said. “Because it is defined based upon the characteristics of an event and not necessarily the actual loss, it can produce a very fast payout, in some instances as little as 10 days, which is critically important for governments. There are still people trying to figure out how much money they are going to get from Hurricane Sandy.”

Speed is a virtue, and so is predictability. Because the value of the policy has been predefined, emergency planners can work their budgets well in advance of an event, and know for sure what will be coming in when the big one hits.

An obvious question arises: What if planners and insurers together have gotten it wrong? What if the policy does not cover the actual extent of the damage? While traditional insurance may be slower, it’s based on the actual loss. Kaplan’s answer speaks to the sensibilities of emergency planners. It takes careful planning to use a parametric policy appropriately, he said. “They have to have a very good understanding of what their risks are, what their needs are, how quickly they need to receive the payouts.”

Parametric coverage is no silver bullet, Kaplan readily acknowledges. Rather, it can be one part in an overall financial strategy in times of crisis. “You do need to have other mechanisms in place, including physical adaptation measures to make your city more resilient, as well as other financing mechanisms, for example, by having financing in place, reserve funds or contingency plans that can fill in the gaps,” he said. “The concept we are trying to promote is the idea of comprehensive risk management.”

That logic may prove increasingly appealing as the seeming upward drive in emergency costs continues year after year. “Given the increase in extreme events and the fact the U.S. hits a new record for the number of presidentially declared disasters each year, we may be at a tipping point,” Kaplan said. With the costs of disasters becoming increasingly unsustainable, “governments and communities at all levels must take a comprehensive and proactive role in protecting themselves both physically and financially from disasters.
Emergency managers, as the shepherds of these communities, are well positioned to tackle these issues and ensure we consider all the impacts of disasters and their lasting effects.” Parametric insurance offers one more tool toward achieving those necessary ends.
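The trigger mechanism described above, a pre-agreed payout keyed to a measured event rather than to assessed losses, can be sketched as a simple lookup. The figures below are invented for illustration only and do not come from any real policy; only the Saffir-Simpson wind thresholds are standard.

```python
# Hypothetical parametric trigger: the payout depends only on the measured event,
# not on the assessed loss. Dollar amounts are invented for illustration.
PAYOUT_SCHEDULE = [
    (157, 8_000_000),  # sustained winds >= 157 mph (Category 5)
    (130, 5_000_000),  # Category 4
    (111, 3_000_000),  # Category 3, the example trigger mentioned above
]


def parametric_payout(sustained_wind_mph: float) -> int:
    """Return the pre-agreed payout for a measured wind speed, or 0 if no trigger fires."""
    for threshold, amount in PAYOUT_SCHEDULE:
        if sustained_wind_mph >= threshold:
            return amount
    return 0


print(parametric_payout(120))  # Category 3 winds release the Category 3 payout immediately
```

Because the schedule is fixed in advance, the insurer needs only the measured event parameters to release funds, which is what makes the fast payouts described in the article possible.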
<urn:uuid:47fc2039-c302-4ed5-8203-89859d3dac02>
CC-MAIN-2017-04
http://www.govtech.com/em/disaster/Emergency-Funds-Parametric-Insurance.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00444-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96802
785
2.515625
3
Quality of Service (QoS) is a common subject in the classroom, and there often seems to be an element of trepidation among students regarding their understanding of it. In considering important topics to address, QoS stood out as the subject most students want to talk about. After reviewing the available reference materials, the reason behind the trepidation I had sensed from some of my students became clear: the books and white papers on QoS are lengthy and packed with exceedingly complicated subjects. So I have decided to write a blog series to help simplify the mysteries of QoS. Let’s get started.

QoS ensures more predictable network services by providing dedicated bandwidth, controlled jitter and latency, and improved loss characteristics. QoS provides tools for managing network congestion, shaping network traffic, using WAN links more efficiently, and setting traffic policies across the network. In short, QoS helps provide consistent, predictable network performance by offering intelligent network services. With all of the above being said, it is no wonder so many students feel uneasy with the subject. In this series of blogs I will take a closer look at the many functions of QoS.

Where it all started

The public switched telephone network (PSTN) was built entirely on circuit-switched networks. These were fixed-bandwidth networks with dedicated circuits, well suited for real-time traffic such as voice. In addition, most enterprises ran multiple networks, with data carried on one network and voice on another. Multiple networks allowed dedicated network resources for specific traffic flows, but using the PSTN for internal calls and maintaining multiple networks was extremely costly and labor intensive for most companies.

With the popularity of Internet Protocol (IP) and packet-switched networks as the underlying fabric of the internet, the ever-shifting paradigm for networks became “everything over IP”. There are a multitude of applications, such as Voice over IP (VoIP), streaming video, e-mail and e-commerce, to name a few. The underlying issue is that IP was designed to provide best-effort delivery of data packets, where all packets are treated exactly the same and no preference is given to time-sensitive packets. Different applications have varying needs for access to network resources and for how their packets are handled on the network.

Another popular term in the industry is “converged network”, meaning a network that combines voice, video, and data on a packet-switched network running IP. The one technology that enables IP to converge all these packets is QoS. In essence, QoS is the differential treatment of the voice, video, and data packets that flow on the IP network, creating a system of managed unfairness. QoS technologies allow different types of traffic to contend inequitably for network resources, so time-sensitive applications, such as voice or interactive video, can be given priority over data applications.

With converged networks merging many different traffic streams, each with different requirements, problems can arise. Voice traffic typically consists of small packets that cannot tolerate delay as they traverse the network. Data packets, such as file transfers, are typically large and can survive delays and drops through retransmission.
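The “managed unfairness” described above ultimately comes down to a scheduler that drains the higher-priority queue first. The following deliberately simplified strict-priority scheduler is a sketch of the concept only; real routers implement this (and fairer variants such as weighted fair queuing) in hardware, with policers and many more safeguards.

```python
from collections import deque
from typing import Optional

# Two queues: time-sensitive traffic (voice/video) and best-effort data.
# A strict-priority scheduler always serves the priority queue first.
priority_q = deque()     # e.g., VoIP packets
best_effort_q = deque()  # e.g., file-transfer packets


def enqueue(packet: dict) -> None:
    """Classify the packet and place it in the matching queue."""
    queue = priority_q if packet["traffic_class"] == "voice" else best_effort_q
    queue.append(packet)


def dequeue() -> Optional[dict]:
    """Pick the next packet to put on the wire: voice always drains before data."""
    if priority_q:
        return priority_q.popleft()
    if best_effort_q:
        return best_effort_q.popleft()
    return None


enqueue({"traffic_class": "data", "id": 1})
enqueue({"traffic_class": "voice", "id": 2})
print(dequeue())  # -> the voice packet, even though the data packet arrived first
```

Strict priority keeps delay and jitter low for voice, at the cost of potentially starving data if voice traffic is not rate-limited, which is why production designs pair it with policing.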
Small (voice) packets compete with bursty data flows on the converged network, with QoS acting as the mediator to ensure priority goes to time-sensitive packets. For the network to provide secure, predictable, measurable, and sometimes guaranteed services, the fixed qualities of a network and the flow of packets must be managed with QoS.

Some of the issues that can occur within a network and affect our time-sensitive packets are:

- Bandwidth – Lack of bandwidth on the network the IP packets are traversing.
- Packet Loss – Dropping of packets because of network congestion, not network outages.
- Delay Variation (Jitter) – The variation in how long it takes different packets to traverse the network.
- Out-of-Order Delivery – Different packets may take different routes and arrive at the destination in a different order than they were sent.
- Delay – The time it takes to get the packet end-to-end, or from the mouth to the ear. Delay has several components:
  - Packetization Delay – Time required to sample and encode voice or video into an IP packet.
  - Serialization Delay – Time required to put the packet onto the wire.
  - Propagation Delay – Time required for the packet to traverse the media.

A few of the ways we reduce the effects of limited bandwidth and delay on the network are as follows:

- Upgrade the links to increase bandwidth, though this can be an inefficient use of resources and very expensive.
- Compress the payload of the packet (voice or video).
- Compress the IP header (cRTP).
- Forward the important packets first.

Forwarding the important or time-sensitive packets first, or differential treatment of packets, will be the focus of the next few posts. In subsequent posts the different types of QoS will be examined, as well as how they operate and the tools used to implement QoS.

References:
- End-To-End QoS Network Design by Tim Szigeti and Christina Hattingh
- DiffServ – The Scalable End-To-End QoS Model

Author: Paul Stryer
<urn:uuid:15ceb1ec-47f3-458d-87d9-9a9951ff54ac>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/10/19/quality-of-service-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00197-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947176
1,105
3.375
3
New Details Covering The Opasoft Worm

03 Oct 2002

How should you protect your computer?

As was reported on October 1st, the Internet suffered a new epidemic courtesy of the "Opasoft" worm, which aggressively asserted itself as one of the three most widespread malicious programs by recording numerous incidences in a myriad of countries the world over. Currently, 40% of all cases Kaspersky Lab technical support is dealing with are connected to Opasoft, a figure exceeding even those of other dangerous worms such as "Klez" and "Tanatos".

Distinguishing "Opasoft" is the way it spreads over the Internet. The worm scans the global network, determines which computers are running Windows 95/98/ME, and attempts to gain access to drive C on those machines. Next, "Opasoft" cycles through access passwords to these resources and, if it succeeds in gaining access, promptly infects the victim machines with copies of itself. To locate computers to infect, "Opasoft" uses the communications ports (137 and 139) that Windows networks use for exchanging data; it is precisely for this reason that these ports are targeted in the attack. This, together with the fact that so many users and system administrators do not follow secure policies for computer resources, predetermined the rapid spread of the Opasoft worm.

Kaspersky Lab strongly recommends taking the following actions in order to avert the possibility of "Opasoft" penetration:

- Check whether any file or printer sharing services have been enabled. To do this, users should right-click on the Network Neighborhood icon, select Properties and click on File and Printer Sharing. A window opens showing the current status of these services; if access to services has been established inappropriately, users can then correct it. If a user knowingly opens access to disk C, it is then necessary to make certain that it is password protected with a long password of no less than two symbols.
- Protect access to ports 137 and 139 from external connections. On all computers that must transmit data to external networks via these ports, it is important to check the shared resources list to make sure they are properly password protected.
- Finally, Internet providers are also recommended to close ports 137 and 139 to their clients and open them only upon special request to execute specific tasks.

Kaspersky Lab points out that "Opasoft" infects only computers running Windows 95/98/ME; therefore the measures outlined above are not needed for computers using other operating systems, for example, Windows 2000 or Windows XP.

Please become familiar with the updated technical description of the "Opasoft" network worm in the Kaspersky Virus Encyclopedia.
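One way to sanity-check the second recommendation above, that ports 137 and 139 are not reachable from outside, is a simple connect test. The snippet below is a minimal sketch using only Python's standard library; it only checks TCP reachability (the NetBIOS name service on 137 normally runs over UDP, so this is an approximation), the host address is a placeholder from the documentation range, and it should only be run against machines you are authorized to test.

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Check the NetBIOS ports the worm abuses (placeholder host for illustration).
for port in (137, 139):
    reachable = port_open("192.0.2.10", port)
    status = "OPEN - block from external networks" if reachable else "closed/filtered"
    print(f"port {port}: {status}")
```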
<urn:uuid:35ed0a48-64f1-42fe-9888-79d26296ad2e>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2002/New_Details_Covering_The_Opasoft_Worm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912536
548
2.59375
3
By Carsten Eiram, Chief Security Specialist

During our daily work analysing vulnerabilities in-depth, we regularly come across cases where a single vulnerability with multiple attack vectors is being reported as separate vulnerabilities. To quickly cover our definitions of the terms: a "vulnerability" is a specific problem in the code with a security impact, while an "attack vector" is a way of triggering or reaching the vulnerability.

There may be a number of reasons why we see different attack vectors being reported as separate software vulnerabilities. Perhaps it's because it may take a lot of time and skill to fully understand some vulnerabilities, making it faster and/or easier to just report something as multiple vulnerabilities without determining anything more than that there is "memory corruption"; an increasingly popular term.

As an example: not that long ago, we did a quick test run of an internally developed fuzzer by pegging it against a product from Adobe Systems. Overnight, the fuzzer generated 400+ crash reports. About 80 of those crashes occurred due to "memory corruption"; as half of these were triggered by manipulating different fields, this could mean that our fuzzer had found about 40 separate vulnerabilities. However, after properly analysing each crash, they all turned out to be caused by just four different vulnerabilities (each with a large number of attack vectors).

As evident from the example, it may take quite a lot of time to properly understand the core problem behind software vulnerabilities. In this case, it took a Senior Security Specialist, an experienced vulnerability analyst and reverse engineer, almost a week to go through the interesting crashes and confirm the root causes. A less experienced person would have spent a lot longer, and might never have figured some of the problems out.

Generally, the reasons for not fully determining the root cause of software vulnerabilities before reporting them can probably be divided into three categories:

1) The reporter simply does not want to spend the time and effort required to figure out the core problem, but leaves that part up to someone else (e.g. the software vendor).
2) The reporter lacks the skills to properly analyse and understand the root cause of the vulnerability.
3) The reporter purposefully reports each attack vector as a separate vulnerability because it looks "better" (i.e. more vulnerabilities were discovered).

Reasons #1 and #2 are, of course, perfectly fair if the reporter is a hobby researcher not doing this as a full-time job. However, whatever the reason, reporting multiple attack vectors as multiple software vulnerabilities remains a problem for software vendors (it looks like there are more vulnerabilities in their products than is the case), for vulnerability databases like Secunia and similar organisations (risk of issuing duplicate identifiers/advisories for the same vulnerability), and for many other actors in the field that rely on the reported number of vulnerabilities (e.g. various organisations documenting vulnerability statistics and doing comparisons, which may be anywhere from slightly flawed to completely wrong).

Hopefully, both software vendors and researchers will do their part to ensure that vulnerabilities are being reported more accurately.
At Secunia, we try to do our part by spending a large amount of resources on analysing and properly understanding both the vulnerabilities reported by third parties and those discovered internally by the Secunia Research team.
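The triage described in the Adobe fuzzing example, collapsing hundreds of crashes into a handful of root causes, is usually at least partly automated by bucketing crashes on a signature such as a hash of the top stack frames. The sketch below is a toy illustration of that idea and assumes each crash report already carries a parsed call stack; it is not a description of how Secunia's analysts actually worked.

```python
from collections import defaultdict
from hashlib import sha1


def bucket_key(call_stack: list[str], depth: int = 3) -> str:
    """Group crashes by a short hash of the top few stack frames (a common heuristic)."""
    return sha1("|".join(call_stack[:depth]).encode()).hexdigest()[:12]


def triage(crash_reports: list[dict]) -> dict:
    """Map each bucket key to the list of crash reports that share it."""
    buckets = defaultdict(list)
    for report in crash_reports:
        buckets[bucket_key(report["stack"])].append(report)
    return buckets


# Hypothetical reports: two attack vectors, one shared suspected root cause.
reports = [
    {"input": "field_A_overflow.pdf", "stack": ["memcpy", "parse_field", "load_doc"]},
    {"input": "field_B_overflow.pdf", "stack": ["memcpy", "parse_field", "load_doc"]},
    {"input": "bad_index.pdf",        "stack": ["get_obj", "resolve_ref", "load_doc"]},
]
print({key: len(group) for key, group in triage(reports).items()})  # 2 buckets from 3 crashes
```

Bucketing like this only narrows the search; as the post argues, confirming that two buckets really share one root cause still takes manual reverse engineering.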
<urn:uuid:33153d55-1185-4230-8d5a-ac4ee830a0c7>
CC-MAIN-2017-04
http://blogs.flexerasoftware.com/vulnerability-management/2010/04/vulnerabilities-vs-attack-vectors.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00343-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961073
670
2.578125
3
NCSA’s Wilhelmson discusses first research projects for supercomputer’s 2011 debut

Blue Waters, expected to be one of the most powerful supercomputers in the world for open scientific research when it comes online next year, is being counted on to help solve some of the world’s most vexing scientific and social challenges, from figuring out how the first galaxies formed to simulating the spread of disease across large populations to better prepare us for such medical emergencies.

At this year’s TeraGrid conference, Bob Wilhelmson, recently retired chief science officer of the National Center for Supercomputing Applications (NCSA) and former applications lead for the Blue Waters project, delivered a keynote address in which he discussed the Blue Waters architecture and shared several planned projects for the new supercomputer, a joint effort between NCSA, the University of Illinois, IBM, the Great Lakes Consortium for Petascale Computation, and the National Science Foundation (NSF).

Eighty percent of the Blue Waters resource will be dedicated to NSF awardees through the Petascale Computing Resources Allocation Program, or PRAC, Wilhelmson told TG’10 attendees in Pittsburgh, Pa. Each PRAC award identifies a scientific challenge requiring advanced modeling and simulation capabilities that can only be provided by a system delivering sustained performance approaching one petaflop. Awardees receive a $40,000 travel grant to learn about the new supercomputer system and prepare their algorithms to scale to hundreds of thousands of processors/cores. To date, 18 awards have been made and 55 proposals are under review. Approximately 10 new awards are expected.

Some of the first areas of research selected for Blue Waters include:

- The simulation of stellar weather, including high-resolution turbulence simulations. Simulation will help researchers better understand how convection works in the Sun and other stars.
- The study of chromatophores, cells which are largely responsible for generating skin and eye color in cold-blooded animals. At the bacterial level, this study is expected to assist researchers in better understanding human disease and in new drug development.
- The actions and interactions of quarks, elementary particles that are fundamental constituents of matter. This particle physics project will provide information of value for research in astronomy, physics, meteorology, and other fields.
- The simulation of disease spread and pandemics in very large social networks. The project will model predictions of network behavior among human populations of 300 million or more, and provide guidance for medical emergency preparedness, such as mass vaccinations.
- The formation of the first galaxies. Blue Waters’ massive computing power will allow researchers to simulate large numbers of galaxies at much higher resolution.
- A ‘bio evolution’ project. This effort will focus on how bacteria mutate and how to clean up environmental contamination by developing multi-scale models of bacteria populations.
- Simulating supercell storms and tornadoes. Blue Waters’ resources will be used to carry out simulations of tornadoes embedded in supercell storms with unprecedented detail and accuracy, with up to 8 million times as many grid points as was possible to compute in the 1970s.

Blue Waters is being built from the most advanced computing technologies under development at IBM, including the multicore ‘POWER7’ microprocessor.
The system will have more than one petabyte of memory, more than 10 petabytes of disk storage, and eventually up to 500 petabytes of archival storage. It will take up approximately 5,000 square feet of floor space in a new state-of-the-art computer facility at the University of Illinois. “The concept here is to develop a well-balanced machine both in terms of compute power, memory size, disk/archive storage, and IO capability,” Wilhelmson told TG’10 attendees. “It is one of the things that many organizations struggle with.” While the numbers behind the data-intensive Blue Waters supercomputer are impressive — its 300,000-plus cores will help the system achieve peak performance of approximately 10 petaflops, or 10 quadrillion calculations per second, and deliver a sustained performance of at least one petaflop on a range of real-world science and engineering applications — Wilhelmson said it is the science and scientific advances that are really important. “Machines are just technology,” he said. “They live for five years and then they’re gone, replaced by something else. What does not die is the application, because it is developed and used to gain a deeper understanding of the world around us.” Wilhelmson, an atmospheric scientist at the University of Illinois at Urbana-Champaign, also had some advice for students and young researchers working with TeraGrid, the nation’s largest open-access scientific discovery infrastructure. “Expect to work in teams,” he said, adding that the days of researchers working alone on a project are over. “Teams and collaborations are crucial to solving interdisciplinary problems and furthering our understanding because the problems are so big and often quite complex.” Wilhelmson said that today’s scientists must be “nimble and adaptive,” willing to try new things, and “find new ways to deal with the data explosion which we are in part creating. “We will be able to do things on Blue Waters that I never dreamed about,” he said, adding that “we are now solving problems that we didn’t have enough computational power to solve in the past.” In conclusion, Wilhelmson stressed the need for funding at adequate levels for applications development and system support, calling it essential to progress and leadership. Yet he expressed doubt about the next frontier: exascale computing, which is a thousand-fold increase over the petascale level. “I’ll make a claim,” he told the TG’10 audience. “There will be no general purpose exascale machine ever built that anyone can afford to operate, much less buy,” largely because of the massive amount of funding that will be needed, along with the extreme power requirements. Upon reflection, Wilhelmson challenges today’s young computer scientists: “Who will show that this prediction is wrong?”
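Wilhelmson's "well-balanced machine" point can be made concrete with rough, back-of-the-envelope arithmetic from the figures quoted in the piece above. The numbers are approximate ("300,000-plus cores", "approximately 10 petaflops"), so the derived ratios in this sketch are order-of-magnitude estimates only, not published system specifications.

```python
# Back-of-the-envelope balance figures from the approximate numbers quoted above.
peak_flops = 10e15      # ~10 petaflops peak
cores = 300_000         # "300,000-plus cores"
memory_bytes = 1e15     # "more than one petabyte of memory"
disk_bytes = 10e15      # "more than 10 petabytes of disk storage"

print(f"peak per core     : {peak_flops / cores / 1e9:.0f} GFLOP/s")          # ~33
print(f"memory per core   : {memory_bytes / cores / 1e9:.1f} GB")             # ~3.3
print(f"memory per flop/s : {memory_bytes / peak_flops:.2f} bytes")           # ~0.10
print(f"disk vs. memory   : {disk_bytes / memory_bytes:.0f}x")                # ~10x
```

Ratios like bytes of memory per flop/s and disk capacity relative to memory are the kind of balance metrics Wilhelmson alludes to when he says many organizations struggle to get them right.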
<urn:uuid:2dcdbf2c-82bf-48c1-9f09-6d477e432c9d>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/08/16/teragrid_2010_keynote_attendees_peer_into_blue_waters/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00253-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924953
1,313
2.828125
3
Computer clouds have been credited with making the workplace more efficient and giving consumers anytime-anywhere access to emails, photos, documents and music, as well as helping companies crunch through masses of data to gain business intelligence. Now it looks like the cloud might help cure cancer too.

The National Cancer Institute plans to sponsor three pilot computer clouds filled with genomic cancer information that researchers across the country will be able to access remotely and mine for information. The program is based on a simple revelation, George Komatsoulis, interim director and chief information officer of the National Cancer Institute’s Center for Biomedical Informatics and Information Technology, told Nextgov. It turns out the gross physiological characteristics we typically use to describe cancer -- a tumor’s size and its location in the body -- often say less about the disease’s true character and the best course of treatment than genomic data buried deep in the cancer’s DNA.

That’s sort of like saying you’re probably more similar to your cousin than to your neighbor, even though you live in New York and your cousin lives in New Delhi. It means treatments designed for one cancer site might be useful for certain tumors at a different site, but, in most cases, we don’t know enough about those tumors’ genetic similarities yet to make that call.

The largest barrier to gaining that information isn’t medical but technical, said Komatsoulis, who’s leading the cancer institute’s cloud initiative. The National Cancer Institute is part of the National Institutes of Health.

The largest source of data about cancer genetics, the cancer institute’s Cancer Genome Atlas, contains half a petabyte of information now, he said, or the equivalent of about 5 billion pages of text. Only a handful of research institutions can afford to store that amount of information on their servers, let alone manipulate and analyze it. By 2014, officials expect the atlas to contain 2.5 petabytes of genomic data drawn from 11,000 patients. Just storing and securing that information would cost an institution $2 million per year, presuming the researchers already had enough storage space to fit it in, Komatsoulis told a meeting of the institute’s board of advisers in June. To download all that data at 10 gigabits per second would take 23 days, he said. If five or 10 institutions wanted to share the data, download speeds would be even slower. It could take longer than six months to share all the information.

That’s where computer clouds -- the massive banks of computer servers that can pack information more tightly than most conventional data centers and make it available remotely over the Internet -- come in. If the genomic information contained inside the atlas could be stored inside a cloud, he said, researchers across the world would be able to access and study it from the comfort of their offices. That would provide significant cost savings for researchers. More importantly, he said, it would democratize cancer genomics.

“As one reviewer from our board of scientific advisers put it, this means a smart graduate student someplace will be able to develop some new, interesting analytic software to mine this information and they’ll be able to do it in a reasonable time frame,” Komatsoulis said, “and without requiring millions of dollars of investment in commodity information technology.”

It’s not clear where all this genomic information will ultimately end up.
If one or more of the pilots proves successful, a private sector cloud vendor may be interested in storing the information and making it available to researchers on a fee-for-service basis, Komatsoulis said. This is essentially what Amazon has done for basic genetic information captured by the international Thousand Genomes Project. A private sector cloud provider will have to be convinced that there’s a substantial enough market for genomic cancer information to make storing the data worth its while, Komatsoulis said. The vendor will also have to adhere to rigorous privacy standards, he said, because all the genomic data was donated by patients who were promised confidentiality.

One or more genomic cancer clouds may also be managed by university consortiums, he said, and it’s possible the government may have an ongoing role. The cancer institute is seeking public input on the cloud through the crowdsourcing website Ideascale. The University of Chicago has already launched a cancer cloud to store some of that information. It’s not clear yet whether the university will apply to be one of the institute’s pilot clouds. Because the types of data and the tools used to mine it differ so greatly, it’s likely there will have to be at least two cancer clouds after the pilot phase is complete, Komatsoulis said.

As genomic research into other diseases progresses, it’s possible that information could be integrated into the cancer clouds as well, he said. “Cancer research is on the bleeding edge of really large-scale data generation,” he said. “So, as a practical matter, cancer researchers happen to be the first group to hit the point where we need to change the paradigm by which we do computational analysis on this data . . . But much of the data that I think we’re going to incorporate will be the same or similar as in other diseases.”

As scientists’ ability to sequence and understand genes improves, genome sequencing may one day become part of standard care for patients diagnosed with cancer, heart problems and other diseases with a genetic component, Komatsoulis said. “As we learn more about the molecular basis of diseases, there’s every reason to believe that in the future if you present with a cancer, the tumor will be sequenced and compared against known mutations and that will drive your physician’s treatment decisions,” he explained. “This is a very forward looking model but, at some level, the purpose of things like The Cancer Genome Atlas is to develop a knowledge base so that kind of a future is possible.”
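The transfer-time figure quoted earlier in the article can be sanity-checked with a few lines of arithmetic. The sketch below assumes the 2.5-petabyte projected atlas size and a 10 gigabit-per-second link, decimal units, and no protocol overhead, which is why it lands near the 23-day figure; real transfers would be somewhat slower.

```python
# Rough transfer-time check for the Cancer Genome Atlas figures quoted above.
atlas_bytes = 2.5e15       # projected 2.5 petabytes (decimal units)
link_bits_per_s = 10e9     # 10 gigabits per second

seconds = atlas_bytes * 8 / link_bits_per_s
print(f"{seconds / 86_400:.1f} days")  # ~23 days, ignoring protocol overhead
```

The same arithmetic explains why a shared 10 Gb/s link split five or ten ways pushes the total into months, which is the bottleneck the cloud pilots are meant to remove.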
<urn:uuid:48d7d424-9273-4081-a1ef-22fe216ebfe0>
CC-MAIN-2017-04
http://www.nextgov.com/cloud-computing/2013/08/computer-clouds-can-help-cure-cancer/68096/?oref=ng-HPriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931194
1,246
3
3
Okara R.M.,University of Nairobi | Sinka M.E.,University of Oxford | Minakawa N.,Nagasaki University | Mbogo C.M.,Center for Geographic Medicine Coast | And 3 more authors. Malaria Journal | Year: 2010 Background. A detailed knowledge of the distribution of the main Anopheles malaria vectors in Kenya should guide national vector control strategies. However, contemporary spatial distributions of the locally dominant Anopheles vectors including Anopheles gambiae, Anopheles arabiensis, Anopheles merus, Anopheles funestus, Anopheles pharoensis and Anopheles nili are lacking. The methods and approaches used to assemble contemporary available data on the present distribution of the dominant malaria vectors in Kenya are presented here. Method. Primary empirical data from published and unpublished sources were identified for the period 1990 to 2009. Details recorded for each source included the first author, year of publication, report type, survey location name, month and year of survey, the main Anopheles species reported as present and the sampling and identification methods used. Survey locations were geo-positioned using national digital place name archives and on-line geo-referencing resources. The geo-located species-presence data were displayed and described administratively, using first-level administrative units (province), and biologically, based on the predicted spatial margins of Plasmodium falciparum transmission intensity in Kenya for the year 2009. Each geo-located survey site was assigned an urban or rural classification and attributed an altitude value. Results. A total of 498 spatially unique descriptions of Anopheles vector species across Kenya sampled between 1990 and 2009 were identified, 53% were obtained from published sources and further communications with authors. More than half (54%) of the sites surveyed were investigated since 2005. A total of 174 sites reported the presence of An. gambiae complex without identification of sibling species. Anopheles arabiensis and An. funestus were the most widely reported at 244 and 265 spatially unique sites respectively with the former showing the most ubiquitous distribution nationally. Anopheles gambiae, An. arabiensis, An. funestus and An. pharoensis were reported at sites located in all the transmission intensity classes with more reports of An. gambiae in the highest transmission intensity areas than the very low transmission areas. Conclusion. A contemporary, spatially defined database of the main malaria vectors in Kenya provides a baseline for future compilations of data and helps identify areas where information is currently lacking. The data collated here are published alongside this paper where it may help guide future sampling location decisions, help with the planning of vector control suites nationally and encourage broader research inquiry into vector species niche modeling. © 2010 Okara et al; licensee BioMed Central Ltd. Source Snow R.W.,Kenya Medical Research Institute | Snow R.W.,University of Oxford | Kibuchi E.,Kenya Medical Research Institute | Karuri S.W.,Kenya Medical Research Institute | And 7 more authors. PLoS ONE | Year: 2015 Background: Progress toward reducing the malaria burden in Africa has been measured, or modeled, using datasets with relatively short time-windows. These restricted temporal analyses may miss the wider context of longer-term cycles of malaria risk and hence may lead to incorrect inferences regarding the impact of intervention. 
Methods: 1147 age-corrected Plasmodium falciparum parasite prevalence (PfPR2-10) surveys among rural communities along the Kenyan coast were assembled from 1974 to 2014. A Bayesian conditional autoregressive generalized linear mixed model was used to interpolate to 279 small areas for each of the 41 years since 1974. Best-fit polynomial splined curves of changing PfPR2-10were compared to a sequence of plausible explanatory variables related to rainfall, drug resistance and insecticide-treated bed net (ITN) use. Results: P. falciparum parasite prevalence initially rose from 1974 to 1987, dipped in 1991-92 but remained high until 1998. From 1998 onwards prevalence began to decline until 2011, then began to rise through to 2014. This major decline occurred before ITNs were widely distributed and variation in rainfall coincided with some, but not all, short-term transmission cycles. Emerging resistance to chloroquine and introduction of sulfadoxine/pyrimethamine provided plausible explanations for the rise and fall of malaria transmission along the Kenyan coast. Conclusions: Progress towards elimination might not be as predictable as we would like, where natural and extrinsic cycles of transmission confound evaluations of the effect of interventions. Deciding where a country lies on an elimination pathway requires careful empiric observation of the long-term epidemiology of malaria transmission. Copyright: © 2015 Snow et al. Source Kamali A.,Medical Research Council Uganda Virus Research Institute | Kamali A.,Uganda Virus Research Institute UVRI | Price M.A.,International AIDS Vaccine Initiative IAVI | Price M.A.,University of California at San Francisco | And 46 more authors. PLoS ONE | Year: 2015 HIV epidemiology informs prevention trial design and program planning. Nine clinical research centers (CRC) in sub-Saharan Africa conducted HIV observational epidemiology studies in populations at risk for HIV infection as part of an HIV prevention and vaccine trial network. Annual HIV incidence ranged from below 2% to above 10% and varied by CRC and risk group, with rates above 5% observed in Zambian men in an HIV-discordant relationship, Ugandan men from Lake Victoria fishing communities, men who have sex with men, and several cohorts of women. HIV incidence tended to fall after the first three months in the study and over calendar time. Among suspected transmission pairs, 28% of HIV infections were not from the reported partner. Volunteers with high incidence were successfully identified and enrolled into large scale cohort studies. Over a quarter of new cases in couples acquired infection from persons other than the suspected transmitting partner. © 2015 Kamali et al. Source Global proteomic analysis of plasma from mice infected with Plasmodium berghei ANKA using two dimensional gel electrophoresis and matrix assisted laser desorption ionization-time of flight mass spectrometry Gitau E.N.,Center for Geographic Medicine Coast | Kokwaro G.O.,University of Nairobi | Kokwaro G.O.,African Center for Clinical Trials | Kokwaro G.O.,Consortium for National Health Research | And 2 more authors. Malaria Journal | Year: 2011 Background: A global proteomic strategy was used to identify proteins, which are differentially expressed in the murine model of severe malaria in the hope of facilitating future development of novel diagnostic, disease monitoring and treatment strategies. Methods. 
Mice (4-week-old CD1 male mice) were infected with Plasmodium berghei ANKA strain, and infection allowed to establish until a parasitaemia of 30% was attained. Total plasma and albumin depleted plasma samples from infected and control (non-infected) mice were separated by two-dimensional gel electrophoresis (2-DE). After staining, the gels were imaged and differential protein expression patterns were interrogated using image analysis software. Spots of interest were then digested using trypsin and the proteins identified using matrix-assisted laser desorption and ionization-time of flight (MALDI-TOF) mass spectrometry (MS) and peptide mass fingerprinting software. Results: Master gels of control and infected mice, and the corresponding albumin depleted fractions exhibited distinctly different 2D patterns comparing control and infected plasma, respectively. A wide range of proteins demonstrated altered expression including; acute inflammatory proteins, transporters, binding proteins, protease inhibitors, enzymes, cytokines, hormones, and channel/receptor-derived proteins. Conclusions: Malaria-infection in mice results in a wide perturbation of the host serum proteome involving a range of proteins and functions. Of particular interest is the increased secretion of anti-inflammatory and anti apoptotic proteins. © 2011 Gitau et al; licensee BioMed Central Ltd. Source Gitau E.N.,Center for Geographic Medicine Coast | Tuju J.,Center for Geographic Medicine Coast | Stevenson L.,Center for Geographic Medicine Coast | Kimani E.,Center for Geographic Medicine Coast | And 7 more authors. PLoS ONE | Year: 2012 The Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) is a variant surface antigen expressed on mature forms of infected erythrocytes. It is considered an important target of naturally acquired immunity. Despite its extreme sequence heterogeneity, variants of PfEMP1 can be stratified into distinct groups. Group A PfEMP1 have been independently associated with low host immunity and severe disease in several studies and are now of potential interest as vaccine candidates. Although antigen-specific antibodies are considered the main effector mechanism in immunity to malaria, the induction of efficient and long-lasting antibody responses requires CD4+ T-cell help. To date, very little is known about CD4+ T-cell responses to PfEMP1 expressed on clinical isolates. The DBLα-tag is a small region from the DBLα-domain of PfEMP1 that can be amplified with universal primers and is accessible in clinical parasite isolates. We identified the dominant expressed PfEMP1 in 41 individual clinical parasite isolates and expressed the corresponding DBLα-tag as recombinant antigen. Individual DBLα-tags were then used to activate CD4+ T-cells from acute and convalescent blood samples in children who were infected with the respective clinical parasite isolate. Here we show that CD4+ T-cell responses to the homologous DBLα-tag were induced in almost all children during acute malaria and maintained in some for 4 months. Children infected with parasites that dominantly expressed group A-like PfEMP1 were more likely to maintain antigen-specific IFNγ-producing CD4+ T-cells than children infected with parasites dominantly expressing other PfEMP1. These results suggest that group A-like PfEMP1 may induce long-lasting effector memory T-cells that might be able to provide rapid help to variant-specific B cells. 
Furthermore, a number of children induced CD4+ T-cell responses to heterologous DBLα-tags, suggesting that CD4+ T-cells may recognise shared epitopes between several DBLα-tags. © 2012 Gitau et al. Source
<urn:uuid:b8b48174-d3cb-4483-81a7-7796bbab8984>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-geographic-medicine-coast-1016090/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910812
2,216
2.6875
3