text
stringlengths
234
589k
id
stringlengths
47
47
dump
stringclasses
62 values
url
stringlengths
16
734
date
stringlengths
20
20
file_path
stringlengths
109
155
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
57
124k
score
float64
2.52
4.91
int_score
int64
3
5
Since the 1950s, silicon has been the preferred medium of computing circuitry. While transistor density has continued to increase according to Moore’s law, the technology is beginning to reveal its limitations, resulting in increased heat and energy consumption to achieve more compute power. This has prompted physicist Michio Kaku to claim that advances in computing will need to originate from an alternative material. He shared his thoughts in an interview with The Daily Galaxy. According to Kaku: “In about ten years or so, we will see the collapse of Moore’s Law. In fact, already we see a slowing down of Moore’s Law. Computer power simply cannot maintain its rapid exponential rise using standard silicon technology.” The strains on silicon are leading to new types of designs from chipmakers. Last June, Intel VP Kirk Skaugen admitted the exascale roadmap would not be able to rely on Moore’s Law alone. At ISC 2011 though, he introduced the company’s tri-gate technology, an advancement that maintained Intel’s breakneck pace for transistor scaling. Kaku agrees that tri-gate, or 3D chip technology will alleviate some issues, but still views it as a stopgap solution. Like others, he believes that heat, leakage, and the inexorable laws of quantum mechanics will eventually spell silicon’s demise. Until then, the physicist says manufacturers will engage in more innovative practices to squeeze silicon to its physical limits. These approaches, including parallel computing, could be tapped out by the end of the next decade. Although the Sun may eventually set on the era of silicon-based computing, a number of potential alternatives are on the horizon. Kaku mentions machines based on protein, DNA, and optical devices as possible replacements. When the time comes to transition to a new medium, he thinks the world will migrate to 3-dimensional chips. That technology would be followed by molecular computers and, eventually, by quantum computers around the end of the 21st century. Kaku certainly predicts a dark future for silicon-based semiconductor technology. Regardless of the underlying technology though, he remains optimistic that future innovations will lead to new types computers even more powerful than the ones we use today. While silicon will drive the first exascale system, a new medium is likely to emerge for the zettascale and then yottascale eras.
<urn:uuid:5747e550-1d26-425b-8352-b80f49969fb9>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/05/01/famous_physicist_predicts_the_end_of_silicon-based_computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00472-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935789
488
3.28125
3
IP Camera Setup TrainingAuthor: John Honovich, Published on May 29, 2012 Once you can connect to an IP camera, you need to set it up so that the camera can integrate with a VMS or NVR. In this training, we show you how to do it and what issues to avoid. The most fundamental step in setting up IP cameras is assigning an IP address to the camera. In the video below, we explain: - Choosing between dynamic and static IP addresses - How to get the right IP address - When and why to use DNS information Watch the 6 minute video to see this in action: The next step is to verify that the correct firmware / software is loaded on both the IP camera and VMS side. This is very easy to overlook and is one of the most common problems in using IP video surveillance. While it is not particularly hard to resolve, often users are just not aware of these element. Watch the 4 minute video below for an explanation on the importance and impact of firmware: Take the 5 question quiz below to see how well you understand setting up IP cameras: Most Recent Industry Reports The world's leading video surveillance information source, IPVM provides the best reporting, testing and training for 10,000+ members globally. Dedicated to independent and objective information, we uniquely refuse any and all advertisements, sponsorship and consulting from manufacturers.
<urn:uuid:172d0dde-d840-478e-94a5-6480d48143f2>
CC-MAIN-2017-04
https://ipvm.com/reports/ip-camera-setup-training
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00106-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927036
285
2.546875
3
HijackThis is an advanced tool which requires advanced knowledge about the Windows Operating System. Most of the log entries are required to run a computer and removing essential ones can potentially cause serious damage such as loss of Internet connectivity or problems with your operating system which could preventing it from starting. HijackThis relies on trained experts to interpret the log entries and investigate them in order to determine what needs to be fixed. If you do not have advanced knowledge about computers or training in the use of this tool, you should NOT fix anything using HijackThis without consulting a expert as to what to fix. is a computer that provides services used by other computers. A Proxy Server is a server that acts as a go-between for requests from client machines and other servers. A proxy also hides the computers behind it. When you configure your computer/browser to use a proxy server, you are telling it to send all traffic to that server instead of going directly to the actual destination. In otherwords, the web browser contacts the proxy server for each web access instead of going directly to the target server. The proxy server then makes the request of the web server, receives a response and sends it back the information retrieved to your computer. For more information, see Proxy Servers and DMZ and What's a Proxy Server? In Internet Explorer, (Tools > Internet Options > Connections tab) the LAN Settings has several options: - Automatically detect settings. - Use automatic configuration script. - Use a proxy server. - Bypass proxy server for local addresses. If you enable the proxy you will then be able to click on the Advanced button which allows the following extra options: - Servers - This section will allow you to use the same proxy server for each of the protocols (HTTP, HTTPS, FTP, GOPHER, SOCKS) or assign different proxy servers for one or more proxy servers. - Exceptions - If you want to add more addresses that Internet Explorer will not use the proxy server for you can enter them in this field seperated by semi-colons. Though it may appear you are using a proxy server, it may not actually be enabled. Most people who are using a proxy usually know they are using it.
<urn:uuid:384fbb86-7380-4ce3-acb5-ea504cecb671>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/224833/proxyoverride/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00014-ip-10-171-10-70.ec2.internal.warc.gz
en
0.871293
466
2.859375
3
Image Color Matching in Windows 2000 If you create a Web page that contains images, you probably take for granted the idea that the images will appear the same to everyone. Sure, some people will have their video cards set to different resolutions or numbers of colors, but on comparable systems, the images should look the same. Although this idea sounds simple, it really isn't. You've probably noticed that during the Windows Setup process, Windows attempts to determine the exact make and model of your monitor. The reason for this detection process is that every brand of monitor works a little differently. If you view an image that has with just a few colors, such as a 16- or 256-color image, you'd probably never notice these subtle differences. However, if you view a high-resolution photograph that contains lots of colors, the differences quickly become apparent. Image Color Management This is where Image Color Management (ICM) 2.0 comes in. ICM 2.0 is a Windows 2000 component; its job is to make sure that an image looks the same on every system. It accomplishes this task by looking at the profiles for each individual type of monitor. By knowing exactly how an individual monitor will display a known color, ICM can alter the color so that it's displayed correctly on that monitor. For example, suppose ICM knew that a particular monitor displays the color blue just a bit too dark. ICM could alter the way that images containing blue are displayed, so that the color blue is displayed accurately in spite of the monitor's inherent flaws. As cool as ICM is, its functionality doesn't stop with monitors. ICM performs the same type of task on a wide variety of other devices, such as scanners, digital cameras, and printers. ICM maintains a color profile for each type of input device. Because the specific color inaccuracies are known for each device, ICM can correct the images as they're fed into the computer. Likewise, because ICM knows about color inaccuracies associated with printers, it can intervene to make sure images are printed in accurate colors. Because ICM relies so heavily on device profiles, you may wonder what happens when a device has no profile. Usually, ICM is used in situations with both a source and a destination device. For example, a scanner might be the source, and the monitor might be the destination. If either device contains a color profile, that color profile is used--even if the other device doesn't have a profile. When a device doesn't have a profile, ICM uses a built-in color profile called sRGB in place of the missing device-specific profile. // Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
<urn:uuid:d57642d1-732b-45fc-a305-3661ffc3f28b>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netos/article.php/625661/Image-Color-Matching-in-Windows-2000.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946715
622
2.921875
3
The threat of cyber attacks is more prevalent today than ever before. Criminals and government organizations are in a fierce battle that many don’t notice until the aftermath. Credit cards and personal information are leaked from large companies and the breaches are widely publicized by the media. However, there’s another, less publicized type of cyber warfare occurring…and these strategic attacks often go unnoticed until it’s too late. Stuxnet is a perfect example. The goal of Stuxnet was to destroy the fragile equipment that’s used to refine uranium into a form that can be weaponized. This attack was executed with great precision so that only the intended target would be affected. Even though many systems contracted the worm, they were not harmed by it. Only the hackers’ target, the uranium facility, was harmed. The Stuxnet worm is still one of the greatest worms ever crafted. It moved system to system on USB drives until finally an infected USB was inserted in to the refining facility. And that’s when the worm started its attack. Organizations can learn two important lessons from Stuxnet: - Removable media is one the greatest threats to an air-gapped system. This type of system that has no connection to the internet and the only way into the network via an infected removable media device. - Users should be prohibited from inserting removable media into high-target systems – i.e. any system that’s air gapped – unless it’s been properly sanitized and approved for use. Stuxnet also draws attention to the fact that some systems are vulnerable no matter what. There was little the plant could’ve done to safeguard the components that were used to cause the failure in the facility. The dropper part of the worm is the only thing that could’ve been stopped. This means that a more in-depth defense and user training is needed to better protect systems that cannot full secured. Establishing a Fortified Security Policy There’s a lot that can be done to lower the risk of a security breach in this modern world of cyber warfare. And one of the most important is a good security policy that’s strictly enforced. A sound security policy takes into consideration how an organization does business and works to find and resolve issues before they become problems. It also takes into account the people responsible for its enforcement. The weakest point of any security policy is people. Because everyone make mistakes. By providing employees with on-boarding training, as well as annual cyber security training, employees can educated on cyber security best practices and red flags, such as phishing emails, to look out for. Once a staff is properly trained in cyber security procedures, an organization’s network should be assessed to ensure the most recent patches are installed on servers and workstations. All critical servers or server that are high value targets, such as credit card databases, should also require user input to be validated to protect the database from injections that can cause the table to be dropped onto the attacker’s machine. Another way to make sure a network is as secure as possible is via penetration tests and vulnerability scans. These exercises expose a network to a variety of attacks and show what hosts are on it, which helps to eliminate advanced persistent threats from rogue devices. Pen test and vulnerability scans also show where improvements can be made without causing interference to users and customers. While a good security policy can only do so much to protect a company, it’s an absolute necessity. And its something that should be revisited and updated regularly as new threats emerge and the organization’s needs change and evolve. Preventing Another Stuxnet To prevent another attack like Stuxnet, organizations need to take their cyber security more seriously. Implement a custom security policy, train users on best practices and threat detection and ensure that network engineers have an accurate account of all systems. Doing so will go a long way in strengthening an organization’s security posture and prevent a large scale breach. Custom malware can go undetected for years. So making it difficult for cyber criminals to infiltrate networks is a logical line of defense. However, even if attackers are able to access a high-value target, they still have to get the data out. Closely monitoring everything that is outbound from a network will help detect if and when it’s compromised. Yet in the case of Stuxnet, the key was preventing removable media from being inserted into the air-gapped network. There are many ways to eliminate the removable media options from computers. Either by physically removing the drives, or by logically severing the ability to make the connection. In the case of an air gapped system, physically removing the ability to insert any form of removable media is the best option; however it is a more extreme one. Hackers have nothing but time on their hands. They can create custom malware that won’t be detected by anti-virus. And they can bide their time and perform tests to ensure the attack’s will be successful. If an actor is able to get into a power gird or water systems that use SCAD systems, how would we see them? Could we stop them before they caused irreparable damage and extreme loss of life? Stuxnet brought to light the seriousness of this threat. It demonstrated the power of cyber warfare and the damage a strategically engineer attack can have on our nation’s critical infrastructure.
<urn:uuid:7e582eb6-75f6-4c6e-b7cb-441aa1d94e8c>
CC-MAIN-2017-04
https://lunarline.com/blog/2015/03/stuxnet-combating-modern-cyber-warfare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00436-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958655
1,129
2.671875
3
Over the last 50 years, the semiconductor business has enjoyed what is perhaps the most thrilling ride of any industry ever conceived. Today semiconductors are a $250 billion business that account for nearly 10 percent of the world’s GDP. At the foundation of its success is Moore’s Law, the chipmaker’s mantra that promises better, faster and cheaper transistors every 18 to 24 months. But the laws of physics are conspiring to bring this ride to an end. The problems are well known. CMOS-based transistors are increasingly harder to manufacture at nanometer scale. And even as technologies are perfected to do so, the materials themselves are becoming unsuitable for such small geometries. At 22 nm, Intel’s process node slated for 2011, gate oxide will be only 4 to 5 atoms thick and the gates themselves will be 42 atoms across. Manufacturing these devices in reasonable volumes and within reasonable power envelopes is going to be a challenge. In fact, the analyst team at iSuppli has predicted that the expense of manufacturing sub-20nm devices would not be economically feasible. That is, the cost of the fabs could not be recouped by the volume of chips produced at those process nodes. Thus, they concluded, Moore’s Law would be repealed in about five years. Most of the efforts to address the problem of shrinking transistor geometries have focused on making the devices behave more precisely, using technologies like X-ray lithography and hafnium insulators, to name just two. But what if instead of trying to make the transistors better, we purposefully try to make them worse. Although it sounds counter-intuitive, developing processors that are naturally error-prone is exactly what one team of researchers from the University of Illinois and the University of California, San Diego has set out to do. Called stochastic processors, the idea is to under-design the hardware, such that it is allowed to behave non-deterministically under both stressful and nominal conditions. Error tolerance can be provided by either the hardware or the software. The rationale is that by relaxing the design and manufacturing constraints, it will be much simpler and much cheaper to produce such processors in volume. And because voltage scaling and clock frequency restrictions are eased, significant power savings and performance increases can be realized. The stochastic model would represent a significant departure from the way semiconductor devices are designed today. Even though processors have evolved significantly over the decades — scalar to superscalar, single-core to multicore, etc. — the basic assumption has always been that the hardware must behave flawlessly. “It’s the contract that the hardware provides to the software today,” says Rakesh Kumar, a computer scientist at the University of Illinois, Urbana-Champaign, who is part of the Stochastic Processor Research group there. The research is being funded by Intel, DARPA, the NSF, and the GigaScale Systems Research Center (GSRC), a consortium of academic, government and industry organizations devoted to next-generation hardware and software. The idea behind stochastic processors is relatively simple: Build a chip that computes correctly, say, 99 percent of the time. Such a device is specifically designed to let errors occur under both worst-case and nominal conditions. The advantage of this model is that, compared to a 100 percent error-free processor, a stochastic implementation requires a lot less manufacturing precision and takes a lot less power to run. Kumar’s stochastic research group has designed a Niagara processor (an open source processor design developed by Sun Microsystems) that allows for a 1 to 4 percent error rate. Based on circuit level simulation with CAD design tools, the researchers determined they could save between 25 to 40 percent on power compared to the default (deterministic) design. That might seem like a lot, but it points to how much of a traditional processor design is now being devoted to keeping the transistors from throwing off errors. It also explains why multicore designs introduce another level of challenges for chipmakers. For example, if two of the cores on a quad-core processor can run (flawlessly) at 2.0 GHz, one can run at 1.5 GHz, and the last core can only run error-free at 1.0 GHz, the chip has to be binned at 1.0 GHz. That’s money down the drain as far as the chipmaker is concerned. Ideally, they would like to ship a 2.0 GHz product and use some sort of scheme to compensate for the variability in the other two cores. A stochastic design would make this possible. Of course, compensating for that variability is the tricky part. Kumar says error tolerance can be accomplished in hardware or in software. Hardware correction would be the most obvious and, from the programmer’s perspective, the most palatable way to ensure correct program execution. But error tolerance in software provides more flexibility. “Our vision is that all the errors that are produced get tolerated by the software,” says Kumar. Part of the group’s research involves how to write application software in such a way that takes into account a non-deterministic processor. Kumar believes this shift in thinking is inevitable. Because the hardware variability problem is going to keep getting worse as process geometries shrink, it will eventually make more sense for the programmer to code for non-determinism rather that write the software for the least common denominator hardware. On balance, Kumar believes the ideal would be to employ hardware correction only when it is too onerous to compensate for the errors in software. HPC applications might be especially at home on stochastic processors since many of these codes are fundamentally optimization problems. In other words, they are noise tolerant to a great extent, relying on probability distributions rather than a single correct computation. Monte Carlo methods are just one example of a class of algorithms used in HPC that rely on optimization techniques, but almost any simulation or matrix math-based code has some level of optimization built in — think climate modeling, data mining, and object recognition apps. In these cases, says Kumar, “you’re not going after one answer, you’re going after a good answer.”
<urn:uuid:08c96e50-b1c1-4d13-afe1-edaa7fc4c8fb>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/05/11/one_groups_answer_to_transistors_behaving_badly/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943365
1,303
3.53125
4
Hurricane Sandy caused major damage in both the Caribbean and the North-Eastern part of the USA. In an earlier article (RIPE Atlas - Superstorm Sandy) we showed data on 15 RIPE Atlas probes that are located in or near the affected areas in the USA. Most of these locations now appear to be back to normal round trip times to targets we monitor. But the effects of Hurricane Sandy were felt beyond the immediately affected area. The region in Hurricane Sandy's path happens to contain two major hubs for international Internet connectivity: the New York City (NYC) and Washington DC/Ashburn (ASH) areas. We looked at IPv4 traceroutes sent by RIPE Atlas probes that traverse paths through these areas to see what they could tell us about the impact that Hurricane Sandy had on the wider Internet. The image below shows the number of paths we identified going through NYC (red) and ASH (blue) for traceroute4 measurements from all RIPE Atlas probes towards ns.ripe.net around the time Sandy hit the East Coast of the US. Because ns.ripe.net is located in Amsterdam, the Netherlands, RIPE Atlas probes on the American continent need to cross the Atlantic for the shortest path towards this destination. The time Sandy made landfall (on 30 October 2012 at midnight UTC) is marked with an orange line. You can see that the paths identified as going through NYC (red) dramatically drop about an hour before Sandy made landfall, and again drop a few hours later. At the same time as the number of paths through NYC drops, the number of paths identified as going through the ASH (blue) area increases significantly. We also wanted to see the geographical extent of Sandy-correlated path changes from the NYC area to the ASH area. In the map below, each marker represents a RIPE Atlas probe. They are coloured as follows: The orange circle on the map represents the geographic location of the traceroute destination. We found that for the destination in the Netherlands, the paths that were diverted away from NYC (marked red) were mainly located in North America, Oceania and South-East Asia. What you can see from this article are examples of how the Internet routes around damage. It is telling that even with a major connectivity node under extreme stress the Internet kept going. The network operators who made this happen deserve a lot of credit for the work they put into planning their networks, and keeping them operational and interconnected under difficult circumstances. What is also striking about this particular situation is the geographic extent of end-points that saw paths move away from the New York City area. For the five destinations that we examined, we saw paths stop traversing the NYC area for a significant number of sources as far away as South-East Asia and Africa, even for destinations that were not close to the area affected by Hurricane Sandy. For many more graphs and maps and for a more detailed description of the methodology, please refer to the background article on RIPE Labs: RIPE Atlas: Hurricane Sandy And How the Internet Routes Around Damage.
<urn:uuid:c254ace2-a379-42a2-abf6-e788d78d2645>
CC-MAIN-2017-04
http://www.circleid.com/posts/print/20121115_a_look_at_how_internet_routes_around_damage_storm_sandy_effects/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00068-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961846
632
2.890625
3
One of the chief goals of NASA's Mars Science Lab and its Curiosity rover was to determine if the Red Planet could have supported life in some fashion and now comes news that apparently it could have. Confirmation of that major discovery came today as NASA said analysis of a rock sample collected by Curiosity rover shows ancient Mars could have supported living microbes. NASA said its scientists identified sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon -- some of the key chemical ingredients for life -- in the powder Curiosity drilled out of a sedimentary rock near an ancient stream bed in what's known as Gale Crater on the Red Planet last month. [RELATED: What is so infinitely cool about Mars? MORE: NASA's hot radiation mission] NASA said clues to what it called a habitable environment come from data run by the rover's onboard Sample Analysis at Mars (SAM) and Chemistry and Mineralogy (CheMin) instruments. The data indicate the area Curiosity is exploring, known as Yellowknife Bay, was the end of an ancient river system or an intermittently wet lake bed that could have provided chemical energy and other favorable conditions for microbes. The rock is made up of a fine grain mudstone containing clay minerals, sulfate minerals and other chemicals. This ancient wet environment, unlike some others on Mars, was not harshly oxidizing, acidic, or extremely salty, NASA says. "These clay minerals are a product of the reaction of relatively fresh water with igneous minerals, such as olivine, also present in the sediment. The reaction could have taken place within the sedimentary deposit, during transport of the sediment or in the source region of the sediment. The presence of calcium sulfate along with the clay suggests the soil is neutral or mildly alkaline," NASA said. The plan is for Curiosity to explore the Yellowknife Bay area for a number of weeks before beginning a long drive to Mount Sharp in the middle of Gale Crater where clay minerals and sulfate minerals have been identified from orbit, may add information about the duration and diversity of habitable conditions. "A fundamental question for this mission is whether Mars could have supported a habitable environment," said Michael Meyer, lead scientist for NASA's Mars Exploration Program at the agency's headquarters in Washington. "From what we know now, the answer is yes." Ironically NASA's Mars rover Curiosity isn't doing any exploring right now since the rover team is currently assessing and recovering from a memory glitch that affected the rover's main or A-side computer. Curiosity has two computers that are redundant of one another. The rover is currently operating using the B-side computer, which is operating as expected, NASA said. Controllers switched the rover to a redundant onboard computer, the rover's "B-side" computer, on Feb. 28 when the "A-side" computer that the rover had been using demonstrated symptoms of a corrupted memory location. The intentional computer swap put the rover into minimal-activity safe mode. Curiosity exited safe mode on Saturday, March 2, and resumed using its high-gain antenna the following day. The cause for the A-side's memory symptoms remains to be determined. "These tests have provided us with a great deal of information about the rover's A-side memory," said Jim Erickson, deputy project manager for the Mars Science Laboratory/Curiosity mission at NASA's Jet Propulsion Laboratory, Pasadena, Calif. "We have been able to store new data in many of the memory locations previously affected and believe more runs will demonstrate more memory is available." Check out these other hot stories:
<urn:uuid:5ff55b7a-d5c1-4947-9508-df21db1202e1>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224274/software/nasa--mars-rock-sample-shows-red-planet-could-have-supported-life.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942772
722
3.796875
4
Spyware are programs that monitor a user's web browsing activity and then report this information to a remote computer without the express permission of the user. This information would then be analyzed by the company to offer new services or advertisements to the end user. This information may also be sold to other companies for market analysis and the creation of targeted advertising campaigns. The term Spyware has also been expanded to include any application that phones home, or transmits data to a remote location, without your express permission or knowledge. Malicious Spyware has also evolved to transmit personal data such as login names, account passwords, and other personal information to a remote location. This information will then be used for identity theft or other criminal activities. Spyware of this type are typically much more difficult to remove and tend to utilize other malware to protect itself from removal. These types of Spyware are those targeted by anti-spyware and anti-virus program. The transmission of program usage, errors, and other information is also very common in legitimate applications. Companies, though, package this type of behavior in phrases such as helping them improve the program or to allow them to offer you a better end user experience by transmitting usage information back to them. The difference, though, is that these legitimate applications ask you first and allow you to opt of these types of programs. If you do not allow it, then the programs will not send any information to a remote location. It is also not uncommon for Freeware programs to include Spyware and Adware in their programs as a way of generating revenue. Therefore, when downloading a program that is considered Freeware you should always read the program's End User License Agreement, otherwise known as EULA. This license agreement should be shown before you install the software and will state whether or the program will transmit personal information from your computer to a remote location. From the information in the EULA, you can then decide whether or not you wish to install the program.
<urn:uuid:b87db858-68b2-4c70-9af7-5f48468e28e8>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/virus-removal/threat/spyware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00004-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939829
400
2.78125
3
In this conclusion of a two-part article, Oliver Rist covers what you need to know to develop a forensic-based response plan, evidence handling and documentation, and forensic tools and intrusion detection. Articles by Oliver Rist The science of finding, gathering, analyzing and documenting any sort of evidence is typically defined as 'forensics.' That discipline has branched off into a new specialty, that of 'computer forensics.' Network managers and corporate security teams don't need to be dedicated computer forensics specialists, but they do need to be at least acquainted with the edges of this discipline in order to effectively interact with law enforcement officials at the 'scene' of a computer crime. Oliver Rist reports.
<urn:uuid:32181908-0137-4f90-af2e-fbb08f44cbef>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/author/79080/Oliver-Rist
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941257
145
2.625
3
ContactCenterWorld - Definition A queue in British English refers to a line, usually of people, cars etc., assembled in the order they arrived and waiting for some event or service. The next person to be served is the person at the front of the queue. New arrivals go to the back of the queue. As people are served, they move forwards in the queue, until they reach the front and are served, thereby assuring that people are served in the order they arrived.
<urn:uuid:4a4de195-d166-4f56-9a35-37f17feb4379>
CC-MAIN-2017-04
https://www.contactcenterworld.com/define.aspx?id=be8c102b6ee541e18961f2a0b13baa7a
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00518-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964946
97
3.1875
3
Is it possible to develop an aircraft that can fly - and survive -- at over 20-times the speed of sound? It has been an idea that has met with only minimal success over the years but that may change. Next month the big idea guys at the Defense Advanced Research Projects Agency will hold a briefing to detail exactly what it expects out of a new program that will seek to develop hypersonic aircraft or missile technology. DARPA says the goal of its Integrated Hypersonics program is "to develop, mature, and test next- generation technologies needed for global-range, maneuverable, hypersonic flight at Mach 20 and above for missions ranging from space access to survivable, time-critical transport to conventional prompt global strike. The program seeks technological advances in the areas of: next generation aero-configurations; thermal protection systems (and hot structures; precision guidance, navigation, and control; enhanced range and data collection methods; and advanced propulsion concepts." Blast from the past: The world's 23 toughest math questions The program is designed to address technical challenges and improve understanding of long-range hypersonic flight through an initial full-scale baseline test of an existing hypersonic test vehicle, followed by a series of subscale flight tests, innovative ground-based testing, expanded modeling and simulation, and advanced analytic methods, culminating in a test flight of a full-scale hypersonic X-plane (HX) in 2016. "History is rife with examples of different designs for 'flying vehicles' and approaches to the traditional commercial flight we all take for granted today," explained Gabriel. "For an entirely new type of flight-extreme hypersonic-diverse solutions, approaches and perspectives informed by the knowledge gained from DoD's previous efforts are critical to achieving our goals" said acting DARPA director, Kaigham Gabriel in a statement. DARPA equates the development of hypersonic equipment to the development of stealth technology in the 1970s and 1980s. The strategic advantage once provided by stealth technology is threatened as other nations' abilities in stealth and counter-stealth improve. "Restoring that battle space advantage requires advanced speed, reach and range. Hypersonic technologies have the potential to provide the dominance once afforded by stealth to support a range of varied future national security missions," DARPA said. There are a ton of technological issues to be addressed, one of the biggest being the heat generated by extreme speeds. At Mach 20, vehicles flying inside the atmosphere experience intense heat, exceeding 3,500 degrees Fahrenheit, which is hotter than a blast furnace capable of melting steel, as well as extreme pressure on the shell of the aircraft, DARPA stated. The thermal protection materials and hot structures technology area aims to advance understanding of high-temperature material characteristics to withstand both high thermal and structural loads. Another goal is to build structural designs and manufacturing processes to enable faster production of high-speed aeroshells, DARPA stated. DARPA knows the risks first-hand. In a report this spring DARPA noted that is experimental Hypersonic Technology Vehicle (HTV-2), lost significant portions of its outer skin and became uncontrollable after three minutes of sustained Mach 20 speed in a flight last August. That was the conclusion of an independent engineering review board (ERB) investigating the cause of what DARPA calls a "flight anomaly" in the second test flight of the HTV-2. From that ERB report: "The flight successfully demonstrated stable aerodynamically-controlled flight at speeds up to Mach 20 (twenty times the speed of sound) for nearly three minutes. Approximately nine minutes into the test flight, the vehicle experienced a series of shocks culminating in an anomaly, which prompted the autonomous flight safety system to use the vehicle's aerodynamic systems to make a controlled descent and splashdown into the ocean. Based on state-of-the-art models, ground testing of high-temperature materials and understanding of thermal effects in other more well-known flight regimes, a gradual wearing away of the vehicle's skin as it reached stress tolerance limits was expected. However, larger than anticipated portions of the vehicle's skin peeled from the aerostructure. The resulting gaps created strong, impulsive shock waves around the vehicle as it travelled nearly 13,000 miles per hour, causing the vehicle to roll abruptly. Based on knowledge gained from the first flight in 2010 and incorporated into the second flight, the vehicle's aerodynamic stability allowed it to right itself successfully after several shockwave-induced rolls. Eventually, however, the severity of the continued disturbances finally exceeded the vehicle's ability to recover." "The initial shockwave disturbances experienced during second flight, from which the vehicle was able to recover and continue controlled flight, exceeded by more than 100 times what the vehicle was designed to withstand," Gabriel said last spring. DARPA will host an Integrated Hypersonics Proposers' Day on August 14, at its Arlington, VA headquarters to detail the technical areas for which it will ultimately contract. Layer 8 Extra Check out these other hot stories:
<urn:uuid:361477c4-5d72-417d-9e37-9b7e11461b9a>
CC-MAIN-2017-04
http://www.networkworld.com/article/2222748/security/mach-20-and-beyond--darpa-program-to-develop-hypersonic-flight.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937645
1,038
2.75
3
A research team is demonstrating tablets that form local networks among the devices laid upon their surfaces, while also providing wireless charging, at the Ceatec electronics show in Japan. The concept is meant to support ad-hoc networks that are more secure and local than current Wi-Fi networks, without the need for cables. The team from Japan's prestigious University of Tokyo that is working on the technology envisions the tables being used in business meetings or classrooms, where temporary, instant connections are useful. The tables are made using a thin sheet made of small mesh panels, which can contain electromagnetic waves in two dimensions, while also carrying a small electric current. Devices that interact with the sheet must generally be equipped with a special coupler, although team members said it is also possible to use traditional Wi-Fi antennas for Internet in some cases. "In standard wireless connections, electromagnetic waves are sent through the air, but here connections are made by making contact with the surface," said Akihito Noda, a doctoral student at the University of Tokyo. "Requiring surface contact creates a lot of restrictions, but on the other hand there are also some benefits. For instance, if you don't set devices on the surface, no false connections can be made, so there are security benefits." At a demonstration on the exhibition floor, tablets laid on a table connected easily to a networked router on the same table, avoiding the morass of competing Wi-Fi signals on the show floor. The table also provided enough power to slowly charge a mobile phone and as well as run small fans and LED lights. All devices displayed where equipped with the special couplers, and the low power supply was an obvious limit on the types of gadgets that could be displayed. Noda said Internet connections run at Wi-Fi speeds. For charging, 60-centimeter-square sheets have been tested as safely taking 10 watts of power without any ill effects on users. Devices laid on such sheets charge at about 4 watts. The team, which is working with large Japanese corporations like NEC to develop the technology, is also planning to incorporate it into home furniture, to create surfaces where users can lay their gadgets to automatically charge and join their personal network. The Ceatec exhibition, Japan's largest electronics show, runs this week at Makuhari, just outside of Tokyo.
<urn:uuid:d6802af9-f631-48a2-b607-7f7d5d6609d9>
CC-MAIN-2017-04
http://www.computerworld.com/article/2492001/personal-technology/lay-your-tablet-on-a-table--join-its-network.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00242-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951974
480
2.921875
3
What is a spreadsheet-based DSS? by Dan Power Most managers are familiar with spreadsheet packages like Microsoft Excel. If a decision support system (DSS) has been or will be implemented using a spreadsheet package it can be termed a spreadsheet-based DSS. A spreadsheet package is the enabling technology for the DSS. Both model-driven and small-scale, data-driven DSS can be implemented using desktop, client-server or cloud-based spreadsheet applications. Spreadsheet-based DSS can be very useful, but such systems often have errors, are inadequately documented and are often inappropriate. In the world of accounting, a spreadsheet spreads or shows all of the costs, income, taxes, and other financial data on a single sheet of paper for a manager to look at when making a decision. Also, a spreadsheet is a collection of cells whose values can be displayed on a computer screen. An electronic spreadsheet organizes data into columns and rows. The data can then be manipulated by a formula to give an average, maximum or sum. By changing cell definitions and having all cell values re-evaluated, a user performs "what if?" analysis and observes the effects of those changes (see Power, 2004). Are spreadsheet packages DSS generators? Sprague and Carlson (1982) defined a DSS generator as a computer software package that provides tools and capabilities that help a developer quickly and easily build a specific decision support system (see p. 11). Spreadsheet packages qualify as DSS generators because: a) they have sophisticated data handling and graphic capabilities; b) they can be used for "what if" analysis; and c) spreadsheet software can facilitate the building of a DSS. Model-driven and data-driven DSS are the most common types of DSS one would consider developing using a spreadsheet package. Spreadsheets seem especially appropriate for building a DSS with one or more small models. A developer would then add buttons, spinners and other tools to support a decision maker in "what if?" and sensitivity analysis. A data-driven DSS can also be implemented using a spreadsheet. A large data set can be downloaded to the DSS application from a DBMS, a web site or a delimited flat file. Then pivot tables and charts can be developed to help a decision maker summarize and manipulate the data. Spreadsheet-based DSS can be created in a single user or a multiuser development environment. Microsoft Excel is certainly the most popular spreadsheet application development environment. Add-in packages like Crystal Ball, Premium Solver and @Risk increase the capabilities of a spreadsheet and the variety of models that can be implemented. At DSSResources.COM, one can read spreadsheet-based DSS case examples from Decisioneering "SunTrust 'Banks' on Crystal Ball for assessing the risk of commercial loans" and Palisade "Procter & Gamble Uses @RISK and PrecisionTree World-Wide". Cliff Ragsdale of Virginia Tech and author of "Spreadsheet Modeling and Decision Analysis" commented in an email in 2001 that "if you want to give students hands-on experience creating a DSS, I don't think you can beat spreadsheets!" Spreadsheets can be used to create many useful "production DSS applications" that deliver real benefits at a modest cost (cf., Power, 2004). Spreadsheet-based DSS can however create problems. In recent years, Business Intelligence vendors and commentators have pointed out that excessive use of spreadsheets can create spreadsheet hell (cf., Raden, 2005, Murphy, 2007.) In a 2004 survey, Durfee found many finance executives felt 'Spreadsheet Hell' described their reliance on spreadsheets either completely or fairly well. The same survey noted 100% of respondents in large and small companies used spreadsheets (cf., Durfee, 2004). In recent years, reliance on spreadsheet-based DSS has declined as companies implemented enterprise business intelligence systems for decision support. Spreadsheets can satisfy many needs in small, stable organizations, but as Durfee found, spreadsheets can become a major burden for managers in larger companies. As always, we need to select the right tool for a specific decision support task. Albright, S. C., VBA for modelers: Developing decision support systems with Microsoft Excel. Pacific Grove, CA: Duxbury Press, 2001. Dunn, A. Spreadsheets - the Good, the Bad and the Downright Ugly, Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 pp. 157-164 at URL http://arxiv.org/abs/1009.5705 Durfee, D., Spreadsheet Hell, CFO Magazine, Summer 2004 issue, at URLhttp://www.cfo.com/article.cfm/3014451?f=related . Murphy, S. Spreadsheet Hell, Proc. European Spreadsheet Risks Int. Grp. 2007 pp. 15-20 at URL http://arxiv.org/abs/0801.3118. Power, D. J., "A Brief History of Spreadsheets". URL http://dssresources.com/history/sshistory.html, version 3.6, 08/30/2004. Power, D., What is a cost estimation DSS? DSS News, Vol. 5, No. 8, April 11, 2004, at URL http://dssresources.com/faq/index.php?action=artikel&id=71 . Raden, N., "Shedding Light on Shadow IT: Is Excel Running Your Business?", DSSResources.COM, 02/26/2005 at URL http://dssresources.com/papers/features/raden/raden02262005.html. Ragsdale, C., Spreadsheet Modeling and Decision Analysis, Cincinnati, OH: South-Western Thomson Learning, 2000. Ragsdale, C., D. J. Power, and P. K. Bergey, Spreadsheet-Based DSS Curriculum Issues, Communications of the Association for Information Systems, Vol. 9, Article 21, November 2002, pp. 356-365. Sprague, R. H. and E. D. Carlson. Building Effective Decision Support Systems. Englewood Cliffs, N.J.: Prentice-Hall, Inc.: 1982. The above is from Power, D., What is a Spreadsheet-based DSS? DSS News, Vol. 3, No. 6, Last update: 2011-10-16 07:56 Author: Daniel Power You cannot comment on this entry
<urn:uuid:151aa3fd-5831-4c01-a2a1-acfe436de5fb>
CC-MAIN-2017-04
http://dssresources.com/faq/index.php?action=artikel&id=56
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00060-ip-10-171-10-70.ec2.internal.warc.gz
en
0.857529
1,376
2.984375
3
Why you should know the difference between search tools and discovery tools Search, information discovery and e-discovery seek and display information in different ways - By Shawn McCarthy - Apr 28, 2010 Government information technology workers might have heard the following three phrases used interchangeably: search tools, information discovery tools and e-discovery tools. Depending on your definition, there is some overlap among the concepts. But there also are significant differences. Thus it’s important to understand the subtle and sometimes not-so-subtle differences among the terms, especially as government agencies are entering more information into sprawling storage and data archiving systems. All three terms relate to seeking information across multiple data archives. But the three concepts are differentiated by the way searches are conducted and the presentation of results. Search tools. This term often is used in a generic way to refer to multiple types of internal or external search engines, directories and information archives. Most search tools are usually designed to interact with a computer program — often a crawler, spider, indexing bot or similar system — that was created to retrieve documents or data. The crawler and its associated search tools can be set up to interact with one specific database, a set of databases, a single computer network or even the full Internet. When using such tools, searches often are based on a keyword, set of keywords, or a phrase that can be contained in one of the files that was indexed by the spider. Doing a simple keyword search can be useful, unless there is ambiguity about the meaning of the term. For example, if you search for the word "Saturn," do you mean the planet, car, rocket, or old Sega Saturn game console? To help resolve ambiguity, some search engines also collect information from a file's metadata fields. Metadata can be useful for setting the context of a keyword. If metadata indicates that a file contains information about the solar system and planets, a good search engine would assume that any matching keywords in that file refer to Saturn the planet, not Saturn the car brand. But what if someone searching for Saturn the planet doesn’t remember the name of the planet? Or what if they are looking for information about planets in general, and they simply enter the name Saturn as one example? What they really need is more guidance built into their search results. Information discovery tools. Some types of information discovery tools are simply multiple search results presented in a logical way to help users make additional choices. Some of the results are just interfaces to secondary search tools, arranged to help guide an evolving search. A basic example of a discovery tool is the “Did you mean” feature that Google presents if you misspell a search term. Besides executing a keyword search, Google's search system also looks through a database of common misspellings. If it finds a match, the search results page helps you discover a correct spelling. But it doesn’t automatically assume you meant the correct spelling, so it still offers keyword matches for the misspelled version of your word. Discovery tools can help refine your search or ask questions to help you make additional search decisions. Two excellent examples include the Recent Activity boxes on eBay or the “People who bought this book also bought” links on Amazon.com or Barnes and Noble's Web site. By tapping other databases and not just their own index of keywords and matches, those sites make fairly accurate predictions about other things that you might be looking for. Information discovery should not be confused with semantics. In general, semantics means identifying the meaning of a word or phrase, and the Semantic Web efforts championed by the World Wide Web Consortium have made great strides in helping people understand this issue. But the semantic approach is not a perfect solution when the people doing the searching don't know the specifics of what they are looking for, much less the exact word. Thus information discovery comes down to three things: available paths, context and pattern matching. Available paths can be represented through additional line items that offer parallel choices from other databases. This is similar to Google’s “Did you mean” choice but significantly expanded to many different conduits of information. When a good information discovery interface is used to search for Saturn, you might receive a straight set of search results that is complemented by other options. Sometimes, they are presented as small search results boxes with two or three matching choices, plus a link that will take you down that particular result’s path. Other sets of search results might include links from a database on the solar system, a few documents on gas giants, a handful of pictures of planets with rings, and so on. The results help you discover other paths and encourage you to refine your choices. Following one of those paths in turn takes you to other search tools and resources. Context comes into play when the search system already knows one thing about you or your search. It uses that knowledge to limit search results based on what it already knows. One great example of context is location. If you use your mobile device to search for, say, gas stations, the context can be limited to a 10-mile radius of your location. This function lets you discover nearby resources. Likewise, you also can find restaurants, ATMs or known criminals. By limiting our example to, say, police needs, the various pieces might combine this way: Your context is where you are. Your search is for people who own blue Ford trucks. Your discovery tools present the various paths that have been enabled for you in the search results. Possible examples: Pull-down menus that define the age of truck, people who live in apartment buildings, the age of truck owners, and so on. Truly flexible discovery tools let you follow one path and then adjust settings without needing to start your search over again — such as expanding your search to 20 miles or limiting results to just Ford F-150 pickups. Pattern matching applies if the discovery tools also recommend links that other people think are useful. E-discovery. This is a different concept than the two terms we just reviewed. It can involve search tools, but e-discovery usually refers to a discovery process related to court cases, in which someone searches for information stored electronically. Information that might be relevant as evidence in a lawsuit includes e-mail; instant messages; logs from online chat rooms; stored electronic documents of all types, including older versions of files; databases, including research, product information, and accounting or finance databases; Web sites; and even raw data files. Because litigators might need to review e-discovery materials in a number of ways, it's not unusual for discovered information to be saved in multiple formats. E-discovery tools often exist as specific applications, and they are popular with people who manage large archives of government information. In late 2009, EMC acquired e-discovery vendor Kazeon. With this addition, EMC offers a set of e-discovery and litigation readiness applications. Understanding the differences among searching, information discovery and e-discovery can help government employees understand and use the concepts. That goes a long way toward helping people find the right information at the right time to do their jobs.
<urn:uuid:0818b9d8-6a90-4b9b-bdf2-5c0465f2389f>
CC-MAIN-2017-04
https://gcn.com/articles/2010/05/03/internaut-search-discovery-tools.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00454-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927051
1,472
2.96875
3
What is Cloud Computing? Or How to Speak Intelligently about Cloud and Virtual Private Servers We are often asked questions about Cloud Servers and Virtual Private Servers (VPS) and which is better and in what circumstances. We also find that many customers are using these terms without a good understanding of what they mean and the differences between them. Virtual servers are very common these days. Anyone can have one on his/her own desktop computer using software like VirtualBox (FREE for Windows, Mac, and Linux), Parallels (Paid, for Mac), or VMWare (Paid, many platforms). Essentially, these software products allow you to run one or more other “computers” in windows on your desktop. For example, if you are using a Mac and need to occasionally use Windows applications, you could install one of these programs and then install an “instance” of Windows 7 or 8 in it. You could then “boot up” Windows 7 in a window on your Mac desktop. The Windows 7 “Virtual Server” thinks it is running on its own computer and behaves is if it is. It is fully isolated from your Mac (with the exception of file sharing and network sharing features that you may enable) and you can install any Windows 7 applications on it that you like. Personally, I do this with VMWare Fusion on my iMac so that I can access any program needed at any time. The “Virtual Server” is the separate “computer in a window” running on your desktop. You can have many separate such computers running at once, assuming that your computer has sufficient memory and horsepower to manage them. In our Quality Assurance Testing lab, we have beefy computers that run a large number of virtual servers at once — Windows 2000, Windows XP, Windows Vista, Windows 7, Mac, etc. We can switch between each with a click to run programs in different environments and with different versions of software programs. Virtual Private Servers A “Virtual Private Server” (VPS) is really just a “Virtual Server”, as described above. The “Private” means that the Virtual Server belongs to you and that no one else has access to it or is sharing it with you. This is different than a “shared” hosting account, for example, where you share a single server with hundreds or thousands of other customers. The Virtual Private Server is isolates your data and capacity from other customers, providing enhanced security, privacy, and reliability. It may provide other benefits like more access and customizations due to the fact that it is dedicated to you. The downside of a VPS is merely that it generally costs more than a shared environment (for customers with smallish needs). Virtual Private Servers that you may purchase from a service provider are generally going to be much better than those you might run on your desktop. Why? - The underlying server will generally be pretty beefy and redundant — with hot swappable redundant disk drives, hot swappable power supplies, lots of memory and horsepower. - The underlying server will be dedicated to running virtual machines. - Enterprise-level software will be used for running the virtual servers, providing higher levels of performance and reliability. I.e. VMWare ESX or Citrix XenServer. - The hardware and software will be highly optimized for running virtual servers and will be updated and maintained by trained professionals. The word “Cloud” is a popular marketing buzzword that has gone viral, had its ups and downs, and is now back in vogue with relatively positive connotations. People think it means something very special and cool — some kind of magical computing resource that is just “out there”. That is not exactly true, though it’s a nice thought. Lets say that you get your own beefy server or server(s) in a data center, like Rackspace, put them behind a good firewall, and install the VMWare ESX or perhaps OpenStack on them. Then, you can create and manage your own set of Virtual Private Servers on these dedicated machines. This is the definition of a “Private Cloud” … a set of Virtual Private Servers under your complete control — both the underlying hardware and the software are dedicated to you and not shared with anyone else. What is the advantage of a “Cloud” over, say, just buying some physical machines? - Cost Savings. It is less expensive to get a more powerful machine and “slice it up” into smaller parts that are running separate servers, than it is to buy many separate physical machines. - Over Provisioning. You can assign each of your Virtual Servers a certain fraction of the overall underlying server horsepower (CPU) … and you can over assign. E.g. if you have three VPS running on one machine, you could assign each of them 75% of the overall server processing capacity. When one server needs to do a lot of work, it can take advantage of up to 75% of the overall capacity. As long as multiple servers are not very busy at the same time, this over provisioning allows for efficient use of system resources, rather than dedicating processing power to a machine that is mostly idle. With physical servers, capacity is often idle just so it is there when needed. - Easy Upgrades and Migrations. If your servers need more capacity, you can just “Assign It” from the virtual server management console, assuming additional capacity is available in the underlying machine. If not, you can either (a) upgrade the underlying machine, or (b) move the virtual server to a new machine (which can often be done with minimal or no downtime). The difference between Private Cloud and Public Cloud is: - The underlying server hardware is not yours, it is owned by the Public Cloud Vendor. - You are probably sharing the underlying server with other Public Cloud customers. - You have to pay for any changes to your Public Cloud Virtual Server configurations. - The IP addresses associated with Public Cloud are well known and often black- or grey-listed for sending spam (because it is so cheap and easy to get these servers, they are popular with spammers). See Are Cloud Servers bad for sending email? In some special situations, customers of Public Cloud servers can have special blocks of IPs used with their servers so as to avoid this IP reputation issue (LuxSci has this arrangement with Rackspace, for example). The Down Sides of VPS and Cloud Virtual Servers clearly have many advantages. However, there are some notable caveats that need to be considered before making a purchase decision: - Over Provisioning: As multiple Virtual Servers are sharing the same underlying server, performance can become an issue if the capacity of the server is over provisioned to a degree where each virtual server cannot get enough processing power. This is mostly an issue with Private Cloud (where you make the optimization decisions) or VPS where the provider is trying to squeeze too many servers onto one machine. LuxSci does not over provision in any way that would affect your Private Cloud server performance. - Lower Performance: The Virtual Server Management operating system inserts a layer of processing and management between each Virtual Server and the underlying hardware. This has some effect in slowing down disk access speeds and other performance factors compared to running on the same hardware as a dedicated (non-Virtual) server. Of course, if you plan for this by selecting really good hardware, it won’t be an issue. Most cloud servers these days are running on very fast hardware, so this performance impact is not really important, compared to the other benefits of cloud over physical servers. - Single Point of Failure: Any hardware issue affecting the underlying server necessarily affects all Virtual Machines running on it. I.e. CPU failure, network failure, etc. Instead of one machine going down due to a hardware issue, you have have many going down. This issue is worse when you use large disk storage arrays and attach all of your Virtual Machines to them for storage. If that disk storage array, or its connectivity, has any issue — ALL of the connected Virtual Servers may be affected. These issues can be mitigated somewhat by using load balancing and other techniques, if your application/infrastructure allows it, but that also significantly increases the cost of the infrastructure. It is a common misconception that a “Cloud Server” is insulated from any kind of hardware failure and that it is redundantly hosted and always up. Such scenarios are possible, but the commodity Cloud Server solution is simply a Virtual Server on some single Physical Server. A single physical server may have some inherent redundancy (hot swappable drives and power supplies, etc.) but is not immune to failure. What Kind of Server is Best? Dedicated Physical Server (not Virtual or Cloud) - You need to own the server hardware. - You need the hardware to be dedicated to you. - You do not want any hardware issue to affect any other server. - You do not want one server to have any possible performance impact on another server. - You have hardware requirements (such as very large disk arrays or memory) that are not affordably met by Public Cloud or VPS options, and Private Cloud is too expensive. - The server needs to be so powerful or large, that there would be no extra capacity/room on the server for other Virtual Servers. - You are concerned about outbound email deliverability. - You are not concerned about unexpected downtime due to underlying hardware failure. Public Cloud / Virtual Private Server These are effectively the same thing. The main difference is often in billing and management. With Public Cloud, you can often pay for the server by the hour, all provisioning is automated, and you may get little or no support (unless you pay extra). With a VPS, you may pay monthly, have more options in the configuration, and may have better support. However, the underlying architectural concepts are essentially the same — a Virtual Server on a shared underlying machine. You might want a Public Cloud or VPS if: - Your hardware and capacity requirements are modest and you do not need large amounts of storage space. - Cost is an issue … Public Cloud and VPS will usually be cheaper than a Dedicated Server or Private Cloud - It is OK if you have no control over the underlying hardware. - You are not sending outbound email (See – Are Cloud Servers Bad for Sending Email?) or can use smarthosting to get around email blacklists. This may also be a problem with some VPS providers. Private Cloud is ideal if: - You have a large number of servers to manage. - You have complex or custom hardware requirements. - You have need for dedicated hardware (i.e. for compliance reasons). - You are concerned about performance. - You want to optimize reliability and minimize the chance of downtime - You are concerned about outbound email deliverability. Cloud Servers at LuxSci LuxSci offers physical servers for customers that have very specific requirements. For all others, we offer “Business” and “Enterprise” servers. These correspond to Public and Private Cloud servers in our highly customized Rackspace infrastructure. Our “Public Cloud” servers are setup such that they are behind our dedicated hardware firewalls and use a LuxSci-specific IP address space that is not tainted by the kinds of grey listing common to Public Cloud. LuxSci customers can thus decide on Business vs Enterprise servers simply based on their reliability requirements. The Enterprise servers are resistant to underlying hardware failure and the Basic servers are not. Both types can be HIPAA compliant and both are great for sending outbound email. Compare these options.
<urn:uuid:3035056a-e088-4412-a678-6b05cb2307a4>
CC-MAIN-2017-04
https://luxsci.com/blog/what-is-cloud-computing-or-how-to-speak-intelligently-about-cloud-and-virtual-private-servers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00086-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935034
2,459
2.90625
3
Active Directory is Microsofts implementation of LDAP directory services for Windows environments. It allows administrators to implement company wide policies on access to resources and services by the users. Active Directory is usually installed in Windows 2003 or 2000 server and together they are called the Domain Controllers. If active directory fails, it would affect the entire user base, as they wont be able to logon to their systems, access critical information from other servers, or send/receive emails. In this section lets see how a Network Monitoring Tool such as OpManager can help administrators prevent Active Directory nightmares! Imagine a scenario where your CEO logs into his laptop and it says access denied. Probably he just forgot to release the CAPS LOCK key or the Kerberos Key Distribution Center Service that plays a vital role in user authentication has stopped functioning and is forcing every Windows user from logging into the domain. Most of the IT helpdesk tickets originate from issues spawning from users trying to access resources outside ones computer. Active directory forms the crux of this ever-active access system. For instance common operations such as user authentication, exchange mail routing, depend on Active Directory. This makes continuous monitoring of Active Directory and related services very important so that you may also stay away from nasty nightmare! There are a little over half-a-dozen Active Directory components that can cause an access problem to a user. Few important factors that you need to monitor on AD are: Hardware failures, insufficient disk space etc., are common problems causing a server to crash. Requests to the Active Directory need to be served fast. This requires the CPU, Memory, and Disk Space of the server that hosts Active Directory to be running at optimal levels and monitored 24*7. LDAP is the client used to retrieve directory information. Monitoring LDAP parameters like LDAP Bind Time, number of Active Connections, LDAP Searches, and LDAP Writes is a proactive step in ensuring its availability. DNS lookup failure can cause problems. The Domain Controller might not have been able to register DNS records, which actually vouches for the Domain Controllers availability. This results in the other Domain Controllers, users, and computers in the domain in not locating this DC which again might lead to replication failure. Refer this article for troubleshooting AD related DNS problems. Active Directory depends on this service for authentication. Failure of this service leads to log-on failures. Refer this article to know how this service works. Request to authenticate users is served by this service. Failure of this service also makes the log-on impossible. The Domain Controller will not be able to accept log-on requests if this service is not available. FRS service replicates the objects in Active Directory among all the Domain Controllers in a network (if you have more than one domain controller). This is done to ensure round-the-clock accessibility to the information on the AD. This can be across the LAN or the WAN. When the FRS fails, the objects are not replicated on the other Domain Controllers. In the event of the primary DC failing, when the secondary (the slave) takes over the request, it will not have the user account replicated. This will cause the log-on failure. The replication failure can also occur because of incorrect DNS configuration. There can be other reasons like no network connectivity, too many applications accessing the DC at a time, etc. OpManager monitors all the services and resources on which Active Directory relies for proper functioning. You can configure thresholds and get instantly notified if something is crossing safe limits. OpManager offers a dashboard view of your domain controllers availability with options to see availability statistics for the past week, month, etc. System resources usage gives you real-time status of the health of your domain controller. Details such as CPU utilization, Memory utilization, and disc utilization can be viewed from here. Active directory performance counters such as directory reads, directory writes, Kerberos authentications, etc. can be viewed from here. Key active directory services such as Windows Time Service, DNS Client Service, File Replication Service, Inter-site Messaging Service, Kerberos Key Distribution Center Service, Security Accounts Manager Service, Server Service Workstation Service, RPC Service, and Net Logon Service. Heres a tree view of the entire set of parameters monitored by OpManager to ensure that your Active Directory doesnt popup unlikely surprises. Active Directory writes detailed event logs in the occasion of a failure. You can view event logs from your Windows Event Viewer (start - settings - control panel- administrative tools - event viewer). Each active directory component failure has a pre-defined event ID with a detailed message for the failure event. OpManager allows monitoring these windows event logs using pre-defined event log rules. OpManager monitors the event logs and based on the rule it generates OpManager alarms. Here are some IDs for which you might want OpManager to raise an alarm. (Please note that this is only a subset of a whole lot of Windows Event Logs for various services and parameters related to Active Directory.) |Net Logon Service||5774, 5775, 5781, 5783, 5805| |FRS Service||13508, 13509, 13511, 13522, 13526| |Windows Time Service||13, 14, 52 to 56, 60 to 64| |LDAP related||40960, 40961| |LSASS related||1000, 1015| |Kerberos related||675, 676, 1002, 1005, 9004 (last three are related to Exchange server)| |NTLM authentication||680, 681| Besides monitoring the Active Directory components, OpManager raises alarms when a service is unavailable. Configuring response time or resource utilization thresholds for the critical services and parameters alerts you much ahead of the actual problem. OpManager allows you to create and assign notification profiles to Domain Controllers. When any of the monitors fail, an email or SMS alert is sent to the pre-configured Ids. OpManager offers excellent Active Directory monitoring capabilities and helps you stay away from Active Directory nightmares. To test drive active directory monitoring, download the latest OpManager build from www.opmanager.com.
<urn:uuid:3406bc6a-7673-4991-820d-a425099b2518>
CC-MAIN-2017-04
https://www.manageengine.com/network-monitoring/monitoring-active-directory.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00390-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884223
1,283
2.734375
3
Common Core State Standards offer educators, parents and administrators a consistent and clear understanding of what students need to learn in order to be successful in the classroom, college and beyond. Technology is a critical component of Common Core that can support teaching, learning and student engagement and assessment. CDW-G’s latest research, Common Core Tech Report, surveyed 300 IT professionals in public schools around the country to understand how well prepared they are to meet the technology requirements of Common Core, how districts are prioritizing Common Core and the technology challenges they face. Common Core/Common Good - More than three-fourths of IT professionals expect Common Core to have a positive impact on their district - Strong infrastructure is a must to ensure teachers can move forward confidently, so update and upgrade before bringing in new technology - Change is hard. Develop and communicate a strong vision to all stakeholders to ensure everyone is speaking the same language - It’s not about a device, which should be transparent, it’s about the instructional shift that makes students active participants in learning so that they take over ownership of their education - In a year, your program will look very different. Continue to use pilot groups/leaders to share best practices and borrow ideas that unify your vision CDW-G surveyed 300 IT professionals from K-12 public school districts in May 2013. The survey excluded Alaska, Minnesota, Nebraska, Texas and Virginia, which had not adopted Common Core State Standards as of May 2013. The total sample size equates to a margin of error of ±3.0 percent at a 95 percent confidence level.Tweet
<urn:uuid:f8954ae1-38d2-4322-9d79-c44f4ee3cb2a>
CC-MAIN-2017-04
http://www.cdwnewsroom.com/cdw-g-common-core-tech/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933245
328
2.828125
3
Facebook has released as open source a code analyzer tool, typically called a linter, that can work with the latest version of the C++ language, C++11. The internally developed debugger, called Flint, may be of interest to other programmers because many similar open-source lint tools haven't been upgraded to work with C++11 yet, according to Facebook. Noted C++ programmer Andrei Alexandrescu, now working at Facebook, built the tool for internal use. A lint program typically scans software code to look for issues that a compiler does not catch, a process called static code analysis. Linters can be handy for enforcing organizational best practices in code development, or to look for code patterns that could cause security or performance issues. Although there are a number of static analysis programs already available for C++, Facebook found them mostly unsuitable for its own needs. Many were too slow or weren't updated to understand C++11, which Facebook is in the process of adopting. Flint reviews code and flags any potential issues in Facebook's code review system, called Phabricator. Flint can check for issues such as the use of outdated libraries, or keywords that have already been reserved for other uses within a system. It can catch subtle programming errors that a compiler would miss, such as an incorrectly formatted memory request. It can assure that headers are formatted correctly. It can also check for conflicting namespace directives. Alexandrescu wrote Flint using a programming language similar to C++ he helped develop, called D. As a result, Flint compiles fives times as fast as an equivalent program in C++, and it runs anywhere from 5 percent to 25 percent faster as well.
<urn:uuid:49cf1e92-4415-4135-87e0-0112233fd50b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2174742/software/facebook-removes-c---lint-with-new-analysis-tool.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00508-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957051
349
2.578125
3
"Discovering the unexpected is more important than confirming the known." - Anonymous Testing is the process of finding all the possible defects or discovering a software product's deficiencies. It is also a process of executing a program with the intention of logging a defect against the software product. The primary benefit of testing is to make a workable software product better, to improve the quality of deliverables. It also provides a good indication of software reliability and reduces the risk of failures during deployment. The fundamental role or goal of a tester is to log as many defects as early as possible. Equally, we need to verify the reproducibility of those defects without compromising the quality of the software product, and track the raised defects until closure. Testers are only supposed to find the defects, not fix or resolve them. The tester should use fresh environments for testing or verifying purposes, and should not test in any of the used or development environments. These are some of the basic software testing fundamentals which one needs to have in mind while testing a particular software product or application. Perceptions about Software Testing A lay man's perception is that testing is about running or executing the software, or to be precise, the test cases. This is actually a negative mindset about testing. . In actuality, test execution is just a part of software testing, but does not denote a complete test activity. Test activities include test planning, developing test scenarios, test designing, test case creation, test execution, defect logging and tracking, reviewing product documents, and completing closure activities. All these together comprise software testing. Pro-activeness in reporting issues leads to a better product outcome. Periodic reviews, and the identification and resolution of issues at a very early stage will also contribute to product stability and reliability. Seven Facts about Testing The following basic principles and fundamentals are general guidelines applicable for all types of real-time testing: Testing proves the presence of defects. It is generally considered better when a test reveals defects than when it is error-free. Testing the product should be accomplished considering the risk factor and priorities Early testing helps identify issues prior to the development stage, which eases error correction and helps reduce cost Normally a defect is clustered around a set of modules or functionalities. Once they are identified, testing can be focused on the defective areas, and yet continue to find defects in other modules simultaneously. Testing will not be as effective and efficient if the same kinds of tests are performed over a long duration. Testing has to be performed in different ways, and cannot be tested in a similar way for all modules. All testers have their own individuality, likewise the system under test. Just identifying and fixing issues does not really help in setting user expectations. Even if testing is performed to showcase the software's reliability, it is better to assume that none of the software products are bug-free. Testing can never be successful if it reports that the software product is error-free or reports a non-existence of defects. There are no limitations for testing, and there are many ways to test a system, a few of which are mentioned below: - Black box testing - White box testing - Integration testing - Functional testing - System testing - Sanity testing - Regression testing Typically, all these types of testing are performed to identify defects in system behavior; to evaluate whether the system is ready for release, and to increase customer/end-user confidence that the software works properly and will provide the expected outcome. Equally important for a tester is to test the product for invalid scenarios, also called negative testing. Software product testing is a vast area, and plays a major role in the product lifecycle; each phase has its own importance. Software testing and the tester's job are not only to test or verify the software, but also to certify that the software is approved for use. Thus, testing becomes a very important step in software development, and like any other critical process, the fundamentals should be rightly applied here. To know more about how HCL has enabled product testing for leading software product companies – (ATF page)
<urn:uuid:7e0cbb8e-56a7-4c4b-b0e2-bd8675256661>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/software-testing-fundamentals
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00419-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944855
847
3.234375
3
The cement & glass industry follows the business cycle and the overall economic activity. The cement market is also regional in its nature, as transportation costs are high and the end user landscape is highly fragmented. The cement industry is a capital- and energy-intensive business that employs a relatively low intensity of labor. Capital allocation in the industry averages about $225 million per million tons of production capacity. With these levels of investments and highly cyclical demand, it is imperative to run operations efficiently, reliably, and economically throughout a wide range of production rates. Equipment utilized in this industry involve relatively large investments with high power requirements. For example, mechanical drive trains typically fall in the range of 200 to 6,000 HP or even much higher for some applications such as large mills and kilns. Since 1995, the energy costs associated with operating a cement plant have increased more than three-fold, along with other business challenges in the industry. While a large percentage of these energy costs are for heating the kilns by coal, petcoke, or natural gas, a significant amount of electrical energy is also consumed. High-performance motors and highly efficient production machinery can provide manufacturers with a cost advantage. The industry and the production process can be described as: - Capital intensive - The cost of cement plants is usually above $150 million per million tons of annual capacity. ROI is long and plant modifications have to be carefully planned. - Energy intensive - Each ton of cement produced requires between 60 and 130 kilos of oil or its equivalent and about 105 KWh of electricity. - Low labor intensity - Less than 150 people are needed to operate a modern plant - Homogeneous product - There are only a few classes of cement. In each class, products are interchangeable. Price is the most important sales parameter - High transportation costs - Land transportation is costly and cement cannot be shipped economically beyond 300 km (except by sea). - Mature product - New markets only develop when countries experience general economic/population growth, otherwise production needs to follow the cycle. Trends in the cement industry largely center around efficiency improvements, including more flexible production scheduling to avoid peak electric prices, alternative fuels for cement kilns, CO2 capture, energy recovery from incinerators, co-generation with grid integration, and so on. Today, many plants use a wide variety of alternative fuels as part of their overall energy scheme to meet from 10 to 70 percent of their energy requirements.
<urn:uuid:1df02c2b-b10a-4a73-bdaa-c918974cbbc8>
CC-MAIN-2017-04
https://www.arcweb.com/industries/cement-glass
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00143-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951284
495
2.6875
3
Fixed means that one physical block on disk is one logical record and all the blocks and records are the same size. This format is seldom used. FB (Fixed Blocked) This format designation means that several logical records are combined into one physical block. This format can provide efficient space utilization and operation. This format is commonly used for fixed-length records. F indicates that the records are fixed length... FB indicates that the records are fixed length as well as blocked. Say for example, we have a file of LRECL 80 bytes and Blocksize is 8000. That means 1 Block consists of 100 record. In real time when we are reading record from file in cobol program, it is called as I/O processing(Input/Output). TO avoid the no of I/O processing time, we are using the file that defined with FB.
<urn:uuid:f0d8b591-c08f-4e0e-a52c-090153d186e4>
CC-MAIN-2017-04
http://ibmmainframes.com/about29367.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00051-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947982
177
3.203125
3
The Net Present Value (NPV) benefit is a calculation that measures the net benefit of a project in today’s dollar terms, taking into account that over time money today is more valuable than money in the future (discounted time value of money). The NPV savings calculation consists of two financial concepts: • The “net” part of the NPV savings calculation is the difference between all costs and all benefits (savings and other gains). • The present value portion of the NPV calculation takes into account the time value of money; so that adjusts to expenditures and returns, as they occur over time, can be evaluated equally. When examining a project investment decision, and knowing that money has a time value, future payments need to be higher than investments made today in order to be equivalent to today’s dollars. This time value accounts for the fact that: • Money typically inflates over time, meaning that a dollar invested today will be worth less in the future because of inflation. • A dollar invested today could earn interest over time, so the investment needs to make up for the lost opportunity. As the investment could earn interest elsewhere at the organizations weighted average the cost of capital, this is often called opportunity cost. The NPV calculation evaluates a set of costs and benefits over time in order to account for the time value of money. The cash flows are the amounts and timings of the various investment costs and benefits, and these are brought into a common term, today’s dollars, so that the net benefit can be quantified and compared if necessary to competing investment opportunities. Using an IT project as an example, let’s say that a company invests $100,000 in a new application and that the application requires $25,000 annually thereafter in maintenance and support costs. From this investment, the company expects to save $200,000 each year. An analysis of this investment over three years would yield the following negative (costs) and positive (benefit) cash flows: The NPV Savings calculation seems intimidating when expressed as a formula; however, when demonstrated in practical terms it is quite intuitive. Mathematically NPV calculation is represented by the formula: To put the calculation in practical, step-by-step terms, we will use the calculation applied against our example cash flows. The net present value calculation, using a cost of capital/discount rate of 7%, takes the initial costs and ongoing costs and benefit cash flows to create a single net cost or savings figure. For the example set of cash flows in the above table, the net benefits are as follows: The initial expense of $100,000 is not discounted because it is already in today’s dollars terms. However, Year 1 through Year 3 need to be adjusted to be brought into today’s dollar terms and is calculated as follows: The total NPV savings is the sum of the initial expense and the three-year NPV analysis, represented as: As shown, the net benefits from later years are discounted more in today’s dollar terms such that they mean less in the overall analysis. As a result, the total NPV savings is only $359,255 compared to the cumulative benefits of $425,000 when the discount rate is not considered. The higher the discount rate is and the further into the future that a cash flow will occur, typically the lower the present value of that cash flow will be. Because the net present value calculation increases the impact of current costs and near term savings while reducing the impact of future costs or benefits, the following holds true: • Projects with high initial costs and savings that grow slowly over time yield lower NPV savings values; • Projects with low initial costs and greater initial savings yield higher NPV savings calculations. The NPV Savings is one of the most popular and accurate methods used to assess business investment viability. NPV uses discounted cash flow to accurately quantify the net benefits from a project. However, the NPV calculation usually cannot be used alone to determine whether a project is viable. As an example, a project may yield a substantial $100M NPV savings over a three-year period, but the required initial investment of $10M may be so risky for the company that it is not considered a prudent risk. As well, a project might have a large NPV benefit but has a long payback period and derives much of its benefits through huge gains in outgoing years.
<urn:uuid:c46f5553-ea63-4c22-aa31-f1a649fa427b>
CC-MAIN-2017-04
http://blog.alinean.com/2010/08/net-present-value-npv-savings-defined.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948132
918
3.109375
3
Hribar L.J.,Florida Keys Mosquito Control District | Brown B.V.,Natural History Museum of Los Angeles County | Disney R.H.L.,University of Cambridge Florida Entomologist | Year: 2011 The aquatic scuttle flies, Megaselia hansonix Disney and M. imitatrix Borgmeier, (Diptera: Phoridae) are reported for the first time from Florida, USA. Source Ernst K.C.,University of Arizona | Haenchen S.,University of Arizona | Dickinson K.,U.S. National Center for Atmospheric Research | Doyle M.S.,Florida Keys Mosquito Control District | And 3 more authors. Emerging Infectious Diseases | Year: 2015 After a dengue outbreak in Key West, Florida, during 2009–2010, authorities, considered conducting the first US release of male Aedes aegypti mosquitoes genetically modified to prevent reproduction. Despite outreach and media attention, only half of the community was aware of the proposal; half of those were supportive. Novel public health strategies require community engagement. © Centers for Disease Control and Prevention (CDC). All rights reserved. Source Zhong H.,Florida A&M University | Hribar L.J.,Florida Keys Mosquito Control District | Daniels J.C.,University of Florida | Feken M.A.,Bureau of Pesticides | And 2 more authors. Environmental Entomology | Year: 2010 We assessed the exposure and acute toxicity of naled, applied aerially as an ultra-low-volume spray for mosquito control, on late instar larvae of the Miami blue (Cyclargus thomasi bethunebakeri) (Comstock and Huntington 1943) (Lepidoptera: Lycaenidae), an imperiled South Florida butterfly. We concurrently evaluated the control efficacy against caged adult female salt-marsh mosquitoes (Aedes taeniorhynchus) (Wiedemann 1821) (Diptera: Culicidae). This 3-yr study was conducted in north Key Largo (Monroe County, FL) beginning in 2006. The field trials incorporated 15 sampling stations: nine in the target spray zone, three in the spray drift zone at varying distances from the target zone, and three in the control zone not subjected to naled spray drift. A total of six field spray trials were completed, three at an altitude of 30.5 m (100 feet), and three at 45.7 m (150 feet). For all trials, the ultra-low-volume application of Trumpet EC insecticide (78% naled) at a rate of 54.8 ml/ha (0.75 fl. oz/acre) was effective in killing caged adult mosquitoes in the target zone. Butterfly larvae survival was significantly reduced in the spray zone compared with drift and control zones. Analysis of insecticide residue data revealed that the mortality of the late instar butterfly larvae was a result of exposure to excess residues of naled. Additional research is needed to determine mitigation strategies that can limit exposure of sensitive butterflies to naled while maintaining mosquito control efficacy. © 2010 Entomological Society of America. Source Leal A.L.,Florida College | Tambasco A.N.,Florida Keys Mosquito Control District Check List | Year: 2011 A list of the Culicidae collected in the Florida Keys is presented. Mosquito records were obtained from the scientific literature and from collections made by mosquito control personnel. Forty-eight species or species groups are known from the Florida Keys. © 2011 Check List and Authors. Source The agency's Center for Veterinary Medicine released a preliminary finding of no significant impact for the field trial on a method that aims to reduce populations of the mosquito that spreads dengue, chikungunya and the Zika virus among humans. The trial is proposed by the British biotech firm Oxitec. The Florida Keys Mosquito Control District wants to test Oxitec's mosquitoes in a small neighborhood north of Key West. The FDA still needs to review public comments on Oxitec's proposal before deciding whether to approve that trial. Oxitec modifies Aedes aegypti mosquitoes with synthetic DNA to produce offspring that won't survive outside a lab. Oxitec has conducted similar tests in Panama, Brazil and the Cayman Islands. With or without the test, the district is looking for additional options to kill Aedes aegypti, which it considers a significant and expensive threat. In a statement, executive director Michael Doyle said the district needs to be proactive, and the trial will to determine how efficient Oxitec's mosquitoes are at suppressing the local Aedes population. "A small trial like this is designed to see if highly reducing the population is possible with this technology here in the Keys. If so, we will then look at larger trial areas," Doyle said. A residents' group called the Florida Keys Environmental Coalition wants the district to instead try infecting mosquitoes with a bacteria that curbs their ability to transmit disease, arguing that Oxitec's proposal is mostly marketing hype and won't be subject to adequate federal oversight. In an email Monday to The Associated Press, the coalition's executive director, Barry Wray, questioned the ongoing costs Oxitec's method might incur. "Oxitec has exploited the fear surrounding Zika very effectively," Wray wrote. "When you start looking at the quantity of mosquitoes they need to continuously provide, in order to keep problems under control, the numbers are astounding. So is the money required!" Doyle said the district is looking at several different technologies for eradicating Aedes mosquitoes, but those other methods take years to develop and Oxitec is furthest along. In a statement, Oxitec CEO Hadyn Parry said the company was pleased that the FDA agreed with their own findings. "We look forward to this proposed trial and the potential to protect people from Aedes aegypti and the diseases it spreads," Parry said. Anti-GMO activists have criticized Oxitec's trials, saying more proof is needed that stray female modified mosquitoes that leave the labs aren't spreading genetic material through bites or that there are no other environmental risks, such as opening areas to infestation by another disease-carrying mosquito species. Modified females are manually separated in Oxitec labs from the modified males, which do not bite and are released to mate with wild female mosquitoes. In its preliminary finding, the FDA said it was "highly unlikely" that humans or animals bitten by female modified mosquitoes would be exposed to synthetic genetic material, and any bites wouldn't be any different from bites made by a wild mosquito. It's also unlikely that suppressing the local Aedes population during the trial would open the area to an infestation of another disease-carrying species during that period, the FDA said. The FDA also found no significant risks that the modified mosquitoes would disperse well beyond the trial area, develop resistance to insecticides or persist in the environment. "Based on the data and information submitted in the draft (environmental assessment), other submissions from the sponsor, and scientific literature, FDA found that the probability of adverse impacts on human or other animal health is negligible or low," the finding said. A draft environmental assessment on Oxitec's proposal will be available for public comment for 30 days. The FDA will review those comments and may require further documentation from Oxitec before deciding whether to approve that trial. There is no deadline for that decision, so no modified mosquitoes will be released anytime soon. The Centers for Disease Control and Prevention and the Environmental Protection Agency also have reviewed the proposal along with the FDA.
<urn:uuid:e0462b8e-8679-48f5-af9b-3032c2078048>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/florida-keys-mosquito-control-district-1605032/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00565-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933516
1,588
2.578125
3
Decrypting Encryption Myths Some of the more prominent headlines over the past year were dominated by incidents of data theft, where corporation after corporation had fallen victim to information theft on a large scale. While many victims had hackers and devious insiders to blame, other instances were simply due to human error such as lost data tapes and stolen laptops. In these cases, CIOs may think their information is not at risk because of encryption. But is this really enough? Many organizations assume information stored on laptops, desktops and tapes is completely secure if it is encrypted. To some extent this is true. But while encryption is an important piece of the security puzzle, it is only one piece. CIOs need to make data encryption but one part of a broader security strategy to avert data theft. Organizations are increasingly distributed and mobile, and the ability to ship and carry secure information is imperative for business continuity. Yet human error is always a factor be it leaving a laptop behind at the airport, or having your shipping carrier misplace your package of backup data tapes. So, while CIOs can’t necessarily control the shipping process, they can insure that controls are in place to protect data on these systems; preventing Mary Bad-gal from accessing the 250,000 social security numbers. Mary might be savvy enough to turn the laptop on, but she is very unlikely to be able to decrypt the encrypted information stored there. Looking at the challenge broadly, most security professionals differentiate between “data at rest” and “data in motion.” Prudence dictates protecting both data at rest—sitting on a server or on archive or backup media—as well as data in motion—flowing over networks. Since most readers are already using VPNs, the next most useful place to use cryptography is on portable computers that contain sensitive data. On a portable computer, the data is both at rest, in that it is locally stored, and in motion, in that the computer itself is easily transported. When a portable computer is lost or stolen, you hope that the thief was just after the hardware and software and not the data. Most companies can handle the computer loss better than the loss of trade secrets, business plans and personal information. Since the impact of loss may be very high, it makes sense to consider encrypting the data on portable computers carrying sensitive information. The material cost to implement this is very low. Both Microsoft Windows XP/Professional and Mac OS X support encryption of all user-area files. Without the login password, user data on the computer is inaccessible to anyone. Crypto Myths and Truths Myth: Crypto is hard to use. Truth: Writing cryptography algorithms and products is difficult. Using it is easy. Myth: Cryptography is expensive. Truth: Some cryptography is free to use for the end-user (such as SSL-encrypted Web pages). But, your organization will have to pay the price of purchasing, creating, protecting and managing server certificates. As with any security measures, deploying cryptography requires planning, counting the cost of deployment, user education, support and maintenance. Myth: Cryptography must be deployed everywhere in an organization. Truth: Cryptographic solutions should be deployed where and how your risk assessment indicates they will do the most good. Myth: When we have cryptography everywhere, we will no longer need firewalls or antivirus or ... Truth: Cryptographic solutions can and may be effectively deployed and used as part of an organizations overall risk mitigation plan. The Rest of the Story Crypto is not a magic bullet. It may be part of a computer and network security defensive arsenal. Since a user has to occasionally access sensitive data, all encrypted data, to be useful, becomes unencrypted for use. Sometimes individual files are decrypted, sometimes the whole hard drive. This is the point of vulnerability for sensitive information, and this is where other controls and practices are needed. To ensure that your encryption investment holds its value, an organization must rely on synergistic controls—combining various measures, mechanisms, and methods—shored up by encryption (where it makes sense). Strong encryption accessed via weak passwords, for example, merely slows down an attacker. As CIOs evaluate their organization’s security strategy, it is important they realize how powerful encryption can be when aligned with other security solutions and strategies. Otherwise, it becomes just another security step that seems right, but does little. Peter Tippett is CTO of Cybertrust and chief scientist for ICSA Labs, a division of Cybertrust. He specializes in the utilization of large-scale risk models and research to create pragmatic, corporate-wide security programs.
<urn:uuid:e5f85083-dd14-4b3c-b42f-ac7da41e5c29>
CC-MAIN-2017-04
http://www.cioupdate.com/print/trends/article.php/3583771/Decrypting-Encryption-Myths.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00529-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939012
962
2.65625
3
Authentication is one of the most crucial modules for any given system. Kerberos is a popular security mechanism used by systems for network authentication and secure transmission of data. Almost all UNIX®-based operating systems supporting pluggable authentication modules incorporates Kerberos-based authentication. IBM® AIX® V5.3 features a Kerberos-integrated login and has many of its utilities like SSH, rlogin, telnet, and more. IBM AIX ships with its own flavor of Kerberos called IBM Network Authentication Service, available in its expansion pack CD (see Resources). With an increase in the need for secure environments, the authentication modules of systems (including operating system) are moving from traditional one-factor authentication (usually based on a static password) to multi-factor authentication. Two-factor authentication is a popular method being practiced for increased security. This article describes how Kerberos can be used in two-factor authentication systems. It elaborates on how you can have a more highly secure system using Kerberos and One Time Password (OTP) in multi-factor authentication schemes. Finally, the article gives you the design on implementing a two-factor authentication system using Kerberos and Generic Security Service API (GSS-API). Two-factor authentication using Kerberos and OTP Authentication is a process to verify a person's identity for security purposes and authentication factor is the piece of information used to achieve it. Two-factor authentication is a process in which two different factors are used for authentication and hence delivers a higher level of security. Typically, in a two-factor authentication system, the first factor used for authenticating the user is something the user knows, like a static password or a pin, and the second factor is something the user has, like a credit card, mobile phone, or a hardware security token that generate a One-Time Password (OTP) token. OTP systems are typically built on a client-server model, where at a given time, for a given user, the same one-time passwords are generated on the client side. The client may be a handheld hardware device, a hardware device connected to a personal computer through an electronic interface such as USB, or a software module resident on a personal computer, as well as the server side which does the verification. In these systems, the passwords are constantly altered, which greatly reduces the risk from an unauthorized intruder. Hence, its usage in two-factor authentication is considered safer and suits many systems requiring highly secure authentication modules. For more information on OTP, please refer to IETF RFC 2289 (see Resources). Kerberos is a popular security mechanism used for network authentication. Most of the login modules of operating systems such as AIX, Linux®, and Windows® support Kerberos-based authentication. In addition, many remote login applications available on UNIX-like Open SSH, telnet, rlogin, and more have their Kerberized versions. Kerberos-based authentication modules are implemented directly using a Kerberos API or by using the GSS-API interface. With the ever-increasing use of Kerberos, practitioners might be faced with the need to have Kerberos mechanisms be a part of their two-factor authentication systems using OTP. In the proposed design for a two-factor authentication system, this article makes use of Kerberos for the first-factor authentication and then makes use of the Kerberos infrastructure (using the GSS-API interface) to securely achieve the second-factor authentication using the OTP token. In this way, you can achieve a secure two-factor authentication where the Kerberos protocol plays a dual role: - Usage of Kerberos as the authenticating protocol in the first-factor authentication. - Usage of Kerberos (using the GSS-API interface) as a secure channel to communicate with the OTP and its verification result between the client and the server during the second-factor authentication. Apart from the existing Kerberized applications (with single-factor authentication) the design can also be incorporated in network-based solutions requiring two-factor authentications along with the Kerberos protocol. As defined in IETF RFC 2078 (see Resources), The Generic Security Service Application Program Interface (GSS-API) provides security services to callers in a generic fashion, supportable with a range of underlying mechanisms and technologies and hence allowing source-level portability of applications to different environments. Kerberos is the most popular underlying security mechanism available with GSS-API. IBM Network Authentication Service V1.4 for AIX implements the GSS-API standard and exports its interfaces using GSS-API libraries, which support the Kerberos mechanism. Using these GSS-API interfaces, you can implement secure Keberized applications and systems. Typically, in GSS-API a secure connection between two communicating applications is represented by a data structure called a security context. The application that establishes the secure connection is called the context initiator. The application that accepts the secure connection is the context acceptor. The context establishment between the initiator and the acceptor is actually a handshake process that involves authenticating both the parties. On successful establishment of a GSS-API context, it is concluded that the user identification has been successfully verified. Important GSS-API interfaces Two vital aspects that can be programmed using Kerberos GSS-API is authentication followed by per-message encryption. Following are few of the vital interfaces that help achieve the stated: - Establishes a security context between the context initiator and the context acceptor. - Accepts a security context created by the context initiator. - Cryptographically signs and optionally encrypts a message. - gss_unwrap (...) - Unwraps a message sealed by the gss_wrap subroutine and verifies the embedded signature. For more information on the GSS-API, please refer to IBM Network Authentication Service Version 1.4 Application Development Reference, which is shipped with the product. Design a two-factor authentication system using GSS-API and OTP Figure 1 shows a Kerberized client configured to a Kerberos server system. Either system can be an AIX box with IBM Network Authentication Service installed and configured. Figure 1. Design steps for implementing a two-factor authentication system using GSS-API and OTP The following explains the steps that can be implemented using a standard Kerberos GSS-API to achieve a two-factor authentication login module with OTP. Step 1 - Prompt the user to enter the Kerberos user name and Kerberos password. Steps 2, 3, 4, 5, 6, 7 - Use the Kerberos User Name and the corresponding Kerberos password (which the user has to remember) to acquire the Kerberos credential (TGT- Ticket Granting Ticket). If the password entered is incorrect, the authentication fails and the user is not allowed any access. On successful acquisition of the Kerberos ticket (TGT), the first-factor authentication is completed. This is similar to any regular login module that based on Kerberos authentication. Steps 9, 10, 11 - Use the above-acquired Kerberos credential and establish a secure GSS-API context with the GSS-API based OTP application server residing on the system (assuming the OTP server has been Kerberized using GSS-API). This involves a handshake between the client login module and the GSS-API-based one-time password application server. Note that here both the login module and the OTP application server are GSS-API-based applications running on the Kerberos mechanism. The handshake exercises the gss_init_sec_context() and related APIs at the client side and gss_accept_sec_context() and related APIs at the server side. On successful handshake, a secure authenticated GSS-API context is established over the underlying Kerberos security mechanism. This context helps the client login module and the GSS-API-based server to communicate securely. If the handshake fails, the login program is terminated. Step 12 - Enter the One-Time Password. The OTP is the authentication information that the user posseses (which differs from the earlier one where the user needs to remember the Kerberos password). Steps 13, 14 - Encrypt the Username and One-Time Password using the securely established GSS-API context and send across the encrypted information to the server. GSS-API interfaces like gss_wrap() assist in encrypting the information, which can be decrypted only by the GSS-API based OTP server application. Steps 15, 16 - GSS-API-based OTP server application decrypts the authentication information and present the actual OTP server with "Username" and "One Time Password." (Using the earlier-established secure GSS-API context, interfaces like gss_unwrap() assist in decrypting the information). The Username and One Time Password are verified against the server database and the result of the verification (either success or failure) is passed back to the GSS-API-based OTP server application. Steps 17, 18 - GSS-API-based OTP server application encrypts the authentication result and passes it back to the client login module/application. Step 19 - The client login module/application decrypts and interprets the result. (Using the earlier-established secure GSS-API context, interfaces like gss_unwrap() assist in decrypting the information). If the result is successful, it completes the second-factor authentication and allows the user to log into the system. Otherwise, the client login program terminates, cleaning the entire GSS-API context and both client and server side. Note: These steps do not claim to have completely introduced two-factor authentication in a Kerberos protocol. They can be implemented by practitioners who require a two-factor authentication where the first factor needs to be a regular Kerberos authentication and the second factor needs to be OTP, or they can be implemented as a part of login modules of applications or secure systems This article highlighted the use of GSS-API and Kerberos. It emphasized the growing need of multi-factor authentication and the use of OTP in such a system. It finally explained the steps that can be used in implementing a two-factor authentication using Kerberos and OTP Password for highly secure systems. - Kerberos policy management in IBM Network Authentication Service for AIX Version 5.3 (developerWorks, Dec 2007): Read this article to learn about Kerberos password policy management. - Configuring AIX 5L for Kerberos Based Authentication Using IBM Network Authentication Service: Read this white paper to learn about using Kerberos as an alternative authentication mechanism to AIX. - A Kerberos Primer (developerWorks, Nov 2001): This article introduces Kerberos technology and Distributed Computing Environment-based applications. - A One-Time Password System , IETF RFC 2289. - The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2, IETF RFC 4121. - AIX and UNIX : Want more? The developerWorks AIX and UNIX zone hosts hundreds of informative articles and introductory, intermediate, and advanced tutorials. - developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts. - The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration. - developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts. - Podcasts: Tune in and catch up with IBM technical experts. Get products and technologies - IBM Network Authentication Service for AIX: Download the IBM Network Authentication Service for AIX from IBM AIX Web Download Pack Programs. - IBM Network Authentication Service for Linux,Solaris: Download the IBM Network Authentication Service for Linux,Solaris from here. - IBM GUI-based Administration Tool for Network Authentication Service: Experience the GUI to perform the IBM NAS related administration tasks. Download from the IBM alphaWorks today. - AIX 5L Expansion Pack and Web Download Pack: Start downloading now. - Participate in the AIX and UNIX forums:
<urn:uuid:2f9e8c71-e7b3-4e13-8bb7-5c77a1fef71c>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/aix/library/au-twofactors/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00437-ip-10-171-10-70.ec2.internal.warc.gz
en
0.85517
2,580
3.03125
3
Google is developing an online publishing platform where people can write entries on subjects they know, an idea that's close to Wikipedia's user-contributed encyclopedia but with key differences. The project, which is in an invitation-only beta stage, lets users create clean-looking Web pages with their photo and write entries on, for example, insomnia. Those entries are called "knols" for "unit of knowledge," Google said. Google wants the knols to develop into a deep repository of knowledge, covering topics such as geography, history and entertainment. Google's project will have to catch up with Wikipedia, which includes more than 7 million articles in 200 languages. Anonymous users constantly update Wikipedia entries in an ever-growing online encyclopedia that's edited by a network of vetted editors. But Google asserts that the Web's development so far has neglected the importance of the bylined author. "We believe that knowing who wrote what will significantly help users make better use of web content," wrote Udi Manber, vice president of engineering, on the official google blog. Google said anyone can write about any topic, and repetition of entries on the same subjects is beneficial. Google will provide the Web hosting space, as well as editing tools. Contributors can choose whether to let Google place ads on the knols. Google said it will give the contributors a "substantial" portion of the revenue generated by those ads. While Wikipedia lacks ads, keyword advertising has underpinned Google's growth. Entries can't be edited or revised by other people, in contrast to Wikipedia. However, other readers will be able to rank and review others' entries, which will then be interpreted by Google's search engine when displaying results. The concept of peer-reviewed information is nothing new and is implemented in different ways on various Web sites. Yahoo, for example, has an "Answers" feature where users can ask questions, and the response is ranked on quality. Also, most blogs have forms where readers can comment on the author's entry. Despite those other formats, Google probably feels that "a service like Knol might be necessary to stay competitive," wrote Danny Sullivan, editor in chief of Search Engine Land, in a review.
<urn:uuid:caa859c8-b5b6-4e0b-be14-73320cbcb04e>
CC-MAIN-2017-04
http://www.cio.com/article/2437468/consumer-technology/google-develops-wikipedia-rival.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00189-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940746
451
2.53125
3
This was going to be a long article but I decided to cut it short. You have a Remote Server – You need to securely access some sensitive service or another (let’s say a MySQL connection) and don’t want to open the port to up to the internet. What’s a person to do? Local Port Forwarding over SSH using Putty. What does this do for me? It allows me to assign a local port on my computer, say 3540, and any traffic I direct to that port (so traffic directed to 127.0.0.1:3540) will hop through a secure, encrypted tunnel and land on the port of my choosing on a remote server. Furthermore, my traffic will appear to the server as if it is coming from the server itself. Here are few reasons: 1. You can encrypt normally unencrypted traffic, like HTTP, FTP, MySQL, etc. This makes it safe to send that traffic over the public internet without fear of snooping by the NSA, or the company you work for, or your ISP, etc. etc. 2. You can bypass local and remote firewalls. 3. You can access the server as if you were local. MySQL, for example, often is set by default to only allow database connections from clients that are local (i.e. your Apache web server running on the same machine). This makes it impossible to manage MySQL remotely… unless you use a tunnel at which point your traffic appears to be local. How do I do it? I am going to assume you know how to setup a normal SSH connection from Putty. If you want that connection to also tunnel traffic from a specific port on your machine to a specific port on your server, you would set it up like this. 1. Open up putty 2. Configure your connection by punching in the public IP of your host and the port for SSH. Make sure connection type is set to “SSH”. Like so: 3. On the left hand side, expand “Connection” –> SSH –> then click on “Tunnels” 4. Under “source port” add a random unused port on your local client machine. Ex. 3540 5. Under “Destination” enter: localhost:the port you want to connect to on the server –> example: localhost:3306 (note: 3306 is the default mysql port) 6. Make sure “Local” and “Auto” are ticked for options. Then click the “Add” button. The final setup would look something like this after you click “add” 7. Optionally, if you plan on using this often, go back to the “Session” (on the left) config screen, type a name in the box right under “save sessions” and click “save” – that way you don’t have to configure everything everytime you want to create this tunnel. 8. Finally, click “open” at the bottom. An SSH terminal window should pop-up and you will need to authenticate to the server a the prompt. After that you will have a normal CLI terminal window open. You must keep this window open to keep the tunnel active. 9. Finally, make the connection from your client machine. Rather than connecting to the server’s IP, or the MySQL port, you will instead use: 127.0.0.1:3540 — i.e. you are connecting to your local loopback address on your machine, on the port you specified earlier. 10. Enjoy the magic. Your traffic is securely encrypted and you didn’t have to open that port up to the world.
<urn:uuid:0a3b2bf1-ffda-41a8-94e2-215863d81f3b>
CC-MAIN-2017-04
https://www.kiloroot.com/tunneling-with-putty-for-dummies-like-me/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00399-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899919
801
2.78125
3
In case you dont already know, Web services are modular chunks of functionality that organizations publish and allow trading partners to access. Many of todays popular Web applications use Web services as the behind-the-scenes engine for their more complex functionality. This raises the troubling question: How do we secure these new interfaces we are developing? In order to secure something you need to first understand the threats to which it may be vulnerable. Web services have an interesting threat profile. They are standard pieces of functionality, typically written in .NET or Java, and often connect to file systems and databases like the programs we are accustomed to writing. As a result, Web services are not exempt from the major threats that we concern ourselves with when securing traditional software. Attack vectors like the buffer overflow, SQL injection and other parameter tampering threats, also apply to Web services. However, Web services introduce a few more, including: WSDL Scanning: A WSDL (Web services definition language) is used to describe the Web service to connecting parties. Our trading partners use these documents to discover what pieces of functionality are available to them and how to format their requests to the Web service. Care needs to be taken when creating and publishing these documents. Often the documents are automatically generated from the code and functionality not meant to be exposed to outside entities is included in our WSDL. This may allow an attacker unintended access functionality. XPath Injection: XPath is a language for querying information from XML documents. Similar to SQL Injection, if user input is not properly sanitized, it is possible for a malicious user to influence the XPath query being run by the software to garner more information than he/she would normally have access to. Recursive Payload: The communication sent back and forth via Web services is all XML based, giving the attacker a new avenue of attack. Knowing that the Web service will need to parse the XML message in order to process the request, an attacker can send a request which contains a large amount of nested opening tags, but never supply a closing tag. The Web service, when trying to parse this file, will often consume too many system resources or even crash as it needs to track open tags until the matching close tag occurs. This can cause a denial of service to the Web service. Opening pieces of functionality to third parties is wrought with threats, both old and new. For this reason it is paramount that developers understand these threats and how to protect their applications from potential attack. The biggest roadblock to securing Web services is understanding that it is difficult to do so. The three tenets of security are confidentiality, integrity and availability (CIA). In the world of Web services, availability is the most straightforward to achieve. Typical attacks against Web services availability would be based on bad data, which is determined to choke the application and cause it to crash. Developers need to define strict rules for their input to act as guidelines for validation. Any and all data is then validated against these rules prior to use by the system. This will help protect against availability attacks. Although protecting the availability of Web services is no simple task, it is much easier than protecting confidentiality and integrity.
<urn:uuid:df9b3e42-c4d2-4eb6-9850-a59f9e18626e>
CC-MAIN-2017-04
http://www.cioupdate.com/trends/article.php/3716751/The-Perils-of-Web-Services.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947199
644
3.015625
3
Date: 01 September 2011 Click here for printable version I am sure many of you have heard of Blowfish (if not have a read on Wikipedia or head over to the horse's mouth of Bruce Schneier). It is as symmetric block cipher that seems to be one of the faster methods of encryption out there. Recently (a month or two ago) a 13 year old bug in some implementations were found that had the potential to drastically reduce the security of data secured with those implementations. Specifically this came to light because of some differences noticed between the "crypt_blowfish" implementation when compared against another (correct) implementation. The fact that this took 13 years to find might lead you to the conclusion that it was a subtle bug - and you would be correct. This specific library is used by SUSE (and some others) for password hashing, which is the reason behind the recent 8 bit SUSE password bulletin. The bug comes about due to the method a compiler uses to interact with signed numbers when they need to have the number of bits used to represent that number increased. There are three bits of information you need to know before the bug will make sense. The first bit of information you will need to understand this bug is an idea of what Two's Complement representation of numbers is. The most important thing to note is that while positive numbers "default" to having leading 0's, negative numbers have leaving 1's. The second bit of background information you need is about how small or 8 bit numbers can interact with longer 16 or 32 bit numbers. In order for two numbers of different "size" to interact, they need to become the same size. Normally this would be as simple as sticking a bunch of 0's on the front, however negative numbers need 1's on the front. To accomplish this the x86 instruction set has two different "extend" instructions: MOVSX (Move with sign-extend), and MOVZX (Move with zero-extend). The first (MOVSX) adds 0's or 1's as is required, the second (MOVZX) always adds 0's to the front of the number (even if it is negative). The third bit of information is related to how a char is represented in the C programming language. Now, my knowledge of C is ok, but in this specific area I differ to the all-knowing Wikipedia for some easy to digest information. In C all types have an unsigned and signed version. Why you need a signed char I am not sure, but you get it. However what I found really interesting is that, unlike all the other types, char has three different types: unsigned (specified by "unsigned char"), signed (specified by "signed char"), and just char which is a magical specification that "may be a signed type or an unsigned type, depending on the compiler and the character set" (Wikipedia). So what I always thought of as one of the most simple types in C - char - is actually a bit of an unknown - YAY. In this instance it turns out to be signed. The Bug Exploding Blowfish Sushi These three items all tie together in the code that turns four 8 bit characters into a 32 bit ... collection of bits. To do this four char's are essentially put one after the other into a 32 bit variable. This is done by OR-ing the 32 bit result with the 8 bit char and then shifting the 32 bit result 8 bits over until all four 8 bit characters are combined. Or at least that is what is supposed to happen. In practice because any char that is over 0x80 (thankfully outside normal 7 bit ASCII, however the pound sign (£) triggers it) is considered signed and has to interact with the longer 32 bit result, it must be extended. Our friendly compiler chooses the MOVSX because the char is signed, and that results in any of the four characters that have already been put into the result getting wiped over with 1's. Confused? It took me a couple of re-reads to form a mental picture, so lets see if I can draw one for you. The first table is what we want to happen, the second shows how it does happen due to the sign-extension. In the second, the values are OR'ed together into the result meaning that the sign-extension causes the "cat" to be overwritten with 1's. In the tables I am assuming that the binary representation of £ (decimal 163) is represented as 10100011 in 8 bits which is interpreted as -93 (decimal) when extending into two's complement. I am reasonably sure this is what is happening, but it makes me further question the reason for having a signed char! If anyone has a reason why a signed char is a good idea I would love to hear from you. This page also has a good discussion about signed and unsigned types in C and where the unknown-signed "char" type came from. The Ending (almost) As you can (hopefully) see - the pound sign causes all 3 other letters to get over-written with 1's. This means that "cat£", "aaa£" and "zzz£" would all produce the same result. Of course this will only happen when you have non 7 bit ASCII characters, but that is becoming more normal these days given all the different languages that people speak and type. In the worst case a password of 16 characters (quite a reasonable length) could be reduced to the equivalent of 4 characters, fortunately I don't believe there will be too many examples of this. This problem has already been corrected and the patch includes a "sign_extension_bug" flag that can be set so that backwards compatibility can be maintained. If you are interested in reading more about this the following are some posts/articles that I found interesting: One Final Thing While I was looking for further information about this bug I found that a very similar bug was found back in 1996. This one seems to be the same sort of bug in a different part of a different code base for blowfish - but goes to show that I was not the only one who was slightly surprised to find that a char is C is (normally) a signed value. Hope you enjoyed reading - please feel free to send us any other interesting bugs like this that you find.
<urn:uuid:915a55c0-ba93-4733-92fb-de76a3d17293>
CC-MAIN-2017-04
https://auscert.org.au/render.html?it=14783
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00087-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952587
1,313
3.578125
4
There has been a lot of news on quantum computing recently, from reports on NPR to Nature to the usual technical trade publications. The hype is enormous given the potential for changing the landscape, but I think there are a number of outstanding questions. One of the biggest issues I see is that we have around 60 years of knowledge and understanding of programming languages. We have FORTRAN, COBOL, PL1, C, Python, Java and a whole myriad of languages all revolving around programming as we know it today. If quantum computing is to become successful there has to be an interface that allows it to be programmed easily. What will the interface be? There will always be instances where people will, for the sake of performance, write machine code. Yes, it still happens today, but for quantum computing, given what I have read, there will have to be a paradigm shift in how things are programmed. Quantum computing, given the cryogenics involved, is at best going to be relegated to large customers that have the proper facilities. That is not a great deal different than what happened in the 1950s when only the largest organizations had computers. I think the key to success of this completely disruptive technology (assuming that the technology matches the marketing spec sheet) is going to be the interface and training the right people to use the programing method to utilize the machine. In the beginning, this will be very basic, but the things to watch for, I think, will be the how quickly the infrastructure is developed. Of course, you are going to need to get data in and out of the machine, communicate with the machine and all the things that we have today. How fast these things come together will determine the success or failure of the technology, I believe. Labels: programming languages,quantum computing posted by: Henry Newman
<urn:uuid:6d3b5682-60e6-4b7d-a393-efca247784ec>
CC-MAIN-2017-04
http://www.infostor.com/index/blogs_new/Henry-Newman-Blog/blogs/infostor/Henry-Newman-Blog/post987_163151010.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00115-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952669
376
2.5625
3
Evaluating a Hosting Provider’s Network Connectivity: Transport & Transit Let’s start this topic off with some basic definitions: - Transport – The underlying technology that is used to connect a particular node a network. Some examples of transport networks are Metro Ethernet, xWDM Waves, Wireless, Microwave, Satellite Uplink, T1/T3, DS3, OC12-192, ATM, Dark Fiber, etc. - Redundant Physical Entrances: Separate physical networks entrances into a data center or build premises from the street. - IP Transit – Connectivity to the larger Internet via a particular Internet provider. - Multi-homed: A network that is connected to two or more IP Transit providers. - BGP4: A routing protocol used to exchange routing information across the Internet. BGP4 makes it possible, and is required, in order for a hosting company or ISP to connect to multiple IP Transit providers. - Single Point of Failure: A part of a system that will stop the entire system from working, should it fail. - Redundant Network: A marketing buzzword that is used to dupe, deceive and otherwise mislead you into buying services from a particular provider. I hope you can see from my list of terms that when evaluating a hosting provider’s connectivity, it is important to check many aspects. In particular, it is prudent to decide if a provider has multiple transport facilities, multiple IP Transit provider, is running BGP4, has redundant physical entrances and has no single points of failure along their entire data path from your server to their external network connectivity. Now, let’s dive deeper into each specific area and explore where typical design flaws and misrepresentations are made. Transport – Transport is typically relatively expensive. It is the underlying ‘connection’ that always your data center or hosting company to connect to the Internet. A good real world example of this construct is power that’s running into your home. While the power company has many power companies, there is only one power line coming into your house. Many hosting companies and data centers run on the same premise — they connect with many IP Transit providers, but only have one underlying transport connection. A well run network will have physically diverse paths into their data center to make sure that physical problems, like backhoes digging up fiber, do not result in a service outage. The very tricky thing about network transport is that there’s not much you can do to verify it is actually redundant, short of looking at the actual physical entrances into the building or looking at your provider’s invoices. Most providers will gladly show you their network; good luck getting a peek at their transport invoices, even under an NDA. IP Transit – There are many, many companies that offer IP Transit. In fact, there’s so many companies that there are categories of companies. Tier 1, Tier 2, Tier 3 and so on. The technical definition of a Tier 1 network is a bit obtuse, but the overall point is that anyone operating a Tier 1 network is operating a very large network that has direct connectivity to many or all parts of the world. Tier 2 networks connect directly to Tier 1 providers. In many cases, Tier 2 providers can actually enhance or give further value to Tier 1 IP transport by providing intelligent network routing, DDoS protection or other managed services that are not offered by the Tier 1 provider. Every data center or hosting company should have IP transit connections to at least two providers. There are many reasons why multiple connections are better than one. From an availability perspective, if your provider only has a single connection to the Internet, it only takes one error for their IP Transit provider to take you completely offline. Thus, two IP transit providers are the smallest you should look for, when evaluating a hosting company or data center. However, knowledgeable hosting providers and data centers will also recognize that even very large Tier 1 networks have certain regions where they do well or poorly. A well run network will be connected to multiple networks that offer the best access to Asia, Europe, North America, South America and the South Pacific/Australia. Such networks will also be running BGP4, and more than likely a routing optimization platform, such as InterNAP’s Flow Control Platform. If you want to check on your provider, you can confirm they are multihomed, running BGP4 using ‘looking glass’ tools or BGP Toolkits: And here’s the scary part — even if a hosting company or data center has diverse transport, multiple IP Transit providers, runs BGP4, they might bring these connections back to a single switch or router. Yikes!
<urn:uuid:0c0c49f1-5530-4842-9cc1-b9524675b7ec>
CC-MAIN-2017-04
https://www.handynetworks.com/blog/networking/evaluating-a-hosting-providers-network-connectivity-transport-transit/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927527
981
2.890625
3
April 26 to come! 19 Apr 2000 The right time to check your computers for "Chernobyl" virus Cambridge, UK, April 20, 2000 - Kaspersky Lab Int., a fast-growing international anti-virus software development company, informs computer users that on April 26 a variation of the virus, deplorably known as "Chernobyl", activates its payload routines. Also known as CIH and Spacefiller virus it was the world's first malicious code able to affect PC's hardware by corrupting the Flash BIOS micro chip. The virus first appeared in June 1998 and is still one of the most wide spread computer viruses in the world. Moreover, a variation of the virus (Win95.CIH.1003) was triggered for activation exactly on April 26 (the anniversary of the Chernobyl meltdown). On this day the virus destroys data on all hard drives installed on infected PC and attempts to corrupt the Flash BIOS micro chip. In June 1998 Kaspersky Lab was the first to develop genuinely effective protection against "Chernobyl" virus including neutralizing it in the computer's memory. Last year on April 26th thousands PCs all over the world were damaged by this virus causing huge financial losses. The only good thing was that together with PCs the virus destroyed itself as well. However a number of "Chernobyl" copies are still "sleeping" on some removable storage devices, including CDs, floppies, ZIP drives etc. This means that there is still a danger of being affected by this virus. "We don't expect that this year the catastrophe will be the same as it was a year ago. Moreover we are sure that the number of infections will be insignificant," said Eugene Kaspersky, Head of anti-virus research at Kaspersky Lab "However this doesn't mean that users can ignore virus prevention. We highly recommend that users meticulously check theircomputers to ascertain if the "Chernobyl" virus is there. Just because protection is better than cure. The only thing you have to bear in mind is that you should use a really good anti-virus, because not all of them are able to handle such complex and dangerous virus as "Chernobyl." You can purchase fully functional version of AntiViral Toolkit Pro online via the Internet. About Kaspersky Lab Kaspersky Lab Ltd. is a fast growing international privately owned anti-virus software development company with offices in Moscow (Russia), Cambridge (UK) and Johannesburg (South Africa). Founded in 1997 the company concentrates its efforts on the development of world-leading anti-virus technologies and software. Kaspersky Lab also provides free online security related internet information services. The company markets, distributes and supports its software and services in more than 40 countries worldwide.
<urn:uuid:f2993903-65fe-4425-b363-7a92414aea80>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2000/April_26_to_come_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00290-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956446
578
2.59375
3
The US Department of Energy (DOE) will be the most likely recipient of the initial crop exascale supercomputers in the country. That would certainly come as no surprise, since according the latest TOP500 rankings, the top three US machines all live at DOE labs – Sequoia at Lawrence Livermore, Mira at Argonne, and Jaguar at Oak Ridge. These exascale machines will be 100 times as powerful as the top systems today, but will have to be something beyond a mere multiplication of today’s technology. While the first exascale supercomputers are still several years away, much thought has already gone into how they are to be designed and used. As a result of the dissolution of DARPA’s UHPC program, the driving force behind exascale research in the US now resides with the Department of Energy, which has embarked upon a program to help develop this technology. To get a lab-centric view of the path to exascale, HPCwire asked a three of the top directors at Argonne National Laboratory — Rick Stevens, Michael Papka, and Marc Snir — to provide some context for the challenges and benefits of developing these extreme scale systems. Rick Stevens is Argonne’s Associate Laboratory Director of the Computing, Environment, and Life Sciences Directorate; Michael Papka is the Deputy Associate Laboratory Director of the Computing, Environment, and Life Sciences Directorate and Director of the Argonne Leadership Computing Facility (ALCF); and Marc Snir is the Director of the Mathematics and Computer Science (MCS) Division at Argonne. Here’s what they had to say: HPCwire: What does the prospect of having exascale supercomputing mean for Argonne? What kinds of applications or application fidelity, will it enable that cannot be run with today’s petascale machines? Rick Stevens: The series of DOE-sponsored workshops on exascale challenges has identified many science problems that need an exascale or beyond computing capability to solve. For example, we want to use first principles to design new materials that will enable a 500-mile electric car battery pack. We want to build end-to-end simulations of advanced nuclear reactors that are modular, safe and affordable. We want to add full atmospheric chemistry and microbial processes to climate models and to increase the resolution of climate models to get at detailed regional impacts. We want to model controls for an electric grid that has 30 percent renewable generation and smart consumers. In basic science we would like to study dark matter and dark energy by building high-resolution cosmological simulations to interpret next generation observations. All of these require machines that have more than a hundred times the processing power of current supercomputers. Michael Papka: For Argonne, having an exascale machine means the next progression in computing resources at the lab. We have successfully housed and managed a steady succession of first-generation and otherwise groundbreaking resources over the years, and we hope this tradition continues. As for the kinds of applications exascale would enable, expect to see more multiscale codes and dramatic increases in both the spatial and temporal dimensions. Biologists could model cells and organisms and study their evolution at a meaningful scale. Climate scientists could run highly accurate predictive models of droughts at local and regional scales. Examples like this exist in nearly every scientific field. HPCwire: The first exascale systems will certainly be expensive to buy and, given the 20 or so megawatts power target, even more expensive to run over the machine’s lifetime – almost certainly more expensive that the petascale systems of today. How is the DOE going to rationalize spending increasing amounts of money to fund the work for essentially a handful of applications? Do you think it will mean there will be fewer top systems across the DOE than there have been in the past? Marc Snir: There is a clear need to have open science systems as well as NNSA systems. And though power is more expensive and the purchase price may be higher, amortization is spread across more years as Moore’s Law slows down. We already went from doubling processor complexity every two years to doubling it every three. This may also enable better options for mid-life upgrades. A supercomputer is still cheap compared to a major experimental facility, and yields a broader range of scientific discoveries. Stevens: DOE will need a mix of capability systems — exascale and beyond — as well as many capacity systems to serve the needs of DOE science and engineering. DOE will also need systems to handle increasing amounts of data and more sophisticated data analysis methods under development. The total cost, acquisition and operating will be bounded by the investments DOE is allowed to make in science and national defense. The push towards exascale systems will make all computers more power efficient and therefore more affordable. Papka: The outcome of the science is the important component. Research being done on DOE open science supercomputers today could lead to everything from more environmentally-friendly concrete to safer nuclear reactor designs. There is no real way to predict or quantify the advancements that any specific scientific discovery will have. An algorithm developed today may enable a piece of code that runs a simulation that leads to a cure to cancer. The investment has to be made. HPCwire: So does anyone at Argonne, or the DOE in general, believe money would be better spent on more petascale systems and fewer exascale systems because of escalating power costs and perhaps an anticipated dearth of applications that can make use of such systems? Snir: It is always possible to partition a larger machine; however, it is impossible to assemble an exascale machine by hooking together many petascale machines. The multiple DOE studies on exascale applications in 2008 and 2009 have clearly shown that progress in many application domains depends on the availability of exascale systems. While a jump in a factor of 1,000 in performance may seem huge, it is actually quite modest from the viewpoint of applications. In a 3D mesh code, such as used for representing the atmosphere in a climate simulation, this increase in performance enables refining meshes by a factor of less than 6(4√ 1000 ), since the time scale needs to be equally refined. This assumes no other changes. In fact, many other changes are needed, when precision increases, that is, to better represent clouds, or to do ensemble runs in order to quantify uncertainty. It is sometimes claimed that many petascale systems may be used more efficiently than one exascale system since ensemble runs are “embarrassingly parallel” and can be executed on distinct systems. However, this is a very inefficient way of running ensembles. One would input all the initialization data many times, and one would not take advantage of more efficient methods for sampling the probability space. Another common claim heard is that “big data” will replace “big computation.” Nothing could be further from the truth. As we collect increasingly large amounts of data through better telescopes, better satellite imagery, and better experimental facilities, we need increasingly powerful simulation capabilities. You are surely familiar with the aphorism: “All science is either physics or stamp collecting.” What I think Ernest Rutherford meant by that is that scientific progress requires the matching of deductions made from scientific hypotheses to experimental evidence. A scientific pursuit that only involves observation is “stamp collection.” As we study increasingly complex systems, this matching of hypothesis to evidence requires increasingly complex simulations. Consider, for example, climate evolution. A climate model may include tens of equations and detailed description of initial conditions. We validate the model by matching its predictions to past observations. This match requires detailed simulations. The complexity of these simulations increases rapidly as we refine our models and increase resolution. More detailed observations are useful only to the extent they enable better calibration of the climate models; this, in turn, requires a more detailed model, hence a more expensive simulation. The same phenomenon occurs in one discipline after another. It is also important to remember that research on exascale will be hugely beneficial to petascale computing. If an exascale consumes 20 megawatts, then a petascale system will consume less than 20 kilowatts and become available at the departmental level. If good software solutions for resilience are developed as part of exascale research, then it becomes possible to build petascale computers out of less reliable and much cheaper components. Papka: As we transition to the exascale era the hierarchy of systems will largely remain intact, so the advances needed for exascale will influence petascale resources and so on down through the computing space. Exascale resources will be required to tackle the next generation of computational problems. HPCwire: How is the lab preparing for these future systems? And given the hardware architecture and programming models have not been fully fleshed out, how deeply can this preparation go? Snir: Exascale systems will be deployed, at best, a decade from now – later if funding is not provided for the required research and development activities. Therefore, exascale is, at this stage, a research problem. The lab is heavily involved in exascale research, from architecture, through operating systems, runtime, storage, languages and libraries, to algorithms and application codes. This research is focused in Argonne’s Mathematics and Computer Science division, which works closely with technical and research staff at the Argonne Leadership Computing Facility. Both belong to the directorate headed by Rick Stevens. Technology developed in MCS is now being deployed on Mira, our Blue Gene/Q platform. The same will most likely be repeated in the exascale timeframe. The strong involvement of Argonne in exascale research increases our ability to predict the likely technology evolution and prepare for it. It increases our confidence that exascale is a reachable target a decade from now. Preparations will become more concrete 4 to 6 years from now, as research moves to development, and as exascale becomes the next procurement target. Stevens: While the precise programming models are yet to be determined, we do know that data motion is the thing we have to reduce to enable lower power consumption, and that data locality (both vertically in the memory hierarchy and horizontally in the internode sense) will need to be carefully managed and improved. Thus we can start today to think about new algorithms that will be “exascale ready” and we can build co-design teams that bring together computer scientists, mathematicians and scientific domain experts to begin the process of thinking together how to solve these problems. We can also work with existing applications communities to help them make smart choices about rewriting their codes for near term opportunities such that they will not have to throw out their codes and start again for exascale systems. Papka: We learn from each system we use, and continue to collaborate with our research colleagues in industry. Argonne along with Lawrence Livermore National Laboratory partnered with IBM in the design of the Blue Gene P and Q. Argonne has partnerships with other leading HPC vendors too, and I’m confident that these relationships with industry will grow as we move toward exascale. The key is to stay connected and move forward with an open mind. The ALCF has developed a suite of micro kernels and mini- and full-science DOE and HPC applications that allow us to study performance on both physical and virtual future-generation hardware. To address future programming model uncertainty,Argonne is actively involved in defining future standards. We are, of course, very involved in the MPI forum, as well as in the OpenMP forum for CPUs and accelerators. We have been developing benchmarks to study performance and measure characteristics of programming runtime systems and advanced and experimental features of modern HPC architectures. HPCwire: What type of architecture is Argonne expecting for its first exascale system — a homogeneous Blue Gene-like system, a heterogeneous CPU+accelerator-based machine, or something else entirely? Snir: It is, of course, hard to predict how a top supercomputer will look ten years from now. There is a general expectation that future high-end systems will use multiple core types that are specialized for different types of computation. One could have, for example, cores that can handle asynchronous events efficiently, such as OS or runtime requests, and cores that are optimized for deep floating point pipelines. One could have more types of cores, with only a subset of the cores active at any time, as proposed by Andrew Chien and others. There is also a general assumption that these cores will be tightly coupled in one multichip module with shared-memory type communication across cores, rather than having an accelerator on an I/O bus. Intel, AMD and NVIDIA all have or have announced products of this type. Both heterogeneity and tight coupling at the node level seems to be necessary in order to improve power consumption. The tighter integration will facilitate finer grain tasking across heterogeneous cores. Therefore, one will be able to largely handle core heterogeneity at the compiler and runtime level, rather than the application level. The execution model of an exascale machine should be at a higher level – dynamic tasking across cores and nodes – at a level where the specific architecture of the different cores is largely hidden; same way as the specific architecture of a core, for example, x86 versus Power is largely hidden from the execution model viewed by programmers and most software layers now. Therefore, we expect that the current dichotomy between monolithic systems and CPU-plus-accelerator-based systems will not be meaningful ten years from now. Stevens: To add to Marc’s comments, we believe there will be additional capabilities that some systems might have in the next ten years. One strategy for reducing power is to move compute elements closer to the memory. This could mean that new memory designs will have programmable logic close to the memory such that many types of operations could be offloaded from the traditional cores to the new “smart memory” systems. Similar ideas might apply to the storage systems, where operations that now require moving data from disk to RAM to CPU and back again might be carried out in “smart storage.” Finally, while current large-scale systems have occasionally put logic into the interconnection network to enable things like global reductions to be executed without using the CPU functional units, we could imagine that future systems might have a fair amount of computing capability in the network fabric again to try to reduce the need to move data more than necessary. I think we have learned that tightly integrated systems like Blue Gene have certain advantages. Fewer types of parts, lowest power consumption in their class, and very high metrics such as bisection bandwidth relative to compute performance, which let them perform extremely well on benchmarks like Graph 500 and Green500. They are also highly reliable. The challenge will be to see if in the future we can get any systems that combine the strengths needed to be affordable, reliable, programmable, and lower power consumption. HPCwire: How about the programming model? Will it be MPI+X, something more exotic, or both? Snir: Both. It will be necessary to run current codes on a future exascale machine – too many lines of code would be wasted, otherwise. Of course, the execution model of MPI+X may be quite different in ten years than it is now: MPI processes could be much lighter-weight and migratable, the MPI library could be compiled and/or accelerated with suitable hardware, etc. On the other hand, it is not clear that we have an X that can scale to thousands of threads, nor do we know how an MPI process can support such heavy multithreading. It is clear, however, that running many MPI processes on each node is wasteful. It is also still unclear how current programming models provide resilience, and help reduce energy consumption. We do know that using two or three programming models simultaneously is hard. Research on new programming models, and on mechanisms that facilitate the porting of existing code to new programming models is essential. Such research, if pursued diligently, can have a significant impact ten years from now. Our research focus in this area is to provide a deeper stack of programming models, from DSLs to low-level programming models, thus enabling different programmers to work at different levels of abstraction; to support automatic translation of code from one level to the next lower level, but ensure that a programmer can interact with the translator, so as to guide its decision; to provide programming models that largely hide heterogeneity – both the distinction between different types of cores and the distinction between different communication mechanisms, that is, shared memory versus message passing; to provide programming notations that facilitate error isolation and thus enable local recovery from failures; and to provide a runtime that is much more dynamic that currently available, in order to cope with a hardware that continuously change, due to power management and to frequent failures. Stevens: An interesting question in programming models is if we will get an X or perhaps a Y that integrates “data” into the programming model — so we have MPI + X for simulation and MPI + Y for data intensive — such that we can move smoothly to a new set of programming models that, while they retain continuity with existing MPI codes and can treat them as a subset, will provide fundamentally more power to developers targeting future machines. Ideally, of course, we would have one programming notation that is expressive for the applications, or a good target to compile domain specific languages too, and at the same time can be effectively mapped onto a high-performance execution model and ultimately real hardware. The simpler we can make the X’s or Y’s, the better for the community. A big concern is that some in the community might be assuming that GPUs are the future and waste considerable time trying to develop GPU-specific codes which might be useful in the near-term but probably not in the long-term for the reasons already articulated. That would suggest that X is probably not something like CUDA or OpenCL. HPCwire: The DOE exascale effort appears to have settled on co-design as the focus of the development approach. Why was this approach undertaken and what do you think its prospects are for developing workable exascale systems? Papka: It’s extremely important that the delivered exascale resources meet the needs of the domain scientists and their applications; therefore, effective collaboration with system vendors is crucial. The collaboration between Argonne,Livermore, and IBM that produced the Blue Gene series of machines is a great example of co-design. In addition to discussing our system needs, we as the end users know the types of DOE-relevant applications that both labs would be running on the resource. Co-design works, but requires lots more communication and continued refinement of ideas among a larger-than-normal group of stakeholders. Snir: The current structure of the software and hardware stack of supercomputers is more due to historical accidents than to principled design. For example, the use of a full-bodied OS on each node is due to the fact that current supercomputers evolved from server farms and clusters. A clean sheet design would never have mapped tightly coupled applications atop a loosely coupled, distributed OS. The incremental, ad-hoc evolution of supercomputing technology may have reduced the incremental development cost of each successive generation, but has also created systems that are increasingly inefficient in their use of power and transistor budgets and increasingly complex and error-prone. Many of us believe that “business as usual” is reaching the end of its useful life. The challenges of exascale will require significant changes both in the underlying hardware architecture and in the many layers of software above it. “Local optimizations,” whereby one layer is changed with no interaction with the other layers, are not likely to lead to a globally optimal solution. This means that one need to consider jointly the many layers that define the architecture of current supercomputers. This is the essence of co-design. While current co-design centers are focused on one aspect of co-design, namely the co-evolution of hardware and applications, co-design is likely to become increasingly prevalent at all levels. For example, co-design of hardware, runtime, and compilers. This is not a new idea: the “RISC revolution” entailed hardware and compiler co-design. Whenever one needs to effect a significant change in the capabilities of a system, then it becomes necessary to reconsider the functionality of its components and their relations. The supercomputer industry is also going through a “co-design” stage, as shown by the sale by Cray to Intel of interconnect technology. The division of labor between various technology providers and integrators ten years from now could be quite different than it is now. Consequently, the definition of the subsystems that compose a supercomputer and of the interfaces across subsystem boundaries could change quite significantly. Stevens: I believe that we will not reach exascale in the near term without an aggressive co-design process that makes visible to the whole team the costs and benefits of each set of decisions on the architecture, software stack, and algorithms. In the past it was typically the case that architects could use rules of thumb from broad classes of applications or benchmarks to resolve design choices. However many of the tradeoffs in exascale design are likely to be so dramatic that they need to be accompanied by an explicit agreement between the parties that they can work within the resulting design space and avoid producing machines that might technically meet some exascale objective but be effectively useless to real applications.
<urn:uuid:2dafbf13-589d-4e5e-88e0-4010618c5b29>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/06/21/exascale_computing_the_view_from_argonne/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00016-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946999
4,490
2.859375
3
A plane that can fly across the U.S. in just one month? Sure, it may sound unimpressive when a commercial flight can make the trip in less than six hours. But it’s pretty remarkable when you consider that this plane runs only on solar energy and travels at a mere 44 mph (70 kph). The plane, called the Solar Impulse, can theoretically fly forever since it is not limited by the need to refuel. So why will this historic flight take so long? The aircraft carries only one person. The plane needs to land so that the pilot can rest, recharge and, presumably, take a potty break. Piloted by Bertrand Piccard, the Solar Impulse took off from Moffett Airfield in Silicon Valley Friday, May 3 to begin the first cross-U.S. flight of an airplane powered only by solar energy. The first leg of the journey, which ended in Phoenix, AZ, took 18 hours and 18 minutes. The cross-country trip will end sometime in July, after additional stops in Dallas, TX, Washington D.C. and New York. With the wingspan of a jumbo jet but the weight of only a small car, the Swiss aircraft can fly both day and night by using energy stored in its batteries, which make up almost 25 percent of the plane’s weight. Piccard, who was the first person to complete a non-stop round-the-world balloon trip, called the flight “mythical in the history of aviation because all the big pioneers of the 20th century have tried to fly coast to coast.” He also cited the disconnect between restrictions involved in having to coordinate the flight with the FAA and air traffic control and the fact that the plane could have complete freedom since, without needing to refuel, it could, in theory, fly forever. Some of Solar Impulse’s features: - Its solar panels are as thin as a human hair. - The plane’s engines are 94 percent efficient. - Its fuselage is made from carbon fiber sheets three times thinner than paper. Part of the mission of the Solar Impulse project is “for the world of exploration and innovation to contribute to the cause of renewable energies, to demonstrate the importance of clean technologies for sustainable development; and to place dreams and emotions back at the heart of scientific adventure.”
<urn:uuid:4fd66f99-5e68-443f-9139-a369cfbac780>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475385/emerging-technology/it-s-a-plane---it-s-solar---it-s-slow.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00254-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941412
497
3.375
3
A cornerstone of this countrys child- support enforcement program is the use of computer technology to ensure that parents provide financial support to their children. In fact, the federal government saw technology as so crucial to the programs success that it mandated statewide collection, disbursement and case-management systems in every state and territory. This mandate, originally set in 1988, began to make its mark after 1997 when eight states were fined by the U.S. Department of Health and Human Services (HHS) for not having statewide systems ready. Michigan missed the deadline for a number of reasons, but most notably because several counties already had their own systems in place. The 1996 welfare reform law requires states to operate a single system for collection and disbursement of child-support payments. With 10 counties, including two of the states largest, remaining separate from the states system, Michigan was fined $38 million this year by HHS. Next years fines could reach $50 million if the situation doesnt change. So, like a parent disciplining uncooperative children, Gov. John Engler said enough is enough. In his State of the State address this year, the governor expressed his dismay with the situation: "I am frustrated and, quite frankly, fed up that our child-support enforcement system has failed to serve so many children. Because a handful of counties have not participated in a federally mandated, statewide child-support system, Michigan will suffer a $38 million federal penalty." Then he drew a line in the sand. "If any county fails to participate in the state system, I will work with the Legislature and the chief justice to terminate that countys responsibility for child-support enforcement." And for good measure, Engler warned that any additional fines incurred by the state because of local foot dragging would be paid by the counties. Kicking and Screaming One by one, Michigans 10 rebellious counties have thrown in the towel and agreed to convert their computerized child-support systems over to the state. The latest was Oakland County, located near Detroit. With a population of more than 1 million and more than 75,000 child-support cases, Oakland County has one of the largest support programs in the state. Oakland Countys multimillion-dollar mainframe system has been continuously improved over its 20-year existence. As recently as last year, the county spent more than $1 million on upgrades, with more investment planned for 2001 until the governor announced his ultimatum. It is used by more than 250 county workers and interfaces with 20 other county databases, including the county court, law enforcement, management and budget, probation and the prosecutors office. In addition, the countys system is integrated with an imaging subsystem that manages more than 10 million document images and an interactive voice response (IVR) system that allows clients to check on the status of payments and schedule court appearances, among other activities. "We believe we have a superior system to the one the state is dictating we use," said Robert Daddow, assistant deputy county executive of special programs. "Our number-one concern is service to the clients. The states concern is about penalties, not about which system works best." Daddow and Phil Bertolini, Oakland Countys director of information technology, cite a litany of costly trade-offs the county will have to make in order to convert its system into one that is part of Michigans $200 million state system. At the top of the list is the lack of support for the imaging system, which will be useless once the county switches systems and is no longer able to index images with its child-support cases. The countys highly popular IVR system will be replaced by what Daddow calls nothing more than an answering system. Oakland County is also worried about the time it will take the state to convert the countys system and the cost. Daddow points out that Michigan has been working on a statewide child- support system since 1988, but only has half of the counties online so far. The system has been under development for so long that the platform on which the system runs is no longer marketed by the vendor who makes it. "The state refuses to tell us how many of its staff will work on the conversion project," complained Daddow. "And they have said they will pay conversion costs as they define it, so its not clear how the project will be done. One thing I know is that switching to their system is going to cost us in labor and money." The state readily admits that Oakland County has a good child-support enforcement system already in place. "But its a federal requirement that counties be part of our statewide system," explained state Family Independence Agency spokeswoman Karen Smith. "We are paying enormous penalties when were not in compliance." Smith points out that while Oakland Countys system has several unique features that meet its needs, the system lacks the ability to search the Federal Case Registry and the National Directory of New Hires or to exchange information with these national databases. These databases have been singled out as reasons why the nations child-support enforcement program has nearly doubled collections to $15.8 billion in a seven-year period, according to federal officials. In addition, the Oakland County system doesnt have access to Michigans data warehouse, which contains vital information on state cases. Coming to Terms In March, Oakland County capitulated and signed a letter of intent to cooperate with the state and convert its computers, software and databases over to the statewide system. The conversion is expected to be completed by September 2002. But the states strong-handed actions have angered Oakland County officials, who feel many questions remain unanswered, but most importantly that little effort was made to save some of the county systems more effective features. "They should have asked us what we are doing right so they could replicate it [at the state level]," said a frustrated Daddow. "Instead, they told us to drop our system or else."
<urn:uuid:ee3924d5-ffad-4a06-b0a9-2e3c892d1937>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Playing-for-Keeps.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00556-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969002
1,212
2.640625
3
Chapter 7 – Security in Networks Networks are becoming vital to modern life as we know it, having become critical to both computing and commerce. There has even been a movement to view most computers as Internet appliances – the network being the real tool and computers just appendages. This may be a bit of a stretch, but we must acknowledge our reliance on networks. Every time I visit my daughter in Minneapolis, MN and charge dinner on my credit card, I make use of networks and rely on their correctly transmitting the details of my charge and the subsequent approval of that charge. I also assume that only persons and processes authorized to access the credit card data actually do access those data. In other words, I assume a lot. In worrying about the number of possible attacks on networks, one should not overlook natural problems, such as solar storms that have been quite common in the fall preceding the writing of these notes. Most natural occurrences can be planned for, so we focus on malicious activities in our study of network security. Networks are studied extensively in a course that is prerequisite to this course, so we shall not spend much time in discussing them per se, but focus on their security problems. There are several features of the Internet that lead to security problems. Here is the famous New Yorker cartoon referenced in the book. A network removes many of the clues we normally use to assess a person’s age and other personal habits. If the author of these notes claims to be a 20-year old hacker, how would you tell he’s not via the web? When malicious hackers were limited to manual input of each attack, things were a lot safer. Now we have many tools that will automate attacks, such as port scans. I can set my computer to scan every port on every computer in a specific Internet address range and then go to the bar and have a drink while my little robot does my dirty work. Due to the structure of the Internet, one cannot tell if another user is local, in the same city, or even in the same country. Occasionally, a sophisticated timing test can be performed, but usually the only way to tell for sure is to do a back-trace, which requires special equipment. Similarly, one cannot tell the nature of the source – is it a laptop in an airport, a terminal in a school lunch room, or a PC in some malicious hacker’s basement. In physical security, the boundaries of an entity are often clear – anything within the walls of the company building, on the company property, etc. The Internet has no fixed boundaries; moreover a specific network may not have the actual boundaries it appears to have. Consider a very common problem that occurs when a worker will attach a telephone modem to a company computer in order to be able to access that computer from home. Network Transmission Media There are a number of ways to transmit information over a computer network. We cover each briefly, only in the context of access by unintended recipients. There are a number of technologies that use copper wire, including twisted pair and coaxial cable. In many network set-ups, the maximum distance over which these technologies are used is rather small, thus limiting the vulnerability to unauthorized access. One should note that methods to “tap” these technologies are well known and hard to detect if done well. Optical fiber cables have become more popular due to the very high bandwidth that they support, recently up to 1Gbps (109 bits per second) for a reasonable sized cable. Optical fibers are also considerably harder to “tap” or gain unauthorized access to the signal content without disrupting the signal as received by the authorized recipient. Wireless: Microwave and Infrared These media use electromagnetic waves to carry information. Depending on the wavelength of the radiation, we call these waves “radio”, “microwave”, “infrared”, or “optical”. The general rule of thumb is that the shorter the wavelength, the shorter the transmission distance. The real difference is whether or not the transmitter and receiver must share a common “line of sight”; i.e., be visible to each other. Longer-wave radiations, such as AM broadcast radio in the United States, can “bend” around corners and travel for hundreds of miles. One should note that the term “wireless”, when applied in the context of networks, generally refers to specific technologies, such as cell phones with a normal maximum range of a few kilometers or an 802.11 device with a normal range of a few hundred meters. It can be shown that the maximum distance between two microwave towers, each of height h, for transmission by line of sight is approximately D = 7.1·, where h is given in meters and D is the distance in kilometers. The books example of a 30 mile (or 50 kilometer) range seems to correspond to a tower height of about 50 meters. It should be noted that the interception of any electromagnetic waves is quite simple. For longer waves, such as AM radio transmission, the broadcast is omni-directional, so that all one has to do is place an antenna somewhere. Microwaves, infrared, and shorter-wavelength broadcasts tend to be directional, so that one has to be in the line of transmission in order to have access to the broadcast. However, this is also easy to do. The figure below illustrates the basic geometry for interception of a directional signal. We postulate that the transmitter has the beam focused towards the receiver and note that the interceptor can be anywhere in the beam, either closer or more distant from the transmitter than the intended receiver. No microwave beam can be focused only on the receiver. T (Transmitter), R (Intended Receiver) and I (two Interceptors) The textbook states that the beam is not focused on a single site, but allowed to “spread out”. In fact the fundamental laws of physics dictate that the beam will be spread, with a specific angular width. The beam will be most intense along its central axis and become less intense as one moves off the axis. One measure of beam width is the angular width for half power. The width of the beam can be considered to be 2·F, where F is given by the formula F » 1.22·l / D where l is the wavelength of the radiation D is the diameter of the antenna (same units as the wavelength) F is the half-angle in radians. Microwave frequencies lie in the 2 to 40 GHz range, corresponding to a wavelength of 15 to 0.75 centimeters. For a 0.75 centimeter wavelength and 1 meter (100 cm) diameter antenna, the half angle would be F » 1.22·0.75 / 100 = 0.915 / 100 = 9.15·10–3 radian, or about 0.5 degree for a full-width of about a degree. At 10 kilometers, the width of the beam would be about 2·104·tan(9.15·10–3 radian) = 2·104·9.15·10–3 = 183 meters. At 100 kilometers from the antenna, the full beam would span a lateral distance of 1.83 kilometer. Any antenna within that width would receive a high-quality signal. The ISO Open Systems Interconnection Model At this level, the student is expected to be familiar with the ISO 7-level model for network communications. As a practical matter, the more important model is TCP/IP (which stands for Transmission Control Protocol / Internet Protocol). The TCP/IP layers roughly match those used in the ISO model, but the mismatch is not important. In each model, an application is viewed as communicating directly with another application at the same level, despite the fact that the communication is indirect via lower levels of the protocol (excepting the physical layer). The ISO model is a good approach for creating network services and conceptualizing their interaction. The TCP/IP model is the more important for actual implementation of a network. Why Do People Attack Networks? Malicious hackers attack networks for a number of reasons, including the challenge of the “sport”, fame, money, revenge, and espionage. For practical reasons, there are only two motives, corresponding to a targeted attack (the attacker wants to get this network) and a random attack (the network is just a convenient target of opportunity). Targeted attacks are usually carried out by rather sophisticated hackers who have a specific reason to attack the targeted network and no other. Espionage attacks definitely fall in this category. If I want to steal some government secrets, I am less likely to hack into a network owned by a fast-food restaurant. On the other hand, if I carry a grudge against fast-food restaurants for allowing me to eat all that fattening food and consequently to get fat, I might target these web sites and networks. Untargeted (random) attacks seem to be more common. These attacks are often carried out by unskilled attackers, sometimes called “script kiddies” because all they can do is to copy and slightly modify attack scripts written by skilled hackers. One of the most potent attacks against a network takes advantage of the well-known weakest point of any computer network: the humans who interact with that network. As commonly defined, social engineering involves use of social skills in order to persuade a person to reveal information that should remain secret. One of the more interesting applications of social engineering comes from the days in which bomb threats seem to be telephoned to a different company every day. The instruction sheet for answering the telephone and talking to the person making the threat included instructions on getting him to describe the bomb, its location, and timing mechanism. The person answering the telephone was instructed to be polite and respectful when speaking to this criminal in order to obtain the maximum information before he discontinued the call. The last two questions in the list for those answering such a call were “Who are you” and “Where do you live”. A surprising number of callers actually answered the question. One of the best references on social engineering is the book The Art of Deception, written by Kevin Mitnick and published by Wiley Publishing, Inc. in 2002 (ISBN 0-471-23712-4) The book discusses a number of other attacks, including impersonation, spoofing, and session hijacking. One famous example of session hijacking occurred recently in which then-President Bill Clinton admitted to a fondness for internet pornography. What actually happened is that the president was being interviewed over the Internet and the session was hijacked by a malicious hacker who inserted the reference to pornography. The book describes a number of denial of service or DOS attacks, including the ping of death and smurf attacks (I am certain that Papa Smurf would disapprove). For those of you who are culturally impoverished, I have included a picture of Papa Smurf, taken from the web site www.smurf.com. The Smurfs is an animated cartoon show adapted from a comic strip that first appeared in France in 1958. It is still running. DDOS (Distributed Denial of Service) attacks are one of the more malicious attacks. The basic of a DOS attack is to send a target computer a stream of traffic too large for it to handle, thus shutting it down. The one problem for the hacker is the relative speeds of the attacking computer and the target computer; if the target is faster then the attack will fail. The result of this last observation is the DDOS attack, in which a malicious hacker infects a number of intermediate machines (called “zombies”) with code to attack the target machine. These all attack at once, possibly on a signal from the attacker and suddenly the target machine has to defend against a large number of attackers. Network Security Controls Right up front, we should mention the fact that network security controls, as all security precautions, should irritate everybody but not excessively. If the controls do not bother anybody, they are probably not sufficient. If they bother everybody, they will be ignored or circumvented. Passwords are a good example – if you make them too easy they will fall to a password cracker (such as a dictionary attack) and if you make them too hard to remember, such as “z79*Wq423Jftp$99” they will be written down and exposed. As an aside, everybody thinks he or she has a clever way to disguise passwords, such as writing a combination “32 – 47 – 15” as a telephone number “832-4715”, but all malicious hackers know these tricks. Suspecting that the above “telephone number” hides a six digit combination, a hacker would try the obvious 14 options. The first step in devising security controls is a risk assessment, which is discussed in the next chapter. For now, we merely claim that knowledge of what we have to protect goes a long way towards deciding how we should protect it. There is a corollary here – some controls are so simple that they should be applied in any case. Reasonable passwords and locks on office doors are examples of such simple controls. A vulnerability in a network is a weakness that might be attacked; it is a potential avenue of attack – a way by which the system might fail. In this it is differentiated from a threat, which is an action or event that might break the security of a system. One can classify either vulnerabilities or threats by the targets of the attack. The text presents a table of common network vulnerabilities on page 426. Encryption is probably the best protection against network vulnerabilities. It is amazingly easy for a practiced malicious hacker to break into a network, either by guessing a username and password pair or by use of social engineering to convince a user to give up a password. The next step is to make the files on a system hard to use except by those authorized to have access to them. Encryption is the key. Encryption is also applied to data in transit. Using the OSI model, we can name two layers at which the encryption might be applied – the Data Link Layer and the Presentation Layer (I know that the book says Application Layer for this, it is a small matter of semantics). Of course, the data could be encrypted at the Presentation Layer and again at the Data Link. Link encryption offers many advantages. The data are encrypted just prior to being presented to the physical layer for transmission and are decrypted just after receipt. There are other advantages that will be presented below as disadvantages of end-to-end encryption. The disadvantage of link encryption is that the data exist in the computer in the “plaintext” or unencrypted form and can be stolen there. End-to-end encryption offers the advantage that data exist in the computer system only very briefly in plaintext form and are mostly handled in the encrypted form. The difficulty here is that the message may contain certain clues, such as a priority level, that would help in setting up the routing. If the priority level is in the part that is encrypted by the end-to-end method, then it is unavailable. This actually appeared in a military system which followed a common security model called “red-black”. In the “red state” the data are in plaintext form. Data in this form are encrypted and passed as being in the “black state” or acceptable for handling by anybody – it is just a collection of bytes with no obvious structure or meaning. Then the requirement was levied that the messages in the “black state” be accorded priority routing. The problem is that, in this “black state”, the messages had no indication of priority, as that was considered sensitive (consider a FLASH message from the Pentagon to the U.S. missile submarine fleet – it is not likely to concern payroll data) and thus unavailable for use in the routing decisions. This author was not directly involved in this project and does not know how this conundrum was solved. The textbook discusses a number of applications of encryption to network security. One of the more common today is a VPN (Virtual Private Network), in which access to a network resource is through an encrypted link, thus mimicking a true private network, which is implemented on a dedicated (and costly) private point-to-point physical data line. PKI (Public Key Infrastructure) is an evolving technique that may enhance network security. Two other protocols are SSH (Secure Shell) and SSL (Secure Socket Layer). The security architecture to watch is the one associated with the new IPv6 protocol (version 6 of the IP Protocol Suite). The transition to IPv6 was motivated by the inadequacy of the existing 32-bit address structure for the ever-expanding Internet. As the change to a larger address space (128 bits, allowing for more than 3·1038 distinct addresses – is that enough) required a major overhaul of the protocol, it was decided to address other concerns, such as security. This author has been informed that one of the goals of the security redesign was to hinder spoofing, in which the sender of a message can alter the source IP address so that the message appears to come from another source. We can hope that this nuisance goes away. One of the primary services of network security is to guarantee content integrity; that is, to insure that the message has not been altered in transit. Here is an example taken from one this author’s favorite space-fantasy novels by David Weber. The message sent concerned the territorial interests of one of two antagonistic nations over a piece of disputed territory. Original message: “We are not intending to seize the planet by force”. Altered message: “We are intending to seize the planet by force”. In this novel, the omission of one word leads to war – a not unrealistic scenario. Encryption is one guarantee of message integrity, but only if the original message can be verified by sight. If I send you a message written in standard English and then encrypted, any alteration of the encrypted message would almost certainly cause the message to be decrypted as gibberish. But suppose that the message consists of a series of 32-bit integers, sent as four-byte entries. A corrupted message might not be so easily detectable. Error correcting codes provide simple guards against accidental message corruption, but are not really effective against an intentional attack. The reason for this lack of security is that the codes are so easy to compute. If you give me a message with a specified error correcting code, I can forge another message with the same error correcting code – this is only a bit more difficult for some of the cyclic redundancy check codes. For this reason, we now have what is called message digests or cryptographic checksums. There are two characteristics of a cryptographic checksum that the book forgets to mention. 1) It must be impossible to retrieve the entire message, given only the checksum. This requirement is met by any checksum that distills an entire message into 160 bits or less. No many-to-one function is invertible. 2) Given a message and a checksum, it must be computationally infeasible to produce another message with the same checksum. Within this context, computational infeasibility implies that it will take hundreds of years to produce the desired result. At this point in the discussion, we should mention that many of these security features are based on problems that belong to the mathematical class NP-Hard. While the precise definition of this class of problems is tedious, there is a practical difference that is important. Intractable a problem is classified as intractable if it can be proven that no solution to the problem exists or can exist. NP-Hard one of the characteristics of problems in this class is that there are no known efficient algorithms that solve the problem, but no proof that efficient solutions cannot exist. When you base security on one of these, you are betting that nobody can solve a problem that has resisted solution by the best mathematical minds for over 50 years. A good bet. Authentication in Distributed Networks What we are discussing is how to authenticate a user in a network of anonymous computers where the network links are not to be trusted. Passwords provide one mechanism for user authentication, but one wants to avoid sending a password in clear text over the network. The Kerberos protocol, developed by MIT, provides an interesting solution to the password problem. There is a ticket-granting server that each user’s password. When a user logs on to the network, the user’s workstation sends the user ID only to the ticket-granting server, which then responds with a ticket encrypted by the user’s password. If the user’s workstation can decrypt the ticket using the password just typed in, the user is OK. Note that the password is not stored on the workstation and never was transmitted on the network. Any serious student of network security should undertake a study of the Kerberos protocol, especially focusing on how the protocol evolved in response to new attacks as they were detected and analyzed. No product can be considered secure if it has not been under continuous attack by a “red team” for some time. Even then it may not be secure. How nasty is your red team and how dedicated are its members to detecting flaws? Kerberos is a complete solution, which means that every part of the network must use the protocol or it cannot be used. However, one can use the above insight on passwords to design a simpler system. In this version, a server would send a one-time password to the user, with this one-time password encrypted with the user’s password. The user could then use the decrypted one-time password for the specific session only. What are the weaknesses of such an approach? One that comes to mind is that the session might be hijacked. There may be other problems with this proposed protocol. The book then discusses routers and firewalls. Routers can be used as a part of a security solution by placing access control lists on the routers. This solution is of limited utility, mostly due to the design goals for routers – to facilitate traffic movement. The more practical approach involves firewalls, either as stand-alone computers or as software packages placed on a personal computer. Here is a general rule that is always true: No computer should be attached to the Internet unless it has a working firewall installed. For a company network, the preferred approach is to have a single computer designated as a firewall for the company’s interior network and all the assets associated with that interior network. A computer designated to be a firewall should be stripped of all software and data not directly related to its function as a firewall, such as editors, programming tools, password files, etc. The only user interaction with the firewall should be to scan its audit logs. The book then discusses intrusion detection systems (IDS’s), which monitor a network to identify activity that is malicious or suspicious. When the IDS operates as a separate device, they often operate in stealth mode, with a network interface card that listens to the network but never places any packets onto that network; the NID has no published network address and cannot be detected by an outside device – hence stealth mode. The chapter closes with a discussion of secure e-mail. The student is reminded never to trust any e-mail, especially from one’s friends as such messages could have been initiated by a virus without the friend’s knowledge. This author’s wife is a frequent computer user who accesses the Internet frequently as a part of her job; hence her vulnerability to attack by viruses is somewhat higher that normal. Imagine her surprise when her friends noted that she was sending out e-mail claiming to have, as attachments, pictures of her naked. This author has received e-mails entitled “I Love You”, but quickly discarded them as they were from people he had never heard of. You guessed it – it was the Love Bug virus. Appendix: Maximum Distance for Line-Of-Sight for a Given Tower The task here is to compute the maximum distance over which two towers can communicate if the towers may communicate only via line-of-sight. This means that the towers cannot communicate if they are not visible to each other. There are many reasons that the towers might not be mutually visible, intervening mountains and large buildings can certainly obscure the line-of-sight between them. Here we consider a theoretical upper limit to the line-of-sight due to the curvature of the earth. We shall assume a perfectly spherical earth and ignore terrain variations, such as mountains and atmospheric effects. It is for that reason that the distance obtained will be an upper limit that is not often realized. Consider a tower transmitting to a receiver that is on the surface of the earth. The maximum distance will be obtained when the beam barely grazes the surface; i.e. is tangent to the great circle drawn through the transmitter and receiver. This situation is illustrated in the figure. As the beam continues to propagate, we are faced with a similar problem – how high must an antenna be to be in the path of the beam as it radiates further and further from the earth’s surface and finally into space. The key to solving this problem is to obtain the distance from the transmitting tower to the point on the great circle at which the beam is tangent to the earth. We do a little geometry here. The first is to recall the definition of angular measurement in radians. If an angle projected from the center of a circle of radius R onto its circumference spans a distance of D, then the angular measure in radians is Q = D / R. Note that if it spans 2·p radians, then the total distance is D = Q·R = 2·p·R; thus 2·p radians = 360 degrees. Two Towers of Height h Communicating by Line-of-Sight Inspection of the figure shows that the distance from the tower of height h to the farthest point that can see the top of the tower is given by D = R·Q, where R is the radius of the earth and Q is determined by cos(Q) = » . Before using the approximation in our derivations, let’s justify it. The radius of the earth is approximately 6.3784·106 meters. Suppose that h/R = 2.0·10–4, corresponding to a tower height of 1.276 kilometers or 4186 feet. Then, we have the following. = 0.999800000000, for an error of 4·10–6 percent. This establishes the value as an acceptable estimate of cos(Q) for our purposes. So we are using the equality cos(Q) = to get a value of the angle Q. To avoid taking the inverse cosine of a number, we resort to another approximation. We use the series expansion for cos(Q), which begins cos(Q) = 1 – Q2/2 + Q4/24 – …. to conclude that for |Q| very small that we can say cos(Q) = 1 – Q2/2. Hence we have Q2/2 = h/R, or Q = , and D = R·Q = R· = . Suppose that h = 1 kilometer = 103 meters, a fairly tall tower. Then 2·h·R = 2·103 meters = 1.27568 1010 (meters)2 D = 1.1295 105 meters, or approximately 113 kilometers. One can make a general formula by noting that = = 3.572·103, so that the distance in meters is given by D = 3.572·103·, where h is given in meters. For the same tower, h = 103 meters and = 31.623, so that D = 1.1295 105 meters, as above. For two towers, each of height 1 kilometer, trying to transmit by line of sight, the maximum separation is approximately twice the above number, or 226 kilometers. Just to be complete, let’s estimate the error in using the partial series 1 – Q2/2 as the value of cos(Q). In the first example, with the monstrous tower of height given by h/R = 2.0·10–4, we would say that cos(Q) = 0.999800039992, without the first approximation used for the reciprocal of . Using the approximation of as the reciprocal of and using the approximation of 1 – Q2/2 for cos(Q), we arrived at Q = , or Q = = 0.02 radians. An exact calculation gives cos(0.02) = 0.9998000066666, for an error of 3.33·10–6 percent. Thus, we conclude that for very small numbers that we can use some of these approximations, and specifically that for any reasonable tower height that the formula derived above for maximum range is sufficiently accurate.
<urn:uuid:eb25fa44-ede3-420b-b096-9598c3967e36>
CC-MAIN-2017-04
http://www.edwardbosworth.com/CPSC6126/Lectures/CPSC6126_Ch07.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00336-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947995
6,113
2.828125
3
Bit Errors & the Internet of ThingsInternet traffic, misdirected to malicious bitsquatted domains, has plagued computer security for years. The consequences will be even worse for the IoT. The digits zero and one are the natural language of computers. Almost anything can be represented inside a computer's memory simply by arranging zeros and ones into the proper sequence. However, because most computer memory consists of nothing more than a microscopic magnetic charge, these binary digits (bits) can also be susceptible to the conditions of their physical environment. Our bits are stored inside increasingly compact devices that function outside in the harsh environment of Planet Earth. Many of our devices are routinely subjected to extremes in temperature, in addition to hazards such as cosmic rays, which strike the Earth's surface as often as 10,000 times per square meter, per second. Under adverse conditions such as these, a one occasionally and inadvertently flips state to become a zero, or vice versa. For us, the common Internet users, bit errors can have a profound effect on our Internet traffic. For example, through the flip of a single bit, the domain name "s.ytimg.com" can become the domain name "snytimg.com. When this happens, Internet traffic originally destined for YouTube is sent to a completely different address. That's because the letter n from this example is only one binary digit different from the dot character. Other letters share a similar relationship. The letter o and the forward slash (/) differ by only one binary digit, as do the letter c and the character #. These characters can also cause mischief in the routing of Internet traffic. There is even a word to describe the registration of these bit error domains: bitsquatting. Misdirecting Internet traffic to malicious bitsquatted domains has serious implications for computer security. However, bit errors can also have terrible, even life threatening, consequences. Consider a 2005 advisory from St. Jude Medical in Mississauga, Ontario, to doctors who surgically implanted one of five models of implantable cardioverter defibrillators (ICDs). These devices use electric shocks to stimulate the heart muscle and help prevent sudden cardiac arrest. According to the advisory, cosmic radiation-induced bit flips affecting ICD memory chips "can trigger a temporary loss of pacing function and permanent loss of defibrillation support." Among the 36,000 installed devices, there were 60 reported cases of the anomaly, the advisory said, resulting in a significant failure rate of 0.17%. Fasten your seat belt In Australia in 2008, Qantas Flight QF72 was carrying more than 300 passengers at cruising altitude when it suddenly nose dived 650 feet. The pilots were able to bring the plane back to its original altitude before it suddenly plunged again, this time falling 400 feet. Some passengers were thrown out of their seats, and some were ejected out of their seatbelts, according to a 313-page report by the Australian Transport Safety Bureau (ATSB). Some passengers were flung so violently that the impact damaged the aircraft cabin ceiling. The ATSB investigation was able to eliminate almost all the potential causes of failure except one -- an airplane computer bit error caused by cosmic radiation. According to the ATSB report, "The CPUmodules for the two affected units did not have error detection and correction (EDAC)." Bit errors were also the focus of attention in a series of highly publicized lawsuits against Toyota Motor Corp. over a flaw in the electronic throttle control system that caused cars to accelerate out of control spontaneously. Last fall, the company settled a lawsuit in Oklahoma City after a jury returned a $3 million verdict in favor of two victims of a crash (one of whom died). An expert witness testified that a single flipped bit in the car's computer memory, perhaps as a result of cosmic radiation, could cause runaway acceleration, and that the working memory in the throttle system did not possess EDAC. Just this week, Toyota reached a $1.2 billion settlement with the US Department of Justice after a criminal probe of the carmaker's safety record related to unintended acceleration. As we connect more and more with so-called smart devices, it's important to be mindful of potential consequences that may not be completely obvious from the start. Gartner predicts that, by the year 2020, there will be more than 26 billion Internet-connected "things" -- not including PCs, tablets, or smartphones. These things will range from smart home climate controllers and door locks to cloud-connected picture frames -- even smart Crock-Pots and toilets. They are all susceptible to bit errors, because the cost of adding error-checking and correcting memory inflates the base cost of an item beyond what consumers are willing to pay. A 2009 study conducted at one of Google's datacenters found the rate of these DRAM errors in the wild to average anywhere from 25,000 to 75,000 FIT (failures in time per billion hours of operation) per Mbit. If there are 26 billion things connected to the Internet, then by 2020, every hour there will be somewhere between 650,000 and 1,950,000 errors per hour per Mbit. A modest installation of only 128 Megabytes of RAM contains 1,024 Megabits. Thus we can expect to see, minimally, anwhere from 665.6 million to 1.996 billion errors per hour across the entire Internet of Things. These errors will undoubtedly affect us all. Let's chat about how in the comments. Cisco's Threat Research Analysis and Communications (TRAC) team is dedicated to advancing the state-of-the-art of threat defense. Jaeson has worked for more than 20 years in information security. Prior to joining Cisco's TRAC team, he held positions at Counterpane, ... View Full Bio
<urn:uuid:638432a2-1bb2-463f-ad72-1c35c9ebe85b>
CC-MAIN-2017-04
http://www.darkreading.com/mobile/bit-errors-and-the-internet-of-things/d/d-id/1127914?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00062-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946289
1,186
3.296875
3
OK, it might not be as simple as the headline implies. There are no home kits or even outpatient procedures for this. And at this point it also might work better if you're a mouse. Still, researchers at Yale School of Medicine recently announced they were able to reverse the aging process in the brains of mice by blocking a particular gene. The results of the experiments were published in the journal Neuron. The scientists hope the breakthrough eventually leads to an ability for adult humans to recover faster and more thoroughly from strokes and other brain injuries. That's because, as Yale explains, adolescent brains -- like adolescent bodies -- are more malleable and flexible than adult brains: The comparative rigidity of the adult brain results in part from the function of a single gene that slows the rapid change in synaptic connections between neurons. ...The Nogo Receptor 1 gene is required to suppress high levels of plasticity in the adolescent brain and create the relatively quiescent levels of plasticity in adulthood. In mice without this gene, juvenile levels of brain plasticity persist throughout adulthood. When researchers blocked the function of this gene in old mice, they reset the old brain to adolescent levels of plasticity. Stephen Strittmatter, a Yale professor of neurology and neurobiology and senior author of the paper, says the ability of researchers to identify and block NR1 "suggests we can turn back the clock in the adult brain and recover from trauma the way kids recover." Indeed, researchers discovered that the adult mice without NR1 were able to bounce back from injuries as fast as adolescent mice. The mice lacking NR1 also were able to learn complex motor tasks more quickly than adults with an active NR1 gene, raising the possibility that manipulating the corresponding gene in humans could accelerate rehabilitation. Now read this:
<urn:uuid:625f3fd8-59b1-4b4e-9e80-3d556cdcec5d>
CC-MAIN-2017-04
http://www.itworld.com/article/2713320/hardware/turn-your-old-brain-young-by-blocking-this-gene-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00392-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948875
360
2.984375
3
To support emergency managers following a disaster, the U.S. Department of Health and Human Services (HHS) created a tool kit of prepared text messages that deliver public health information to citizens’ cell phones. The idea came from an HHS representative who helped create public service announcements for emergencies, said Elleen Kane, a public affairs specialist within the HHS’ Office of the Assistant Secretary for Preparedness and Response. The representative thought text messages would be an easy way to deliver information that people need during a disaster to protect their heath. The department worked with state and local emergency managers from 400 agencies spanning 44 states and Washington, D.C., to determine what topics would be most beneficial. Available on the Centers for Disease Control and Prevention’s website, emergency managers and public health officials can download the tool kit and distribute the text messages using their existing cell phone message distribution system. The messages are limited to 115 characters or fewer including spaces and can be customized by the user agency. Kane said having standardized messages reinforces information from other sources, like public service announcements, and can save officials valuable time during and after an emergency. “The text messages already been developed by people who are experts in the field,” she said. “So they know that it’s good, solid information and is one less thing that they have to worry about at the time.” Currently the text messages focus on hurricanes, floods and earthquakes, but Kane said the tool kit will be built out to include information about other natural disasters as well as biological and nuclear emergencies. An example of one of the prewritten messages is: “Prevent child drownings. Keep kids from playing in or around flood water. More info from CDC 800-232-4636 or http://go.usa.gov/bGa.” Agencies can register with the HHS to get updates to the tool kit by e-mailing their contact information to email@example.com. Kane said the department is also looking for people who are interested in participating in future development of the messages. The tool kit is a collaborative effort of five HHS divisions: the Office of the Assistant Secretary for Preparedness and Response; the Office of the Assistant Secretary for Public Affairs; the Centers for Disease Control and Prevention; the Food and Drug Administration; and the Substance Abuse and Mental Health Services Administration.
<urn:uuid:ea41d417-db25-46d3-937e-cc85ff61edb4>
CC-MAIN-2017-04
http://www.govtech.com/em/health/Text-Message-Alerts-Public-Health-Tool-Kit.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944245
494
3.359375
3
Using a hierarchical structure, treemaps provide meaningfully organized displays of high-volume information. Alert naturalists scan the forests and the trees, taking in the overview and noticing seasonal changes while being on the lookout for fires. They also are watching for insect invasions that could damage trees, and they consider sites for controlled burns to reinvigorate the forest. Alert naturalists know what to look for and are quick to spot surprises. In a similar manner, alert managers do more than recognize expected patterns in sales cycles, movements in supply chains, or yields in manufacturing. Successful managers are skillful at spotting exceptional events, identifying emerging fashions, and noticing what is missing. When problems develop, they take prompt action to keep plans on schedule. When novel trends emerge, they change plans to take advantage of opportunities. Making confident, correct, and bold decisions is a challenge, especially when the volume and pace of activity are high. Experience and intuition remain important, but generating reasonable alternatives based on accurate information is necessary. The 30-year emergence of computerized databases and data warehouses from internal and external sources provides the business intelligence that is needed to spot trends, notice outliers, and identify gaps. Early relational database systems with SQL queries were a great step forward, and then business intelligence tools provided still easier access for some kinds of data exploration; but these point-and-click interfaces still produced tabular results or static graphics. Software that produces visual displays of search results with interactive exploration has only recently become widely available. One family of new tools are the organization-wide and manager-specific information dashboards that help ensure daily or even minute-by-minute situation awareness by presenting current status and alerts. These dashboards employ spatial presentations, color-coded meters, and iconic markers to provide at-a-glance information that indicates all is well or that action is needed. A second family of new tools is the more powerful information visualization and visual analytic software that supports ambitious exploration of mission-critical data resources. Well-designed visual presentations make behavioral patterns, temporal trends, correlations, clusters, gaps, and outliers visible in seconds. Since scanning tabular data is time-consuming and difficult, effective visual presentations are becoming highly valued. Training and experience in using these new tools are important to derive the maximum benefit. Organizations are learning how a few statistical or data analysis professionals can develop displays that hundreds of managers can use effectively. This strategy is supported by commercial software developers who provide powerful studio toolkits for designers to make simplified displays that serve the needs of specific managers. The good news is that appropriate user interface designs can integrate data mining with information visualization so users can make influential discoveries and bold decisions. Treemaps are a space-filling approach to showing hierarchies in which the rectangular screen space is divided into regions, and then each region is divided again for each level in the hierarchy. The original motivation for treemaps was to visualize the contents of hard disks with tens of thousands of files in 5-15 levels of directories. Many treemap implementations have been produced, but you might want to start with the free version called SequoiaView (Figure 1), which lets you browse your hard drive. In Figure 1, the area indicates file size and color shows file type. An early popular application on SmartMoney Magazineís Web site shows 600 stocks organized by industry and by sub-industry in a 3-level hierarchy (Figure 2). The area encodes market capitalization and color shows rising or falling prices. Users become familiar with industry groups and specific stocks so when one group (such as energy stocks) is down, they notice immediately. Treemaps for stocks are especially interesting on days when an industry group is largely falling (shown as red), but one company is rising (green). Figure 2 shows that on a particular day, there is a mostly green communications sector with one bright red problem and an interesting bright green stock in utilities. Treemaps for Sales Monitoring Letís take a look at a simple example of sales force management that is available for your interactive exploration. The basic display shows 200 sales representatives in six sales regions, with size indicating total sales for the fourth quarter (Figure 3). Green regions indicate above quota and red below quota. This example reveals a typical mixed picture with some high- and some low-performing sales representatives. The main good news is from the Northeast and the Mountain West regions where many green regions indicate above quota performance. There is some cautionary news about the Southwest; but even there, one of the salespeople has delivered well above quota. A simple movement of the cursor over any region or group heading generates a pop-up box with detailed information. To get an understanding of the best sales representatives, users can use the filters on the right side control panel. Moving the Total Sales -- Q4 slider to show only high sales figures and moving % of Quota Met -- Q4 slider to limit the display to those above 100%, we see the top ten sales representatives in bright green (Figure 4). There are strong performers who are doing well above quota in all six regions. Turning to the problems, users can use the filters to remove all but those doing much below quota (Figure 5). These sixteen are only in the Mountain West and Southwest, so maybe a discussion with those region managers might help to understand what could be done to improve sales for the next quarter. These are simple cases meant to demonstrate possible analyses. Larger cases with hundreds of products take time to learn but provide managers with unusual powers to analyze their data by region, salesperson, product, and time period. Pharmaceutical companies are doing just that to understand which products are gaining or losing, while insurance companies are analyzing claims to detect patterns of fraud in tens of thousands of claims. Treemaps for Product Catalogs Another consumer-friendly application of treemaps is the Hive Groupís presentation of the daily status of the iTunes 100 most popular songs, grouped by genre (rock, pop, hip-hop, etc.) show in Figure 7. The highest ranked songs are larger, and color-coding shows whether a song has moved up or down in the past day. A final consumer example which has proven successful is Peets Coffee Selector shown in Figure 8. Itís a small treemap, but a survey of their customers revealed strong preferences for the treemap versus the tabular presentation of products. Sliders to filter data items allow users to limit the display to just those items that interest them, maybe the high-performing salespeople or the ones who are not meeting quotas in regions where most salespeople are above quota. Another way of zooming in on sections is to use the entire display to show just some branches of the hierarchy. The treemap algorithm used in many commercial applications is based on the squarified strategy that makes each box as square as possible, usually placing the large squares in the upper left and the small squares in the lower right. This is visually appealing and helpful in understanding the range of size differences. Sometimes it is important to keep the items in order by name or date, in which case the order-preserving treemap algorithms such as slice-and-dice or strip treemaps are helpful. Supportive evidence comes from a recent controlled experiment comparing spreadsheets to the Hive Group software. This study by Oracle found that treemaps were significantly faster for all eight tasks tested. The author concluded: "These results suggest that treemaps should be included as a standard graphical component in enterprise-level data analysis and monitoring applications." Improvements are inevitable as users apply treemaps for ever wider sets of problems. The good news is that new ideas and applications for treemaps are emerging weekly. One that I like especially was the cleverly designed newsmap that shows news stories from around the world in a way that makes prominent stories more visible. I wonder what business or consumer application will be the next one to cause excitement on the Web Ė maybe it will be yours. About Ben Shneiderman Ben Shneiderman is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and Member of the Institute for Advanced Computer Studies and the Institute for Systems Research, all at the University of Maryland at College Park. He has taught previously at the State University of New York and at Indiana University. He was made a Fellow of the ACM in 1997, elected a Fellow of the American Association for the Advancement of Science i n 2001, and received the ACM CHI (Computer Human Interaction) Lifetime Achievement Award in 2001. He was the Co-Chair of the ACM Policy 98 Conference, May 1998 and is the Founding Chair of the ACM Conference on Universal Usability, November 16-17, 2000. Dr. Shneiderman is the author of Software Psychology: Human Factors in Computer and Information Systems (1980). Shneiderman, B., "Using Treemap Visualizations for Decision Support", DSSResources.COM, 06/23/2006. Ben Shneiderman, Stephen Few and Jean M. Schauer provided permission to publish and archive this article at DSSResources.COM on April 11, 2006. A version of the article was originally published on The Business Intelligence Network on April 11, 2006 at www.BeyeNETWORK.com. This article was posted at DSSResources.COM on June 22, 2006.
<urn:uuid:5d83964d-d369-46b0-8420-e562579ac41f>
CC-MAIN-2017-04
http://dssresources.com/papers/features/shneiderman/shneiderman06232006.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926375
1,940
2.578125
3
Gillware deals with a lot of failed RAID systems on the side of data recovery. Partially because problems can arise with so many hard drives running, but more likely because their array was not handled correctly. RAID can be extremely complex, so it shouldn’t be surprising. In this post, we’ll explain some of the standard RAID levels so people can understand the purpose of RAID a bit better. First and foremost, RAID stands for Redundant Array of Independent (or Inexpensive) Disks. RAID is a way of utilizing multiple hard drives to improve performance or provide redundancy/fault tolerance, depending on which configuration you’re using. There are both single and double (and rarely triple) digit levels, with two digit configurations such as RAID 10 signifying a nested level that would more appropriately be referred to as RAID 1+0. To set up a Redundant Array of Independent Disks, a RAID controller is required. This can be either a piece of hardware or software that ensures your hard drives work together logically in a RAID setup. Without it, you just have multiple singular, separate hard drives. RAID 0 is purely for performance, requires a minimum of two drives, and uses disk striping to improve read/write speeds. It’s occasionally used by businesses but is more likely to be used by home PC users. The way it works is by essentially splitting your data up and saving half to one drive and half to the other. Because you now have two drives to perform the same task that was being performed by one before, it will be done much more quickly. You can also use more than two drives to even further improve read/write performance, though there are diminishing returns. Additionally, RAID 0 uses block-level striping, as opposed to either bit or byte-level (RAID 2 and 3 respectively). While RAID 0 improves performance, it doesn’t offer any redundancy or fault tolerance, so if one drive fails, all your data is gone. Therefore, it is absolutely imperative that you consistently back up your data if you’re running RAID 0. RAID 1 is simple disk mirroring and requires a minimum of two drives. When data is saved to one drive, it is immediately copied to the other drive. If one drive fails, the other one still has all your data. If both fail, well the outcome should be obvious. While it provides nice redundancy, one problem with RAID 1 is that because it’s saving everything to both drives, your write performance suffers a bit. Another drawback is that your effective storage is halved. For example, if you have two 1TB drives, equaling two Terabytes of total storage, you only have 1TB of effective storage in RAID 1 since both drives are saving all your data. RAID 5 is most often used by businesses and requires a minimum of three drives. First, the RAID-controller stripes the data onto the first two drives (acting the same as RAID 0). It then takes the data from drives 1 and 2 and creates parity bits that it then puts in drive 3. Using XOR (exclusive-or) logic, the parity bits make it possible to lose one drive and still have all of your data. Even though you should theoretically lose your data if you lose drives 1 or 2 (like in RAID 0), the RAID-controller is able to use parity bits along with the bits in the remaining drive to reconstruct the bits of the failed drive. Here’s the basic pattern for creating the parity bits: |Drive 1 bit||Drive 2 bit||Parity Drive bit| Using this table, the RAID-controller is able to figure out what the missing bit should be and then logically reconstruct the failed drive. RAID 6 can be thought of as a tougher version of RAID 5 because there are two parity drives rather than just one. Up to two drives can fail before data becomes unrecoverable. RAID 10 is one of the potential nested configurations we can discuss. Also referred to as RAID 1+0, this RAID configuration combines RAID 1 and RAID 0. Using the mirroring from 1, you get redundancy/fault tolerance. Using the striping from 0, you get better performance. Logically, it requires a minimum of four drives to function, since you need a minimum of two drives to stripe in addition to two drives to mirror both of those striping drives. This makes it a rather expensive RAID level, but can be extremely useful. To explain in more detail, it is a stripe of mirrors. You have two drives on either side, to make a total of four drives. Two drives on one side mirror each other, and the two on the other side mirror each other. The first thing to happen is the striping, or the RAID 0 function, which saves to one of the drives on either side. After that, the mirror drives on either side receive a copy of their half of the data. Up to one drive on both sides may fail without data loss. If both drives on one side fail, you have a catastrophic failure similar to that of RAID 0 where all your data is lost. Other Possible Levels There are many other potential RAID levels, however they’re all essentially varied combinations of the functions these RAID levels use. Now that you know the basics of what RAID setups can do, you should be able to understand the rest if you decide to learn about the other levels. Other potential RAID levels include RAID 2, 3, 4, 01 (different from 10 in how it works), 03, 50, 100, and so on. Remember, RAID can be extremely useful for your home or business, but you should understand what you’re doing before you decide to create an array. If you go in without a proper understanding, you might set it up improperly or put yourself in a situation that could lead to data loss. If you do your research, you’ll be fine!
<urn:uuid:0f41b169-a085-458f-97fc-4474cf9ba738>
CC-MAIN-2017-04
https://www.gillware.com/blog/data-recovery/standard-raid-levels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93994
1,223
3.046875
3
There's good news, and not-so-good news. The good news is the number of people accessing the network over broadband continues to increase. The not-so-good news is that the term "broadband" is so broad that it's difficult to tell how good the good news really is. The Pew Research Center's Internet and American Life Project conducted a survey of adults in the United States to determine what percentage have made the transition from archaic dial-up Internet access to more modern broadband connections. The results are that broadband access has climbed to 70 percent, while dial-up remains steady at three percent. The problem is the things considered to be "broadband" cover a wide range of connection speeds. The actual survey questionA used by Pew was, "At home, do you connect to the Internet through a dial-up telephone line, or do you have some other type of connection, such as a DSL-enabled phone line, a cable TV modem, a wireless connection, or a fiber optic connection such as FIOS?" Whether a home relies on a 3G wireless connection, a DSL connection, a cable modem connection, or happens to be lucky enough to live in an area served by Google Fiber, all these technologies are considered "broadband." However, Google Fiber is thousands of times faster than some 3G wireless connections, so it's a little silly to lump them together at all--never mind suggesting they're all "high-speed broadband." It's great to see the percentage using broadband continue to inch up, but it's also a bit misleading to call everything that isn't dial-up "high-speed broadband." The National Broadband Plan put out by the FCC in 2010 set a bar that every household should have 4Mbps Internet access by 2020. Contrast that with South Korea, which established a goal to connect every home in the country with gigabit fiber connections by 2012. Clearly, the U.S. definition of "high-speed" is different than many other developed nations. Sadly, though, as pathetic as 4Mbps is by global standards it's still exponentially better than much of what we currently classify as "high-speed broadband." The difference is staggering, and it has a significant impact on other technologies, and whether or not businesses or consumers can take advantage of them. Consider the fact that it takes two and a half days to download a 5GB file over a 3G "high-speed broadband" connection, but less than a minute to download the same file over a gigabit fiber "high-speed broadband" connection. Would you rather be the business that can download and review a 5GB file over a cup of coffee, or the business that has to plan days in advance to download the file? Which business do you think has the strategic advantage? We've reached a point where dial-up should no longer be considered part of the debate. It's dead. Move on. As long as we focus on "broadband vs. dial-up" and pat ourselves on the back for the increased use of "broadband", we're missing the bigger picture--and bigger problem--that the term "broadband" is too broad, and that we need to raise the bar of what's considered adequate. This story, "Definition of 'Broadband' is Too Broad" was originally published by PCWorld.
<urn:uuid:bf1fa741-52c3-4694-844d-e0d972a84adb>
CC-MAIN-2017-04
http://www.cio.com/article/2382706/networking/definition-of--broadband--is-too-broad.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00053-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964475
686
2.625
3
Google to bring NOAA data to new heights - By William Jackson - Jan 26, 2010 Have you ever wanted to go to a Web site and get detailed information about weather patterns, ocean temperatures or rainfall averages? The National Oceanic and Atmospheric Administration is one of the federal agencies most responsible for getting scientific information out to the public, and it has chosen Google as a partner to collaborate on several research and development projects to make the data more accessible online. The cooperative R&D agreement will enable the two organizations to collaborate for their mutual benefit and for the benefit of the public without an exchange of money. Google gets access to vast amounts of data and Earth sciences expertise, and NOAA gets access to software and visualization acumen. “We have such vast amounts of information about the Earth’s systems, and we’re looking for better ways to get that information to an ever-increasing audience,” said Richard Spinrad, NOAA assistant administrator for research. “Google is arguably the world’s expert in handling data, particularly in large volumes.” The agreement is not obligatory and does not bind the parties to specific projects. But it gives each party broad access to the resources of the other that are more difficult to provide on case-by-case basis, and the chance to cooperate in areas of mutual interest. “Some may pan out and some may not,” Spinrad said. “But they are opportunities.” One of the opportunities is visualizing data. NOAA is no stranger to visualization. Its Science on a Sphere technology, which it licenses to museums, science centers, research facilities and schools around the world, uses a computer-driven multi-projector system to realistically display scientific data about the Earth on a spherical surface. Spinrad would like to expand that capability to new datasets. “Right now on Google Earth you can pull up data on a flat screen and see some really interesting things,” he said. “Imagine doing that on a sphere. That’s powerful stuff.” This would be done by rendering files in the Keyhole Markup Language used by Google Earth for display in the Science on a Sphere system. This would require translating files from two-coordinate dimensions to four coordinates (the three spatial dimensions plus time). Doing this would give Google a new way to display its data and would give NOAA new datasets to use with Science on a Sphere. A shortcoming in Google Earth is the difficulty of showing real-time data. NOAA would like to make this more efficient so that video and data from its new Okeanos Explorer exploration ship, now undergoing sea trials in the Pacific, could be displayed in Google. The ship has the latest communications technology for live, near-real-time audio, video and data transmission via satellite and Internet2 to five on-shore Exploration Command Centers that will give scientists on shore an opportunity to participate in the ship’s mission as they are needed. “We want to be able to use Google Earth the vehicle to let school kids all around the world see what’s going on on the floor of the Indian Ocean,” Spinrad said. Doing that would require plenty of bandwidth and sophisticated handling of large data flows. “You can’t just have streaming video, you have to have some metadata and context.” This is not the first collaboration between Google and NOAA. After Hurricane Katrina, digital aerial images of the disaster area from the National Geodetic Survey were made available within 48 hours on Google Earth, giving individuals quick access to detailed information of damage in specific areas. “It was a wonderful mix of the NOAA mission and the ability to reach out to the public,” Spinrad said. Spinrad was part of an advisory council that worked with Google to produce Google Ocean, launched last year to make geospatial data in the form of NOAA images, video and current data available in Google Earth. The current agreement is a way of extending these efforts, said Alan Leonardi, NOAA’s principal investigator for the agreement. He spent a five-month fellowship at Google in 2008 and 2009. During that time, he operated as what he called the “NOAA-Google dating service,” bringing together experts from each organization to work on projects. The agreement continues and expands that relationship in a more formal manner. Other possible areas of cooperation are compiling and improving bathymetric datasets for display and downloading, making data from the Integrated Ocean Observing System and Greenhouse Gas Monitoring System available online, and providing interactive access to marine zoning and regulatory information. “The public is paying for it,” Spinrad said, so it should be available to the public. William Jackson is a Maryland-based freelance writer.
<urn:uuid:de8eab76-e40a-463f-aebe-b80ea9b04439>
CC-MAIN-2017-04
https://gcn.com/articles/2010/01/26/noaa-google-coop-012610.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00265-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924761
989
2.640625
3
How have NASA's Mars robots lasted 24 times longer than expected (so far)? With a life expectancy of three months, the Spirit and Opportunity robotic vehicles are still in service six years later - By Alice Lipowicz - Jun 30, 2010 NASA’s Spirit and Opportunity Mars Rovers are the Energizer Bunnies of outer space; both are still operational after six years, after only being expected to run for 90 days. Even though the robotic vehicles have shown exceptional longevity in the field, their capabilities would be quickly outpaced by humans on Mars, Steve Squyres, principal investigator of NASA’s Mars Exploration Rover Mission, said at a seminar today sponsored by Federal Computer Week. “I am a robots guy, but what the Mars Rovers have done in six years a human could do in a week,” Squyres said. When asked whether he would he volunteer for the job, Squyres answered, “In a heartbeat.” According to Squyres, the Rovers owe their longevity to cautious testing and engineering while in development. “We used no new technologies, only proven technologies,” he said. “And we were very, very cautious in our parts selection, assembly and testing.” NASA, Google provide 3-D views of Mars NASA images reveal red planet However, the Rovers’ software is a different story. Some of the programs for moving and operating the Rovers was developed during the five months while the vehicles were on the way to Mars, he said. NASA also benefited from an unanticipated stroke of luck because winds have been regularly blowing dust and debris from the Rovers' solar panels, prolonging their usefulness, Squyres added. NASA has spent more than $900 million on the missions of the Rovers. One of the most heralded discoveries so far is that Mars once had abundant water. However, no evidence has been found yet of biological life there, which most likely would have been microbes, Squyres said. Currently, Opportunity is moving over sand dunes to the 14-mile-wide Endeavour Crater. The Rovers already have explored a number of craters and rock formations, discovering pebble-like hematite and a deposit of pure silica. Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
<urn:uuid:ed37a127-21c9-4213-ae98-054ebc5a1e2b>
CC-MAIN-2017-04
https://fcw.com/articles/2010/06/30/mars-rovers-owe-longevity-to-proven-technologies-nasa-leader-says.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00383-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959903
502
2.953125
3
The ubiquitous antenna was all the buzz last week as Apple tried to squelch the latest glitch in its popular iPhone. But those antenna issues have nothing on the renovations NASA is taking on to reinvigorate its 70-meter-wide (230-foot-wide) "Mars antenna." The antenna, a key cog in NASA's Deep Space Network, needs about $1.25M worth of what NASA calls major, delicate surgery. The revamp calls for lifting the antenna -- about 4 million kilograms (9 million pounds) of finely tuned scientific instruments - to a height of about 5 millimeters (0.2 inches) so workers can replace the steel runner, walls and supporting grout. This is the first time the runner has been replaced on the Mars antenna, NASA said. The operation on the historic 70-meter-wide (230-foot) antenna, which beamed data and watched missions to deep space for over 40 years, will replace a portion of what's known as the hydrostatic bearing assembly. This assembly enables the antenna to rotate horizontally, NASA stated. According to NASA, the bearing assembly puts the weight of the antenna on three pads, which glide on a film of oil around a large steel ring. The ring measures about 24 meters (79 feet) in diameter and must be flat to work efficiently. After 44 years of near-constant use, the Mars antenna needed a kind of joint replacement, since the bearing assembly had become uneven, NASA stated. A flat, stable surface is critical for the Mars antenna to rotate slowly as it tracks spacecraft, NASA said. Three steel pads support the weight of the antenna rotating structure, dish and other communications equipment above the circular steel runner. A film of oil about the thickness of a sheet of paper -- about 0.25 millimeters (0.010 inches) -- is produced by a hydraulic system to float the three pads, NASA stated. The repair will be done slowly but is expected to be done by early November. During that time, workers will also be replacing the elevation bearings, which let the antenna track up and down from the horizon. Meanwhile the network will still be able to provide full coverage for deep space missions by using the two other 70-meter antennas at Deep Space complexes near Madrid, Spain, and Canberra, Australia, and arraying several smaller 34-meter (110-foot) antennas together, NASA stated. While officially known as Deep Space Station 14, the antenna got its Mars moniker from its first mission: tracking NASA's Mariner 4 spacecraft, which had been lost by smaller antennas after its historic flyby of Mars, the space agency stated. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:3fd71e48-a8ce-4af7-a3a7-7e016414593e>
CC-MAIN-2017-04
http://www.networkworld.com/article/2231356/security/no-iphone-bumpers-here--nasa-revamps-historic-9-million-lb-mars-antenna.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939648
562
2.703125
3
Use of Non-Potable Water in the Facility’s Cooling Infrastructure Supports Environmental and Conservation Efforts in California’s Drought-Stricken Region SAN JOSE, CALIFORNIA – SEPTEMBER 16, 2015 – Infomart Data Centers, a national wholesale data center provider, today announces it has completed construction on infrastructure necessary to utilize recycled water for cooling and landscape irrigation at its Silicon Valley data center. The Company’s move to use non-potable, or “gray water,” for 100 percent of the mechanical infrastructure and irrigation surrounding the facility reduces costs and lowers its environmental impact. Data centers have come under fire in recent years for the significant amount of water required to cool high-density server environments. According to one estimate, a 15 MW data center can use up to 360,000 gallons of water per day and drought conditions, such as in California, can exacerbate the delicate balance between a data center facility’s water needs and limited resources. Infomart Silicon Valley, the first multi-tenant data center to achieve LEED Gold certification in California, is again leading the industry in sustainability efforts with the conversion to recycled water. Because potable water is an increasingly endangered resource around the planet, Infomart made the investment to convert to gray water usage, which helps preserve regional potable water and aids the public utility by easing pressure on strained supplies. “While water is necessary to keep our mission-critical data centers and server environments cool, the levels consumed by these facilities also place a great strain on water resources – especially in the drought-plagued Western U.S.,” says John Sheputis, President of Infomart Data Centers. “Infomart’s significant investment in the Silicon Valley recycled water conversion demonstrates our commitment to building energy-efficient, sustainable data centers.” The Infomart Silicon Valley recycled water system uses state-of-the-art water quality monitoring that provides advanced warning for operational issues caused by elevated water hardness, alkalinity or Total Suspended Solid (TSS) levels, while ensuring that no tainted water penetrates into the data center’s mechanical infrastructure, where it can cause corrosion. Based on modeling exercises, with assumptions of 60% load and 60% gray water utilization, Infomart expects to save 800,000 to 1 million gallons of potable water per month. “Our goal is to reach 100% gray water utilization, and we are working with the utility on these plans,” adds Sheputis. To learn more about Infomart Data Centers and Infomart Silicon Valley, visit http://infomartdatacenters.com/locations/silicon-valley/. # # # About Infomart Data Centers Founded in 2006, Infomart Data Centers (formerly Fortune Data Centers) is an award-winning industry leader in building, owning and operating highly efficient, cost-effective wholesale data centers. Each of its national facilities meet or exceed the highest industry standards for data centers in all operational categories of availability, security, connectivity and physical resilience. Infomart Data Centers offers wholesale and colocation facilities in four markets throughout the United States: San Jose, Calif.; Hillsboro, Ore.; Dallas, Texas; and Ashburn, Va. For more information, please visit www.infomartdatacenters.com or connect with Infomart on Twitter and LinkedIn. iMiller Public Relations for Infomart Data Centers
<urn:uuid:cbb80164-f3b0-4b05-89b9-2595c0d0cfbc>
CC-MAIN-2017-04
http://infomartdatacenters.com/press_item/infomart-data-centers-silicon-valley-recycled-water-conversion-saves-millions-of-gallons-of-potable-water-per-year/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884178
724
2.796875
3
by Geno Zaharie, National Partner Manager, LifeSize Visual imagery in sequence is made possible by the passing of time. Our eyes produce fantastic motion screenshots, but the human visual system does not see in terms of frames; it works with a continuous flow of light information. How we perceive this light grounds our unique independent reality from a visual perspective. Video and I have been fairly close for some time now. About 18 years ago, I put some videotape footage into a computer and made a movie. It was a movie about a squirrel that my dog followed which then ran up a telephone pole and made an escape over the power lines. My movie was also about my neighbor pulling weeds. He kept smiling and nodding while pulling endless dandelions from small patches of grass that he called a lawn. The movie ended with a close up shot of my garage door window that had a crack in it. The movie described above was about absolutely nothing. It was a movie produced solely for the purpose of my own education – how to make a movie with a computer. I showed the production to my family who were amazed that the videotape was magically inside the computer. My intent for the video was realized. It worked. Today, what may appear as purposeless video is everywhere. When Apple put a video camera in the iPhone 3GS, single click mobile uploads to YouTube increased by 400%. The YouTube video vault was now “accessible” to the masses through a device in everyone’s pocket. Every minute, 24 hours of new video footage is uploaded to YouTube. The majority of footage is about nothing. However, to someone, it’s about something or people wouldn’t be watching three billion videos daily. A study by two really smart people at the University of Illinois on The Effect of Context-Based Video Instruction on Learning and Motivation in Online Courses found some truths in using video for learning. The study determined that there was a significant difference in learners’ motivation in terms of attention between the video-based instruction and traditional text-based instruction. In addition, the learners responded that the video-based instruction was more memorable than the traditional text-based instruction in the online context-based learning situation. Motivation in terms of attention is the key point. We can assume a motivated person is more likely to learn something compared to an unmotivated person. If video increases motivation toward attention and is more memorable than text, imagine the results if your video held some information that someone actually needed. Even if they didn’t know they needed the information, the association qualities of video to recall what they saw and heard is a good thing. With endless social media distribution vehicles that enable micro-marketing to narrow band audiences, one has to really understand the social connection aspect of this new medium. Recently, LifeSize introduced a product called LifeSize® Video Center. The system can record/stream interactive HD video conferencing sessions, broadcast a single user to his/her target audience, receive uploads of produced video material, and at the same time serve as the all-in-one asset management engine with rules and permissions of who can watch. It’s a super mega visual information aggregator/distributor/communicator for spontaneous or planned video/audio/text content. Woah. Best of all, it just magically lives in our business/educational communities and becomes part of our communication fabric. The simplicity of one-click recording, uploading and playback make it as seamless to use as your smartphone. So, if people watch video about nothing and have a higher probability of remembering what that nothing was about, what if your video was about something important intended for the organization, team or individual? Since technology such as Video Center and YouTube remove accessibility barriers from an input-output perspective, let those videos pile up! Like your TV with over 700 channel choices and tethered DVR that records specific special interest content within those channels, load up your vault of visual information. I believe that all video about nothing or something will surely find its way to a motivated viewer – and you don’t have to be a heart surgeon to figure that out:
<urn:uuid:5f652ae8-27f7-434f-b289-b03f84cb83ca>
CC-MAIN-2017-04
http://www.lifesize.com/video-conferencing-blog/something-about-nothing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95718
855
2.609375
3
NAND Flash, the nonvolatile memory used in smartphones, SSDs (solid-state drives), and many other devices, keeps dropping in price. But the physics of silicon lithography indicates we can’t keep shrinking its size. “These warnings have been out a long time,” says Jim Handy, analyst at market research firm Objective Analysis. “Intel warned they couldn’t make flash below 60 nm [nanometers], but now they’re shipping chips made at 16 nm.” Companies are searching for new ways to make nonvolatile memory, and Santa Clara, Calif.-based start-up Crossbar Inc. believes it has the best option with its RRAM (resistive RAM). “RRAM is higher density, has more endurance and longevity, and is 20 times faster than NAND flash while using a fraction of the electricity,” says Hagop Nazarian, vice president of engineering at Crossbar. “RRAM works in 3D, so you can stack chips to increase memory from 16GB to 32GB with another layer, and 64GB with another two.” Crossbar has silicon wafers in testing now, and expects to ship product in two to three years. Crossbar’s RRAM uses a pair of electrodes separated by their proprietary amorphous silicon switching media that moves silver ions into a filament that dramatically lowers resistance. Reversing the current moves the silver ions in the other direction, breaking the filament and raising resistance. Handy counts numerous Crossbar competitors also trying to create the next flash memory breakthrough. “HP said they’d have their memristor [technology] shipping by now, but it’s not. RRAM, MRAM [magnetoresistive RAM], ReRAM, and PCRAM [phase-change memory] are all significantly more expensive than NAND flash, and cost is just about everything in memory.” Nazarian believes Crossbar’s RRAM has the inside track. “We have filed over 100 patents and 30 have been issued. Our technology is CMOS compatible with multiple layers, using techniques chip foundries already use, so we will be able to add embedded memory onto microcontrollers. We’ll serve all markets for memory, from the smallest device in the Internet of Things to the largest servers.” “NAND flash will be around for another decade,” says Handy. “There will be a couple more generations of current technology, then three generations or more of 3D NAND flash. But someday NAND flash prices will level out because of the technology difficulties, and these other memory options will drop enough to match their price.” When the switch does come, resellers will have few changes to operations to integrate the new memory, says Handy. Controller makers may be able to scale back complexity of the controller chips. “You may update your 2103 iPhone with 1TB of NAND flash to a 4TB model with a new memory type.”
<urn:uuid:1aa98252-35ab-42b1-80ca-0f3d10b7d3f8>
CC-MAIN-2017-04
http://www.channelpronetwork.com/article/rram-takes-flash
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00035-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920596
633
2.578125
3
The United States Food and Drug Administration (FDA) has issued a notice today to warn that a lot of pacemakers are vulnerable to hackers, the attackers can get full control of the devices. The FDA specially said that they are aware of what the [email protected] transmitters manufactured by St. Jude Medical can be hijacked by other hackers, once hacked, attackers can send various commands and can even develop shocks that can kill the patents. These transmitters use a wireless Radio Frequency signals that connect to home monitors and doctors’ systems. They transmit data regarding the cardiac activity and upload the information to the Merlin.net Patient Care Network, where the information is can be closely inspected by physicians. This is where hackers come in. They can intercept the signal and control the pacemakers. The FDA warns, that there’s a chance that this could put patients’ lives at stake. “The FDA has reviewed information concerning potential cybersecurity vulnerabilities associated with St. Jude Medical’s [email protected] Transmitter and has confirmed that these vulnerabilities, if exploited, could allow an unauthorized user, i.e., someone other than the patient’s physician, to remotely access a patient’s RF-enabled implanted cardiac device by altering the [email protected] Transmitter,” the FDA says in the notice. “The altered [email protected] Transmitter could then be used to modify programming commands to the implanted device, which could result in rapid battery depletion and/or administration of inappropriate pacing or shocks.” No attacks have been recorded so far, but the FDA says that St. Jude Medical has already developed a software patch, and all pacemakers need to be running it to be fully protected against the vulnerability. Available since January 9, the patch is automatically applied once the transmitter is plugged and connected to the Merlin.net network.
<urn:uuid:0cd14cba-2be8-4313-892d-82534bf83f22>
CC-MAIN-2017-04
https://latesthackingnews.com/2017/01/11/hackers-can-stop-pacemakers-and-kill-patients-warns-us-government/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935876
384
2.640625
3
FTP Means ‘First Try Pinging’ April 4, 2007 Cletus the Codeslinger File Transfer Protocol, or FTP, was obviously designed by academics. While academics are OK people (after all, my editor, Ted Holt, is a part-time instructor at a community college), they are not familiar with what goes on at the factory (like the one where I work full-time). That means that making FTP (and other Unix-type applications) work dependably in an automated environment can be a challenge. Here’s a tip that can help. FTP was intended to work this way: a human types a command into a computer. The computer responds. The human types another command. The computer responds. Etc. Etc. And so forth. Ad nauseum. That’s all well and good, but who wants to crawl out of bed at 3 a.m. every day to send a file to somebody? Unix has a “solution” to this problem–scripting. Put the FTP commands into a text file and tell the computer to read them and run them. While you’re at it, tell the computer to store the responses from the remote system in a text file. And whatever you do, don’t let the script read those responses, determine if an FTP command succeeded or failed, and continue accordingly, as a human would. It would help if the wienies who design this junk would add some useful features to FTP, such as the ability to check a return code and to make decisions accordingly. But I’m not holding my breath. Academicians are too busy publishing (to keep from perishing) and applying for grants. Seeing as we’re saddled with crippleware, let’s do the best we can do. One of the most common reasons FTP fails is that the remote server is down. Use the Verify TCP/IP Connection (PING or VFYTCPCNN) command to determine whether the server is up or not. This is really easy on my robust System i computer, because some practical someone at IBM thoughtfully provided a way for PING to send an escape message. It’s in the second positional value of the MSGMODE parameter. (This is not your standard ping.) In the following code example, the PING command tests to see if the server is up. If the PING fails, the system sends escape message TPC3210. dcl &Server *char 50 dcl &ServerIsUp *lgl chgvar &ServerIsUp '1' ping rmtsys(&Server) msgmode(*quiet *escape) monmsg tcp0000 exec(chgvar &ServerIsUp '0') if (*not &ServerIsUp) do /* whatever */ return enddo clrpfm ftplog mbr(SomeMbr) ovrdbf file(input) tofile(ftpscripts) mbr(SomeMbr) ovrdbf file(output) tofile(ftplog) mbr(SomeMbr) ftp rmtsys(&Server) dltovr *all monmsg cpf0000 I look forward to the day when everybody uses Unix. Instead of having just a few IT people at the factory, we’ll need an army, and that means we’ll all have jobs. Yes, Unix is truly the full-employment operating system.
<urn:uuid:040e2a5e-7c25-44c5-aed3-ca83fbad9a6d>
CC-MAIN-2017-04
https://www.itjungle.com/2007/04/04/fhg040407-story01/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00273-ip-10-171-10-70.ec2.internal.warc.gz
en
0.890871
720
2.515625
3
Since the dawn of the computer age, computing has been all about the interaction between man and the machine. In the beginning communication with a machine was very limited relying on complex mechanisms such as the punch card but as time went by and technology improved the interaction between humans and machines increased… drastically. Fast forwarding to the present day, computers are now extremely complex machines that can perform an impressive amount of calculations per second and we certainly do not let such power go to waste. Such power and complexity however is not without a price. A modern system consists of a lot of different software running simultaneously on a wide variety of hardware. When this mix of software and hardware works harmoniously we humans can get a lot of work done but if that balance is upset it can cost us a lot of time and money. Unfortunately there is a lot that can upset this all important harmony from hardware failures to bugs to hacking attacks both internal and external. All is not doom and gloom however because just as our interaction with computers has increased so did their interaction back to us and if we listen, computers will tell us when something has gone wrong. With a lot of different systems and complexity one can expect a lot of communication going on here and this is in fact the case. Each different system however mitigates this by centralizing this communication as much as possible. In the Windows environment this communication (or better yet logs) is generally centralized in the Windows Event System; on Linux/Unix Operating systems we find logs centralized in the SysLog System and we get Devices communicating to us using SNMP. That’s the general rule for in fact we find devices that use the Syslog System for logging and even application on both Windows and Linux that use SNMP. Now that we know where to look, what can we actually do with the data? A general misconception one encounters is that logs are only useful if you are doing forensic analysis. While this is obviously one possibility, logs can provide us with details on much more! Other useful information that one can find in logs includes: - System Health - when hardware such as Hard drives start to fail one can generally find reports in the logging system about this occurrence - Machine Performance - when system runs out of memory or applications crash there will be log entries regarding this - Monitoring Servers - All servers be it Mail, Web or Firewalls will log about their own activities and inform the administrator of any failures, lack of system resources or suspicious behavior they encounter - User Activities - Logs can also provide a picture on how a user is using a system as actions such as reboots, login operations and various system interactions will be logged - System Behavior - The system will log its own action, from the logs you can find out which services were loaded and when, what devices connected, what services came online or went offline and other such information - System Failure - While sometimes application failure is quite visual popping up error messages and such to inform the user of the failure, at other times applications, especially servers, might fail silently with the only proof of such failures residing exclusively in the log - A crucial part of compliance is to ensure that monitoring mechanisms are running effectively and are untampered. Such monitoring can only occur at a very low level stage that can generally only be achieved through the operating system logging itself. - Forensic Analysis - Logs are the central source on which to conduct a forensic analysis. Logs will help the administrator discover what events took place and when. In the second part of this blog post we will be seeing how one accesses these logs using Windows Events, Syslog and SNMP.
<urn:uuid:cf34894c-ab24-43f4-a0b3-419da6fdc783>
CC-MAIN-2017-04
https://techtalk.gfi.com/event-monitoring-overview-part-1-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953306
745
3.109375
3
Technology in Action Sandy shows storm-prediction progress - By Frank Konkel - Nov 05, 2012 Hurricane Sandy bears down on the mid-Atlantic on Oct. 29th in this NASA satellite image. By any account, the super-storm known as Hurricane Sandy was devastating, claiming 113 American lives and potentially an estimated $50 billion in economic damage as it wreaked havoc on the east coast on the way to becoming the second-costliest U.S. storm behind only Hurricane Katrina. But the damage could have been much worse – in loss of life and economic costs – were it not for the astoundingly accurate forecasts of Sandy’s path from the National Hurricane Center. Some of the Center’s model runs pegged the hurricane’s landfall five days out within 30 miles of the actual landfall point. That is 10 times better than the public and emergency personnel bracing for the storm could have hoped for 20 years ago, when NHC’s historic average for 72-hour (three-day) hurricane forecasts were only accurate to about 345 miles of landfall. Had that still been the case here, the best forecast would have predicted landfall anywhere from northeastern Connecticut to northern South Carolina. In fact, if a storm like Sandy happened more than two decades ago, there’s a good chance meteorologists wouldn’t have predicted landfall at all. Instead they would have used historical data to conclude such a system would take a right turn from a high-pressure system ridge into the eastern Atlantic ocean and away from the coastline, according to Dr. Sundararaman Gopalakrishnan, a senior meteorologist at the National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory in Miami. “If Hurricane Sandy happened 20 years back, it would almost certainly have been a disaster without much warning,” said Gopalakrishnan, who witnessed predictive computer models as they shifted Sandy’s trajectory west toward New Jersey as more and more data came in. He recalls thinking “Oh no. This is not good,” he said. “I’m feeling sad about the deaths caused by this hurricane, but I think if it was not for these kinds of forecasts, there would be much, much more,” Gopalakrishnan said. “I can clearly see that from forecasting this storm. This was a unique situation as Sandy was captured in hybrid circulation from the land. I wish it would have turned to the right (east), but it is better to know than not know.” The technology behind the forecasts When NOAA – a scientific agency housed within the Department of Commerce that consists of six agencies, including the National Weather Service – was formed in 1970, meteorologists had satellites and past historical data to help predict the weather, but they didn’t have supercomputers like the IBM Bluefire, which is housed in the bottom levels of the National Center for Atmospheric Research in Boulder, Colo. It has a peak speed of 76 teraflops, or about 76 trillion floating-point operations per second. The accuracy of predictions for the storm beat anything that could have done as recently as 20 years ago. (Image: NOAA) Massive amounts of data, however, aren’t worth much without modeling, and the NHC uses more than 30 different models to predict storm intensity. In 2007, the NHC began utilizing the Hurricane Weather Research and Forecasting (HWRF) model, which analyzes big data collected from satellites, weather buoys and airborne observations from Gulfstream-IV and P-3 jets and churns out high-resolution computer-modeled forecasts every six hours. Over the past year, Gopalakrishnan said hurricane forecasts have further improved through the use of higher-resolution computer models. Points spaced on weather maps have increased in resolution from 9 kilometers to 3 kilometers, making HWRF the “highest resolution hurricane model ever implemented for operations in the National Weather Services” according to Richard Pasch, senior hurricane specialist at the NHC. Forecasting a hurricane takes big data, high-resolution models that incorporate large-scale physics and an accurate representation of initial conditions, and NWRF’s simulations have been successful thus far, Gopalakrishnan said, helping to improve hurricane forecasts by up to 20 percent. Beginning five days before Sandy’s landfall, the NHC’s NWRF model ran 23 simulations of the hurricane’s expected path – one every six hours, and each simulation taking about 85 minutes to complete. Simulations and other such data are shared immediately with other federal agencies including the Federal Emergency Management Agency, according to Erica Rule, spokesperson for the NOAA. “When there is an active storm threatening landfall, there is extremely close coordination,” Rule said. Not everything about Sandy was predicted as accurately as its landfall. Critics have questioned why predicted wind speeds in Washington, D.C. were higher than what was experienced, and the storm’s predicated landfall missed by a few hours. Yet these errors in NHC forecasting seem small compared to yesteryear’s predictions. Today, the NHC’s average miss in hurricane landfall predictions is a scant 100 miles. In 1970, NHC meteorologists missed by an average of 518 miles. Major improvements have been recognized in precipitation estimates and wind speed forecasts as well, which can prove vital in predicting storm surges and imminent flooding. The increased predictive power decreases what scientists call a hurricane’s “cone of uncertainty,” or the area that includes all the possible paths a hurricane might go. The better the predicted path of a hurricane, the less uncertainty of where, when and who it might strike, and Gopalakrishnan said “next-generation” efforts in hurricane prediction promise to further reduce uncertainty. Frank Konkel is a former staff writer for FCW.
<urn:uuid:afe2dac5-7ad3-4bc9-befb-59d7799035f6>
CC-MAIN-2017-04
https://fcw.com/articles/2012/11/05/sandy-hurricane-center.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956704
1,232
3.234375
3
EU data protection legislation is facing huge changes. Data protection laws are built on fundamental rights enshrined in the Charter of Fundamental Rights of the European Union which are the core building blocks of the EU’s legal regime. Privacy issues arising from an exponential growth in consumer and mobile technologies, an increasingly connected planet and mass cross border data flows have pushed the EU to entirely rethink its data protection legislation to ensure that these fundamental rights are fully protected in today’s digital economy. In 2012, the European Commission published a draft regulation (the General Data Protection Regulation, 'GDPR'). Just over four years later, the final text of GDPR was published in the Official Journal of the European Union on 27 April 2016. Regulation 2016/679heralds some of the most stringent data protection laws in the world and shall apply from 25 May 2018. The current EU Data Protection Directive (95/46/EC) was adopted in 1995. It has been implemented differently by EU Member States into their respective national jurisdictions, resulting in the fragmentation of national data protection laws within the EU. As it is a Regulation, GDPR will come into effect immediately on 25 May 2018 without any need for additional domestic legislation in EU Member States. However, with more than 30 areas where Member States are permitted to legislate (differently) in their domestic laws there will continue to be significant variation in both substantive and procedural data protection laws among the EU’s different Member States. The clock is now ticking with fines of up to 4% of total worldwide annual turnover for failing to comply with the requirements of GDPR. Organisations have a great deal to do between now and 25 may 2018 to be ready for the new regime II. CURRENT SITUATION At present, personal data processed in the European Union is governed by the 1995 European Directive (95/46/EC) on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Directive). The Directive establishes a number of key legal principles: - Fair and lawful processing - Purpose limitation and specification - Minimal storage term - Data quality - Special categories of data - Data minimisation These principles have been implemented in each of the 28 European Union Member States through national data protection law. Although all originating from the same core Directive, there is significant variation among Member State’s substantive and procedural data protection laws. III. FUTURE LEGAL FRAMEWORK After almost four years of often fractious negotiations, GDPR was published in the Official Journal of the European Union as Regulation 2016/679 on 27 April 2016. There will be a two year transition period to allow organisations and governments to adjust to the new requirements and procedures. Following the end of this transitional period, the Regulation will be directly applicable throughout the EU from 25 May 2018, without requiring implementation by the EU Member States through national law. The goal of European legislators was to harmonise the current legal framework, which is fragmented across Member States. A 'Regulation' (unlike a Directive) is directly applicable and has consistent effect in all Member States, and GDPR was intended to increase legal certainty, reduce the administrative burden and cost of compliance for organisations that are active in multiple EU Member States, and enhance consumer confidence in the single digital marketplace. However, in order to reach political agreement on the final text there are more than 30 areas covered by GDPR where Member States are permitted to legislate differently in their own domestic data protection laws. There continues to be room for different interpretation and enforcement practices among the Member States. There is therefore likely to continue to be significant differences in both substantive and procedural data protection laws and enforcement practice among EU Member States when GDPR comes into force. We have summarised the key changes that will be introduced by the GDPR in the following sections Key changes to the current data protection framework include: A. WIDER TERRITORIAL SCOPE Where organisations are established within the EU GDPR applies to processing of personal data “in the context of the activities of an establishment” (Article 3(1)) of any organization within the EU. For these purposes “establishment” implies the “effective and real exercise of activity through stable arrangements” (Recital 22) and “the legal form of such arrangements…is not the determining factor” (Recital 22), so there is a wide spectrum of what might be caught from fully functioning subsidiary undertakings on the one hand, to potentially a single individual sales representative depending on the circumstances. Europe’s highest court, the Court of Justice of the European Union (the CJEU) has been developing jurisprudence on this concept, recently finding (Google Spain SL, Google Inc. v AEPD, Mario Costeja Gonzalez (C-131/12)) that Google Inc with EU based sales and advertising operations (in that particular case, a Spanish subsidiary) was established within the EU. More recently, the same court concluded (Weltimmo v NAIH (C-230/14)) that a Slovakian property website was also established in Hungary and therefore subject to Hungarian data protection laws. Where organisations are not established within the EU Even if an organization is able to prove that it is not established within the EU, it will still be caught by GDPR if it processes personal data of data subjects who are in the Union where the processing activities are related "to the offering of goods or services" (Art 3(2)(a)) (no payment is required) to such data subjects in the EU or "the monitoring of their behaviour" (Art 3(2)(b)) as far as their behaviour takes place within the EU. Internet use profiling (Recital 24) is expressly referred to as an example of monitoring . 1. Compared to the current Directive, GDPR will capture many more overseas organisations. US tech should particularly take note as the provisions of GDPR have clearly been designed to capture them. 2. Overseas organisations not established within the EU who are nevertheless caught by one or both of the offering goods or services or monitoring tests must designate a representative within the EU (Article 27). B. TOUGHER SANCTIONS Revenue based fines GDPR joins anti-bribery and anti-trust laws as having some of the very highest sanctions for non-compliance including revenue based fines of up to 4% of annual worldwide turnover. To compound the risk for multinational businesses, fines are imposed by reference to the revenues of an undertaking rather than the revenues of the relevant controller or processor. Recital 150 of GDPR states that 'undertaking' should be understood in accordance with Articles 101 and 102 of the Treaty on the Functioning of the European Union which prohibit anti-competitive agreements between undertakings and abuse of a dominant position. Unhelpfully the Treaty doesn’t define the term either and the extensive case-law is not entirely straightforward with decisions often turning on the specific facts of each case. However, in many cases group companies have been regarded as part of the same undertaking. This is bad news for multinational businesses as it means that in many cases group revenues will be taken into account when calculating fines, even where some of those group companies have nothing to do with the processing of data to which the fine relates provided they are deemed to be part of the same undertaking. The assessment will turn on the facts of each case. Fines are split into two broad categories. The highest fines (Article 83(5)) of up to 20,000,000 Euros or in the case of an undertaking up to 4% of total worldwide turnover of the preceding year, whichever is higher apply to breach of: - the basic principles for processing including conditions for consent - data subjects’ rights - international transfer restrictions - any obligations imposed by Member State law for special cases such as processing employee data - certain orders of a supervisory authority The lower category of fines (Article 83(4)) of up to 10,000,000 Euros or in the case of an undertaking up to 2% of total worldwide turnover of the preceding year, whichever is the higher apply to breach of: - obligations of controllers and processors, including security and data breach notification obligations - obligations of certification bodies - obligations of a monitoring body Supervisory authorities are not required to impose fines but must ensure in each case that the sanctions imposed are effective, proportionate and dissuasive (Article 83(1)). Fines can be imposed in combination with other sanctions. Broad investigative and corrective powers Supervisory authorities also enjoy wide investigative and corrective powers (Article 58) including the power to undertake on-site data protection audits and the power to issue public warnings, reprimands and orders to carry out specific remediation activities. Right to claim compensation GDPR makes it considerably easier for individuals to bring private claims against data controllers and processors. In particular: - any person who has suffered "material or non-material damage" as a result of a breach of GDPR has the right to receive compensation (Article 82(1)) from the controller or processor. The inclusion of “non-material” damage means that individuals will be able to claim compensation for distress and hurt feelings even where they are not able to prove financial loss. - data subjects have the right to mandate a consumer protection body to exercise rights and bring claims on their behalf (Article 80). Although this falls someway short of a US style class action right, it certainly increases the risk of group privacy claims against consumer businesses. Employee group actions are also more likely under GDPR. Individuals also enjoy the right to lodge a complaint with a supervisory authority (Article 77). All natural and legal persons, including individuals, controllers and processors, have the right to an effective judicial remedy against a decision of a supervisory authority concerning them or for failing to make a decision (Article 78). Data subjects enjoy the right to an effective legal remedy against a controller or proessor (Article 79). 1. The scale of fines and risk of follow-on private claims under GDPR means that actual compliance is a must. GDPR is not a legal and compliance challenge – it is much broader than that, requiring organisations to completely transform the way that they collect, process, securely store, share and securely wipe personal data. Engagement of senior management and forming the right team is key to successful GDPR readiness. 2. GDPR will apply throughout the EU on 25 May 2018. Organisations caught by GDPR will need to map current data collection and use, carry out a gap analysis of their current compliance against GDPR and then create and implement a remediation plan, prioritizing high risk areas. 3. GDPR will require suppliers and customers to review supply chains and current contracts. Contracts will need to be renegotiated to ensure GDPR compliance and commercial terms will inevitably have to be revisited in many cases given the increased costs of compliance and higher risks of non-compliance. 4. The very broad concept of 'undertaking' is likely to put group revenues at risk when fines are calculated, whether or not all group companies are caught by GDPR or were responsible for the infringement of its requirements. Multinationals even with quite limited operations caught by GDPR will therefore need to carefully consider their exposure and ensure compliance. 5. Insurance arrangements will need to be reviewed and cyber and data protection exposure added to existing policies or purchased as stand-alone policies where possible. The terms of policies will require careful review as there is wide variation among wordings and many policies may not be suitable for the types of losses which are likely to occur under GDPR. C. MORE DATA CAUGHT Personal data is defined as "any information relating to an identified or identifiable natural person". (Article 4) A low bar is set for "identifiable" – if anyone can identify a natural person using “all means reasonably likely to be used” (Recital 26) the information is personal data, so data may be personal data even if the organisation holding the data cannot itself identify a natural person. A name is not necessary either – any identifier will do such as an identification number, location data, an online identifier or other factors which may identify that natural person. Online identifiers are expressly called out in Recital 30 with IP addresses, cookies and RFID tags all listed as examples. Although the definition and recitals are broader than the equivalent definitions in the current Directive, for the most part they are simply codifying current guidance and case law on the meaning of 'personal data'. GDPR also includes a broader definition of "special categories" (Article 9) of personal data which are more commonly known as sensitive personal data. The concept has been expanded to expressly include the processing of genetic data and biometric data. The processing of these data are subject to a much more restrictive regime. A new concept of 'pseudonymisation' (Article 4) is defined as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person. Organisations which implement pseudonymisation techniques enjoy various benefits under GDPR. 1. If in any doubt, it is prudent to work on the assumption that data is personal data given the extremely wide definition of personal data in GDPR. 2. GDPR imposes such a high bar for compliance, with sanctions to match, that often the most effective approach to minimise exposure is not to process personal data in the first place and to securely wipe legacy personal data or render it fully anonymous, reducing the amount of data subject to the requirements of GDPR. 3. Where a degree of identification is required for a specific purpose, the next best option is only to collect and use pseudonymous data. Although this falls within the regulated perimeter, it enjoys a number of benefits for organisations in particular that in the event of a data breach it is much less likely that pseudonymous data will cause harm to the affected individuals, thereby also reducing the risk of sanctions and claims for the relevant organisation. 4. Organisations should only use identifiable personal data as a last resort where anonymous or pseudonymous data is not sufficient for the specific purpose. D. SUPPLIERS (PROCESSORS) CAUGHT TOO GDPR directly regulates data processors for the first time. The current Directive generally regulates controllers (ie those responsible for determining the purposes and means of the processing of personal data) rather than 'data processors' - organisations who may be engaged by a controller to process personal data on their behalf (eg as an agent or supplier). Under GDPR, processors will be required to comply with a number of specific obligations, including to maintain adequate documentation (Article 30), implement appropriate security standards (Article 32), carry out routine data protection impact assessments (Article 32), appoint a data protection officer (Article 37), comply with rules on international data transfers (Chapter V) and cooperate with national supervisory authorities (Article 31). These are in addition to the requirement for controllers to ensure that when appointing a processor, a written data processing agreement is put in place meeting the requirements of GDPR (Article 28). Again, these requirements have been enhanced and gold-plated compared to the equivalent requirements in the Directive. Processors will be directly liable to sanctions (Article 83) if they fail to meet these criteria and may also face private claims by individuals for compensation (Article 79). 1. GDPR completely changes the risk profile for suppliers processing personal data on behalf of their customers. Suppliers now face the threat of revenue based fines and private claims by individuals for failing to comply with GDPR. Telling an investigating supervisory authority that you are just a processor won’t work; they can fine you too. Suppliers need to take responsibility for compliance and assess their own compliance with GDPR. In many cases this will require the review and overhaul of current contracting arrangements to ensure better compliance. The increased compliance burden and risk will require a careful review of business cases. 2. Suppliers will need to decide for each type of processing undertaken whether they are acting solely as a processor or if their processing crosses the line and renders them a data controller or joint controller, attracting the full burden of GDPR. 3. Customers (as controllers) face similar challenges. Supply chains will need to be reviewed and assessed to determine current compliance with GDPR. Privacy impact assessments will need to be carried out. Supervisory authorities may need to be consulted. In many cases contracts are likely to need to be overhauled to meet the new requirements of GDPR. These negotiations will not be straightforward given the increased risk and compliance burden for suppliers. They will also be time consuming and it would be sensible to start the renegotiation exercise sooner rather than later, particularly as suppliers are likely to take a more inflexible view over time as standard positions are developed. 4. There are opportunities for suppliers to offer GDPR “compliance as a service” solutions, such as secure cloud solutions, though customers will need to review these carefully to ensure they dovetail to their own compliance strategy. E. DATA PROTECTION PRINCIPLES The core themes of the data protection principles in GDPR remain largely as they were in the Directive, though there has been a significant raising of the bar for lawful processing (see Higher Bar for Lawful Processing) and a new principle of accountability has been added. Personal data must be (Article 5): - Processed lawfully, fairly and in a transparent manner (the "lawfulness, fairness and transparency principle") - Collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes (the "purpose limitation principle") - Adequate, relevant and limited to what is necessary in relation to the purpose(s) (the "data minimization principle") - Accurate and where necessary kept up to date (the "accuracy principle") - Kept in a form which permits identification of data subjects for no longer than is necessary for the purpose(s) for which the data are processed (the "storage limitation principle") - Processed in a manner that ensures appropriate security of the personal data, using appropriate technical and organizational measures (the "integrity and confidentiality principle") The controller is responsible for and must be able to demonstrate compliance with the above principles (the "accountability principle"). 1. Controllers will need to assess and ensure compliance of data collection and use across their organisations with each of the above principles as any failure to do so attracts the maximum category of fines of up to 20 million Euros / 4% of worldwide annual turnovers. Data mapping, gap analysis and remediation action plans will need to be undertaken and implemented. 2. The enhanced focus on accountability will require a great deal more papering of process flows, privacy controls and decisions made to allow controllers to be able to demonstrate compliance. See Accountability and Governance F. HIGHER BAR FOR LAWFUL PROCESSING The lawfulness, fairness and transparency principle amongst other things requires processing to fall within one or more of the permitted legal justifications for processing. Where special categories of personal data are concerned, additional much more restrictive legal justifications must also be met. Although this structure is present in the Directive, the changes introduced by GDPR will make it much harder for organisations to fall within the legal justifications for processing. Failure to comply with this principle is subject to the very highest fines of up to 20 million Euros or in the case of an undertaking up to 4% of annual worldwide turnover, whichever is the greater. - The bar for valid consents has been raised much higher under GDPR. Consents must be fully unbundled from other terms and conditions and will not be valid unless freely given, specific, informed and unambiguous (Articles 4(11) and 6(1)(a)). Consent also attracts additional baggage for controllers in the form of extra rights for data subjects (the right to be forgotten and the right to data portability) relative to some of the other legal justifications. Consent must be as easy to withdraw consent as it is to give – data subjects have the right to withdraw consent at any time – and unless the controller has another legal justification for processing any processing based on consent alone would need to cease once consent is withdrawn. - To compound the challenge for controllers, in addition to a hardening of the requirements for valid consent, GDPR has also narrowed the legal justification allowing data controllers to process in their legitimate interests. This justification also appears in the Directive though the interpretation of the concept in the current regime has varied significantly among the different Member States with some such as the UK and Ireland taking a very broad view of the justification and others such as Germany taking a much more restrictive interpretation. GDPR has followed a more Germanic approach, narrowing the circumstances in which processing will be considered to be necessary for the purposes of the legitimate interests of the controller or a third party. In particular, the ground can no longer be relied upon by public authorities. Where it is relied upon, controllers will need to specify what the legitimate interests are in information notices and will need to consider and document why they consider that their legitimate interests are not overridden by the interests or fundamental rights and freedoms of the data subjects, in particular where children’s data is concerned. The good news is that the justification allowing processing necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject to enter into a contract is preserved in GDPR, though continues to be narrowly drafted. Processing which is not necessary to the performance of a contract will not be covered. The less good news for controllers relying on this justification is that it comes with additional burdens under GDPR, including the right to data portability and the right to be forgotten (unless the controller is able to rely on another justification). Other justifications include where processing is necessary for compliance with a legal obligation; where processing is necessary to protect the vital interests of a data subject or another person where the data subject is incapable of giving consent; where processing is necessary for performance of a task carried out in the public interest in the exercise of official authority vested in the controller. These broadly mirror justifications in the current Directive. Processing for new purposes It is often the case that organisations will want to process data collected for one purpose for a new purpose which was not disclosed to the data subject at the time the data was first collected. This is potentially in conflict with the core principle of purpose limitation and to ensure that the rights of data subjects are protected, GDPR sets out a series of considerations that the controller must consider to ascertain whether the new process is compatible with the purposes for which the personal data were initially collected (Article 6(4)). These include: - any link between the original purpose and the new purpose - the context in which the data have been collected - the nature of the personal data, in particular whether special categories of data or data relating to criminal convictions are processed (with the inference being that if they are it will be much harder to form the view that a new purpose is compatible) - the possible consequences of the new processing for the data subjects - the existence of appropriate safeguards, which may include encryption or pseudonymisation. If the controller concludes that the new purpose is incompatible with the original purpose, then the only bases to justify the new purpose are a fresh consent or a legal obligation (more specifically an EU or Member State law which constitutes a necessary and proportionate measure in a democratic society). Processing of special categories of personal data As is the case in the Directive, GDPR sets a higher bar to justify the processing of special categories of personal data. These are defined to include "data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation." (Article 9(1)) Processing of these data are prohibited unless one or more specified grounds are met which are broadly similar to the grounds set out in the Directive. Processing of special categories of personal data is only permitted (Article 9(2)): - with the explicit consent of the data subject - where necessary for the purposes of carrying out obligations and exercising rights under employment, social security and social protection law or a collective agreement - where necessary to protect the vital interests of the data subject or another natural person who is physically or legally incapable of giving consent - in limited circumstances by certain not-for-profit bodies - where processing relates to the personal data which are manifestly made public by the data subject - where processing is necessary for the establishment, exercise or defence of legal claims or where courts are acting in their legal capacity - where necessary for reasons of substantial public interest on the basis of Union or Member State law, proportionate to the aim pursued and with appropriate safeguards - where necessary for preventative or occupational medicine, for assessing the working capacity of the employee, medical diagnosis, provision of health or social care or treatment of the management of health or social care systems and services - where necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of health care and of medical products and devices - where necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with restrictions set out in Article 89(1) The justifications and conditions for processing special categories of data is one area where Member States are permitted to introduce domestic laws including further conditions and limitations for processing with regard to processing genetic data, biometric data and health data. Processing of personal data relating to criminal convictions and offences GDPR largely mirrors the requirements of the Directive in relation to criminal conviction and offences data. This data may only be processed under official authority or when authorized by Union or Member State law (Article 10) which means this is another area where legal requirements and practice is likely to diverge among the different Member States. 1. Controllers will need to ensure that they have one or more legal justifications to process personal data for each purpose. Practically this will require comprehensive data mapping to ensure that all personal data within the extended enterprise (ie including data processed by third parties as well as data within the organisation) has a legal justification to be processed. 2. Consideration will need to be given as to which are the most appropriate justifications for different purposes and personal data, given that some justifications attract additional regulatory burdens. 3. The common practice of justifying processing with generic consents will need to cease when GDPR comes into force. Consent comes with many additional requirements under GDPR and as such is likely to be a justification of last resort where no other justifications are available. 4. Where controllers propose to process legacy data for new purposes, they will need to be able to demonstrate compliance with the purpose limitation principle. To do that, controllers should document decisions made concerning new processing, taking into account the criteria set out in GDPR and bearing in mind that technical measures such as encryption or psuedonymisation of data will generally make it easier to prove that new purposes are compatible with the purposes for which personal data were originally collected. International transfers and particularly those to the US have regularly made front page headline news over the last 12 months with the successful torpedoing of the EU/US Safe Harbor regime by Europe's highest court. Organisations will be relieved to hear that for the most part GDPR will not make any material changes to the current rules for transfers of personal data cross-border, largely reflecting the regime under the Directive. That said, in contrast to the current regime where sanctions for breaching transfer restrictions are limited, failure to comply with GDPR's transfer requirements attract the highest category of fines of up to 20 million Euros or in the case of undertakings up to 4% of annual worldwide turnover. Transfers of personal data to third countries outside the EU are only permitted where the conditions laid down in GDPR are met. (Article 44) Transfers to third countries, territories or specified sectors or an international organisation which the Commission has decided ensures an adequate level of protection do not require any specific authorisation. (Article 45(1)) The adequacy decisions made under the current Directive shall remain in force under GDPR until amended or repealed (Article 45(9)); so for the time being transfers to any of the following countries are permitted: Andorra, Argentina, Canada (with some exceptions), Switzerland, Faero Islands, Guernsey, Israel, Isle of Man, Jersey, Eastern Republic of Uruguary and New Zealand. The well-publicised gap for transfers from the EU to US following the ruling that Safe Harbor is invalid will, it is hoped, be filled with the new EU/US Privacy Shield. Transfers are also permitted where appropriate safeguards have been provided by the controller or processor and on condition that enforceable data subject rights and effective legal remedies for the data subject are available. The list of appropriate safeguards include amongst other things binding corporate rules which now enjoy their own Article 47 under GDPR and standard contractual clauses. Again, decisions on adequacy made under the Directive will generally be valid under GDPR until amended, replaced or repealed. Two new mechanics are introduced by GDPR to justify international transfers (Article 46(2)(e) and (f)): controllers or processors may also rely on an approved code of conduct pursuant to Article 40 or an approved certification mechanism pursuant to Article 42 together in each case with binding and enforceable commitments in the third country to apply these safeguards including as regards data subjects' rights. GDPR also removes the need to notify and in some Member States seek prior approval of model clauses from supervisory authorities. GDPR includes a list of derogations similar to those included in the Directive permitting transfers where: (a) explicit informed consent has been obtained (b) the transfer is necessary for the performance of a contract or the implementation of pre-contractual measures (c) the transfer is necessary for the conclusion or performance of a contract concluded in the interests of the data subject between the controller and another natural or legal person (d) the transfer is necessary for important reasons of public interest (e) the transfer is necessary for the establishment, exercise or defence of legal claims (f) the transfer is necessary in order to protect the vital interests of the data subject where consent cannot be obtained (g) the transfer is made from a register which according to EU or Member State law is intended to provide information to the public, subject to certain conditions. There is also a very limited derogation to transfer where no other mechanic is available and the transfer is necessary for the purposes of compelling legitimate interests of the controller which are not overridden by the interests and rights of the data subject; notification to the supervisory authority is required if relying on this derogation. Transfers demanded by courts, tribunals or administrative authorities of countries outside the EU (Article 48)are only recognised or enforceable (within the EU) where they are based on an international agreement such as a mutual legal assistance treaty in force between the requesting third country and the EU or Member State; otherwise transfer in response to such requests where there is no other legal basis for transfer will breach GDPR's restrictions. 1. Given the continued focus of the media and regulators on international transfer and the increased sanctions to be introduced by GDPR, all controllers and processors will need to carefully diligence current data flows to establish what types of data is being shared with which organisations in which jurisdictions. 2. Current transfer mechanics will need to be reviewed to assess compliance with GDPR and, where necessary, remedial steps implemented before GDPR comes into force. 3. For intra-group transfers, consider binding corporate rules which not only provide a good basis for transfers but also help demonstrate broader compliance with GDPR helping to comply with the principle of accountability. H. DATA BREACH NOTIFICATION One of the most profound changes to be introduced by GDPR is a European wide requirement to notify data breaches to supervisory authorities and affected individuals. In the US, data breach notification laws are now in force in 47 States and the hefty penalties for failing to notify have fundamentally changed the way US organisations investigate and respond to data incidents. Not notifying has become a high risk option. In contrast, Europe currently has no universally applicable law requiring notification of breaches. In the majority of Member States there is either no general obligation to notify or minimal sanctions for failing to do so; for many organisations not notifying and thereby avoiding the often damaging media fall-out is still common practice in Europe. That is set to change fundamentally when GDPR comes into force. GDPR requires "the controller without undue delay, and where feasible, not later than 72 hours after having become aware of it, [to] notify the … breach to the supervisory authority" (Article 33(1)). When the personal data breach is likely to result in a high risk to the rights and freedoms of individuals the controller is also required to notify the affected individuals "without undue delay" (Article 34). Processors are required to notify the controller without undue delay having become aware of the breach (Article 33(2)). The notification to the regulator must include where possible the categories and approximate numbers of individuals and records concerned, the name of the organisation’s DPO or other contact, the likely consequences of the breach and the measures taken to mitigate harm (Article 33(3)). Although the obligation to notify is conditional on awareness, burying your head in the sand is not an option as controllers are required to implement appropriate technical and organisational measures together with a process for regularly testing, assessing and evaluating the effectiveness of those measures to ensure the security of processing (Article 32). Controllers are also required to keep a record of all data breaches (Article 33(5)) (whether or not notified to the supervisory authority) and permit audits by the supervisory authority. Failing to comply with the articles relating to security and data breach notification attract fines of up to 10 million Euros or 2% of annual worldwide turnover, potentially for both the controller and the processor. As data breach often leads to investigations by supervisory authorities and often uncovers other areas of non-compliance, it is quite possible that fines of up to 20 million Euros or 4% of annual worldwide turnover will also be triggered. 1. Notification will become the norm: Sweeping breaches under the carpet will become a very high risk option under GDPR. Organisations that are found to have deliberately not notified can expect the highest fines and lasting damage to corporate and individual reputations. Notifying and building data breach infrastructure to enable prompt, compliant notification will be a necessity under GDPR. 2. A coordinated approach, including technology, breach response policy and training and wider staff training. Data breaches are increasingly a business as usual event. Lost or stolen devices; emails sent to incorrect addresses in error and the continuing rise of cybercrime means that for many organisations, data breaches are a daily occurrence. To deal with the volume of breaches, organisation's need a combination of technology, breach response procedures and staff training. a. Technology requirements: these will vary for each organisation but will typically include a combination of firewalls, log recording, data loss prevention, malware detection and similar applications. There are an increasingly sophisticated array of applications that learn what “normal” looks like for a particular corporate network to be able to spot unusual events more effectively. The state of the art continues to change rapidly as organisations try to keep pace with sophisticated hackers. Regular privacy impact assessments and upgrades of technology will be required. b. Breach response procedures: to gain the greatest protection from technology, investment is required in dealing with red flags when they are raised by internal detection systems or notified from external sources. Effective breach response requires a combination of skill sets including IT, PR and legal. Develop a plan and test it; regularly. c. Staff training: the weak link in security is frequently people rather than technology. Regular staff training is essential to raise awareness of the importance of good security practices, current threats and who to call if a breach is suspected. It is also important to avoid a blame culture that may deter staff from reporting breaches. 3. Consider privilege and confidentiality as part of your plan. Make sure that forensic reports are protected by privilege wherever possible to avoid compounding the losses arising from a breach. Avoid the temptation to fire off emails when a breach is suspected; pick up the phone. Don’t speculate on what might have happened; stick to the facts. Bear in mind that you may be dealing with insider threat – such as a rogue employee – so keep any investigation on a strictly need to know basis and always consider using external investigators if there is any possibility of an inside attack. 4. Appoint your external advisors today if you haven’t done so already. When a major incident occurs, precious time can be wasted identifying and then retaining external support teams when you are up against a 72 hour notification deadline. Lawyers, forensics and PR advisors should ideally be contracted well before they are needed for a live incident. Find out more about DLA Piper’s breach response credentials and team. 5. Insurance: many insurers are now offering cyber insurance. However, there is a lack of standardisation in coverage offered. Limits are often too small for the likely exposure. Conditions are often inappropriate such as a requirement for the insured to have fully complied with all applicable laws and its own internal policies which will rarely be the case. That said, it is usually possible to negotiate better coverage with carriers in what continues to be a soft insurance market. Now is a good time to check the terms of policies and work with your legal team and brokers to ensure that you have the best possible coverage. You should clarify with brokers and underwriters what amounts to a notifiable incident to insurers under your policies as again there is no common standard and failing to notify when required may invalidate cover. You should also ensure that your insurance policies will cover the costs of your preferred external advisors as many policies will only cover advice from panel advisors. 6. Develop standard notification procedures: Perhaps the greatest challenge facing organisations and regulators is the sheer volume of data breach and the lack of standards or guidance as to how breaches should be notified and at what point they become notifiable. In the absence of guidance organisation's will need to make an informed decision as to how to develop internal operations for the detection, categorisation, investigation, containment and reporting of data breaches. Similarly, supervisory authorities will need to develop standard approaches and standard categorisations of incidents to ensure that limited resources are focussed on the most serious incidents first. I. MORE RIGHTS FOR INDIVIDUALS GDPR builds on the rights enjoyed by individuals under the current Directive, enhancing existing rights and introducing a new right to data portability. These rights are backed up with provisions making it easier to claim damages for compensation and for consumer groups to enforce rights on behalf of consumers. One of the core building blocks of GDPR’s enhanced rights for individuals is the requirement for greater transparency. Various information must be provided by controllers to data subjects in a concise, transparent and easily accessible form, using clear and plain language (Article 12(1)). The following information must be provided (Article 13) at the time the data is obtained: - the identity and contact details of the controller - the Data Protection Officer's contact details (if there is one) - both the purpose for which data will be processed and the legal basis for processing including if relevant the legitimate interests for processing - the recipients or categories of recipients of the personal data - details of international transfers - the period for which personal data will be stored or, if that is not possible, the criteria used to determine this - the existence of rights of the data subject including the right to access, rectify, require erasure (the “right to be forgotten”), restrict processing, object to processing and data portability; where applicable the right to withdraw consent, and the right to complain to supervisory authorities - the consequences of failing to provide data necessary to enter into a contract - the existence of any automated decision making and profiling and the consequences for the data subject. - In addition, where a controller wishes to process existing data for a new purpose, they must inform data subjects of that further processing, providing the above information. Slightly different transparency requirements apply (Article 14) where information have not been obtained from the data subject. Subject access rights (Article 15) These broadly follow the existing regime set out in the Directive though some additional information must be disclosed and there is no longer a right for controllers to charge a fee, with some narrow exceptions. Information requested by data subjects must be provided within one month as a default with a limited right for the controller to extend this period for up to three months. Right to rectify (Article 16) Data subjects continue to enjoy a right to require inaccurate or incomplete personal data to be corrected or completed without undue delay. Right to erasure ('right to be forgotten')(Article 17) This forerunner of this right made headlines in 2014 when Europe’s highest court ruled against Google (Judgment of the CJEU in Case C-131/12), in effect requiring Google to remove search results relating to historic proceedings against a Spanish national for an unpaid debt on the basis that Google as a data controller of the search results had no legal basis to process that information. The right to be forgotten now has its own Article in GDPR. However, the right is not absolute; it only arises in quite a narrow set of circumstances notably where the controller has no legal ground for processing the information. As demonstrated in the Google Spain decision itself, requiring a search engine to remove search results does not mean the underlying content controlled by third party websites will necessarily be removed. In many cases the controllers of those third party websites may have entirely legitimate grounds to continue to process that information, albeit that the information is less likely to be found if links are removed from search engine results. The practical impact of this decision has been a huge number of requests made to search engines for search results to be removed raising concerns that the right is being used to remove information that it is in the public interest to be accessible. Right to restriction of processing (Article 18) Data subjects enjoys a right to restrict processing of their personal data in defined circumstances. These include where the accuracy of the data is contested; where the processing is unlawful; where the data is no longer needed save for legal claims of the data subject, or where the legitimate grounds for processing by the controller and whether these override those of the data subject are contested. Right to data portability (Article 20) This is an entirely new right in GDPR and has no equivalent in the current Directive. Where the processing of personal data is justified either on the basis that the data subject has given their consent to processing or where processing is necessary for the performance of a contract, or where the processing is carried out be automated means, then the data subject has the right to receive or have transmitted to another controller all personal data concerning them in a structured, commonly used and machine-readable format. The right is a good example of the regulatory downsides of relying on consent or performance of a contract to justify processing – they come with various baggage under GDPR relative to other justifications for processing. Where the right is likely to arise controllers will need to develop procedures to facilitate the collection and transfer of personal data when requested to do so by data subjects. Right to object (Article 21) The Directive's right to object to the processing of personal data for direct marketing purposes at any time is retained. In addition, data subjects have the right to object to processing which is legitimized on the grounds either of the legitimate interests of the data controller or where processing is in the public interest. Controllers will then have to suspend processing of the data until such time as they demonstrate “compelling legitimate grounds” for processing which override the rights of the data subject or that the processing is for the establishment, exercise or defence of legal claims. The right not to be subject to automated decision taking, including profiling (Article 22) This right expands the existing Directive right not to be subject to automated decision making. GDPR expressly refers to profiling as an example of automated decision making. Automated decision making and profiling "which produces legal effects concerning [the data subject] … or similarly significantly affects him or her" are only permitted where (a) necessary for entering into or performing a contract (b) authorized by EU or Member State law, or (c) the data subject has given their explicit (ie opt-in) consent. The scope of this right is potentially extremely broad and may throw into question legitimate profiling for example to detect fraud and cybercrime. It also presents challenges for the online advertising industry and website operators who will need to revisit consenting mechanics to justify online profiling for behavioral advertising. This is an area where further guidance is needed on how Article 22 will be applied to specific types of profiling. 1. Controllers will need to review and update current fair collection notices to ensure compliance with the expanded information requirements. Much more granular notices will be required using plain and concise language. 2. Consideration should be given to which legal justifications for processing are most appropriate for different purposes, given that some such as consent and processing for performance of a contract come with additional regulatory burden in the form of enhanced rights for individuals. 3. For some controllers with extensive personal data held on consumers, it is likely that significant investment in customer preference centers will be required on the one hand to address enhanced transparency and choice requirements and on the other hand to automate compliance with data subject rights. 4. Existing data subject access procedures should be reviewed to ensure compliance with the additional requirements of GDPR. 5. Policies and procedures will need to be written and tested to ensure that controllers are able to comply with data subjects’ rights within the time limits set by GDPR. In some cases, such as where data portability engages, significant investments may be required. J. DATA PROTECTION OFFICERS GDPR introduces a significant new governance burden for those organisations which are caught by the new requirement to appoint a DPO. Although this is already a requirement for most controllers in Germany under current data protection laws, it is an entirely new requirement (and cost) for many organisations. The following organisations must appoint a data protection officer (DPO) (Article 37): - public authorities - controllers or processors whose core activities consist of processing operations which by virtue of their nature, scope or purposes require regular and systemic monitoring of data subjects on a large scale - controllers or processors whose core activities consist of processing sensitive personal data on a large scale. DPOs must have "expert knowledge" (Article 37(5)) of data protection law and practices though perhaps in recognition of the current shortage of experienced data protection professionals, it is possible to outsource the DPO role to a service provider (Article 37(6)). Controllers and processors are required to ensure that the DPO is involved "properly and in a timely manner in all issues which relate to the protection of personal data." (Article 38(1)) The role is therefore a sizeable responsibility for larger controllers and processors. The DPO must directly report to the highest management level, must not be told what to do in the exercise of their tasks and must not be dismissed or penalized for performing their tasks. (Article 38(3)) The specific tasks of the DPO are set out in GDPR including (Article 39): - to inform and advise on compliance with GDPR and other Union and Member State data protection laws - to monitor compliance with law and with the internal policies of the organization including assigning responsibilities, awareness raising and training staff - to advise and monitor data protection impact assessments - to cooperate and act as point of contact with the supervisory authority 1. Organisations will need to assess whether or not they fall within one or more of the categories where a DPO is mandated. Public authorities will be caught (with some narrow exceptions) as will many social media, search and other tech firms who monitor online consumer behavior to serve targeting advertising. Many b2c businesses which regularly monitor online activity of their customers and website visitors will also be caught. 2. There is currently a shortage of expert data protection officers as outside of Germany this is a new requirement for most organisations. Organisations will therefore need to decide whether to appoint an internal DPO with a view to training them up over the next couple of years or use one of the external DPO service providers several of which have been established to fill this gap in the market. Organisations might consider a combination of internal and external DPO resources as given the size of the task it may not be realistic for just one person to do it. K. ACCOUNTABILITY AND GOVERNANCE Accountability is a recurring theme of GDPR. Data governance is no longer just a case of doing the right thing; organisations need to be able to prove that they have done the right thing to regulators, to data subjects and potentially to shareholders and the media often years after a decision was taken. GDPR requires each controller to demonstrate compliance with the data protection principles (Article 5(2)). This general principle manifests itself in specific enhanced governance obligations which include: - Keeping a detailed record of processing operations (Article 30) The requirement in current data protection laws to notify the national data protection authority about data processing operations is abolished and replaced by a more general obligation on the controller to keep extensive internal records of their data protection activities. The level of detail required is far more granular compared to many existing Member State notification requirements. There is some relief granted to organisations employing fewer than 250 people though the exemption is very narrowly drafted. - Performing data protection impact assessment for high risk processing (Article 35) A data protection impact assessment will become a mandatory pre-requisite before processing personal data for processing which is likely to result in a high risk to the rights and freedoms of individuals. Specific examples are set out of high risk processing requiring impact assessments including: automated processing including profiling that produce legal effects or similarly significantly affect individuals; processing of sensitive personal data; and systematic monitoring of publicly accessible areas on a large scale. DPOs, where in place, have to be consulted. Where the impact assessment indicates high risks in the absence of measures to be taken by the controller to mitigate the risk, the supervisory authority must also be consulted (Article 36) and may second guess the measures proposed by the controller and has the power to require the controller to impose different or additional measures (Article 58). - Designating a data protection officer (Article 37) See Data Protection Officers - Notifying and keeping a comprehensive record of data breaches (Articles 33 and 34) See Data Breach Notification - Implementing data protection by design and by default (Article 25) GDPR introduces the concepts of "data protection by design and by default". "Data protection by design" requires taking data protection risks into account throughout the process of designing a new process, product or service, rather than treating it as an afterthought. This means assessing carefully and implementing appropriate technical and organisational measures and procedures from the outset to ensure that processing complies with GDPR and protects the rights of the data subjects. "Data protection by default" requires ensuring mechanisms are in place within the organisation to ensure that, by default, only personal data which are necessary for each specific purpose are processed. This obligation includes ensuring that only the minimum amount of personal data is collected and proessed for a specific purpose; the extent of processing is limited to that necessary for each purpose; the data is stored no longer than necessary and access is restricted to that necessary for each purpose. 1. Data mapping: every controller and processor will need to carry out an extensive data audit across the organization and supply chains, record this information in accordance with the requirements of Article 30 and have governance in place to ensure that the information is kept up-to-date. The data mapping exercise will also be crucial to be able to determine compliance with GDPR’s other obligations so this exercise should be commenced as soon as possible. 2. Gap analysis: Once the data mapping exercise is complete, each organization will need to assess its current level of compliance with the requirements of GDPR. Gaps will need to be identified and remedial actions prioritized and implemented. 3. Governance and policy for data protection impact assessments: the data mapping exercise should identify high risk processing. Data protection impact assessments will need to be completed and documented for each of these (frequently these will include third party suppliers) and any remedial actions identified implemented. Supervisory authorities may need to be consulted. A procedure will need to be put in place to standardize future data protection impact assessments and to keep existing impact assessments regularly updated where there is a change in the risk of processing. 4. Data protection by design and by default: in part these obligations will be addressed through implementing remedial steps identified by the gap analysis and in data protection impact assessments. However, to ensure that data protection by design and by default is delivered, extensive staff and supplier engagement and training will also be required to raise awareness of the importance of data protection and to change behaviors. European data protection laws today are in many cases substantively very different among Member States. This is partly due to the ambiguities in the Directive being interpreted and implemented differently, and partly due to the Directive permitting Member States to implement different or additional rules in some areas. As GDPR will become law without the need for any secondary implementing laws, there will be a greater degree of harmonisation relative to the current regime. However, GDPR preserves the right for Member States to introduce different laws in many important areas and as a result we are likely to continue to see a patchwork of different data protection laws among Member States, for certain types of processing. Each Member State is permitted to restrict the rights of individuals and transparency obligations (Article 23) by legislation when the restriction "respects the essence of fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society" to safeguard one of the following: (a) national security (c) public security (d) the prevention, investigation, detection or prosecution of breaches of ethics for regulated professions, or crime, or the execution of criminal penalties (e) other important objectives of general public interest of the EU or a Member State, in particular economic or financial interests (f) the protection of judicial independence and judicial proceedings (g) a monitoring, inspection or regulatory function connected with national security, defence, public security, crime prevention, other public interest or breach of ethics (h) the protection of the data subject or the rights and freedoms of others (i) the enforcement of civil law claims To be a valid restriction for the purposes of GDPR, any legislative restriction must contain specific provisions setting out: (a) the purposes of processing (b) the categories of personal data (c) the scope of the restrictions (d) the safeguards to prevent abuse or unlawful access or transfer (e) the controllers who may rely on the restriction (f) the permitted retention periods (g) the risks to the rights and freedoms of data subjects (h) the right of data subjects to be informed about the restriction, unless prejudicial to the purpose of the restriction In addition to these permitted restrictions, Chapter IX of GDPR sets out various specific processing activities which include additional derogations, exemptions and powers for Member States to impose additional requirements. These include: - Processing and freedom of expression and information (Article 85) - Processing and public access to official documents (Article 86) - Processing of national identification numbers (Article 87) - Processing in the context of employment (Article 88) - Safeguards and derogations to processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes (Article 89) - Obligations of secrecy (Article 90) - Existing data protection rules of churches and religious associations (Article 91) These special cases also appear in the Directive, though in some cases have been amended or varied in GDPR. 1. Controllers and processors will first need to determine which Member States' laws apply to their processing activities and whether processing will be undertaken within any specific processing activities which may be subject to additional restrictions. 2. These Member State laws will then need to be checked to determine what additional requirements engage. Changes in law will need to be monitored and any implications for processing activities addressed. 3. Derogations will pose a challenge to multi-national organisations seeking to implement standard European-wide solutions to address compliance with GDPR; these will need to be sufficiently flexible to allow for exceptions where different rules engage in one or more Member State. M. CROSS-BORDER ENFORCEMENT The ideal of a one-stop-shop ensuring that controllers present in multiple Member States would only have to answer to their lead home regulator failed to make it into the final draft. GDPR includes a complex, bureaucratic procedure allowing multiple 'concerned' authorities to input into the decision making process. The starting point for enforcement of GDPR is that controllers and processors are regulated by and answer to the supervisory authority for their main or single establishment, the so-called "lead supervisory authority". (Article 56(1)) However, the lead supervisory authority is required to cooperate with all other "concerned" authorities and there are powers for a supervisory authorities in another Member State to enforce where infringements occur on its territory or substantially affects data subjects only in its territory. (Article 56(2)) In situations where multiple supervisory authorities are involved in an investigation or enforcement process there is a cooperation procedure (Article 60) involving a lengthy decision making process and a right to refer to the consistency mechanism (Articles 63 - 65) if a decision cannot be reached, ultimately with the European Data Protection Board having the power to take a binding decision. There is an urgency procedure (Article 66) for exceptional circumstances which permits a supervisory authority to adopt provisional measures on an interim basis where necessary to protect the rights and freedoms of data subjects. 1. Controllers and processors will need to determine which Member States' supervisory authorities have jurisdiction over their processing activities; which is the lead authority and which other supervisory authorities may have jurisdiction. 2. An important aspect of managing compliance risk is to try to stay on the right side of your regulator by engaging positively with any guidance published and taking up opportunities such as training and attending seminars.
<urn:uuid:4cfabf24-7815-4c72-80fc-6eb3f6529d8f>
CC-MAIN-2017-04
https://www.dlapiperdataprotection.com/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00054-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930824
12,168
2.765625
3
Over the last few months there have been plenty of reports of vulnerable mobile devices and operating systems. Without actually chunking your device and living without technology, what can you do to raise the level of security? - Add a password — This simple act can go a long way in securing your device whether you lose it, it’s stolen, or someone tries to access it remotely - Keep the OS updated — At some point vulnerabilities in your device’s OS become exposed, and patching is the best way to address these potential problems - Avoid storing sensitive data on the device — Without data like account numbers, social security numbers, and passwords an attacker is limited in what they can access and use - Avoid public WiFi — While it’s free, there’s a much higher risk that someone may eavesdrop on the connection - Use SSL — Encryption makes it harder for for hackers to get into your phone and lowers your chance of attack - Disable Bluetooth — Bluetooth is great for hands free use in cars or uploading information to your computer, but it also gives hackers access to your device. Disable it when you don’t need it While these tips won’t prevent all problems, they are a good start in securing your mobile device.
<urn:uuid:6e1551c1-4a00-4de5-a6bf-24c9b432e735>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/07/14/how-secure-is-your-mobile-device-five-steps-to-increase-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00356-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904481
261
2.796875
3
Helping Drunks See the Light The P.A.S. III is both a flashlight and an alcohol-screening system that allows law enforcement officials to check for the presence of alcohol without the subject's active participation. As the subject speaks, a small sensor draws a sample of the subject's exhaled breath or air in front of and around an individual through a fuel cell. Subjects need to speak for about four seconds for PAS III to detect and register blood-alcohol levels on a color-coded display. It is well-suited for sobriety checkpoints, schools and law enforcement and can help determine the role of alcohol during the emergency management of unconscious individuals. P.A.S. III is also well-suited for detecting alcohol in enclosed spaces such as vehicles, aircraft, trains and rooms. Additional information in available by contacting PAS Systems at 540/372-3431. Reach Out and Touch Someone For years, scientists have tried to bring the virtual world into the physical world. FEELit is a pointing device that allows users to physically interact with anything the cursor touches, giving the sense of touch to all aspects of user interaction. According to the company, users can feel things such as shapes, textures, liquids, hills, valleys and other physical sensations. "We are bringing a new mode of human-computer interaction to mass markets," said Louis Rosenberg, Immersion's president. "We expect it to fundamentally change the way people think about computing, making the digital world of software tangible and accessible. A good user interface will no longer just look good, it will feel good
<urn:uuid:72d1c37a-c97a-462c-9079-233473b65be4>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/100499064.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00568-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919465
336
2.859375
3
From a user perspective, the term cloud refers to applications and services that are accessed via a browser, with no software or other agents needed to be installed on the device used to access them. From a provider perspective, there are many intricacies in setting up and managing such services, including ensuring high levels of control and security. But, for users, the key is simplicity and always-on availability. Organisations are seeing the advantages of allowing their employees to access the applications and data that they need to perform their functions from devices and locations that do not tie them to the office. For them, the benefits of offering applications via cloud-based services are many in terms of the lower upfront investment required and the reduced management overhead of managing the applications and provisioning their use to employees. All that is needed is a browser interface and, with just a couple of mouse clicks, a user can be provisioned to use the service. Now that the browser is the main interface, those applications can be accessed from a wide range of IP-enabled devices that allow internet connectivity. In many countries, there are now more mobile phones than people and the increasing sophistication of those devices means that they are often the first that users will reach for. The range of devices offering internet connectivity is also proliferating, such as digital TVs, and portable memory devices allow data to be transported easily from one device to another. Among the benefits of using applications delivered via the cloud are that they provide employees with the flexibility they demand in being able to access those applications from wherever they are, on whatever device they wish to use, whenever they want to. But, business applications are used to process, store and communicate information that can be highly sensitive or confidential, such as personal information and intellectual property. To defend itself against that information being accessed and potentially misused by those with no business reason to do so, organisations must develop policies regarding which employees can access what resources, from what devices and what they can do with the information they contain. However, a policy is only as good as the paper or electronic medium it is written on. It is a good as useless if it cannot be enforced. The only way to ensure that a policy is effective is to monitor how well users are adhering to its requirements, and that requires the use of technology. Join Bloor Research and Overtis Group for a webinar at 3pm UK time 18th January 2011 that will discuss how a user-centric approach will help them to reap the benefits of cloud-based applications and safeguard the security of their valuable data. To register for the webinar, click here.
<urn:uuid:670b8ea9-4a06-4a8f-b66c-14649dea1f3f>
CC-MAIN-2017-04
http://www.bloorresearch.com/blog/security-blog/webinar-on-developing-a-user-centric-approach-to-cloud-secur/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00476-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960665
529
2.546875
3
The decompiler makes some assumptions about the input code. Like that call instructions usually return, the memory model is flat, the function frame is set properly, etc. When these assumptions are correct, the output is good. When they are wrong, well, the output does not correspond to the input. Take, for example, the following snippet: The decompiler produces the following pseudocode: Apparently, the v3 variable (it corresponds to edx) is not initialized at all. Why? This happens because called functions usually spoil some registers. The calling conventions on x86 stipulate that only the esi, edi, ebx, and ebp registers are saved across calls. In other words, other registers may change their values (or be spoiled) by a function call. Since the decompiler assumes that functions obey the regular calling conventions, it separates edx before the call and after the call into two variables. The first variable gets optimized away and is replaced by a1. The second variable (v3) becomes uninitialized. In fact, there are three possible cases. The edx register could be: - used to return a value by the called function. The decompiler chose the default case (#3). Let’s check if it was right. Here’s the disassembly of sub_2A795: As we see, the edx register is not referenced at all, so we have the case #1. If the decompiler could find it out itself, without our help, our life would be much easier (maybe it will do so in the future!) Meanwhile, we have to add the required information ourselves. We do it using the Edit, Functions, Set function type command in IDA. The callee does not spoil any registers: The decompiler produces different pseudocode: Since it knows that edx is not modified by the call, it creates just one variable for both edx instances (before and after the call). Were the called function returning its value in edx (the case #2), we would set its type like this: (this prototype means: function with one argument on the stack, the argument will be popped by the callee; the result is returned in edx) The decompiler would create two separate variables for edx, as in the case #3. The first one would be optimized away, but the second one would be initialized with the returned value: As you see, the type information plays very important role in decompilation. In order to get a correct output, a correct input (or assumptions) must be given. Otherwise the decompiler works in the “garbage in – garbage out” mode. Always pay attention to the types, it is a good thing to do.
<urn:uuid:c5a2b944-0eb7-44dc-8cec-a90f6ffedf5a>
CC-MAIN-2017-04
http://www.hexblog.com/?p=78
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00200-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903189
576
3.1875
3
Do you talk to your computer or smartphone? Just a few years ago, that question would have been absurd. But with advances in natural language processing, the likelihood is that you have asked your phone to send a text or search the web for something within the last day. In fact, natural language processing (NLP) is one aspect of machine learning, big data, and artificial intelligence that has the potential to truly change everything. In its most basic terms, natural language processing is the ability of a computer to understand natural human speech as it is spoken. It’s the difference between saying, “Siri, where’s the nearest coffee shop?” and, “Search coffee shops ZIP Code 80021.” For a long time, searches online had to be done by typing in strings of words combined with Boolean search terms that ended up looking and sounding nothing like a conversation. Now, however, you can type a question into Google exactly how you’d ask it to a friend, and Google can reliably provide a good answer. The same recognition of natural language is being developed for speech. AI assistants like Siri, Cortana, and Google Now are good examples of this. While it seems simple for a human to answer a natural language question, it’s an incredibly complex task for a computer, requiring many steps computations and predictions, all of which must happen in the cloud and in a split-second. The fascinating thing is that, while a human inherently understands what is being said, a computer cannot really be said to understand language. It can parse out the different words, the context, the grammatical usage, etc. and then make a prediction about which response will be the best, but it does not actually understand what we are saying. One goal of NLP is to do away with computer programming languages like Java, Ruby, or C and replace them with natural human instructions and speech. Another ultimate goal is realistic artificial intelligence, wherein the computer can react to and interact with a human flawlessly. How NLP is Being Used Computer “assistants” like Siri and Cortana are the most visible use of NLP today, but there are many other applications of NLP in use. As mentioned above, Google has poured a great deal of resources into NLP as it relates to search, allowing us to type or speak a natural question and receive a relevant answer. Google also is using NLP to create predictive text responses to emails in its Inbox email client, allowing users to choose from one of three responses and respond to an email with a single click. You may have used NLP for yourself if you have ever used the “translate” link inside Facebook to translate a foreign language into your own (with varying results) or used Google translate on Google or Bing search results. A reliable machine translation has been a goal of NLP since the 1950s, and results are improving all the time. Other programs are being developed and used that can automatically summarize long documents or extract relevant keywords for searching. The legal system is using these types of applications, for example, to help lawyers sort through thousands of pages of documents in any given legal case to find relevant information. Marketers are using NLP for sentiment analysis, combing the millions of tweets and other social media messages to determine how users feel about a particular product or service. It has the potential to turn all of Twitter or Facebook into one giant focus group, at a fraction of the cost. Another way you likely use NLP daily in your life is with text classification – which is what Google and other email providers use to determine if an email is spam or not. This is a very simple binary classification: an email either is spam or it isn’t. But more sophisticated forms are being used for such complex analyses as determining the author of a work by comparing it to other works. Companies are predicting that chatbots will be able to take over some customer-service functions in as little as five years, providing automated, real-time responses to simple customer-service problems and questions. Integrations also are being developed for particular situations and users. For example, one company has developed an interface for the Amazon Echo that can allow business leaders to track key performance metrics. In fact, when the system is set up, a colored light bulb in an office can be used to visualize those metrics. One user set the system to monitor hold times for customer service, and when the light bulb goes red, he knows there is a problem that needs to be addressed immediately. How NLP will Change Things in the Future Imagine a future that looks like Star Trek or The Jetsons, where people are constantly talking to their house (or space ship), requesting information, giving commands, and so on. That future is not far off. Other science-fiction staples, like a universal translator and robots that can speak and react to spoken commands, also will be made possible by accurate NLP. But the main potential I see is in how we interact with everyday technologies. Will we read text messages and emails when our virtual assistant can simply read them for us? Will we shop and place orders for groceries through our smart refrigerators and tell the washing machine to call a repairman for itself when it breaks? All of these scenarios are right around the corner. What do you think is the most exciting current or future use of NLP? I’d love to hear your thoughts and predictions in the comments below. Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher. Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise, plus get instant access to more than 20 eBooks.
<urn:uuid:55a831ff-29b6-4aab-8d6c-db7d639152b7>
CC-MAIN-2017-04
http://data-informed.com/why-natural-language-processing-will-change-everything/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947292
1,255
3.546875
4
Through the use of inter-host mirroring and replication they can still provide many of the key features of virtualization, but there are some problems: You need a complete second copy of a virtual machine (VM) on another host, you are limited to only that second host for failover or migration (unless you make multiple copies), and there is CPU consumption required of the second target VM. Essentially, you double your VM count and the resources those VMs require. In a resource-constrained environment, this could be a problem. Vendors are trying to deliver other solutions that keep the cost, simplicity, and performance advantages of local storage solutions but that still provide VM flexibility and efficiency. One approach is the SAN-Less SAN. [ For more on shared vs. local storage, see Is Shared Storage's Price Premium Worth It? ] The SAN-Less SAN is actually another form of shared storage, but the storage is in the physical hosts of the virtual infrastructure instead of on a dedicated shared storage system. Each host is equipped with hard drives or Flash SSD storage, and as data is being stored it is written across each host in the infrastructure--similar to how data is written across the nodes of a scale-out storage cluster. Redundancy is achieved by using a RAID-like data stripping technique so that failure of one host or the drive of one host does not crash the entire infrastructure. As in traditional RAID, the redundancy is provided without requiring a full second copy of data. Also, it is not uncommon for the disks in each node to themselves be RAIDed via a RAID card inside the server. This technique of striping data across physical hosts provides the VM flexibility. All the hosts can get to the VM images, so a VM can be migrated in real time to any host. One downside of the SAN-Less SAN approach is that you lose the performance advantage of pure local storage since parts of the data must be pulled from the other hosts. From a performance perspective, you have essentially created a SAN. As discussed in my article, Building The SAN-Less Data Center, some vendors are merging features of local storage with this SAN-Less technique to bring the best of both worlds. These vendors are keeping a copy of each VM data local to the host on which it is installed in addition to replicating the VM’s data across the host nodes. The value of this technique is that the VM gets local performance until it needs to be migrated. A second step in migration allows the newly migrated VM to have its data rebuilt on its new host, restoring performance. This is especially intriguing if the local data is PCIe Solid State Disk. Of course, nothing is perfect, and the network that interconnects these hosts must be well designed. There is also some host resource consumption as the software that runs the data replication on each host does its work. However, that consumption should not be as significant as a host loaded down with target VMs in the mirroring/replication example discussed in my last column. Finally, the type of hard disks and solid state disks used in the hosts in a SAN-Less SAN must also be carefully considered. Despite the advantages of local storage and SAN-Less SANs, shared storage is far from dead. In my next column, I will look at local storage vs. SANs. Even small IT shops can now afford thin provisioning, performance acceleration, replication, and other features to boost utilization and improve disaster recovery. Also in the new, all-digital Store More special issue of InformationWeek SMB: Don't be fooled by the Oracle's recent Xsigo buy. (Free registration required.)
<urn:uuid:e68424fb-15e6-42f0-bb55-18ff747bd1ad>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/storage-only-looks-san/908683278
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939629
747
2.578125
3
Back to Basics With Unix Filesystems Two things come to mind when somebody asks me what they should know about the layout of their Linux or Unix file system. First is the actual structure, or where things are stored. Second, is whether or not to create multiple file systems or keep everything in one or two mondo devices. The easiest question, about why things are where they are, isn't as easy as it seems. Different operating systems have different ideas about the specifics, but the root-level directories mean nearly the same thing across all platforms. The Linux Standards Base created a Filesystem Hierarchy Standard, or FHS, to attempt to formalize some of the stranger ideas they had. They diverge enough that it's worth mentioning, but first let's look at the traditional viewpoint. As we all know, the file system is structured like a tree, with / at the top. The most common sub-directories within / are seen across most Unix and Linux flavors. The universal directories are: - essential user commands, presumably available when running in single-user mode, when /usr may not exist - device files, which provide access to devices on the system - host-specific configuration information - essential shared libraries, and perhaps even kernel modules on some systems (on others it's in /kern) - mount point for temporary file systems, usually manually created by administrators - add-on or "optional" software, which is rarely used for that purpose - system binaries, where daemons and things an administrator cares about will live - temporary files, created by any application running on the system The LSB FHS added a few, most notably /srv. The purpose of /srv is to store all "data for services" within. This includes any external service, like a Web server, that the server will provide. Not many people follow this standard, but certain Linux distros have conformed recently. What, exactly, is optional software? When talking about Solaris, optional software is anything but the base operating system. Download Sun's compiler suite, it goes in /opt/SUNWspro. Want more GNU software? Use blastwave and get everything in /opt/csw/. The long-held philosophy that you should never muck with the operating system-managed files and directories makes good sense. You want to be sure that every program in /usr/bin runs how people expect it to run in a given operating system. Third party supported software can easily explode if you replace /usr/bin/perl with an unknown version, for example. HP-UX, on the other hand defies all logic by putting both OS and optional software in /opt. Then there's Linux. Each variant provides any of a number of package managers, and they all install packages over top of the base system. This is fine, assuming you're using officially supported packages from the distribution. Frequently, however, people will add repositories to their package manager's configuration to get a wider variety of software. These are not part of the base system, and are not supported by the vendor. Depending on who prepared the package, they can tromp all over your operating system, leaving you in a completely unknown state. It's clear which philosophy I tend to agree with. One, or One Hundred Filesystems? The second point of contention, about how many physical filesystems to allocate, is an even more heated debate. I personally don't understand why-it seems easier than the first issue-because it isn't a one-size-fits-all question. Different servers will have different layouts, and that shouldn't lead to confusion as the first issue does. The debate goes something like this: If I allocate one physical partition and mount / on it, everything is in a single place; if I create a separate /, /var, /tmp, etc, I run the risk of guessing wrong and having to deal with full filesystems. Indeed, a fewer number of filesystems does provide some leeway if you estimate a size incorrectly. On the other hand, if you do fill up a file system that's too encompassing, you may find that your server handles this very poorly. If /, /var, and /tmp are all on the same filesystem on a Web server, the filling of any one of those filesystems means that the other are full as well. Web servers get extremely cranky when they can't write logs to /var (traditionally they are stored there), and likewise other daemons may fail if they cannot write to /tmp. One must also think about backup strategy when allocating filesystems, as not all backup software can work with only portions of a filesystem. In the end, it's highly dependent on your server's purpose. One a Web server, a separate /var is likely required, but on someone's desktop, the need isn't as pressing. It's all a matter of opinion, once the technical issues are resolved. Backups are important, the number of file systems you need to monitor is extremely important, and swap file location is probably the most overlooked consideration. The first blocks allocated on a disk are on the outer edge of the platter, where the disk is spinning fastest. If you allocate a 70GB / filesystem at slice 0, and then toss on a 4GB swap as slice 1, you're shooting yourself in the foot. Swap should always be closest to the edge of the disk for optimal performance, and other filesystems that are accessed frequently should be close by. Aside from swap responding slowly, you also need to make sure that the disk heads aren't constantly seeking across the entire platter to serve requests. I will not purport to know the ideal layout structure. Some Unix variants want just a few hundred MB for /, forcing you to create separate filesystems for everything else. Some middle ground is generally the best solution. As mentioned, I prefer to keep similar classes of machines all the same, and that means maintaining multiple partition maps within system deployment software. Most servers I maintain have a few GBs for /var, a few for /, 6GB for /usr, and 10GB for /opt. Desktops generally have a 10GB /usr and a 10GB /. Your layout will depend on your needs, and don't let anyone tell you different-just be prepared to justify your decisions.
<urn:uuid:09e35127-77a5-477e-8344-99ad546f9fab>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3735401/Back-to-Basics-With-Unix-Filesystems.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932631
1,321
3.375
3
BitDefender has said that children with parents who subject them to verbally aggressive behaviour are more likely to become cyber-bullies. The security company conducted a survey in to 2,300 parents and found a direct correlation between parental behaviour at home and children's behaviour online. 82 per cent o respondents said their children had been exposed to some form of cyber-bullying, with around the same percentage reporting that they only met some of their children's online friends. “Cyber-bullying remains a vivid threat harming children through multiple environments such as e-mail, mobile phone, social media, instant messaging, web sites or blogs,” said BitDefender's chief security researcher, Alexandru Balan. “Whether they are victims or harassers, young people are very affected by cyber-bullying, and some require specialised support to recover from the psychological consequences.” According to BitDefender, the top five cyber-bullying methods are spreading rumours (93 per cent), mockery (83 per cent), insults (75 per cent), threats (63 per cent) and sharing photo's without permission (58 per cent). Want to receive up-to-the-minute tech news straight to your inbox? Then click here to sign up for the completely free PCR Daily Digest and Newsflash email services. You can also follow PCR on Twitter and Facebook.
<urn:uuid:564355d8-df95-4ad8-aec1-1f2252ffec40>
CC-MAIN-2017-04
http://www.pcr-online.biz/news/read/bitdefender-cyber-bullying-starts-in-the-home/029476
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945714
283
2.765625
3
Jin A.,Epidemiology Consultant | George M.A.,University of British Columbia | George M.A.,Child and Family Research Institute | Brussoni M.,University of British Columbia | And 4 more authors. International Journal for Equity in Health | Year: 2015 Background: Aboriginal people in British Columbia (BC) have higher injury incidence than the general population. This report describes variability in visits to primary care due to injury, among injury categories, time periods, geographies, and demographic groups. Methods: We used BC's universal health care insurance plan as a population registry, linked to practitioner payment and vital statistics databases. We identified Aboriginal people by insurance premium group and birth and death record notations. Within that population we identified those residing off-reserve according to postal code. We calculated crude incidence and Standardized Relative Risk (SRR) of primary care visit due to injury, standardized for age, gender and Health Service Delivery Area (HSDA), relative to the total population of BC. Results: During 1991 through 2010, the crude rate of primary care visit due to injury in BC was 3172 per 10,000 person-years. The Aboriginal off-reserve rate was 4291 per 10,000 and SRR was 1.41 (95 % confidence interval: 1.41 to 1.42). Northern and non-metropolitan HSDAs had higher SRRs, within both total BC and Aboriginal off-reserve populations. In every age and gender category, the HSDA-standardized SRR was higher among the Aboriginal off-reserve than among the total population. For all injuries combined, and for the categories of trauma, poisoning, and burn, between 1991 and 2010, crude rates and SRRs declined substantially, but proportionally more rapidly among the Aboriginal off-reserve population, so the gap between the Aboriginal off-reserve and total populations is narrowing, particularly among metropolitan residents. Conclusions: These findings corroborate our previous reports regarding hospitalizations due to injury, suggesting that our observations reflect real disparities and changes in the underlying incidence of injury, and are not merely artefacts related to health care utilization. © 2015 Jin et al. Source Lalonde C.E.,University of Victoria | Brussoni M.,University of British Columbia | Brussoni M.,Child and Family Research Institute | Brussoni M.,BC Injury Research and Prevention Unit | And 4 more authors. PLoS ONE | Year: 2015 Background: Aboriginal people in British Columbia (BC) have higher injury incidence than the general population. Our project describes variability among injury categories, time periods, and geographic, demographic and socio-economic groups. This report focuses on unintentional falls. Methods: We used BC's universal health care insurance plan as a population registry, linked to hospital separation and vital statistics databases. We identified Aboriginal people by insurance premium group and birth and death record notations. We identified residents of specific Aboriginal communities by postal code. We calculated crude incidence and Standardized Relative Risk (SRR) of hospitalization for unintentional fall injury, standardized for age, gender and Health Service Delivery Area (HSDA), relative to the total population of BC. We tested hypothesized associations of geographic, socio-economic, and employment-related characteristics with community SRR of injury by linear regression. Results: During 1991 through 2010, the crude rate of hospitalization for unintentional fall injury in BC was 33.6 per 10,000 person-years. The Aboriginal rate was 49.9 per 10,000 and SRR was 1.89 (95% confidence interval 1.85-1.94). Among those living on reserves SRR was 2.00 (95% CI 1.93-2.07). Northern and non-urban HSDAs had higher SRRs, within both total and Aboriginal populations. In every age and gender category, the HSDA-standardized SRR was higher among the Aboriginal than among the total population. Between 1991 and 2010, crude rates and SRRs declined substantially, but proportionally more among the Aboriginal population, so the gap between the Aboriginal and total population is narrowing, particularly among females and older adults. These community characteristics were associated with higher risk: lower income, lower educational level, worse housing conditions, and more hazardous types of employment. Conclusions: Over the years, as socio-economic conditions improve, risk of hospitalization due to unintentional fall injury has declined among the Aboriginal population. Women and older adults have benefited more. © 2015 Jin et al. Source Desapriya E.,University of British Columbia | Desapriya E.,BC Injury Research and Prevention Unit | Hewapathirane D.S.,University of British Columbia | Romilly D.P.,University of British Columbia | And 2 more authors. Traffic Injury Prevention | Year: 2012 Objective: Previous research indicates that most vehicle occupants are unaware that a correctly adjusted, well-designed vehicular head restraint provides substantial protection against whiplash injuries. This study examined whether a brief educational intervention could improve awareness regarding whiplash injuries and prevention strategies among a cohort of vehicle fleet managers.Methods: A brief written survey was administered prior to, and approximately 1 h after a 30-min presentation on whiplash injury and prevention measures, which was delivered at a regional fleet manager meeting held in British Columbia, Canada (n = 27 respondents).Results: Respondents had low baseline knowledge levels regarding the causes, consequences, and prevention of whiplash. Following the presentation, however, respondents improved awareness in all of these domains and, most important, reported an increased motivation to implement changes based on this newly acquired knowledge.Conclusions: These results indicate that improved education practices and social marketing tools are potentially valuable to increase awareness among relevant stakeholders. © 2012 Copyright Taylor and Francis Group, LLC. Source Teschke K.,University of British Columbia | Harris M.A.,Occupational Cancer Research Center | Reynolds C.C.O.,University of British Columbia | Winters M.,Simon Fraser University | And 10 more authors. American Journal of Public Health | Year: 2012 Objectives: We compared cycling injury risks of 14 route types and other route infrastructure features. Methods: We recruited 690 city residents injured while cycling in Toronto or Vancouver, Canada. A case-crossover design compared route infrastructure at each injury site to that of a randomly selected control site from the same trip. Results: Of 14 route types, cycle tracks had the lowest risk (adjusted odds ratio [OR] = 0.11;95% confidence interval [CI] = 0.02, 0.54), about one ninth the risk of the reference: major streets with parked cars and no bike infrastructure. Risks on major streets were lower without parked cars (adjusted OR = 0.63;95% CI = 0.41, 0.96) and with bike lanes (adjusted OR = 0.54;95% CI = 0.29, 1.01). Local streets also had lower risks (adjusted OR = 0.51;95% CI = 0.31, 0.84). Other infrastructure characteristics were associated with increased risks: streetcar or train tracks (adjusted OR = 3.0;95% CI = 1.8, 5.1), downhill grades (adjusted OR = 2.3;95% CI = 1.7, 3.1), and construction (adjusted OR = 1.9;95% CI = 1.3, 2.9). Conclusions: The lower risks on quiet streets and with bike-specific infrastructure along busy streets support the route-design approach used in many northern European countries. Transportation infrastructure with lower bicycling injury risks merits public health support to reduce injuries and promote cycling. Source Tetroe J.M.,Canadian Institutes of Health Research | Graham I.D.,Canadian Institutes of Health Research | Scott V.,BC Injury Research and Prevention Unit Journal of Safety Research | Year: 2011 Introduction: The concept of knowledge translation as defined by the Canadian Institutes for Health Research and the Knowledge to Action Cycle, described by Graham et al (Graham et al., 2006), are used to make a case for the importance of using a conceptual model to describe moving knowledge into action in the area of falls prevention. Method: There is a large body of research in the area of falls prevention. It would seem that in many areas it is clear what is needed to prevent falls and further syntheses can determine where the evidence is sufficiently robust to warrant its implementation as well as where the gaps are that require further basic research. Conclusion: The phases of the action cycle highlight seven areas that should be paid attention to in order to maximize chances of successful implementation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved. Source
<urn:uuid:56855770-a65c-43da-add7-bc92e1a0f218>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/bc-injury-research-and-prevention-unit-654032/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00008-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926899
1,796
2.546875
3
How To Use Web Search Engines How to Find Information About People on the WebThe web has made it much easier to get information about people, including old friends and classmates, old boyfriends/girlfriends, ancestors, celebrities, politicians, public figures, criminals, and even your next-door neighbor. There are various opinions about this new flow of personal information. Most of us seem quite pleased to be able to get the information we need, but we're not necessarily happy if others can get the goods on us! What follow are a few tips about finding people via the web. It can be harder than you think. Although many people access the internet on a daily basis now, they often use screen names that are known only to their friends. Other people, particularly women, tend to change their last name once or more often during the course of their lives. Who is relatively easy to find on the web? Step 1: Enter The Name Although this will often be a waste of time, you might as well begin with the quick and easy type of search: type the full name of the person you're seeking into a search engine. When you do this, the most likely outcome is that you will get lots of hits on people who are NOT the person you're seeking. Many, many people have the same first and last names. If the names are unusual, though, you might get lucky. Ditto if the person you seek already has a notable web presence, with lots of webpages citing him or her for some achievement. You have a slightly higher chance of getting good results if you enter the first and last name as a phrase, surrounded by quotation marks. The middle name usually isn't important, unless the person typically uses her middle name. If the person typically uses his initials instead of the first and middle name, make sure you search as a phrase when looking him up. Warning: Entering names will frequently bring upon many hits on genealogical records. Instead of getting info on a living person, you'll find yourself staring at data about someone who lived and died a hundred years ago. Although it's great that so much genealogical data is available via the web, these webpages can hopelessly muddy your chances of finding a living person using only her name. Step 2: Enter The Domain Name Many search engines will list a name that also appears as a web domain name among its top results. So if you suspect your friend may be active on the web, you can also try a search using his first and last names run together as one word. Most people's domain names tend to use both first and last name: e.g., firstnamelastname.com. On rare occasions, you might find that your friend has registered a domain using only her last name. Example: if you do a Google search on "Monash" the top two hits on Google will be monash.edu in Australia and monash.com, which the domain name of this website. This site is owned by Curt Monash, who registered his last name as a domain name many years ago. Public figures, web geeks, and small business owners are more likely to have registered their names as web domains than the average person-on-the-street. Maintaining a domain costs money, and running a website requires knowledge of web design and programming. Step 3 Refine Your Search: Remember that search engines are simply software programs who cannot anticipate your needs. To a search engine, a name is just a collection of letters. All it cares about (usually) is matching those letters with all the other identical arrays of letters in its database. For example, if you enter the name "James Johnson" in Google, you will get 7,250,000 hits. Therefore, in most cases, you will need to provide the search engine with more information. How can you narrow the search? It often helps to envision the result you're looking for. If you could find a page on the web that mentioned the person you're looking for, what would it say? If you think the person might be mentioned in a webpage that also refers to her hometown, add that, if you know it. If the person is interested in a particular career or activity, use that activity as one of your search terms. For me, a search on "Linda Barlow" and "novelist" bring up pages that rule out most of the zillion other "Linda Barlows" in the world. If you happen to know where the person works, or even just what his profession is, try using the business or the profession as another keyword. Most businesses have websites, although not all employees are listed on such sites. But if your friend owns his or her own business, they probably have a website. If your friend is one of the executives of a public company, he or she may be listed in the company's tax filings or in press releases or corporate reports. Did you and your friend attend the same school or college together? Try to get information through the website of the school or college. If your friend is not listed anywhere on these sites, try the various class reunion websites, like classmates.com. Is your friend a member of a professional organization that has a web presence? Has he or she written a book, an article, or been cited in one? More and more books and articles are published to the web every month. When The Info is Correct, but Over-abundant If you are looking for a celebrity, a public figure, or someone who is extremely active on the internet, the above Step 1 and/or Step 2 are usually enough to find real information about that person. In fact, you'll probably find yourself confronted with far too much information. You'll need a way to winnow it down. In the case of celebrities, a simple search on their names is likely to produce more results than you want. You're likely to find fan sites, which can be excellent resources, but beware of the ones that offer nude pictures. Generally, the offers of clothed pictures are legitimate and the offers of nude ones are fakes, come-ons to try to get you to buy a subscription to a porn site. (In case you haven't learned it already, the many varieties of sexual content on the internet are rarely offered free of charge). To narrow it down, put in both the celebrity's name and the name of a movie or song or book or TV show they're associated with. Multiple titles are an even better way to find their sites. This works particularly well for authors. If the celebrity is an athlete in a major US team sport, several big sports sites keep pages on every single player, including links to up-to-date news. These include Yahoo, ESPN, Sportsline, as well as the sites for leagues such as the NBA and NFL. If the person you're looking for is careful enough about his privacy to have removed his personal info from various websites and databases, you may find it difficult to get any information about him. You can pay to access special databases at sites like peoplefind.com and peoplesearch.com. Anything that is a matter of public record is probably recorded in an electronic database. Not all such databases are web-accessible, and those that are usually charge a fee. What is available is often determined by individual state laws or policies. What you can find from the state of Iowa might be quite different from what you can find from the state of New York. Information that is likely to be contained in public records includes birth and death certificates, marriage certificates, divorce judgments (sometimes), home purchase and sales information, professional credentials verification, court and legal proceedings (not always), arrest records, bankruptcy filings, and other events that are recorded by public officials, state and federal divisions of vital statistics, and other public entities. Don't Forget the Phone Book Telephone books (white pages and yellow pages) are widely available now on the web. This means that if you know someone's name and what town they live in, you can access their address, phone number, even their age. There are also databases ("reverse look-up") that allow you to type in a phone number and get the name and address of the person who owns the phone. If you know the address of the person you are seeking, you can easily get a map of his town, street, and neighborhood on the many web map sites. Some maps are precise enough to show the exact location of his home. Try Yahoo! PeopleSearch, which offers basic phone book style look-up and links you to a site that can execute background checks (for a fee). The same things that you can find out about other people, other people can find out about you. Here's a list of some of the databases someone might access when researching you: If you are concerned about your privacy, you can ask to have your personal information removed from web databases. It is difficult to remove all trace of yourself, though. Some events and transactions are legitimately matters of public record, and more of these public records are becoming available every year via the web. The Spider's Apprentice was conceived and written by Linda Barlow, who maintains this site for Monash Information Services. Copyright 1996-2004. All rights reserved. Updated: 05/12/04
<urn:uuid:2474647d-331b-483c-8182-144cfc9400db>
CC-MAIN-2017-04
http://www.monash.com/people.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00402-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963633
1,923
2.53125
3
Programming Using the CCA API The API we’re going to use is the Common Cryptographic Architecture (CCA). There are more than 80 function calls, but we’ll examine only a few of them that let you do authentication and encryption. You’ll want to get a copy of the Cryptographic Services ICSF Application Programmer’s manual for future reference. On each function call, the first four parameters are always the same. These are variables for the return code, reason code, exit data length, and exit data. The first three are full-word binary values (S9(8) in COBOL). Normally, you won’t have exit data, so code the exit data length with a value of zero, and provide a character exit data variable of 4 bytes of nulls. The return code and reason code variables will be filled in on return from the function call. Normally, you can ignore the reason code if the return code is zero. However, there are times when you can receive a non-zero reason code with a zero return code. The reason code can be important. For example, a reason code of 10000 means that your key has been re-encrypted using a new master key. Depending on how you’ve written your application, this may mean that you need to save the new encrypted key value for future use. The first thing you may want to do in any application you write is to determine whether the ICSF hardware is present and working. ICSF doesn’t support a status call. One method to accomplish this task is to call CSNBRNG. This function will generate a random bit string, but the real value here is that the function doesn’t require any keys or difficult setup to use and will tell you if the hardware is functioning. If you receive a return code greater than zero, for example 16, you can be sure that either the hardware isn’t present, the master keys are not established, or the CSF system proc isn’t executing. In this case, there’s no point in continuing. The CSNBRNG function requires only two additional parameters: the key form and the returned random bit string. The 8-byte key form parameter can be either “RANDOM” or “ODD.” Many of the functions require a key value parameter. This is always provided as a 64-byte area and can consist of either a key label or key token. A key label is the name of a key you’ve defined and stored in the key data set using a TKE terminal or through API function calls. The key label must be blank padded to the full 64 bytes. A key token is also 64 bytes, but is formatted by the CCA API interface and can contain a working key encrypted by the master key, or an exported key encrypted by a key encrypting key. Key encrypting keys are special keys in the hardware used to import and export key values into and out of the hardware. For example, an exported key can be used to transfer a key value from one ICSF hardware to another where each have different master keys. Key encrypting keys are also called importer and exporter keys. The key can be a key label that’s already defined or a working token value. In either case, the key must have been defined for authentication use and not for encryption. Strictly speaking, a key is a key, but ICSF enforces the use of a key for a specific purpose. For example, when defining a key for single DES authentication, use the MAC type. For Triple-DES authentication, use the DATAM type. For encryption, use the DATA type for all key lengths. To create a MAC for some data, you can call the CSNBMGN function. If you just need to create a MAC for data in memory, then you can do this with one call to the routine. Two of the parameters are the data and length of the data. The rule count should be three. The rule array depends on the algorithm you choose and the form of the returned MAC you desire. Each entry in the rule array consists of 8 bytes. To select DES, use “X9.9- 1”; for Triple-DES, use “X9.19OPT.” Often, the form of the code is 9 bytes of hex characters with a blank in the middle. For this option, use “HEX-9.” For example, to calculate a MAC for a string of data using Triple-DES, you might code something like the COBOL example in Figure 4. You can use other languages, such as Assembler, PL/1, or C. WS-RULE2, ONLY, indicates you’re calling the function only once with all the data. The WS-CHAINING variable is used and maintained internally. You use this variable when you must call the CSNBMGN function multiple times to calculate only one MAC. The variable provides continuity for CSNBMGN to keep track of intermediate results and thus cannot be modified between calls. On the first call, you always start with the variable containing all NULL values. The resulting MAC is returned in the WS-MAC variable, provided you receive a zero return code.
<urn:uuid:95e04b63-75d9-4bec-8926-1f57a4d1d201>
CC-MAIN-2017-04
http://enterprisesystemsmedia.com/article/implementing-icsf-hardware-cryptography-on-z-os/3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.865563
1,110
2.578125
3
Draft guidelines issued for using SCAP to automate security validation - By William Jackson - May 07, 2009 The National Institute of Standards and Technology has released a draft of its guidelines for using the Security Content Automation Protocol (SCAP) for checking and validating security settings on IT systems. SCAP is a NIST specification for expressing and manipulating security data in standardized ways. It can enumerate product names and vulnerabilities, including software flaws and configuration issues, identify the presence of vulnerabilities and assign severity scores to software flaws. Managing the configurations and security settings of information systems is a challenging job to do manually because the of size, complexity and constant changes in the systems. A wide variety of hardware and software platforms typically are used for many purposes with differing levels of risk in a single environment. The platforms and the threats to them are constantly evolving. “Organizations need a comprehensive, standardized approach to overcoming these challenges, and the Security Content Automation Protocol has been developed to help provide such an approach,” NIST says in Special Publication 800-117, titled "Guide to Adopting and Using the Security Content Automation Protocol." “SCAP can be used for maintaining the security of enterprise systems, such as automatically verifying the installation of patches, checking system security configuration settings, and examining systems for signs of compromise.” Several organizations created and maintain the SCAP components, including Mitre Corp., the National Security Agency and the Forum for Incident Response and Security Teams. NIST provides SCAP content such as vulnerability and product enumeration identifiers via the National Vulnerability Database. All database content and the high-level SCAP specification are freely available from NIST. Nongovernment organizations also create and make SCAP content available. The specifications that make up SCAP are: - Common Vulnerabilities and Exposures, a dictionary of names for publicly known security-related software flaws. - Common Configuration Enumeration, a dictionary of names for software security configuration issues, such as access control settings and password policy settings. - Common Platform Enumeration, a naming convention for hardware, operating systems and software. - Extensible Configuration Checklist Description Format, an Extensible Markup Language specification for structured collections of security configuration rules used by operating systems and applications. - Open Vulnerability and Assessment Language, an XML specification for exchanging technical details on how to check systems for security-related software flaws, configuration issues and patches. - Common Vulnerability Scoring System, a method for classifying characteristics of software flaws and assigning severity scores. Vendors have begun incorporating SCAP in tools for scanning software settings and configuration and, in July, the Office of Management and Budget required agencies to use SCAP-validated products to check compliance with the Federal Desktop Core Configuration settings for government computers running Windows XP and Windows Vista. NIST also has released a revised version of testing requirements for security products using SCAP, describing requirements products must meet to achieve SCAP validation. Draft NIST Interagency Report 7511, titled "Security Content Automation Protocol Validation Program Test Requirements, Revision 1," was written primarily for laboratories that are accredited to perform SCAP product testing, vendors interested in receiving SCAP validation for their products, and agencies and integrators deploying SCAP tools. The NIST guidelines offer these recommendations for using SCAP: - Organizations should use security configuration checklists that are expressed using SCAP. This documents desired security configuration settings, installed patches, and other system security elements in a standardized format. SCAP-expressed checklists are available relevant to specific software, and can be easily customized to meet specific organizational requirements. - Organizations should use SCAP to demonstrate compliance with high-level security requirements. NIST has created mappings between Windows XP security configuration settings and the high-level security controls in NIST Special Publication (SP) 800-53, which supports Federal Information Security Management Act. The mappings are embedded in SCAP-expressed checklists, which allows SCAP-enabled tools to automatically generate assessment and compliance evidence. - Organizations should use standardized SCAP enumerations — identifiers and product names. - Organizations should use SCAP for vulnerability measurement and scoring. SCAP enables quantitative and repeatable measurement and scoring of software flaw vulnerabilities across systems through the combination of the Common Vulnerability Scoring System (CVSS), CVE, and CPE. - Organizations should acquire and use SCAP-validated products. Whenever possible, software developers should ensure that their software provides the ability to assess underlying software configuration settings using SCAP, rather than relying on manual checks or proprietary checking mechanisms. NIST also encourages IT product vendors to participate in SCAP content development because of their depth of knowledge and their ability to speak authoritatively about the most effective and accurate means of assessing their products’ security configurations. Comments on the draft guidelines should be sent by June 12 to email@example.com, with "Comments SP 800-117" in the subject line. William Jackson is a Maryland-based freelance writer.
<urn:uuid:b54ba1b4-764e-42d2-8e15-16a0a6b63ce3>
CC-MAIN-2017-04
https://gcn.com/Articles/2009/05/07/NIST-SCAP-guidelines.aspx?Page=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00064-ip-10-171-10-70.ec2.internal.warc.gz
en
0.881053
1,036
2.734375
3
By application area, crop-based end-uses of pesticides are likely to maintain the fastest growth in terms of volume consumed and value demand during the similar period, and retain the leading raking in terms of largest application area. The worldwide consumption of Pesticides in Crop-Based applications is the largest, estimated at 1.9 million tons in 2014. Principal sources of oils derived from seeds include palm, soybean, rapeseed (canola), sunflower seed, peanut, cottonseed, palm kernel, coconut and olive. Utilization of pesticides for protecting oilseed crops from a variety of infestations constitutes the fastest growing application area, with a 2015-2020 volume CAGR of 6.6% and value CAGR of 7.3%. Production levels of oil sources have a direct correlation with consumption of oils derived from them, with consumers in some regions being highly specific in regard to the cooking oil to be used. Pesticides have played a positive role in controlling various forms of diseases caused through a range of sources in promoting growth of oilseed-bearing crops. Increased demand for restricting damage to crops has necessitated greater use of various kinds of pesticides for controlling or eliminating weeds, insects and fungi. Several developments across various formats of pesticides have been successful in countering highly resistant forms of infestations, and this trend is likely to continue into the future, too. Factors driving the markets for pesticides applications in crops include improving crop yields to meet the requirements of increasing population and decreasing arable lands. On the other hand, regulatory authorities such as EPA (Environment Protection Agency) frequently come up with stringent laws related to curbing pesticide use for alleviating environmental damage and increasing consumer awareness about pesticide consumption, which is expected to be instrumental in slowing down growth in demand for synthetic pesticides. In addition, several highly toxic pesticides have either been banned or are in the process of being phased out, thereby opening new avenues for growth in demand for Biopesticides This report provides segmentation based on application type viz Herbicides, Insecticides, Fungicides and Other Synthetic Pesticides, while categories of Biopesticides include Bioherbicides, Bioinsecticides, Biofungicides and Other Biopesticides. Synthetic pesticides dominate the global scenario in terms of volume consumption and value demand, though Biopesticides are slated to register faster growth in both these parameters over the 2015-2020 analysis period. In terms of regional demand, North America would continue to be dominant for consumption of pesticides in oilseed applications, while Asia-Pacific would record fastest growth. Growing population in developing regions that require adequate sustenance would propel the growth of pesticide use in this key application area. Major companies covered in the report include American Vanguard, Arysta LifeScience, BASF SE, Bayer CropScience, BioWorks, Cheminova, Chemtura Corp, Chr Hansen, Dow AgroSciences, DuPont, FMC Corp, Isagro SpA, Ishihara Sangyo Kaisha, Makhteshim Agan, Marrone Bio Innovations, Monsanto, Natural Industries, Novozymes A/S, Nufarm Ltd, Sumitomo Chemical, Syngenta AG and Valent Biosciences. The key strategies used by companies in this market are new product registrations, acquisitions to enter new markets. The focus in this industry should be on Integrated pest management techniques and sustainable practices for an improved yield without harming the environment. In this report we offer, Why should you buy this report?
<urn:uuid:8de61ea7-c051-4a8b-b693-4e5e51eb9e5e>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/global-oil-seed-crop-protection-market-growth-trends-and-forecasts-2014-2019-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00366-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923376
721
2.765625
3
Teleporters - Star Trek weapons and gadgets According to The Making of Star Trek, Star Trek creator Gene Roddenberry's original plan did not include transporters, instead calling for characters to land the starship itself. However, this would have required unfeasible and unaffordable sets and model filming, as well as episode running time spent while landing, taking off, etc. The next idea was the shuttlecraft; however when filming began, the full-sized shooting model was not ready. Transporters were devised as a less expensive alternative.
<urn:uuid:74e22160-1eac-45cc-9cb3-d47be3ad6d28>
CC-MAIN-2017-04
http://www.computerweekly.com/photostory/2240107938/Photos-Star-Trek-weapons-and-gadgets-Engage/1/Teleporters-Star-Trek-weapons-and-gadgets
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.978069
112
2.65625
3
Research from Kaspersky Lab shows malware on social networking sites such as Facebook and MySpace is 10 times more successful at infecting users than e-mail-based attacks. Enterprises and users need to adopt sound security practices to deal with the problem. hackers are using sites such as Facebook, LinkedIn and MySpace to launch attacks is no revelation. New statistics, however, show just how on social networking sites its "Malware Evolution 2008" report, published in February 2009, Kaspersky revealed that malicious code distributed via social networking sites has a success rate of 10 percent in terms of infections, making it 10 times more potent than malware distributed via e-mail. 2008 we increased the collection of malicious files relating to social networks by approximately 26,000," said Stefan Tanase, a security researcher for the Kaspersky Lab Global Research and Analysis Team. "In 2008 alone we processed more of those samples than in the total of all years prior to 2008, making the growth rate exponential. Our collection of malicious reached 43,000 at the end of last year." said he expects that number to hit 100,000 by the end of 2009. According to 800 new variants of the notorious Koobface virus were discovered in March alone. Social networking sites have also been hit by malware hidden legitimate third-party applications. particular site is more dangerous than others, Tanase said. Different sites are popular in different regions of the world, and attackers follow the users. very hard for social networking sites to do better, " he said. "Their business is about having an easy-to-use Website, so that everyone can join. The problem is that usability and security don't really go hand in hand most of the time." enterprises, that means developing policies to control the use of social networks by employees. Organizations can instruct employees not to mention the company name on social networking sites, for example, and can couple that with education on configuring privacy settings and general Web safety. access to social networking site[s] is not going to work in the long run," said Chenxi Wang, an analyst with Forrester Research. "As younger employees join the work force, they increasingly expect to have access to social networking sites from work, [so] having such a restrictive policy will damage the company's [prospects of attracting] employees and ultimately may become a competitive advantage [to competitors]." for basic security advice, Tanase advised users to limit the code executed inside their browsers to trusted sources only and to make sure the operating system, anti-virus application and other software are fully patched and talking about social networks, even though they are made of users wandering throughout cyber-space, we should not forget we're actually talking about real people, actual human beings that have friends and relationships," he said. "These relationships are usually based on trust, so the bad guys are trying to exploit this trust."
<urn:uuid:74af0cee-8290-4cc6-8ef2-e6f90f558537>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Security/Social-Networks-10-Times-as-Effective-for-Hackers-Malware-892010
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00210-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931155
641
2.609375
3
U.K. Prime Minister Gordon Brown told business leaders to prepare for a technological revolution and embrace the opportunities available in tackling climate change. Speaking at the Prince of Wales Business Summit in London today, Brown said that the fight against climate change could help create jobs, increase exports and "liberate the creativity and innovation of British companies and British communities." "The overall added value of the low carbon energy sector by 2050 could be as high as $3 trillion per year worldwide, employing more than 25 million people," Brown said. "If Britain maintains its share of this growth there could be over a million people employed in our environmental industries within the next two decades. "So building our own low carbon economy offers us the chance to create thousands of new British businesses and hundreds of thousands of new British jobs," continued Brown. The Prime Minister listed a number of areas in which the U.K. is driving progress, from research on carbon capture and storage to financing mechanisms to help developing nations cut their greenhouse gas emissions. Other measures, such as encouraging supermarkets to charge for plastic bags, are also having an impact, he said. The Climate Change Bill contains proposals to make the U.K. the first country to put into legislation a statutory cap on emissions, while a separate White Paper aims to create regulation that promotes innovation and give priority to low-carbon and sustainable products in procurement policies. The Prime Minister said his vision was for a green economy providing new jobs, powered by business innovation and driven by changes in consumer behavior.
<urn:uuid:2788530d-5eba-4c45-b350-359e420fed53>
CC-MAIN-2017-04
http://www.govtech.com/policy-management/PM-Brown-Calls-for-Green-Technology-Revolution.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958942
308
2.59375
3
Data Science and Big Data Analytics Discover how to use big data and the Data Analytics Lifecycle to address your business challenges. In this course, you will gain practical foundation level training that enables immediate and effective participation in big data and other analytics projects. You will cover basic and advanced analytic methods and big data analytics technology and tools, including MapReduce and Hadoop. The extensive labs throughout the course provide you with the opportunity to apply these methods and tools to real world business challenges. This course takes a technology-neutral approach. In a final lab, you will address a big data analytics challenge by applying the concepts taught in the course to the context of the Data Analytics Lifecycle. You will prepare for the Proven™ Professional Data Scientist Associate (EMCDSA) certification exam, and establish a baseline of Data Science skills.
<urn:uuid:59494449-781a-4983-ade8-15dc2fa2fb94>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/117166/data-science-and-big-data-analytics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.864968
168
2.671875
3
As computers become more human-like, researchers are finding evidence of computing-like behavior in the animal kingdom. A UK scientist discovered that certain species of jellyfish employ a supercomputing algorithm to locate food. According to the new study, published in the Journal of the Royal Society Interface, the barrel jellyfish demonstrates movement patterns that are consistent with a type of supercomputing algorithm, called “fast simulated annealing,” described in the paper as “a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space.” In the mathematical world, this kind of algorithm is usually used in tandem with a powerful computer to find optimal solutions to complex problems in a relatively short time span. For the barrel jellyfish, the same strategy is used to locate the richest concentrations of plankton, its preferred food source. It also helps the species zero in on the olfactory trails emitted by more distant prey. Such a sophisticated search strategy has never been observed before in nature, according to the study’s lead author Andy Reynolds, a scientist at Rothamsted Research, an agricultural research center in the UK. However, less complex mathematical patterns have been identified, the most common being the “Lévy walk” – named after French mathematician Paul Lévy, best known for his role in advancing probability theory. Reynolds described the distinction in an interview with LiveScience. “A Lévy walk is random walk in which frequently occurring small steps are interspersed with more rarely occurring longer steps, which in turn are interspersed with even rarer, even longer steps and so on,” he said. Species that rely on Lévy walks to find prey include sharks, penguins, honeybees, ants, turtles and even human hunter-gatherers. Instead of using a consistent Lévy walk approach, barrel jellyfish also employ a bouncing technique to locate prey. These large jellies ride the currents to a new depth in search of food. If a meal is not located in the new location, the creature rides the currents back to its original location. “In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents,” Reynolds observes. Another conclusion of the author is that the family of Lévy walkers is much larger than previously thought, extending to “spores, pollens, seeds and minute wingless arthropods that on warm days disperse passively within the atmospheric boundary layer.” There is a reason why the jellyfish benefits from this optimized search algorithm, and that is because it requires a lot of plankton to become satiated. Reynolds explains: “A Lévy search is highly effective in finding the next meal, when any meal will do. Fast simulated annealing, on the other hand, takes the forager to the best possible meal. This is what makes jellyfish special — they are very discerning diners, unlike bony fish, penguins, turtles and sharks, which are just looking for any meal.”
<urn:uuid:7f56e1a8-e752-4dcb-9ca9-ef7d8b46b0f5>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/14/jellyfish-use-novel-search-strategy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950538
658
4.03125
4
In this post we will revisit an old friend that is used quite often in all of our modern networks, Dynamic Host Configuration Protocol (DHCP). The DHCP process allows a server to automatically provision IPv4 addresses, along with other important configurations, to clients as they boot up. The following processes take place when DHCP is implemented. The client broadcasts messages on the physical subnet asking for IP configuration information and to discover available DHCP servers. If required, the network administrators can configure a local router to forward DHCP packets to a DHCP server located on a different subnet. This client-implementation creates a User Datagram Protocol (UDP) packet with the broadcast destination of 255.255.255.255, or the specific subnet broadcast address. A DHCP client can also request its last-known IP address. If the client remains connected to a network for which this IP is valid, the server might grant the request. Otherwise, it depends whether the server is set up as authoritative or not. An authoritative server will deny the request, making the client ask for a new IP immediately. A non-authoritative server simply ignores the request, leading to an implementation-dependent timeout for the client to give up on the request and ask for a new IP address. When a DHCP server receives an IP lease request from a client, it reserves an IP address for the client and extends an IP lease offer by sending a DHCPOFFER message to the client. This message contains the client’s MAC address, the IP address that the server is offering, the subnet mask, the lease duration, and the IP address of the DHCP server making the offer. A DHCP lease duration is the amount of time that the DHCP server grants permission,to the DHCP client, to use a particular IP address. A typical server allows its administrator to set the lease time. The server determines the configuration based on the client’s hardware address as specified in the Client Hardware Address (CHADDR) field. Then, the server specifies an IP address in the Your IP Address (YIADDR) field. Depending on the required implementation, the DHCP server may use one of three methods of allocating IP-addresses. - Dynamic Allocation: With this method, a network administrator assigns a range of IP addresses to the DHCP server, and each client computer on the LAN has its IP software configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period. This process allows the DHCP server to reclaim and then reallocate IP addresses that are not renewed. This is considered to be a dynamic re-use of IP addresses. - Automatic Allocation: With this method, a DHCP server permanently assigns an available IP address to a requesting client from the range of a pool of IP addresses that have been defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had. - Static Allocation: With this method, a DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually configured in a data base by a network administrator. Only requesting clients with a MAC address listed in this table will be allocated an IP address. This feature, which is not supported by all routers, is usually called Static DHCP Assignment. A client can receive DHCP offers from multiple servers, but it will accept only one DHCP offer and broadcast a DHCP request message. Based on the Transaction ID field in the request, servers are informed whose offer the client has accepted. When other DHCP servers receive this message, they withdraw any offers that they might have made to the client and return the offered address to the pool of available addresses. When the DHCP server receives the DHCPREQUEST message from the client, the configuration processes enters its final phase. The acknowledgement phase involves sending a DHCPACK packet to the client. This packet includes the lease duration and any other configuration information that the client might have requested. At this point, the IP configuration process is completed. The protocol expects the DHCP client to configure its network interface with the negotiated parameters. In my next few posts, I will continue discussing the DHCP process and focus in on some of the special functions of DHCP servers, along with some of the security issues that must be addressed. Author: David Stahl
<urn:uuid:91ddbb18-fed6-46c2-a421-309ccb0fe649>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/03/18/dhcp-implementation-processes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00293-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912797
916
2.78125
3
Virtual reality may seem like the stuff of science fiction, but it’s increasingly reality–and being used in new ways, including in education. And that means institutional IT departments need to begin thinking about how to integrate it, and other new technologies, into their operations, says Emory Craig, director of e-learning and instructional technology at College of New Rochelle in New York. Craig’s role at New Rochelle puts him in charge of both making sure online course systems are operational and integrating emerging technologies into classrooms. That means managing the learning management system and working with staff and faculty to make sure things are running smoothly. This process could involve setting up lecture capture, for example, and arranging for the digital storage of those lectures. Or, it could mean working with faculty to establish split classrooms, where lectures are placed online and in-class time is used for workshops, questions and special projects. “They’re looking for things that are user friendly, very simple to use, and that have a great deal of support,” Craig says of the faculty at New Rochelle. That’s why New Rochelle’s IT department recently moved to a cloud-native (i.e., applications built specifically for cloud platforms) learning management system; to help give faculty the tools they need to support technology-forward teaching. But Craig’s role also means working with different players at the college to introduce new technologies for online and electronic learning. Virtual reality increasingly plays a role here. Many people think of virtual reality as a way to make video games more entertaining, or perhaps something to be used only by the richest or more technologically connected people. But it’s increasingly a tool that can be accessed by all kinds of people, Craig says, with materials as easily accessible as a $15 viewer made of cardboard. “This is the future of learning, media and entertainment,” Craig says of developments in virtual reality. “It is going to transform everything we do.” And it shouldn’t be all that surprising given that virtual reality has been named a key trend for anyone involved in IT Infrastructure these days. While it’s clearly an end user computing trend, it will fall to IT professionals to ensure the proper infrastructure is in place to support it. According to Craig, there are three key ways this IT trend can be integrated into the education industry. The first is to provide realistic, deeply immersive experiences. “It opens up the opportunity to not just read a text about something or watch a film about something, but to step into an environment,” he says. It could be used, for example, to insert students into historical situations. Additionally, virtual reality can be used to provide training based on more specific scenarios such as a particular natural setting or medical procedure. “I already have nursing faculty talking about how they could use something like this to help train their nursing students,” Craig says. Finally, virtual reality can add new powers to documentary filmmaking, another powerful teaching tool, Craig says. “It’s one thing to watch something on a screen. It’s another thing to step inside a screen and be inside an experience,” he says. Craig is well aware of the cost restraints many universities are working under, and how that might affect the willingness of institutions to experiment with new technology like virtual reality or split classrooms. But as costs lower and technology advances, the tools for virtual reality on a smaller scale become increasingly accessible. Craig currently has his students use cardboard viewers for a course he’s currently teaching on new media and society, for example. It provides them–and the college–with an easy way to see what kinds of things are capable with this technology, and where it might be worthwhile to invest in the future. “For a lot of institutions, I think it’s hard to put the financial commitment into it until you see exactly what you’re going to do with it,” Craig says. “It becomes kind of a quick win.” The same approach could be applied to any IT department. And making larger-scale choices, like choosing a cloud-based learning management system that allows for the integration of emerging technology, can help an institution or IT department take a big leap forward. “That’s a really exciting thing, and something that I’d love to see being rolled out more,” Craig says of the moves an IT department can take to make classrooms more engaging, interactive and collaborative. “We’re in a modern era where it’s increasingly difficult to say ‘I’m just going to stand in front of a room and I’m going to tell you what I know.’” Terri Coles is a freelance writer based in St. John’s, NL. Her work covers topics as diverse as food, health and business. If you have a story you would like profiled, contact her at email@example.com. The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
<urn:uuid:964c8a6f-b0c5-4848-b6fb-ed48c21ad2a4>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2016/02/23/it-innovators-eying-its-influence-in-education/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958918
1,066
2.828125
3
Looking to address harsh criticism from its own inspector general that has been painfully slow in getting important technologies out of the lab and into commercial applications, NASA today said it has opened a revamped Technology Transfer Portal which aims to streamline the way the space agency handles that business. IN THE NEWS: What's up with these solar storms? Not unlike its efforts of the past, NASA said the new tech portal simplifies and speeds access to the agency's intellectual property portfolio, much of which is available for licensing. The site features a searchable, categorized database of NASA's patents, a module for reaching out to a NASA technology transfer specialist and articles about past successful commercialization of NASA technology. Historical and real-time data for NASA's technology transfer program also are available. "One of NASA's highest priority goals is to streamline its technology transfer procedures, support additional government-industry collaboration and encourage the commercialization of novel technologies flowing from our federal laboratories," said NASA Administrator Charles Bolden in a statement. "One way NASA can streamline and increase the rate of aerospace technology transfer is through tools like NASA's Technology Transfer Portal." Examples of the types of technologies NASA has licensed in the past include devices designed to operate remotely and with limited servicing in the harsh environment of space, and strong and lightweight materials that can withstand the extreme temperatures of supersonic flight or space travel. NASA has designed lifesaving techniques, protocols, and tools for use when orbiting the Earth and the nearest doctor is more than 200 miles below. Closed environment recycling systems, as well as energy generation and storage methods also have useful applications here on Earth. A report released in March by NASA Inspector General Paul Martin that assessed NASA's technology commercialization efforts and said among other things that decreased funding and reductions in personnel have hindered NASA's technology transfer efforts. Specifically, funding for technology transfer has decreased from $60 million in fiscal year (FY) 2004 to $19 million in FY 2012 while the number of patent attorneys at the Centers dropped from 29 to 19 over the same period. As a result, patent filings decreased by 37%. Martin's report cites a number of "missed opportunities to transfer technologies from its research and development efforts and to maximize partnerships with other entities that could benefit from NASA-developed technologies." For example: • Algorithms designed to enable an aircraft to fly precisely through the same airspace on multiple flights - a development that could have commercial application for improving the autopilot function of older aircraft - was not considered for technology transfer because project personnel were not aware of the various types of innovations that could be candidates for the program. • NASA personnel failed to capitalize fully on the Flight Loads Laboratory at Dryden Flight Research Center - a unique facility used for aeronautic testing services - because they did not recognize the facility as a transferable technology and consequently had not developed a Commercialization Plan to manage customer demand. • The NASA project team for a precision landing and hazard avoidance project was not aware of NASA's technology commercialization policy and had not conducted a commercial assessment or developed a Commercialization Plan for the project. However, team members provided us with several examples from their work that could be considered new technologies with potential commercial application, such as technology to improve communication between aircraft and air traffic control that could be useful to the aviation community and technology to aid helicopter landings during dust storms, low cloud cover, fog, or other periods of low visibility that could be useful to the military. Aside from reduced money for their efforts, NASA project managers and other personnel responsible for executing NASA's technology transfer processes could improve their effectiveness in identifying and planning for the transfer and commercialization of NASA technologies. Specifically, NASA personnel did not realize the transfer potential of some technological assets and project managers did not develop Technology Commercialization Plans that provide a methodology for identifying potential commercial partners, Martin stated. At the time, NASA did not disagree with the report's observations and promised to address the situation with training and other improvements. Creating new technologies is fundamental to NASA's mission, and facilitating the transfer of these technologies to other government agencies, industry, and international entities is one of the Agency's strategic goals. Technology transfer promotes commerce, encourages economic growth, stimulates innovation, and offers benefits to the public and industry. Follow Michael Cooney on Twitter: nwwlayer8 and on Facebook Read more about data center in Network World's Data Center section. This story, "NASA revamps, looks to speed high-tech commercialization opportunities" was originally published by Network World.
<urn:uuid:af988fa0-317b-4136-a845-19702443f8e9>
CC-MAIN-2017-04
http://www.itworld.com/article/2722404/hardware/nasa-revamps--looks-to-speed-high-tech-commercialization-opportunities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00111-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945426
916
2.84375
3
You can help meet the needs of a diverse group of users by designing applications that people with disabilities or impairments can use. In some cases, you might want to address specific accessibility needs of people with disabilities or impairments. For example, you might want to develop an application that supports assistive technology, such as a screen reader. In other cases, you might want to develop an application that can reach the widest possible audience. In this case, following the best practices for designing accessible applications can benefit a broad range of users, including the typical users of your application. Best practice: Designing accessible applications Guidelines for UI design - Stay focused on users' immediate task. Display only the information that users need at any one moment. For example, simplify data selection and presentation by displaying information in a logical order. - Group components according to common usage or common functionality to minimize the cognitive load for users. - Provide enough space between components so that users can distinguish one control from another. - Use UI components consistently so that users can recognize common UI components easily. For example, use buttons to initiate actions. Avoid using other components, such as hyperlinks, to initiate actions. - If you are designing an application that supports an assistive technology device, such as a screen reader, and you do not use BlackBerry UI APIs or support the Accessibility API, expose the unique UI components in your application programmatically so that assistive technology can interpret the information. Guidelines for navigation - Indicate clearly the UI component that has focus. For example, use white text on a blue background. - Where possible, allow users to use the keyboard to initiate the most frequently used actions in the application. For example, allow users to press the Enter key to select a menu item. - Where possible, inform users of important events, such as calendar reminders, in multiple ways. For example, provide a sound effect and a visual notification for the reminder. - Where possible, apply redundancy to provide users with multiple ways to interact with common actions. For example, use the Menu key to allow users to access the full menu and a trackpad or touch screen to allow users to access the pop-up menu. - In each menu, set the default menu item as the item that users are most likely to select. The default item in the pop-up menu should be the same as the default item in the full menu. - If a process or application requires users to complete a series of lengthy or complex steps, list all the steps or screens where possible. Identify the steps that are complete, in progress, and not yet started. For example, include a table of contents in wizards. If users close a wizard, they can use the table of contents to return to a specific location in the wizard. Guidelines for text - Provide specific messages. To support error recovery, use one short sentence that states clearly the reason for displaying the message and the actions that can dismiss it. - Where possible, inherit the font settings that the user has set. Guidelines for color and images - Avoid using color as the only means of communication. For example, instead of using only red text to notify users of a critical action, consider placing a red symbol, such as a red exclamation mark, beside the text instead. - Choose colors that have high contrast, such as black, white, navy blue, and yellow. - To help users to distinguish between adjacent UI components (such as alternating messages in an SMS text message thread) and to distinguish between background and foreground colors, use colors that result in a contrast ratio of 7:1 or higher. - Add contextual information to images, such as the image name, to communicate the meaning and context of the images.
<urn:uuid:1c3bdc9d-2aa2-4367-976f-28e29fc03563>
CC-MAIN-2017-04
https://developer.blackberry.com/design/bb7/accessibility_6_1_1509103_11.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00441-ip-10-171-10-70.ec2.internal.warc.gz
en
0.865038
772
3.046875
3
Is SQL Injection Still a Major Security Threat? Robert Graham, CEO of Errata Security, explains SQL injection, a technique criminal hackers could use to compromise Web site databases.Q: What exactly is SQL injection? A: SQL injection is a type of attack that targets Web sites backed by a relational database such as Microsoft SQL Server, Oracle or MySQL. The database might be doing nothing more complicated than capturing user names and passwords, or it might be executing full-blown sales transactions. Q: Who is vulnerable to SQL injection? A: Hundreds of thousands of sites around the world are potentially vulnerable to SQL injection if they dont properly defend against it. Q: How does SQL injection work? A: The way it works is very simple. An improperly programmed Web form can inadvertently allow data and executable code to get mixed up. Suppose the site has a page where a user has to type in some data-maybe just a user name, a blog comment, or a description of an item for sale. A hacker can hijack a data entry field on this Web form by entering a value that is completely different in type from what the programmer intended. For example, SQL uses the single quote character () as an escape character. This tells the database that whatever comes next is no longer data but executable code. All the hacker has to do is insert a piece of live SQL after the escape character. The database engine will see that code and think it is expected to execute it. In that way it can be tricked into performing a task of the hackers choice-perhaps inserting fictitious values into the database or retrieving data the hacker shouldnt see, or even maliciously deleting an entire table. A: The best defense is to design your database-backed Web site properly to make sure it always separates SQL code and user data. You basically have a choice between programming tools that are specifically designed to prevent you from making this kind of mistake and those that allow you to get into trouble if youre not careful. Roughly speaking, this corresponds to the difference between the newer Microsoft .Net tools and their older tools or open source frameworks like PHP. The pre-.Net Microsoft tools in particular were very vulnerable to attack and at the same time very easy to use. You had a lot of people building Web sites with them who really had no clue how to defend themselves from attackers. Since then Microsoft has rearchitected its products and the current generation of .Net tools makes it much more difficult to expose yourself to SQL injection unless you do something really strange. Q: Are you saying that sites built with open source tools like PHP are more vulnerable to SQL injection attacks than sites built with .Net? A: Its a question of mentality. Microsofts mindset is to fix things in such a way that the user doesnt have so much control and is therefore less vulnerable. The open source tools like PHP have a different philosophy. They assume that users know what they are doing and want to be free of constraints, so these tools let users do what they want but at their own risk. The open source tools assume that developers these days are aware of the threat of SQL injection and will do the right thing. Q: Is it fair to say that the risk of SQL injection is greater for older web sites? A: Yes. Q: Whats so hard about remediating the vulnerabilities in older sites? A: You can certainly do it. But retraining the developers who built the old sites is often a big problem. Perhaps you went out in 2002 and hired a bunch of college kids to build your site. The college kids applied what they had just learned in college, which at that time didnt include protecting themselves from things like SQL injection. Now fast forward five years to 2007 and the college kids you hired in 2002 havent learned anything new. They have to be retrained to use the newer tools. Or else you have to go out and hire new college kids, who will apply what they have just learned in college, but will themselves have to be retrained five years down the road. It always comes down to a question of education.
<urn:uuid:c0e5ce73-8495-4cd8-b459-024ee55d0fca>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Database/Is-SQL-Injection-Still-a-Major-Security-Threat
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00285-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958445
832
2.875
3
Picture This: A Visual Guide to Injection Attacks Whenever any list of the biggest computing risks is compiled, it’s almost guaranteed that one or more injection attacks will vie for the top position. Injection attacks are commonly associated in news reports with SQL, but they can pop up in a variety of different places. There are injection-related vulnerabilities associated with headers, logs and a sizable list other attack points. In brief, an injection attack can be defined as any attack wherein an attacker is able to misuse an application by feeding it values that are different than what it expected. In this article, we will first look at four of the most well-known types of injection attacks (SQL, LDAP, XML, and command), and then use some images from a simplistic example to try to differentiate between the various categories. Knowing and understanding the basics of injection attacks will help you in your pursuit of security certifications. As mentioned above, the most popular injection attack today, at least from the standpoint of making the news, involves SQL, the de facto programming language used for communicating with online (and other relational) databases. A SQL injection attack is also commonly — and more appropriately — called a SQL insertion attack. The reason for this is that an attacker manipulates the database code to take advantage of a weakness in it. For example, if the interface is expecting the user to enter a string value, but has not been specifically coded that way, then the attacker can enter a line of code, and that code will then execute instead of being accepted as a string value. Several types of exploits use SQL injection, and the most common fall into the following categories: - Escape characters not filtered correctly - Type handling not properly done - Conditional errors - Time delays SQL is used to communicate with a database, so it is common to have SQL statements executed when someone clicks a logon button. The SQL statements take the username and password entered, and they query the database to see whether those credentials are correct. The problem begins with the way websites are written. They are written in some scripting, markup or programming language, such as HTML (Hypertext Markup Language), PHP (PHP: Hypertext Preprocessor), ASP (Active Server Pages), and so on. These languages don’t understand SQL, so incoming SQL statements are usually put into a string and whatever the user inputs in the username and password boxes is appended to that string. Here is an example: “SELECT * FROM tblUSERS WHERE UserName ='” + txtUserName + “‘”AND ~CA Password = ‘”+password +”‘” Notice that single quotes are inserted into the text, meaning that whatever the user types into username and password text fields is enclosed in quotes within the SQL query string, like this: SELECT * FROM tblUSERS WHERE UserName =’admin’ AND Password = ‘password”; Now the attacker will put a SQL statement into the username and password fields that is always true, like this: ‘ or ‘1’ =’1 This results in a SQL query like this: ‘SELECT * FROM tblUSERS WHERE UserName =” or ‘1’ =’1′ AND Password = ” or ‘1’ =’1” So now it says to get all entries from table = tblUsers if the username is ” (blank) OR IF 1 =1. And if password = ” (blank) OR IF 1=1! Since 1 always equals 1, the user is logged in. The way to defend against this attack is always to filter input. That means that the website code should check to see if certain characters are in the text fields and, if so, to reject that input. Just as SQL injection attacks take statements that are input by users and exploit weaknesses within, an LDAP injection attack exploits weaknesses in LDAP (Lightweight Directory Access Protocol) implementations. This can occur when the user’s input is not properly filtered, and the result can be executed commands, modified content, or results returned to unauthorized queries. One of the most common uses of LDAP is associated with user information. Numerous applications exist — such as employee directories — where users find other users by typing in a portion of their name. These queries are looking at the cn value or other fields (those defined for department, home directory, and so on). Someone attempting LDAP injection could feed unexpected values to the query to see what results are returned. All too often, finding employee information equates to finding usernames and values about those users that could be portions of their passwords. The best way to prevent LDAP injection attacks is to filter the user input and to use a validation scheme to make certain that queries do not contain exploits. When a user enters values that query XML (known as XPath) with values that take advantage of exploits, it is known as an XML injection attack. XPath works in a similar manner to SQL, except that it does not have the same levels of access control, and taking advantage of weaknesses within can return entire documents. The best way to prevent XML injection attacks is to filter the user’s input and sanitize it to make certain that it does not cause XPath to return more data than it should. A common goal of an injection attack is to be able to access a directory other than the one the application is supposed to be limited to. Known as directory traversal, one of the simplest ways to perform this is by using a command injection attack that carries out the action. For example, exploiting a weak server implementation by calling up a web page along with the parameter cmd.exe?/c+dir+c:\ would call the command shell and execute a directory listing of the root drive (C:\). With Unicode support, entries such as %c%1c and %c0%af can be translated into / and \ respectively. The ability to perform command injection is rare these days. Most vulnerability scanners will check for weaknesses with directory traversal/command injection and inform you of their presence. To secure your system, you should run such a scanner and keep the web server software patched. Looking at Various Categories Bear in mind that injection attacks work by “injecting” in something that wasn’t expected so that the application behaves in a way that the programmer never intended for it to behave. As a simplistic example of this, imagine that you are a programmer writing — from scratch — a program to allow a robot to play the game of Blackjack with real cards and against an actual player Blackjack is a card game in which a player draws cards from a standard deck of playing cards and tries to get the numerical value of the cards in his hand as close to 21 as he can without going over. With normal play, the player starts with two cards and then chooses to stand on that or draw another one (and another one, and another one … ) as they try to get closer and closer to the desired number without going over it. If the cards total 22 or more, this results in an immediate loss. The code that you write needs to let the robot look at the cards the player has and calculate their value, and we’ll assume that only a single deck is in use. Figure One shows the player’s hand at the beginning of play. A six of spades and a seven of hearts total thirteen, and that is a pretty simple piece of code to write. Based on this value, it would not be unreasonable to expect another card to be taken by the player. Figure One: The player’s beginning hand totals 13. Suppose that after the next card, the player’s hand resembles Figure Two. Jokers are not used in this game, and now the new card represents an unexpected value. The player could be using this card to try to trick the program into executing unintended commands or allowing access to unauthorized data — thus constituting the very definition of an injection attack by the Open Web Application Security Project (OWASP). Figure Two: The unexpected card injected into the deck can trick the interpreter. As a programmer, you would need to write a routine to filter the input to prevent any unintended consequences — make sure that every card is of a value that can be found in a standard deck. That still leaves open the possibility depicted in Figure Three, however. Here, the player has slipped in a card from a second deck. They are trying to increase their chances of winning (with cards that total 19), and are cheating to make it happen by using a second six of spades. Figure Three: The player is pulling cards from a second, unauthorized deck This type of attack is often referred to as a union attack because, with databases, the union command can be used to return query values from another table. As a programmer, you would need to write a routine to make sure that only cards from this deck are used and that you can account for all 52 cards. If you tried to solve the problem by only making sure there were no duplicate cards, the card borrowed from the second deck might be an eight of diamonds and its pair could still be in the deck so you would never know this type of attack was being implemented. Figure Four shows another type of attack. If you look closely at the seven of hearts, you’ll note the edge of a 10 card being concealed beneath it. The player, with 23, has busted and the round should be over. They are, however, trying to make it look as if they have fewer cards and thus are able to keep drawing more and playing. If the results of the attack are not visible to the player themselves (they know they have a card they are hiding, but don’t what card it is), this can be called a blind attack. Figure Four: The player is hiding cards from view to make it look as if it is an earlier round in the game than it actually is A similar strategy of pretending it is earlier than it really is can be played out by feeding a program dot-slash sequences (“../”) to get to the parent directory from the current directory. As a programmer, you would obviously need to include code that prevents it from happening and the list of what you need to specifically look for — and prevent — increases in size exponentially. Summing It Up Injection attacks are a major threat to application security. The examples discussed here barely skim the comprehensive ocean of possibilities. The examples do, however, illustrate the basics of the attacks and provide a foundation upon which to build.
<urn:uuid:b8c2e103-c6ae-4685-9d70-33c3cc0316ad>
CC-MAIN-2017-04
http://certmag.com/picture-visual-guide-injection-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00523-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946764
2,218
3.671875
4
When it comes to security, one of the scariest things out there sounds like science fiction and pertains to hacking implantable medical devices. Pacemakers and insulin pumps do help save lives, but they are vulnerable to lethal attacks; there are continued warnings that exploiting these medical devices will eventually cost someone their life. Here’s a slightly different take on the scenario; you’ve heard of drive-by-downloads that can infect a machine with malware without the user agreeing to the automatic download, but how about serving up malware in software updates for medical devices such as ventilators? The global medical technology corporation CareFusion specializes in “reducing medication errors and helping prevent health care-associated infections.” It makes IV pumps, ventilators, respiratory products, automated dispensing of medicine, patient identification systems, has infection surveillance services and more. The company website states, “At CareFusion, we are united in our vision to improve the safety and lower the cost of healthcare for generations to come.” Granted that IT staffs are always overworked and understaffed, but it seems less like “care” and more like negligence to run its website on six year old versions of Windows software. Viasyshealthcare.com belongs to CareFusion, so imagine going there to update a lifesaving piece of medical equipment like a respirator, specifically "AVEA Ventilator software update." Instead, however, you discover the healthcare site is sick with malware and serving up infections in medical software updates. This was so frustrating to the Medical Device Security Center that University of Massachusetts Amherst professor Kevin Fu wrote, “Health care professionals might as well stop their washing hands while they're at it.” He added, “The risks should be obvious. This is an update for a medical device, and yet one must download it in a manner as if software sepsis is no big deal.” Google Safe Browsing for viasyshealthcare.com reported, “Of the 354 pages we tested on the site over the past 90 days, 20 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2012-06-17, and the last time suspicious content was found on this site was on 2012-06-13. Malicious software includes 48 trojan(s), 3 scripting exploit(s).” Threatpost reported that DHS is investigating and “an analysis by the Department of Homeland Security found that some of CareFusion's Web sites were relying on six year old versions of ASP.NET and Microsoft Internet Information Services (IIS) version 6.0, which was released with Windows Server 2003. Both platforms are highly susceptible to compromise.” DHS “may refer it to its ICS-CERT division, which focuses on threats to critical infrastructure.” Why would Homeland Security be involved? In April, the feds were pressed to protect wireless medical devices from hackers. By May, Public Intelligence posted the “DHS Wireless Medical Devices/Healthcare Cyberattacks Report.” Just because we can hook all these medical devices to the Internet, does not make it any wiser than connecting other critical and vulnerable infrastructure to the web so it might be hacked. DHS said most medical devices were “not designed to be accessed remotely” yet “the flexibility and scalability of wireless networking makes wireless access a convenient option.” According the report [PDF]: Because the technology is so new, there may not be an authoritative understanding of how to properly secure it, leaving open the possibilities for exploitation through zero-day vulnerabilities or insecure deployment configurations. In addition, new or robust features, such as custom applications, may also mean an increased amount of third party code development which may create vulnerabilities, if not evaluated properly. Implantable Medical Devices (IMD): Some medical computing devices are designed to be implanted within the body to collect, store, analyze and then act on large amounts of information. These IMDs have incorporated network communications capabilities to increase their usefulness. Legacy implanted medical devices still in use today were manufactured when security was not yet a priority. Some of these devices have older proprietary operating systems that are not vulnerable to common malware and so are not supported by newer antivirus software. However, many are vulnerable to cyberattacks by a malicious actor who can take advantage of routine software update capabilities to gain access and, thereafter, manipulate the implant. Well now . . . while taking advantage of a routine software update in the case of CareFusion may not have led to a lethal cyberattack, could it have opened the way to some equally insidious attack that infects hospitals or open a backdoor to medical devices that are supposed to help save lives? Scrubs and Suits said, “Many IT security experts are concerned that patient care could be compromised by terrorists who want to cause destruction and fear, or even by a particularly aggressive viral infection.” Then the article pointed out that “in July of 2010, Kern Medical Center, a 172-bed hospital in California, was infected by a virus that was so aggressive that it actually shut down the hospital’s EHR system for about two weeks.” During the Slashdot discussion of CareFusion serving up malware in medical device software updates, an Anonymous Coward wrote, “Hospitals have LARGE amounts of devices that are internet enabled like $300,000 cat scan machines that PDF and email documents and are managed only via IE 6....They almost always use very obsolete platforms with 256 megs of ram, IE 6, etc. The budget analysts folks are under heavy pressure to cut costs and IT is always the cost center at the end of day.” It’s time for IT to be a priority when it comes to securing healthcare, not dead last on the totem pole, and running totally exploitable systems that allow ventilator software updates to be tainted with malware. Let’s not wait to make security a priority for implantable medical devices either; let’s not wait until after an attacker exploits and remotely assassinates someone through a device that was supposed to save their life. **Update: CareFusion is very unhappy with this article and says: "We know the Windows virus does not affect any downloadable software and has no effect on our medical devices. It could affect Windows PC files, and we have taken quick action to clean and restore our affected systems."
<urn:uuid:e96b6250-03a0-4b62-97d3-1ce33e1834e3>
CC-MAIN-2017-04
http://www.computerworld.com/article/2472100/malware-vulnerabilities/downloading-of-software-updates-for-lifesaving-medical-devices-proves-very-d.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00065-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952766
1,339
2.53125
3
A history of the Internet was created using Facebook's timeline feature by Internet education company Grovo. The timeline, which goes back to the year 1536, includes important dates like 1983, when the Internet was born, and 1978, when the first spam email was sent over the ARPANET. And in 1900, deep sea divers discovered the Antikythera mechanism, an analog computer dating back to around 1 BC. The purpose of the timeline is to continue Grovo's mission of providing “high-quality Internet education.” The project is an example of how social media can be used in unexpected ways. Similar uses of social media, such as the real-time WW2 twitter project, provide insight into the future of the Internet landscape by exploring innovative uses of technology. “Many can still recall their first professionally-questionable AOL email addresses, while others can date the first time they watched a YouTube video,” Grovo's first timeline post reads. “As we’ve grown, so too has the Internet - and that’s exactly what we hope to share with this project, calling out some of our favorite moments from the Internet’s storied, complex, fun and memorable history. ” To find out what happened in 1536 that was so darned important, visit Grovo's Internet History on Facebook.
<urn:uuid:10296fce-2802-4f31-a725-f92e40c11fd8>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Colorful-Internet-History-Chronicled-on-Facebook.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00551-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925198
279
3.1875
3
FIN-ACK: Wrapping Up Networking 101 All good things must come to an end, and so it is with Networking 101. This installment, we take a look back at everything we covered in our series. We designed the series with the belief that everyone in networking should understand all of these concepts, at least within the space constraints we gave ourselves. We wanted to relate information applicable to all parties involved in maintaining or making network decisions, and in such a way that seasoned veterans down where the rubber meets the road could brush up on forgotten basics, and managers could get a sense of what those veterans are talking about. We began with an overview of IP Addresses. The idea was to provide a basis for the way subnets work by giving readers the tools to understand the features and limitations of 32-bit numbers. We also discussed what the terminology around IP was all about, including the differences between multicast, unicast and broadcast addresses. We also wanted to provide plain English discussions of the topics everyone hears about, but may not fully understand. In both installments on subnetting, we did a thorough overview of Subnets and CIDR, but the second delved into some Subnetting Examples and an IPv6 overview. The most important take-aways from the IP address and subnet discussion were: - CIDR IP addresses have a host and network portion. The netmask specifies the number of bits that the network portion uses, and the rest are for the host. - Subnets are created by the simple act of moving the divider up and down the 32-bit number. - IPv6 addresses are the same with regards to slicing them into subnets, and as long as you remember the rules of address representation, confusion can be kept to a minimum. Up the OSI Stack These articles provided the background necessary to understand all other aspects of networking. The next step began the trip up the OSI stack, with an introduction to how layers work. The OSI model works well, but plenty of network training material fails to mention exactly how these layers work together. - Unless you're a router, data coming up the stack is for you, and data going down the stack is being sent by you. - Layer two data is called a frame, and doesn't involve IP addresses. IP addresses and packets are layer 3, MAC addresses are layer 2! The journey up the OSI stack began at Layer 2, the Data Link layer. When managing a network, Layer 2 issues seem to crop up more often than you'd think, so this is a very important section. Likewise, we brought the spanning tree protocol into the loop (pun intended). Spanning tree provides a means to control loops in such a way that allows you to have an Ethernet network that will "fail over" in the case of downed links. Spanning tree is a bit complex, but necessary, and its concepts relate to many routing protocols too. Before moving on to layer 3, we dedicated an entire article to ICMP, because it lives in-between layers 2 and 3. ICMP is vital to proper routing and packet delivery, and there are many aspects of ICMP that go unnoticed. It isn't just ping. Layer 3, IP, began the next part. IP is unreliable. When IP packets are lost it's up to the higher-level protocols to realize this and request retransmissions. It's very important to understand IP fragmentation as well, because firewall and network connection decisions can impact the Internet Protocol in strange and unexpected ways. We covered TCP in two parts: basics and a more in-depth discussion. Most applications use TCP, and troubleshooting TCP sometimes requires looking at packet dumps and figuring out what went wrong. Flow control in TCP can also be impacted by management decisions, so an understanding of congestion control and TCP windows is quite relevant. The trip up the stack was basically concluded at that point, since Layer 7 applications are just that: random applications. Layers 5 and 6 don't exist, and just add to the confusion. Of Governance and Protocols Routing protocols were next in the list, but first we diverged a bit into a quick talk about how the Internet works, with Internet Governance. ICANN's role, IANA and RIR roles, and what the IETF and IAB actually do were clarified. Understanding the impact of news items should be a bit easier, once people understand a bit about how the Internet operates. The routing portions began with a good overview of what routing is, explaining the theory of routing. Routers send packets toward their destination, normally by shipping them toward a router that knows a bit more about the destination topology. It's important that the decision makers understand the limitations and features between the two types of protocols, link-state and vector-distance. The most widely used internal routing protocol was examined in two parts, OSPF one and two. The concept of "areas" in OSPF are very important, both from the designer's and an implementer's perspective. Very subtle, but serious routing issues can result from a poorly designed OSPF network. Due to the sheer complexity of OSPF, two parts were used to explain it in enough detail to allow for informed architectural decisions to be made. Internet Routing came in three parts: how routing and peering work, BGP and iBGP. The most important takeaway from this portion was that there is no such thing as a default route in the Internet. BGP operates very differently when compared to other routing protocols, so the first article dealt mostly with the conceptual protocol-level aspects of BGP. The iBGP protocol is simply BGP used internally as a mechanism to exchange BGP information between multiple BGP border routers (on the inside). The iBGP article really glues together the concepts of autonomous systems and BGP routing. Multicast routing was the final routing topic, then two other miscellaneous topics were added: Understand Tunnels and NAT is not what you think it is. Tunneling itself is sometimes very complicated to conceptualize. Many people struggle when setting up VPN connections, so we felt that some level of discussion about tunnels was required. And that's the series that was. Feedback from readers was very positive, and it encouraged us to look at taking Networking 101 to the next level. We hope to see you there.
<urn:uuid:516f46da-aca9-450b-a6c3-bca0b769a0ca>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3635251/FINACK-Wrapping-Up-Networking-101.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957279
1,315
3.5625
4
Steadily advancing neuromorphic computing technology has created high expectations for this fundamentally different approach to computing. Its strengths – like the human brain it attempts to mimic – are pattern recognition (space and time) and inference reasoning. Advocates say it will also be possible to compute at much lower power than current paradigms. At ISC this year Karlheinz Meier, a physicist-turned-neuromorphic computing pioneer and a leader of the European Human Brain Project (HBP), gave an overview that served as both an update and primer. Moving to neuromorphic computing architectures, he believes, will make emulating brain function not only more effective and efficient, but also eventually accelerate computational learning and processing significantly beyond the speed of biological systems opening up new applications. Sometimes it’s best to start with the conclusions: Meier offered these summary bullets describing the state of neuromorphic computing today (not all covered in this article) at the end of his talk: - After 10 years of development available hardware systems have reached a high degree of maturity, ready for non-expert use cases - High degree of configurability with dedicated software tools, but obviously no replacement for general purpose machines - Only way to access multiple time scales present in large-scale neural systems, making them functional - Well suited for stochastic inference computing - Well suited for use of deep-submicron, non-CMOS devices Meier is predictably bullish, citing the already proven power of deep learning and cognitive computing while still using traditional computer architectures by players like Google, Baidu, IBM, and Facebook. Indeed, computer-based neural networking isn’t new. It’s a mainstay in a wide variety of applications, typically assisted by one or another type of accelerator (GPU, FPGA, etc). Notably, NVIDIA launched its ‘purpose-built’ deep learning development server this year, basically an all GPU machine. Many of these cognitive computing/deep learning efforts on traditional machines are quite impressive. Google’s AlphaGo algorithm from subsidiary DeepMind handily beat the world Go champion this year. But simply adapting neural networking to traditional to von Newmann architectures has drawbacks. One is power – not that it’s bad by conventional standards – but it shows no sign of the being able to approach the tiny 20W or so requirement for the human brain versus megawatts per year for supercomputers. Also problematic is the time required to train networks. Talking about the AlphaGo victory, which he lauds, Meier said, “What people don’t see or what Google doesn’t tell people is that it took something like a year to train this system on a big cluster of graphic cards, certainly several hundred kilowatts of power over a very long time scale, many, many months. Of course the system looked at many Go games to discover the rules and structure and to play very well.” Let’s acknowledge the training problem persists in neuromorphic computing as well. That said, neuromorphic, or brain inspired, computing seeks to mimic more directly how the human brain works. In the brain, neurons, the key components of brain processing, are connected in a vast network of networks. Individual neurons typically act in what’s called an integrate-and-fire fashion – that’s when the neuron’s membrane potential reaches a threshold and suddenly fires. Reaching that potential may involve numerous synaptic inputs that together sum to cross the firing threshold. One of the staggering aspects of the brain is the range of physical size and ‘event’ durations it encompasses. In rough terms from tiny synapses to the whole brain there are seven orders of spatial magnitude, noted Meier, and in terms of time there are eleven orders of magnitude spanning activities from neuron firing to long-term learning. “Typically brains consist of neurons that spike and produce these kinds of action potentials, which are at the millisecond or sub millisecond level. And as you all know the time to learn things is months to years,” he said. There have been successful efforts to map neural networks onto a supercomputer. One such effort on Japan’s K computer deployed a relatively simple network (~ one percent of the brain) and ran 1500X slower than the brain. This early work on the K computer by Markus Diesmann, another leader in European Human Brain Project (HPB), was the largest neural net simulation to date and an impressive achievement. However, it was a far cry from the efficiency (energy or processing capability) of the human brain. “You have to wait four years for a single simulated day. A day is nothing in the life of a brain. If you consider how you learn, real rewiring the structure of the brain, which takes many, many years at the beginning of your life, these time scales are inaccessible on conventional computers. And that will not change if you just go to exascale [on traditional architectures]. One of the ways out is neuromorphic computing,” he said. Neuromorphic architectures “aren’t doing numerical calculations but generic pattern recognition and discrimination processing just as the brain does.” Three of the more prominent neuromorphic systems in operation today are: - IBM’s TrueNorth uses the TrueNorth chip implemented in CMOS. Since memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von-Neumann-architecture bottlenecks and is very energy-efficient, consuming 70 milliwatts, about 1/10,000th the power density of conventional microprocessors. This spring IBM announced a collaboration with Lawrence Livermore National Laboratory in which it will provide a scalable TrueNorth platform expected to process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. - The SpiNNaker project, run by Steve Furber, one of the inventors of the ARM architecture and a researcher at the University of Manchester, has roughly 500K arm processors. It’s a digital processor approach. The reason for selecting ARM, said Meier, is that ARM cores are cheap, at least if you make them very simple (integer operation). The challenge is to overcome scaling required. “Steve implemented a router on each of his chips, which is able to very efficiently communicate, action potentials called spikes, between individual arm processors,” said Meier. SpiNNaker’s bidirectional links to between chips is a distinguishing feature – think of it as a mini Internet, optimized to transmit ‘biological signal spikes, said Meier. The SpiNNaker architecture acts well as a real time simulator. - The BrainScaleS machine effort, led by Meier, “makes physical models of cell neurons and synapses. Of course we are not using a biological substrate. We use CMOS. Technically it’s a mixed CMOS signal approach. In reality is it pretty much how the real brain operates. The big thing is you can automatically scale this by adding synapses. When it is running you can change the parameters,” he said. It’s a local analogue computing approach with four million neurons and one billion synapses – binary, asynchronous communication. Training the networks – basically programming an application – remains a challenge for all the machines. In essence, the problem and solutions become baked into the structure of the network with training. In the brain this occurs via synaptic plasticity in which the connections (synapses) between neurons are strengthened or weakened based on experience. Neuromorphic computing emulates learning and synaptic plasticity through a variety of techniques. In his ISC presentation, Meier walked through three examples of neural networks and their training: deterministic supervised; deterministic unsupervised; and stochastic supervised. “Most of the neural networks in use today are deterministic. You have an input and output pattern, and they are linked by the network – [in other words] if you repeat the experiment you will always get the same result. Of course the configuration of a network has to happen through learning,” said Meier. The supervision involves telling the computer, during learning, whether it has made a right or wrong choice. You can also have stochastic networks: “You say that reality is a distributions of patterns. What you do in your networks is store a stochastic distribution of patterns, which reflect your prior knowledge, and which is acquired through learning. You can use those stored patterns either to generate distributions without any input or you can do inference,” he explained. Deterministic Supervised Learning Taking an example from nature, Meier showed an instance of deterministic supervised learning in which a neural network mimics an insect’s natural ability to uses its chemical sensors to distinguish between different flowers. “These are circuits that we reverse engineered using neuroscience,” he explained. “We have receptor neurons that respond to certain chemical substances and you have a layer, called a de-correlation layer, which is basically contrast enhancement. You see that in all perceptive systems in biology. On the right side you see association layer to take combined inputs and make a decision based on what kind of flower you have,” said Meier. It’s basically a data classification exercise. “The trick is to configure the links between those the de-correlation and association layers and this is done by supervised learning (telling the machine if it is correct or incorrect), through things like back propagation, Monte Carlo techniques, you really have to configure the synaptic link. Does it work? Actually it works very nicely,” said Meier. As a general rule, he said, spiking activity is high at input layers but then drops markedly. “We see in the intermediate layer that connects association with the input layer, there is also spiking activity but it’s rare, it’s sparse. That’s a very important thing and may be one of the reasons nature has invented spikes because it saves energy. Where interesting computation is being done, the firing rate is sparse.” Deterministic Unsupervised Learning As an example of unsupervised deterministic learning, Meier reviewed how owls find prey in the dark. In biology the model is very straightforward. “Since the mouse is on the right side, it is a short flight path for the sound to right ear and a long path to the left ear. [Detecting this is] is done by a circuit encoding compensating for the short path in air by a long path in the brain. You depict the time coincidence between two input impulses to produce a stronger signal and that is done in a completely unsupervised way,” said Meier. This neural net is fairly straightforward to implement in hardware. “You have this stored distribution of probabilities in your brain and you take samples and jump between two options. This can be implemented with Boltzmann machines, in particular, with spiking Boltzmann machines. One of the machines has a network of symmetrically connected stochastic nodes where the stage of the nodes is described by a vector of binary random variables.” If you are wondering how a spiking vector be a variable, Meier said, “We have developed a theory [in which] zeros and one are presented by neurons that is either active or in a refractory state. The probability that this network converges to a target is a Boltzmann distribution,” said Meier. “Here of course it is a neural network where we are connecting weights between neurons. How do you train these things? There is a very well established mechanism where you clamp the visible unit of the input layer to the value of a particular pattern and then you weight the interaction between any two nodes. This is the learning process but it’s slow,” said Meier. While a great deal of progress in neuromorphic computing has been accomplished, thorny issues remain. For example, it’s still not clear what the base technology should be – CMOS chips, wafer scale lithographic ‘emulations’ of neurons, etc. – and more options are on the horizon. IBM recently published a paper around phase-change memristor technology that shows promise. The wafer scale integration used for BrainScaleS is probably the most novel and brain-like approach so far. During the post-presentation discussion, questions arose around process and device variability and degradation issues for various technologies. Meier noted, “There is no degradation in the sense of aging. CMOS systems stay as bad or as good as they are. Nano devices still show some endurance problems. The big challenge for the BrainScaleS system is the static variability arising from the CMOS production process. This is like “fixed pattern noise“ on a CCD sensor. You can calibrate it but for really large systems we have to learn how to implement homeostatic“ adaptation like in biology. Our new digital learning center concept will just be doing that.” (http://www.kip.uni-heidelberg.de/vision/research/dls/) About memristor technology Meier said, “really cool devices, but people have so far totally ignored the aspect of variability. It’s much, much bigger than CMOS. I don’t see how you can calibrate a memristor in a large circuit. [With CMOS] you can calibrate the synapse on a neuron because there are parameters, and SRAM to store the parameters. You can measure and if it doesn’t work too well and I can fix it by going in and calibrating it. How do you do that with memristors?” Challenges aside, the IBM-LLNL project and two European systems should help accelerate neuromorphic development. Meier notes that access to SpiNNaker and BrainScaleS is not restricted to Europeans although restrictions for some countries exist due to national technology export law. “There are plenty of users for the small prototype systems. The SpiNNaker boards are in particular attractive because they can be used by anyone trained in standard software tools. There are more than 100 users. The small scale BrainScaleS system has attracted about 10 users but the use is very different from normal computers so people struggle more. The numbers of external users of the HBP collaboratory (including all platforms) is shown in the attached plot. About 10% of them are using the large scale NM systems,” said Meier. Here are a handful of access points: - “The large scale hardware systems are now available through a HBP web interface called “collaboratory.” (https://www.humanbrainproject.eu/ncp) - Smaller scale single (or few) chip systems are available through remote access or purchase/loan: http://www.kip.uni-heidelberg.de/vision/research/spikey/ - For information on SpiNNaker contact Furber at the University of Manchester, U.K. - Software tools to configure and control the systems are described in the Neuromorphic Guidebook. “Clearly the machines we are building at the moment are research devices but they have one important feature, they are really extremely configurable. In particular you can also read out the activity of network because you want to understand what is going on,” said Meier. This will change when putting neuromorphic systems into real-world practice. “Our idea on long term development is to give up on configurability and to give up on monitoring because if you have a neuromorphic chip, for example, that has to detect certain patterns you don’t really want to look as a user on your cell phone, you don’t want to read out any membrane potential or look at all the spike transistors, and look at all the correlations. You just want the thing to work.” Large-scale systems, similar to those described here, are more likely to be used “to develop circuits that are interesting, that solve interesting problems. Then you export it and you make special dedicated chips optimized to solve this single problem without any configurability or monitoring capability.” Think neuromorphic FPGAs, said Meier, “You want a dedicated things that can be mass produced and does one thing very well. That’s the way I see it evolving.” One issue is implementaing local learning (on chip) when the device is in the field, enabling the system to adapt to changing environments. It would open up a new range of applications, agreed Meier, “There’s a study under way now looking at how a car engine changes all the time, its performance changes, and you always want to optimize the efficiency of the engine. It would be really nice to have these local learning capabilities on chip on the system in the field being applied.” There are no technical barriers in theory he said, but still missing technology components. Nearer term, Meier is optimistic that accelerating learning will occur and that it represents the needed enabler for broader use of neuromorphic computing and for shortening time needed for training. Spike-based systems will play a role here, he believes – “there is very good argument to say the spikes are not only contributing to energy efficiency but also for learning speed. Once you accelerate learning it will be breakthrough for this technology and may change the way you compute fundamentally.” Slide Source: Prof Karlheinz Meier presenation at ISC2016 (Neuromorphic Computing Concepts, Achievements & Challenges)
<urn:uuid:ac5d9dd2-8657-4cfa-827b-2d4c54aaf25b>
CC-MAIN-2017-04
https://www.hpcwire.com/2016/08/15/think-fast-neuromorphic-computing-racing-ahead/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00487-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933074
3,685
2.796875
3
This 2-day instructor-led course teaches core process modeling skills, project development methodology, and delivery best practice patterns that improve the speed and quality of process definition and implementation efforts. In this course, students use the Process Designer component of IBM Business Process Manager to create a business process definition (BPD) from business requirements that are identified during process analysis. The course begins with an overview of business process management (BPM) and process modeling. Students learn how to make team collaboration more efficient by enabling all team members to use standard process model elements and notation, which makes expressing and interpreting business requirements consistent throughout the BPM life cycle. The course also teaches students how to build an agile and flexible shared process model that can be understood by key business stakeholders, implemented by developers, and adjusted to accommodate process changes. Students learn to work within the parameters of the BPM life cycle methodology to maximize the functionality of IBM Business Process Manager and project development best practices, such as meeting the target playback goal. This course is designed for all users who have purchased any of the IBM Business Process Manager software packages, including the basic, standard, and advanced packages. This course utilizes an interactive learning environment. Hands-on demonstrations, class activities to reinforce concepts and check understanding, and labs embedded in each of the course units enable hands-on experience with BPM tasks and skills. This course is designed to be collaborative, and students can work in teams to perform class activities.
<urn:uuid:e4927a48-1428-49bb-88a9-8c9488ae5e4e>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/118847/process-modeling-with-ibm-business-process-manager/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921747
294
2.8125
3
Whether a car on the highway, a plane flying through the air, or a ship in the ocean, all of these transport systems move through fluids. And in nearly all cases, the fluid flowing around these vehicles will be turbulent. With over 20% of global energy consumption expended on transportation, the large fraction of the energy expended in moving goods and people that is mediated by wall-bounded turbulence is a significant component of the nation’s energy budget. However, despite the energy impact, scientists do not possess a sufficiently detailed understanding of the physics of turbulent flows to permit reliable predictions of the lift or drag of these system. In order to probe the physics of wall-bounded turbulent flows, a team of scientists at the University of Texas are conducting the largest ever Direct Numerical Simulation (DNS) of wall-bounded turbulence at Ret = 5200. With 242 billion degrees of freedom, this simulation is fifteen times larger than the previously largest channel DNS of Hoyas and Jimenez, conducted in 2006. In a DNS of turbulence, the equations of fluid motion (the Navier-Stokes equations) are solved, without any modeling, at sufficient resolution to represent all the scales of turbulence. In general, the full three-dimensional data fields of turbulent flow are difficult to obtain experimentally. On the other hand, computer simulations provide exquisitely detailed and highly reliable data, which have driven a number of discoveries regarding the nature of wall-bounded turbulence. However, the use of DNS to study high speed flows has been hindered by the significant computational expense of the simulations. Resolving all the essential scales of turbulence introduces enormous computational and memory requirements, requiring DNS to be performed on the largest supercomputers. For this reason, DNS is a challenging HPC problem, and is a commonly used application to evaluate the performance of Top-500 systems. Due to the great expense of running a DNS, improving efficiencies in computation allows the simulation of more realistic scenarios (higher Reynolds numbers and larger domains) than would otherwise be possible. M.K.(Myoungkyu) Lee, the lead developer of the new DNS code used in the simulations, will present the results of numerous software optimizations during the Extreme-Scale Applications Session at SC13, on Tuesday, Nov 19th, 1:30PM – 2:00PM. The presentation will detail scaling results across a variety of Top-500 platforms, such as the Texas Advanced Computing Center’s Lonestar and Stampede, the National Center for Supercomputing Applications’ Blue Waters, and Argonne Leadership Computing Facility’s Blue Gene/Q Mira, where the full scientific simulation was conducted. The results demonstrate that performance is highly dependent on characteristics of the communication network and memory bandwidth, rather than single core performance. On Blue Gene/Q, for instance, the code exhibits approximately 80% strong scaling parallel efficiency at 786K cores relative to performance on 65K cores. The largest benchmark case uses 2.3 trillion grid points and the corresponding memory requirement is 130 Terabytes. The code was developed using Fourier spectral methods, which are typically preferred for turbulence DNS because of the superior resolution properties, despite the resulting algorithmic need for expensive communication. Optimization was performed to address several major issues: efficiency of banded matrix linear algebra, cache reuse and memory access, threading efficiency and communication for the global data transposes. A special linear algebra solver was developed, based on a custom matrix data structure in which non-zero elements are moved to otherwise empty elements, reducing the memory requirement by half, which is important for cache management. In addition, it is found that compilers inefficiently optimized the low-level operations on matrix elements for the LU decomposition. As a result, loops were unrolled by hand to improve reuse of data in cache. FFTs, on-node data reordering and the time advance were all threaded using OpenMP to enhance single-node performance. These were very effective, with the code demonstrating nearly perfect OpenMP scalability (99%). The talk will also discuss how replacing the existing library for 3D global Fast Fourier Transforms (P3DFFT) with a new library developed using the FFTW 3.3-MPI library and lead to substantially improved communication performance. The full scientific simulation used 300 million core hours on ALCF’s BG/Q Mira from the Department of Energy Early Science Program and the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) 2013 Program. Each restart file generated by the simulation is 1.8 TB in size, with approximately eighty such files archived for long term postprocessing and investigation. Postprocessing this large an amount of data is also a supercomputing challenge. Title : Petascale Direct Numerical Simulation of Turbulent Channel Flow on up to 786K Cores Location : Room 201/203 Session : Extreme-Scale Applications Time : Tuesday, Nov 19th, 1:30PM – 2:00PM Presenter : M.K.(Myoungkyu) Lee SC13 Scheduler : http://sc13.supercomputing.org/schedule/event_detail.php?evid=pap689 M.K. (Myoungkyu) Lee is a Ph.D student in Department of Mechanical Engineering at the University of Texas at Austin. Nicholas Malaya is a researcher in the Center for Predictive Engineering and Computational Sciences (PECOS) within the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin. Robert D. Moser holds the W. A. “Tex” Moncrief Jr. Chair in Computational Engineering and Sciences and is professor of mechanical engineering in thermal fluid systems. He serves as the director of the ICES Center for Predictive Engineering and Computational Sciences (PECOS) and deputy director of the Institute for Computational Engineering and Sciences(ICES).
<urn:uuid:2036d67b-70c0-46ab-9c98-ae1549c79721>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/15/sc13-research-highlight-petascale-dns-turbulent-channel-flow/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00322-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904601
1,234
2.890625
3
Along with the optical fiber communication, optical fiber sensing optical network technology development, the optical fiber system matures, application fields expanding, and fiber optic system structure is complicated, make all kinds of passive functional device gradually developed, fiber optic coupler splitter is a kind among them. With a fiber optic coupler, the light from an input fiber can appear at one or more outputs, Fiber optic coupler is an optical fiber device with one or more input fibers and one or several output fibers,Its input and output signals have a good separation effect. Light entering an input fiber can appear at one or more outputs and its power distribution potentially depending on the wavelength and polarization. So it is widely used in circuit applications. Fiber optic coupler can greatly increase the stability of computer when applied in digital communication and real-time computer control interface as a signal isolation device. Fiber optic couplers or splitters are available in a range of styles and sizes to split or combine light with minimal loss. All couplers are manufactured using a proprietary process that produces reliable, low-cost devices. They are rugged and impervious to common high operating temperatures. Couplers can be fabricated with custom fiber lengths and/or with terminations of any type. If all involved fibers of the fiber coupler are single-mode ,there are certain physical restrictions on the performance of the coupler. for example, it is not possible to combine two inputs of the same optical frequency into one single-polarization output without significant excess losses. However, a fiber optic coupler which can combine two inputs at different wavelengths into one output,which are commonly seen in fiber amplifiers to combine the signal input and the pump wave. Single Window FBT Coupler are designed for power splitting and tapping in telecommunication equipment, CATV network, and test equiptment. Our single mode optical couplers work at either 1310nm or 1490nm or 1550nm. They are fabricated using single mode fibers. Single Window FBT Coupler have operation bandwidth of 40nm around its central wavelength. Such as 1×2 fiber coupler is one of single-mode fiber couplers. Don’t forget, fiber couplers not only have single-mode couplers,but also have multimode couplers.Multimode Coupler is fabricated from graded index fibers with core diameters of 50um or 62.5um. Fiber optic multimode couplers are applied in short distance communications at 1310nm or 850nm. Multimode couplers are manufactured using a technique or fusion technique. They are available for all common multimode fibers with core diameters from 50 μm to 1500 μm. For more information about fiber optic coupler splitter, please contact us at email@example.com.In Fiberstore,you can find some fiber optic products which you want.Buy with confidence.
<urn:uuid:c6445774-3e84-4f52-82e4-37bf1ab13317>
CC-MAIN-2017-04
http://www.fs.com/blog/fiberstore-supply-the-single-mode-and-multimode-couplers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.889658
594
3.203125
3
Computer forensics is the practice of collecting, analysing and reporting on digital data in a way that is legally admissible. It can be used in the detection and prevention of crime and in any dispute where evidence is stored digitally. Computer forensics follows a similar process to other forensic disciplines, and faces similar issues. - Uses of computer forensics - Live acquisition - Stages of an examination About this guide This guide discusses computer forensics from a neutral perspective. It is not linked to particular legislation or intended to promote a particular company or product, and is not written in bias of either law enforcement or commercial computer forensics. The guide is aimed at a non-technical audience and provides a high-level view of computer forensics. Although the term “computer” is used, the concepts apply to any device capable of storing digital information. Where methodologies have been mentioned they are provided as examples only, and do not constitute recommendations or advice. Copying and publishing the whole or part of this article is licensed solely under the terms of the Creative Commons – Attribution Non-Commercial 4.0 license Uses of computer forensics There are few areas of crime or dispute where computer forensics cannot be applied. Law enforcement agencies have been among the earliest and heaviest users of computer forensics and consequently have often been at the forefront of developments in the field. Computers may constitute a ‘scene of a crime’, for example with hacking or denial of service attacks or they may hold evidence in the form of emails, internet history, documents or other files relevant to crimes such as murder, kidnap, fraud and drug trafficking. It is not just the content of emails, documents and other files which may be of interest to investigators but also the ‘metadata’ associated with those files. A computer forensic examination may reveal when a document first appeared on a computer, when it was last edited, when it was last saved or printed and which user carried out these actions. More recently, commercial organisations have used computer forensics to their benefit in a variety of cases such as; * Intellectual Property theft * Industrial espionage * Employment disputes * Fraud investigations * Bankruptcy investigations * Inappropriate email and internet use in the work place * Regulatory compliance For evidence to be admissible it must be reliable and not prejudicial, meaning that at all stages of a computer forensic investigation admissibility should be at the forefront of the examiner’s mind. A widely used and respected set of guidelines which can guide the investigator in this area is the Association of Chief Police Officers Good Practice Guide for Digital Evidence [PDF], or ACPO Guide for short. Although the ACPO Guide is aimed at United Kingdom law enforcement, its main principles are applicable to all computer forensics. The four main principles from this guide (with references to law enforcement removed) are as follows: 1. No action should change data held on a computer or storage media which may be subsequently relied upon in court. 2. In circumstances where a person finds it necessary to access original data held on a computer or storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions. 3. An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third-party should be able to examine those processes and achieve the same result. 4. The person in charge of the investigation has overall responsibility for ensuring that the law and these principles are adhered to. In what situations would changes to a suspect’s computer by a computer forensic examiner be necessary? Traditionally, the computer forensic examiner would make a copy (or acquire) information from a device which is turned off. A write-blocker would be used to make an exact bit for bit copy of the original storage medium. The examiner would work from this copy, leaving the original demonstrably unchanged. However, sometimes it is not possible or desirable to switch a computer off. It may not be possible if doing so would, for example, result in considerable financial or other loss for the owner. The examiner may also wish to avoid a situation whereby turning a device off may render valuable evidence to be permanently lost. In both these circumstances the computer forensic examiner would need to carry out a ‘live acquisition’ which would involve running a small program on the suspect computer in order to copy (or acquire) the data to the examiner’s hard drive. By running such a program and attaching a destination drive to the suspect computer, the examiner will make changes and/or additions to the state of the computer which were not present before his actions. However, the evidence produced would still usually be considered admissible if the examiner was able to show why such actions were considered necessary, that they recorded those actions and that they are to explain to a court the consequences of those actions. Stages of an examination We’ve divided the computer forensic examination process into six stages, presented in their usual chronological order. Forensic readiness is an important and occasionally overlooked stage in the examination process. In commercial computer forensics it can include educating clients about system preparedness; for example, forensic examinations will provide stronger evidence if a device’s auditing features have been activated prior to any incident occurring. For the forensic examiner themself, readiness will include appropriate training, regular testing and verification of their software and equipment, familiarity with legislation, dealing with unexpected issues (e.g., what to do if indecent images of children are found present during a commercial job) and ensuring that the on-site acquisition (data extraction) kit is complete and in working order. The evaluation stage includes the receiving of instructions, the clarification of those instructions if unclear or ambiguous, risk analysis and the allocation of roles and resources. Risk analysis for law enforcement may include an assessment on the likelihood of physical threat on entering a suspect’s property and how best to counter it. Commercial organisations also need to be aware of health and safety issues, conflict of interest issues and of possible risks – financial and to their reputation – on accepting a particular project. The main part of the collection stage, acquisition, has been introduced above. If acquisition is to be carried out on-site rather than in a computer forensic laboratory, then this stage would include identifying and securing devices which may store evidence and documenting the scene. Interviews or meetings with personnel who may hold information relevant to the examination (which could include the end users of the computer, and the manager and person responsible for providing computer services, such as an IT administrator) would usually be carried out at this stage. The collection stage also involves the labelling and bagging of evidential items from the site, to be sealed in numbered tamper-evident bags. Consideration should be given to securely and safely transporting the material to the examiner’s laboratory. Analysis depends on the specifics of each job. The examiner usually provides feedback to the client during analysis and from this dialogue the analysis may take a different path or be narrowed to specific areas. Analysis must be accurate, thorough, impartial, recorded, repeatable and completed within the time-scales available and resources allocated. There are myriad tools available for computer forensics analysis. It is our opinion that the examiner should use any tool they feel comfortable with as long as they can justify their choice. The main requirements of a computer forensic tool is that it does what it is meant to do and the only way for examiners to be sure of this is for them to regularly test and calibrate the tools they rely on before analysis takes place. Dual-tool verification can confirm result integrity during analysis (if with tool ‘A’ the examiner finds artefact ‘X’ at location ‘Y’, then tool ‘B’ should replicate these results). This stage usually involves the examiner producing a structured report on their findings, addressing the points in the initial instructions along with any subsequent instructions. It would also cover any other information which the examiner deems relevant to the investigation. The report must be written with the end reader in mind; in many cases the reader will be non-technical, and so reader-appropriate terminology should be used. The examiner should also be prepared to participate in meetings or telephone conferences to discuss and elaborate on the report. As with the readiness stage, the review stage is often overlooked or disregarded. This may be due to the perceived costs of doing work that is not billable, or the need ‘to get on with the next job’. However, a review stage incorporated into each examination can help save money and raise the level of quality by making future examinations more efficient and time effective. A review of an examination can be simple, quick and can begin during any of the above stages. It may include a basic analysis of what went wrong, what went well, and how the learning from this can be incorporated into future examinations’. Feedback from the instructing party should also be sought. Any lessons learnt from this stage should be applied to the next examination and fed into the readiness stage. Issues facing computer forensics The issues facing computer forensics examiners can be broken down into three broad categories: technical, legal and administrative. Encryption – Encrypted data can be impossible to view without the correct key or password. Examiners should consider that the key or password may be stored elsewhere on the computer or on another computer which the suspect has had access to. It could also reside in the volatile memory of a computer (known as RAM ) which is usually lost on computer shut-down; another reason to consider using live acquisition techniques, as outlined above. Increasing storage space – Storage media hold ever greater amounts of data, which for the examiner means that their analysis computers need to have sufficient processing power and available storage capacity to efficiently deal with searching and analysing large amounts of data. New technologies – Computing is a continually evolving field, with new hardware, software and operating systems emerging constantly. No single computer forensic examiner can be an expert on all areas, though they may frequently be expected to analyse something which they haven’t previously encountered. In order to deal with this situation, the examiner should be prepared and able to test and experiment with the behaviour of new technologies. Networking and sharing knowledge with other computer forensic examiners is very useful in this respect as it’s likely someone else has already come across the same issue. Anti-forensics – Anti-forensics is the practice of attempting to thwart computer forensic analysis. This may include encryption, the over-writing of data to make it unrecoverable, the modification of files’ metadata and file obfuscation (disguising files). As with encryption, the evidence that such methods have been used may be stored elsewhere on the computer or on another computer which the suspect has had access to. In our experience, it is very rare to see anti-forensics tools used correctly and frequently enough to totally obscure either their presence or the presence of the evidence that they were used to hide. Legal issues may confuse or distract from a computer examiner’s findings. An example here would be the ‘Trojan Defence’. A Trojan is a piece of computer code disguised as something benign but which carries a hidden and malicious purpose. Trojans have many uses, and include key-logging ), uploading and downloading of files and installation of viruses. A lawyer may be able to argue that actions on a computer were not carried out by a user but were automated by a Trojan without the user’s knowledge; such a Trojan Defence has been successfully used even when no trace of a Trojan or other malicious code was found on the suspect’s computer. In such cases, a competent opposing lawyer, supplied with evidence from a competent computer forensic analyst, should be able to dismiss such an argument. A good examiner will have identified and addressed possible arguments from the “opposition” while carrying out the analysis and in writing their report. Accepted standards – There are a plethora of standards and guidelines in computer forensics, few of which appear to be universally accepted. The reasons for this include: standard-setting bodies being tied to particular legislations; standards being aimed either at law enforcement or commercial forensics but not at both; the authors of such standards not being accepted by their peers; or high joining fees for professional bodies dissuading practitioners from participating. Fit to practice – In many jurisdictions there is no qualifying body to check the competence and integrity of computer forensics professionals. In such cases anyone may present themselves as a computer forensic expert, which may result in computer forensic examinations of questionable quality and a negative view of the profession as a whole. Resources and further reading There does not appear to be very much material covering computer forensics which is aimed at a non-technical readership. However the following links may prove useful: Forensic Focus An excellent resource with a popular message board. Includes a list of training courses in various locations. NIST Computer Forensic Tool Testing Program The National Institute of Standards and Technology (America) provides an industry respected testing of tools, checking that they consistently produce accurate and objective test results. Computer Forensics World A computer forensic community web site with message boards. Free computer forensic tools A list of free tools useful to computer forensic analysts, selected by Forensic Control. The First Forensic Forum (F3) A UK based non-profit organisation for forensic computing practitioners. Organises workshops and training. - Hacking: modifying a computer in a way which was not originally intended in order to benefit the hacker’s goals. - Denial of Service attack: an attempt to prevent legitimate users of a computer system from having access to that system’s information or services. - Metadata: data about data. It can be embedded within files or stored externally in a separate file and may contain information about the file’s author, format, creation date and so on. - Write blocker: a hardware device or software application which prevents any data from being modified or added to the storage medium being examined. - Bit copy: ‘bit’ is a contraction of the term ‘binary digit’ and is the fundamental unit of computing. A bit copy refers to a sequential copy of every bit on a storage medium, which includes areas of the medium ‘invisible’ to the user. - RAM: Random Access Memory. RAM is a computer’s temporary workspace and is volatile, which means its contents are lost when the computer is powered off. - Key-logging: the recording of keyboard input giving the ability to read a user’s typed passwords, emails and other confidential information.
<urn:uuid:0fc9229f-ee1a-4c8b-a2ec-300657384083>
CC-MAIN-2017-04
https://forensiccontrol.com/resources/beginners-guide-computer-forensics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946125
3,047
3.359375
3
Many of us are lucky to live in a place where, for most people, the first two levels of Maslow’s Hierarchy of Needs are met. These consist of physiological needs, such as food and water, and safety needs, e.g., physical, economic, and health. These needs are usually met through personal resources or help from the government and charitable organizations. To meet the next two levels of need (friendship/belonging and self-esteem), however, more than half of the 60 million disabled people in the United States turn, at least partially, to the world of gaming. “That’s where video gaming for people with disabilities becomes very important because you can free yourself from your disability through a video game, you can make friends, you can present yourself in a way that has less stigma around your disability,” says Mark Barlet, founder of AbleGamers, an eight year-old organization that is devoted to supporting gamers with disabilities. AbleGamers works to educate content producers as well as hardware and software developers on the development of accessible games, and to educate and support caregivers about the benefits of gaming for those with disabilities. They also host events such as their Accessibility Arcades to show disabled gamers and caregivers equipment and technology that already exists to help them enjoy video games like anyone else. I recently spoke with Barlet about the role gaming plays in the lives of many disabled people and the current state of accessible gaming. I learned a number of interesting things, such as: - There are roughly 33 and a half million disabled gamers in the United States, mostly (two-thirds) male, with more of them over the age of 50 than under 18, which mimics the general population of gamers. Game developers, having been educated about the size of this market, have become very open to making their games accessible. Developers are “competing in a very big marketplace and are really looking to draw in as many people as they can. So, if they can add in 5 or 6 accessibility features to help make it more appealing to the mass audience then they’re going to,” said Barlet. - There currently isn't any legislation in the United States requiring video games to be accessible - and, surprisingly, disabled gamers prefer it that way. Barlet argues that gaming helps to push the envelope in computer development and that government legislation would only hurt the development of games and, hence, computer technology. Instead, he feels that the number of disabled gamers is large enough to provide incentives for developers to ensure their games are accessible. “I think that’s a far better path than legislation,” said Barlet. - In terms of gaming platform (PC vs. console vs mobile), while there is apparently some disagreement in the disabled community, Barlet said, “I am firm believer that the most flexible platform for a gamer with disabilities is the PC. There are a truckload of devices and peripherals out there... that you can plug into USB and they’re fairly inexpensive.” Consoles, on the other hand, while offering the most cutting edge games, are worse for disabled gamers because they’re closed systems. “Adaptive controllers and custom controllers… have to go through incredible hoops to try to get the Xbox to talk to the peripheral, because they’ve locked it down through proprietary processes.” Mobile gaming, is still fairly new, though Barlet notes that “independent developers that key on the mobile gaming space are much more creative and much more responsive to accessible features.” - In order to support gaming among the disabled, caregivers need to be educated about it. “The caregiver is governor of what a person with disabilities can do,” said Barlet. “The understanding has to be there in the caregiver, because they’re the ones that have to support the cause.” To that end, AbleGamers has recently begun producing simple videos like “How to set up an Xbox.”
<urn:uuid:a2037825-8d25-4c5b-9928-701208501de6>
CC-MAIN-2017-04
http://www.itworld.com/article/2719734/mobile/for-some-with-disabilities--gaming-fills-a-basic-need.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00560-ip-10-171-10-70.ec2.internal.warc.gz
en
0.97223
833
2.78125
3
[Photo courtesy of Greg Henshall/FEMA.] As the nation focuses on dramatic, novel or “niche” threats of chemical, biological, radiological, nuclear and explosive weapons of mass destruction, a big threat to homeland security occurs more than 80 times a day in our own neighborhoods: arson. From 1999 to 2008, domestic arson accounted for more than 3,410 deaths, more than $7 billion in direct property loss and approximately 436,000 structure fire incidents, according to the National Fire Protection Association. This puts a strain on local, state and federal law enforcement, fire and court resources. Arson in motor vehicles, wildland and other “nonstructural” properties also add to the impact on public and private sectors. The White House National Security Strategy 2010 emphasizes threats that are of significant consequence, but occur less frequently: “The gravest danger to the American people and global security continues to come from weapons of mass destruction, particularly nuclear weapons. The space and cyber-space capabilities that power our daily lives and military operations are vulnerable to disruption and attack.” While these threats may be real, the probability of success is suspect. According to the 2003 RAND report Putting WMD Terrorism into Perspective, the “technical capacity of groups to produce or acquire and effectively deliver unconventional weapons varies considerably” and “requires a considerable scale of operations.” To be successful, an arsonist needs only a match and a combustible target. Structural fires account for a large percentage of America’s property losses, but intentionally set transportation, chemical plant or wildland fires as terrorist acts can’t be ruled out. A recent Congressional Research Service report states, “Pyro-terrorism is just one example of many alternative hypotheses that homeland security risk managers may wish to consider in order to avoid what was famously described in the 9/11 Commission Report as ‘a failure of imagination.’” Arson detection and prosecution remain a state and local responsibility, except where a federal statute has been violated. There isn’t a national mandate for reporting arson, so the scope of the problem remains unclear. Many jurisdictions that rely only on fire services for suppression don’t have the technical expertise or training to perform thorough fire investigations to detect arson. Often, follow-up investigation is the sole purview of an insurance company that underwrote the risk and has no obligation to report the outcome. The decentralized and predominantly local nature of investigation, reporting and prosecution is a lost opportunity for an organized national effort. The inability or reluctance of agencies and individuals to share case information, identified trends or successful solutions exacerbates the problem. Prosecutors often are unwilling to tackle arson cases that are built predominantly on circumstantial evidence. Arson is not “on the radar screen” of nationally elected officials or policymakers because other than for highly publicized events, fires generally are seen as a local problem needing local solutions. An advantage to the local approach is that investigators obtain intimate knowledge of their communities and can build close-knit organizational teams to combat the problem. State and local investigators rely on professionally derived relationships to share information on motives, techniques and individuals. However, those who use fire as a tool or weapon aren’t constrained by jurisdictional boundaries, and networks of leaderless cells or “lone wolves” provide a challenge to detect, apprehend and prosecute. The federal Bureau of Alcohol, Tobacco, Firearms and Explosives is building a new Web-based intelligence-sharing database, which is still in its infancy and the bureau will require local organizations to populate it. Several strategic options exist to address arson including the following: Add “arson” or “fire” to the national vernacular. While politicians and policymakers continue to use the acronym CBRNE (chemical, biological, radiological, nuclear and explosives) for terror-related hazards, references to arson or fire aren’t included. CBRNEA (CBRNE arson) or CBRNEF (CBRNE fire) may complicate the acronym, but would help bring these threats to the forefront of national discussion. Create a national arson awareness and prevention strategy. For years fire departments and service organizations have advocated generic “fire prevention” strategies and techniques, but other than in juvenile fire-setting circumstances, rarely confronted the problem on the head or arson’s root cause. Many campaigns of sound bites (Rat on a Rat) and post-incident rewards address arson after the event, but few employ a preventive approach. Some of this may be attributed to a lack of resources, but it is likelier a dilemma of not having the socio-psychological research on hand to address the complexities of arson. The arson awareness and prevention strategy should include simple, standardized self-assessment tools for risk management so property owners and law enforcement officers can evaluate their risks against known or anticipated threats. Recruit nontraditional partners in the arson awareness and prevention strategy. Although arson often is seen as a public safety issue, there are many organizations that can be employed in the fight at all government levels. This may include law enforcement, fire services, social service organizations, faith-based organizations and other nongovernmental organizations, the last three of which may see potential fire setters in noncontroversial and nonconfrontational settings. This approach comports with the UK’s effort at community prevention programs. Provide incentives for better data reporting and analysis. National fire incident data collection is a voluntary effort; the data is often unreliable for various reasons, including data entry errors, poor or nonexistent fire cause determinations, or simply that the local fire services elect not to report their incidents. Although the federal government can’t mandate fire incident data reporting, it can encourage better reporting through grants, awards, incentives and other inducements. Provide prompt data filtering, analysis and feedback. Currently national fire incident data collection is vetted by state organizations before it is submitted to the U.S. Fire Administration (USFA) Fire Data Center where it’s collated and analyzed to identify trends. This can take up to two years following an event. Congress has mandated that the USFA develop a more real-time data collection method that should be operational within the next two years. Important to its success, however, will be the quality and quantity of data that’s submitted. The homeland security fusion centers that exist in nearly all states could provide a role in collecting and interpreting fire and arson data in a timely fashion. If fire incident data were scanned at these fusion centers, trained and qualified intelligence personnel could extrapolate emerging trends that need to be addressed, as well as find linkages among methods, motives or perpetrators. This strategy aligns with the Homeland Security Advisory Council’s recommendation for a national intelligence estimate of pending threats. Develop a national training standard for fire investigators and law enforcement. Several organizations have created their own national “certification” programs for fire investigation personnel, but there isn’t a national standard that describes the skills needed to successfully investigate and prosecute criminal fires. Enhance fire protection and anti-arson strategies in building codes to enhance resiliency. Fire-resistant construction and automatic sprinklers improve a building for life safety and property protection. Though unable to prevent all fires, this construction method mitigates the impacts of those that occur, including arson. Building codes also provide generous design latitude and don’t require that all structures meet these rules. Homeland security strategies should encompass an all-hazards approach. The current national discussion on CBRNE threats focuses on high-risk/low-frequency events, but doesn’t address man-made threats that occur daily and aren’t always on the front pages of the national news media. Total fire deaths from the last decade and property loss due to arson exceeds that of all domestic terrorist attacks combined and must be addressed as part of a national security strategy. Robert A. Neale is deputy superintendent for the U.S. National Fire Academy in Emmitsburg, Md., and with its state partners trains more than 120,000 first responders each year. The fire academy is part of the FEMA component of the U.S. Department of Homeland Security.
<urn:uuid:00bc7f73-84cc-4e5e-ba2f-c20e7e1801d2>
CC-MAIN-2017-04
http://www.govtech.com/em/safety/Arson-Homeland-Security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931699
1,697
2.765625
3
OTDR, full name of which is optical time-domain reflectometer is one of the most popular method of testing the light loss in the cable plant. In most circumstance, it also indicate an fiber optic testing instrument to characterized the optical fibers. OTDRs are always used on OSP cables to verify splicing loss or locating damages to the fiber optic cables. Due to the decline in the OTDR price over recent years, it is more and more applied by technicians for the system installation process. OTDR uses backscattered light of the fiber to imply loss, which is an indirect measurement of the fiber. OTDR works by sending a high power laser light source pulse down the fiber and looking for return signals from backscattered light in the fiber itself or reflected light from connectors or splice interface. OTDR testing requires a launch cable for the instrument to settle down after reflections from the high powered test pulse overloads the instrument. OTDRs can either use one launch cable or a launch cable with a receive cable, the tester result of each is also different. Test With Launch Cable Only A long lauch cable allows the OTDR to settle down after the initial pulse and provides a reference cable for testing the first connector on the cable. When testing with an OTDR using only the launch cable, the trace will show the launch cable, the connection to the cable under test with a peak from the reflectance from the connection, the under testing cable and likely a reflection from the far end if it is terminated or cleaved. Most terminations will show reflectance that helps identify the ends of the cable. By this method, it can not test the connector on the far end of the under testing cable since it is not connected to another connector, and connection to a reference connector is necessary to make a connection loss measurement. Test With Launch And Receive Cable By placing a receive cable at the far end of the under testing cable, the OTDR can measure the loss of all factors along the cable plant no matter the connector, the fiber of cables, and other connections or splices in the cable under test. Most OTDRs have a least squares test method that can substract out the cable included in the measurement of every single connector, but keep in mind, this may not workable when the tested cable is with two end. During the process you should always keep in mind to start with the OTDR set for the shortest pulse width for best resolution and a range at least twice the length of the cable you are testing. Make an initial trace and see how you need to change the parameters to get better results. OTDRs can used to detect almost any problems in the cable plant caused during the installation. If the fiber of the cable is broken, or if any excessive stress is placed on the cable, it will show up the end of the fire much shorter than the cable or a high loss splice at the problem locations. Except OTDR testing, the source and optical power meter method is another measurement which will test the loss of the fiber optic cable plant directly, The source and meter duplicate the transmitter and receiver of the fiber optic transmission link, so the measurement correlates well with actual system loss.
<urn:uuid:b9d45c1e-b07a-4bb5-bdf9-9023a733cdb9>
CC-MAIN-2017-04
http://www.fs.com/blog/how-to-test-fiber-optic-cables-by-otdr.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00404-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911872
656
2.921875
3
What email address or phone number would you like to use to sign in to Docs.com? If you already have an account that you use with Office or other Microsoft services, enter it here. Or sign in with: Signing in allows you to download and like content, which the author will be aware of. Embed code for: Unit 3 Study Guide Select a size Spanish I A - Unit 3 Study Guide Instrucciones: Complete this study guide as you complete the unit. Study the notes from each lesson before quizzes and tests. Page numbers are in parenthesis for each question. For vocabulary tables, fill in the blanks with the missing Spanish or English word and take notes on use and pronunciation. To study before quizzes/tests, use another piece of paper to cover up the English side and quiz yourself, and then use the piece of paper to cover the Spanish side and quiz yourself. el amigo, la amiga Fill in the blank with the best vocabulary word. “Me gusta ir a la escuela. Me gusta estudiar. Me gusta leer libros. Yo soy ____________ .” (1, 2) In Spanish-speaking cultures, what are some types of events that families spend time together? (3) Describe the relationship between families and close family friends. How are close family friends addressed, even though they aren’t related to the family by blood? (3) What is the boulevard in La Habana that is a major tourist attraction? (4) When was the Catedral de la Habana built? In what architectural style was it built? (4) What is the Cascada en Río Brazo? In what region of Cuba is it located? (4) What are you like? What’s his/h er name? Are you . . .? he/she likes . . . he/she doesn’t like . . . When salsa music came about in the 1960s, what types of music did it combine? (3) In the 1970s, what happened to salsa music that helped further define it? (3) What is cubism? Who cofounded it? (4) Describe the art style of Fernando Botero. (4) a, an (feminine) Do we use the word “muy” before or after the adjective? (2) What are two ways you learn in this lesson to say “the” in Spanish? When do you use each one? (3) Complete the phrases with the correct form of the word “the” in Spanish (3): What are two ways you learn in this lesson to say “a” or “an” in Spanish? When do you use each one? (3) Complete the phrases with the correct form of the word “a”/”an” in Spanish (3): What countries participate in the Pan American Games? Are European countries included? (4) Describe how the Pan American Games were started. When did people start to talk about having something like the Pan American Games? When did the Games actually start? (4) What are some sports that are in the Pan American Games but not the Olympics? (4) Where will the 2015 Pan American Games be hosted? (4) What theory did Charles Darwin develop through study on the Galápagos Islands? (5) What type of islands are the Galápagos that makes them a particularly harsh environment? (5) Fill out the chart to describe the general rule for adjective agreement found on page 2. The adjective ends in –o when the noun is _____________ and ________________ . The adjective ends in –a when the noun is _____________ and ________________ . The adjective ends in –os when the noun is ____________ and________________ . The adjective ends in –as when the noun is ____________ and________________ . Fill in the correct form of the word “reservado” according to the phrase. Change the ending according to the above chart when necessary (2): La chica _____________________ Un amigo ___________________ Los chicos ____________________ Las amigas ___________________ Do adjectives in Spanish typically come before the noun or after the noun they modify? (3) What are some things that you can do in Punta del Este, Uruguay? What can you do outdoors, for artistic interests, and at night? (4) What is important about María Nsué Angüe as an author? What is cultural important about her novel Ekomo? What country brought the game of dominoes to the New World? (4) Describe the cultural relationship between playing dominoes, the past, and the present. (4) What type of animals do the Cadejos look like? (5) What is the purpose of the white cadejo? (5) What does the legend of the white and black cadejos represent? (5), talentosa Las amigas __________
<urn:uuid:9eccbd22-6d07-4efc-937b-36d868bcde70>
CC-MAIN-2017-04
https://docs.com/danielle-poppell/1057/unit-3-study-guide
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00038-ip-10-171-10-70.ec2.internal.warc.gz
en
0.839637
1,102
2.671875
3
Bitcoin has become one of the most interesting technologies of the past few years. The magic behind bitcoin is powered by an equally exciting technology known as the blockchain. While the Bitcoin is certainly the most famous application of the blockchain, it is far from being the only one. The decentralized, trustless and secured capabilities of the blockchain have the potential of redefining many traditional business solutions including those powering the enterprise Internet of Things (IoT). The blockchain explained The blockchain is Bitcoin’s public ledger. From a functional standpoint, the blockchain provides a decentralized, time stamped, ordered record of all transactions in a Bitcoin network that can be verified at any time. These simple capabilities represent the first practical answer to profound computer science problems based on the trust of nodes in a decentralized network. One of the most popular and ancient problems that can be solved by the blockchain is the "the Byzantine Generals’ Problem.” To quote from the original paper defining the B.G.P.: “[Imagine] a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement.” The blockchain can solve the B.G.P. because it provides, for the first time, the infrastructure for a user to directly transfer a piece of property (ex: money), to another user in a secure and safe way in which everyone in the network knows about the transfer and yet nobody can challenge its legitimacy. The blockchain and industrial IoT The decentralized, autonomous, and trustless capabilities of the blockchain make it an ideal component to become a foundational element of industrial IoT solutions. It is not a surprise that enterprise IoT technologies have quickly become one of the early adopters of blockchain technologies. In an IoT network, the blockchain can keep an immutable record of the history of smart devices. This feature enables the autonomous functioning of smart devices without the need for centralized authority. As a result, the blockchain opens the door to a series of IoT scenarios that were remarkably difficult, or even impossible to implement without it. Trustless peer-to-peer messaging By leveraging the blockchain, industrial IoT solutions can enable secure, trustless messaging between devices in an IoT network. In this model, the blockchain will treat message exchanges between devices similar to financial transactions in a bitcoin network. To enable message exchanges, devices will leverage smart contracts which then model the agreement between the two parties. In this scenario, we can sensor from afar, communicating directly with the irrigation system in order to control the flow of water based on conditions detected on the crops. Similarly, smart devices in an oil platform can exchange data to adjust functioning based on weather conditions. Autonomous smart devices Using the blockchain will enable true autonomous smart devices that can exchange data, or even execute financial transactions, without the need of a centralized broker. This type of autonomy is possible because the nodes in the blockchain network will verify the validity of the transaction without relying on a centralized authority. In this scenario, we can envision smart devices in a manufacturing plant that can place orders for repairing some of its parts without the need of human or centralized intervention. Similarly, smart vehicles in a truck fleet will be able to provide a complete report of the most important parts needing replacement after arriving at a workshop. Fully auditable, secured device ledger One of the most exciting capabilities of the blockchain is the ability to maintain a duly decentralized, trusted ledger of all transactions occurring in a network. This capability is essential to enable the many compliance and regulatory requirements of industrial IoT applications without the need to rely on a centralized model. It’s happening already The promise of the blockchain in IoT solutions is far from just a theoretical exercise. From incumbents to startups, new IoT technologies are leveraging the capabilities of the blockchain to disrupt traditional centralized IoT scenarios. Let’s look at a few examples: - Last year, IBM and Samsung announced a collaboration to build decentralized IoT solutions by leveraging the blockchain. IBM documented some of the learnings from the initial pilot in a very thoughtful paper that is one of the first documented architecture references for using the blockchain in IoT scenarios. - Filament is startup that develops ad-hoc mesh networks of smart sensors for industrial applications, operating on the blockchain. Filament’s wireless sensor devices, or Taps, can cover industrial areas with low-power autonomous mesh networks for data collection and asset monitoring. By leveraging the blockchain, devices in a network can accept bitcoin payments to enable access to specific data. - Blockchain platform leader Ethereum, recently hosted a hackathon to build IoT solutions powered by the blockchain. The solutions produced by this exercise clearly highlight the disruptive nature of the blockchain in IoT solutions. These are just some of the examples of IoT solutions that are starting to leverage the capabilities of the blockchain. While we are still in the early stages of the evolution of the blockchain technologies, the possibilities in the IoT space are nothing short of remarkable. Exciting times indeed. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:9302afa9-c074-4460-a211-05c3f2eaa096>
CC-MAIN-2017-04
http://www.cio.com/article/3027522/internet-of-things/beyond-bitcoin-can-the-blockchain-power-industrial-iot.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00432-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917461
1,075
3.125
3
Temperature sensors providing a vital tool for cattle ranchers Wednesday, Mar 20th 2013 Recent climate trends have sparked a wave of concern for the potential effects caused across many agricultural and livestock industries. National Geographic reported that 2012 was the warmest year on record the continental United States, with the average temperature reaching 55.3 degrees Fahrenheit. That figure is one degree higher than the previous record, set in 1998. Causing further alarm has been widespread trends of aridity, with the average precipitation total for the contiguous U.S. measured at 26.57 inches, 2.57 inches below the historical average. The U.S. National Oceanic and Atmospheric Administration's Jake Crouch warned that if the trend continued, the nation could begin see more unusually warm years. The implications for livestock If warming trends continue, it could be extremely problematic for cattle farmers as the animals have a far lower tolerance to heat than humans. According to the University of Nebraska - Lincoln, cattle have an upper critical temperature that is cooler than humans by approximately 20 degrees Fahrenheit. In 90 degree conditions, for instance, while a person may feel uncomfortable, a cow would be in danger of extreme heat stress. Furthermore, cattle production is concentrated to the United States' central region, including Texas, Nebraska, Kansas and Oklahoma, according to the EPA. This area experiences higher than average temperatures compared to much of the continental United States. Oklahoma alone experiences, on average, 71 days with temperatures at or above 90 degrees Fahrenheit, according to the Oklahoma Climatological Survey. Identifying at risk cattle The development of dairy cows into high-producing livestock has further exasperated their tendencies toward overheating. The New York Times reported that researchers from the University of Arizona and Northwest Missouri State conducted a study on how to identify cattle at-risk for heat exhaustion and found that dairy-yield demands had made the animals more susceptible to overheating. "Heat exhaustion is a fairly common problem in summer months over most of the U.S., especially as our cows have gotten to be high-producing animals," University of Arizona researcher Robert Collier said, according to the news outlet. "They're eating more and producing more heat, so they're more sensitive." Researchers measured the cattle's internal body conditions with temperature sensors. In addition, the team affixed sensors to the cows' legs to determine if they were standing or lying down. The study found that cattle that registered 102 degrees Fahrenheit or higher on their temperature monitors were more likely to stand for long stretches of time. Researchers said cattle with higher temperatures preferred standing most likely because it increased air circulation across their bodies, allowing body heat to escape. They cautioned, however, that the cattle ultimately use more energy standing than lying down. According to the research team, farmers should coax their livestock to lie down in extreme heat conditions as well as provide a steady mist of cool water. Living the Country Life further recommended that farmers supply their livestock with ample shade and clean water as well as limiting daily activity to the early morning hours when temperatures are significantly lower.
<urn:uuid:1e1008d8-e2f2-4783-92bf-d84392d9e5e1>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/temperature-sensors-providing-a-vital-tool-for-cattle-ranchers-406803
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00340-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951223
616
3.328125
3
Smart City technology is an attractive prospect to councils and utilities, but with large distributed networks with unique technologies, how can we keep them secure? In 2007 IBM teamed up with Singapore’s Land Transport Authority (LTA) to produce a system that would not only give the LTA real time reports as to the flow of traffic, but also predict the state of traffic thirty minutes in to the future. The £10 million dollar system exceeded expectations, predicting traffic with an accuracy of over 85%. The LTA used this information to manage the flow of traffic and better avoid congestion. Smart Cities aren’t just limited to improving traffic management. New solutions are being developed and implemented that cover everything from detecting gun crime through acoustic sensors, to more efficient garbage collection by sensing which dumpsters are full and which are empty and can be skipped. In essence if enough data can be collected about a resource, its management can be made more efficient. In a typical smart city hundreds of sensors are positioned around a city which report back to a control system. This collects and analyses the data before displaying conclusions and in some cases updating controllers. With the conservation of energy and reduction in pollution becoming high priorities for modern cities, the smart city is an apparent utopia for governments. As more and more cities invest large amounts of money into smart city solutions, major manufacturers are lining up to demonstrate that they should be the ones who can be build, install and run these solutions. In 2013, the UK department for Business, Innovation and Skills (BIS) released a report stating that the global smart cities industry would be valued at $400 billion by 2020. Given their level of investment, major international corporations agree with this estimate. IBM, Cisco, Schneider and Siemens are just a few who have been singled out as major leaders in investment and innovation in smart cities. Several smart cities have already been created across the world in an attempt to capitalise on the new field. But not all technologies have been welcomed by the inhabitants. In 2013 200 smart bins were installed in London. Their goal was not to lower pollution or reduce energy usage, but to show adverts. These adverts would be produced depending on the unique identifier of the pedestrian’s smart phone. According to their manufacturer, they would use “cookies for the real world”, letting advertisers better target their adverts. Perhaps unsurprisingly when London’s public found out about how these bins worked, there were calls for them to be removed. The City of London took seriously enough to pull the devices from the streets. According to The City of London Corporation: “We have already asked the firm concerned to stop this data collection immediately. We have also taken the issue to the Information Commissioner’s Office. Irrespective of what’s technically possible, anything that happens like this on the streets needs to be done carefully, with the backing of an informed public.” It could be argued that this particular instance of city wide data collection could have been successful if the public was better informed. There is another factor that all smart city vendors and implementers should also be concerned with. And if they are in a rush to be first to market, it could be far more costly. During Christmas 2014, the newspapers were dominated with the Sony hacking story, but another story came out around the same time that deserved far more attention that it gained. A report detailing a Turkey pipeline explosion found that the culprit was not a simple malfunction as had been initially thought, but a deliberate and well-planned cyber attack. The attack first disabled security cameras and alarms, then pressurised the pipeline until it caused the pipeline to explode. After a lengthy investigation, it was found that the attackers had broken into the network via the remote surveillance cameras that ran the length of the pipeline. Smart cities also need to rely on this distributed layout of components, often using low powered sensors transmitting their data wirelessly back to the controller. Smart city vendors and users must keep one fact in mind when building their network: It might be their devices, but they can not be trusted. Most people in the security field should be comfortable with the idea of trust boundaries, but the traditional model blurs when components are outside of physical boundaries, but inside encrypted channels. Tools are becoming more available and affordable to anyone who wishes to investigate and reverse engineer smart city components. Security needs to be thought about at inception of smart cities so that they are safe throughout their many years in the field. Companies should build systems that are resilient to attack, have methods for detecting attacks when the attacker finally gets in, and have plans in place to deal with an attack that bypasses their detection. Smart Cities are a major target and should be built with security measures to match. So is the risk of an attacker turning off the lights too much? Does the risk of smart cities outweigh their benefits? Saying no already isn’t an option. Smart cities are already here. From Boston to Santander to Stockholm, cities around the world are already implementing the technology to help them better manage their infrastructure and resources. And their effect is impressive. Santander, the EU’s designated test bed for smart city technology reduced energy costs “by as much as 25 percent”. It’s an incredible incentive for other cities to follow suit. However it must be done at a pace that keeps the public informed, and security considerations included in every stage of the process.
<urn:uuid:aa867167-89cd-46a8-8052-782ca60680b7>
CC-MAIN-2017-04
https://www.mwrinfosecurity.com/our-thinking/securing-the-smart-city/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00368-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966376
1,108
2.8125
3
With the Carbon to Collaboration Initiative, Cisco is committed to reducing carbon emissions by a minimum of 10 percent, starting with a dramatic reduction in the company’s air travel in 2008. In addition, Cisco will invest US$20 million in collaborative technologies that will reduce the need for physical travel at Cisco, by combining its Unified Communications technologies, which include voice and data, with a rich-media and video experience to create virtual interactions across distances. People located across the country and around the globe, for example, will be able to work together as effectively as if they were sitting in the same room. How Cisco TelePresence Supports the Connected Urban Development Program The Cisco TelePresence videoconferencing solution supports the CUD program, encouraging telecommuting to help decrease the number of vehicles on the road, especially during peak hours. TelePresence creates a high level of virtual interaction across distances, without compromising communications and collaboration among people.
<urn:uuid:c5278518-5f50-4d1e-aa05-cb6b3854ea0b>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/consulting-thought-leadership/what-we-do/industry-practices/public-sector/our-practice/urban-innovation/connected-urban-development/further-cud-information/thought-leadership/carbon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942856
195
2.578125
3
In this practice lab, we are going to investigate inter-VLAN routing using Packet Tracer. I am assuming every reader of this article has fundamental knowledge about VLAN otherwise you can check my previous article on Inter-VLAN here. As we know, VLANs subdivided LAN into different groups and Inter-VLAN routing is required to communicate with each other. In the below topology, we will first create VLAN and then apply the router-on-a-stick method for inter-VLAN communication. From the topology above, you can see we created VLAN 10 and VLAN 20. In order to apply the router-on–a-stick method for Inter-VLAN communication, we will have to create sub-interfaces on fast-Ethernet interface of the router c. Remember we will only create as many sub-interfaces for as many VLANs we are using in our topology. In this case we are using two VLANs, so we will create two sub-interfaces for both VLANs. After that we will apply encapsulation on those sub-interfaces. Tasks to Perform: 1. You need to configure the specified VLANs for Switch ACS 2. Perform router-on-a-stick method for inter-VLAN communication so Hosts H1.1, H1.2 and H2.1, H2.2 can successfully ping each other. Note: You need to configure only Switch ACS & router C to complete the task The following subnets are available to implement this solution: Hosts H1.1, H1.2, H2.1, H2.2 are configured with the correct IP address and default gateway. Switch ACS uses cisco as the enable password. At this point, we have completed the steps to enable inter-VLAN communication. But what if we have 100s of VLAN in a network? We have to create hundreds of sub-interfaces, which is not a feasible solution. The issue can be resolved by creating SVI (Switched Virtual Interface) and for that we need an L3 switch as you read in my previous article on inter-VLAN communication. Tasks to perform inter-VLAN routing using SVI are: You need to configure the illustrated VLANs and SVI on SwitchX so that Hosts H1.1, H1.2, H2.1, H2.2 can successfully ping the server S1. To complete this task, you need to configure VLAN port assignments or create trunk links. Don’t try to use static or default routing. All routes must be learned via EIGRP 10 routing protocol. Note: You do not have access to router C. Router C is correctly configured. Hosts H1.1, H1.2, H2.1, and H2.2 are configured with the correct IP address and default gateway. SwitchX uses Cisco as the enable password. Routing must only be enabled for the specific subnets shown in the diagram. This lab is complete when you can demonstrate IP connectivity between each of the user VLANs and the external server network, and between the switch management VLAN and the server network. Note: I have enclosed a solution file for verification or to use as a reference to clear your doubts. In this Practice CCNA Lab you can use Packet Tracer (Ver 5.3 & above) to troubleshoot a simulated network that was designed and configured to support two VLANs and a separate server network. Inter-VLAN routing is provided by an external router in a router-on-a-stick configuration as well as SVI configuration. However, the network is not working as designed and complaints from your users do not provide much insight into the source of the problems. You must first define what is not working as expected, and then analyze the existing configurations to determine and correct the source of the problems.
<urn:uuid:ef0f56bb-6be4-4faa-a58b-aa1fc93992cb>
CC-MAIN-2017-04
http://resources.intenseschool.com/packet-tracer-ccna-prep-inter-vlan-routing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00425-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899262
830
2.984375
3
Is Android Really Open Source? Google's Android mobile operating system has emerged to become a dominant force in the smartphone landscape. Sitting at the core of Android is Linux as well as a long list of open-source technologies. Many people mistakenly think that Android itself is all open source, but the harsh reality is that from a usable handset perspective, it's not always open source and an incident this week proved that fact beyond any shadow of a doubt. You see, there is Google Android, the project that Google builds and shares with its handset partners, then there is the Android Open Source Project (AOSP). The two are not exactly the same. One of them includes proprietary technologies that are not available as open source (guess which one?). Jean-Baptiste Quéru, the maintainer of AOSP abruptly quit his post this week, throwing into question the viability of Android as an open-source effort. "There's no point being the maintainer of an Operating System that can't boot to the home screen on its flagship device for lack of GPU support," Queru stated in a G+ post. The challenge that Queru is referring to is the ability of AOSP to boot on the Nexus 4 and 7 devices. Apparently there are some proprietary bits that silicon vendor Qualcomm is not making available as open source, without which AOSP will not boot. I don't necessarily see this as a Google-only issue, but rather one that any and all Linux/open-source vendors and projects must face as they dip their toes into the murky patent-infested waters of mobile telephony (don't forget, Microsoft has made more patent licensing deals with Android handset makers than I care to count). Personally speaking, when it comes to the ability to boot a device, especially when we're talking about something that is Linux-based, I believe that all that core boot functionally should reside in the kernel space (the core of the operating system). With Linux, the kernel is GPL (GNU Public License), which is a reciprocal license. That means that any code that is in the core kernel itself must also be GPL and that code must always be open and made available for others to enjoy and extend. The GPL is the magic that makes Linux work and is a key part of its success. Yes, I know that Android has already enjoyed much success, but if all the core code required to make it boot (on any device) was open source, it would be even better. I'm not talking about special drivers or cameras or accelerometers here; I'm just talking about the ability of the device to boot. I suspect the Mozilla folks are dealing with the same issue on the Firefox OS-based smartphones that are just now starting to appear. Mozilla is a company that has always been at the heart of the open-source movement, but proprietary drivers and components are often foundational elements in the mobile phone business, so it's not an easy challenge to overcome. The solution to this issue is for mobile hardware tech vendors to open up their intellectual property (not likely) or for a vendor to emerge that will build a "pure" open-source hardware platform from silicon on up. Google has long held to its corporate mantra of "Do No Evil," and leaving the ASOP community to twist in the wind isn't a "nice" thing to do. Time will tell how this plays out. I'm hoping Google will play nice. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
<urn:uuid:373dab37-3017-4568-9545-930a23447651>
CC-MAIN-2017-04
http://www.eweek.com/blogs/first-read/is-android-really-open-source.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00057-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964489
731
2.546875
3
Cell Phones With X-Ray Vision, Mega-Speed WiFi Feasible: MIT Cell phones that peer through walls and wireless connection speeds increased by 10 times are technically feasible, and prototypes have already been demonstrated, according to an MIT researcher. At the recent Mobile Summit: It's a Disruptive Mobile World conference in Boston sponsored by the Massachusetts Technology Leadership Council, Dina Katabi, the director for the MIT Center for Wireless Networks and Mobile, outlined a series of technology developments that had the seriously mobile enthusiastic audience wanting to know more. Katabi, who holds 12 awards for her research, has a particular interest in adapting tools from applied mathematics to solve network obstacles including network congestion, scale and security. "There are new ways to use the wireless spectrum to achieve 10 times more data for your cell phone and new applications," Katabi told the audience. Mega multiple input, multiple output (MegaMIMO)—a patent-pending technology—delivers 10 times more wireless data per unit of spectrum. A winner of the MIT Elevator Pitch Contest, MegaMIMO allows multiple WiFi access points to collaboratively create WiFi capacity as additional access points are added to the system. In current WiFi environments, additional access points often create additional data collisions and crashes. One of the more well-known examples of WiFi limitations came during Steve Jobs' introduction of the iPhone 4 on June 7, 2010, at the Apple Worldwide Developers Conference when he was forced to ask audience members to shut off their wireless devices to demonstrate the new phone. While the MIMO concept has been well established, the ability to create capacity rather than create collisions is the MIT development and, according to Katabi, the technology can scale, offer additional security and be overlaid onto existing networks. A more complete description of the technology is available at the MIT Networks site, but Katabi claimed that the technology has been proven in a conference room setup using 10 WiFi stations where capacity was increased nearly 10 times. The possibility of using WiFi signals and smartphones to peer through walls drew intense questioning from the audience. Describing the technology, dubbed WiVi, as still in its infancy, Katabi showed a video of the smartphone-WiFi combination detecting a person moving in a conference room separated from the smartphone by a solid wall. The WiVi system is based on concepts similar to sonar and radar but instead of using expensive equipment and restricted spectrum, the system used low-cost, low-power WiFi signals to measure and cancel out stationary objects while leaving moving object visible. The first public disclosure of the technology was in June 2013, and it is still unproven if the system that uses two transmit antennas and a single receiver could be easily incorporated into a smartphone. While the system at present can detect movement, it cannot identify individual features. University-based technology projects tend to have a long, slow journey to commercial deployment. Of the two projects Katabi highlighted in her presentation, MegaMIMO has the most immediate appeal for the bandwidth starved. However, the WiVi project wins the cool factor by a large margin. Both projects illustrate how to reuse established technologies in new ways. Vendor-based research is too often focused on small improvements to existing products or trying to duplicate a competitor's product while avoiding a patent legal suit. I'm rooting for Katabi to turn her powerful command of networking technologies into powerful products. Eric Lundquist is a technology analyst at Ziff Brothers Investments, a private investment firm. Lundquist, who was editor in chief at eWEEK (previously PC Week) from 1996-2008, writes this blog for eWEEK to share his thoughts on technology, products and services. No investment advice is offered in this blog. All duties are disclaimed. Lundquist works separately for a private investment firm, which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.
<urn:uuid:f421ccaa-6f59-427d-a0d2-b67704852cc9>
CC-MAIN-2017-04
http://www.eweek.com/blogs/upfront/cell-phones-with-x-ray-vision-mega-speed-wifi-feasible-mit.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00451-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942021
801
2.828125
3
NASA to test Google 3D mapping smartphones - By Kathleen Hickey - Mar 27, 2014 Project Tango, Google’s prototype 3D mapping smartphone, will be used by NASA to help the international space station (ISS) with satellite servicing, vehicle assembly and formation flying spacecraft configurations. NASA’s SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites) are free-flying bowling-ball-sized spherical satellites that will be used inside the ISS to test autonomous maneuvering. By connecting a smartphone to the SPHERES, the satellites get access to the phone’s built-in cameras to take pictures and video, sensors to help conduct inspections, computing units to make calculations and Wi-Fi connections to transfer data in real time to the computers aboard the space station and at mission control, according to NASA. The prototype Tango phone includes an integrated custom 3D sensor, which means the device is capable of tracking its own position and orientation in real time as well as generating a full 3D model of the environment. “This allows the satellites to do a better job of flying around on the space station and understanding where exactly they are,” said Terry Fong, director of the Intelligent Robotics Group at Ames. Google handed out 200 smartphone prototypes earlier this month to developers for testing the phone’s 3D mapping capabilities and developing apps to improve these capabilities. The customized, prototype Android phones create 3D maps by tracking a user’s movement throughout the space. Sensors “allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time combining that data into a single 3D model of the space around you,” Google said. Mapping is done with four cameras in the phone, according to a post on Chromium. The phone has a standard 4MP color backside camera, a 180 degrees field-of-view (FOV) fisheye camera, a 3D depth camera shooting at 320×180@5Hz and a front-facing camera with a 120 degree FOV, which should have the same field of view as the human eye. The cameras are using Movidius’ Myriad 1 vision processor platform. Previous visual sensor technology was prohibitively expensive and too much of a battery drain on the phones to be viable; new visual processors use considerably less power. Myriad 1 will allow the phone to do motion detection and tracking, depth mapping, recording and interpreting spatial and motion data for image-capture apps, games, navigation systems and mapping applications, according to TechCrunch. The cameras, together with the processor, let the phone track and create a 3D map of its surroundings, opening up the possibility for easier indoor mapping, a problem facing urban military patrols and first responders. Kathleen Hickey is a freelance writer for GCN.
<urn:uuid:f99630d4-b889-43b7-a942-ef7dd1c1cf01>
CC-MAIN-2017-04
https://gcn.com/articles/2014/03/27/nasa-google-tango.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00295-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894461
602
2.828125
3
Last year, we wrote about a sweeping project from the MIT Sustainable Design Lab and the Boston design firm Modern Development Studio that mapped the potential for installing solar power on every square meter of every roof in Cambridge, Massachusetts. MIT developed algorithms using public flyover LIDAR data to automatically assess each building's suitability – by location, angle and surroundings – for soaking up the sun's rays. At the time, the tool looked like a replicable one that could change how we harvest solar power on a community scale. Now, the project's original creators have licensed their technology from MIT and launched a spinoff company, called Mapdwell, that intends to scale this up beyond Cambridge, even beyond solar surveys. A similar and slick interactive platform, they figure, could also educate homeowners and commercial building managers about their potential for other kinds of green roofs, or rainwater collection. The project just tapped its second city, Washington, D.C. And similar solar maps in Wellfleet, Massachusetts, and, abroad, in Chile, are due in the new year. Eduardo Berlin, the new company's CEO, says Mapdwell is also in talks with about 20 other cities, mostly in the U.S., to create something similar to this Washington platform:
<urn:uuid:1109ac3f-a728-4fd3-827e-1c8018b1a4cf>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2013/12/imagining-us-supreme-court-covered-solar-panels/75542/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00351-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940832
255
2.609375
3
Stress Inoculation Training Similar to first responders and other emergency management professionals, emergency room staff are frequently under huge amounts of pressure, and often forced to make life or death decisions with little to no notice. Even with their experience, however, they can often find themselves temporarily frozen by the unexpected. In such situations, said director of emergency ultrasound and director of education at the University of Utah, Michael Mallin, MD to Emergency Medicine News, “the more we recognize that our stress is going to be through the roof, the better we’ll be able to cope with that stress in the moment.” To help in such situations, the University of Toronto is performing a funded trial of a stress inoculation training program for nurses, respiratory therapists, and emergency medicine and general surgery residents, that may find a larger audience in emergency and disaster management personnel. The training, intended to develop the “fight” reflex in high-stress situations, is intended to have three components: - The teaching people of the physiological facts and effects of stress; - The practicing of the physical and mental skills to deal with stress; - The simulation of scenarios to allow for practice of skills under stress. Notes Dr. Mallin with respect to the benefits of training, “The medicine's not that hard. We can all talk through a cric. But whether you can perform a thoracotomy in real time is all about how well you can compose yourself in the moment, how prepared you are for the adrenaline rush. That's what stress inoculation training is about.”
<urn:uuid:5b5b134c-1e34-4f22-9a1c-8782e08d1f80>
CC-MAIN-2017-04
http://www.disaster-resource.com/index.php?option=com_content&view=article&id=2838
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00167-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948697
324
2.953125
3