Convergence: Tools for Traffic

From the earliest days of networking, the term “convergence” has been interpreted to mean the confluence of different types of traffic across the same networking media. At the outset of this phenomenon, the tools and technologies were designed to let voice and data networks share the same media, especially when expensive, high-bandwidth carrier connections were common. Because it permits infrastructures—and thus related costs, equipment and links—to be shared, convergence has always come in for a fair amount of buzz and attention. But in an age of the ubiquitous Internet and ever-higher bandwidth links to homes as well as offices, convergence has expanded to include whatever kinds of traffic one can wish for or imagine.

These days, the biggest buzz in convergence circles isn’t about linking private telephone systems with data networks, or vice versa. It’s about using the network directly for various types of traffic, including voice, video and other forms of what are sometimes called “media-rich communications” (which combine data, sound and images into complex communication streams). As convergence needs have evolved, so have the tools and technologies to support them. Today, the biggest topics in convergence have to do with the following types of communication activities:

- Voice over Internet Protocol (VoIP): This rapidly expanding voice communications technology permits voice traffic to traverse public and private IP networks. Combined with phone switches and other equipment, VoIP phone users not only can call each other, but also can dial into the old-fashioned public switched telephone network (PSTN). Given that delays in voice communications can be problematic, this type of service is normally classified as real-time.
- Voice and Video Communications: This family of applications and services is built largely around the H.323 standard developed by the International Telecommunication Union (ITU) to define how audiovisual conferencing data should be transmitted across networks. As implemented for TCP/IP, this largely sets the stage for how webinars, online training, networked meetings and other forms of real-time communication are orchestrated and handled.
- Wireless Link-Ups: This is mostly a matter of extending these types of communication onto wireless network segments, but it also embraces emerging generations of wireless phones and mobile devices with support for VoIP, streaming media and possibly even voice and video communications.

Another interesting form of convergence occurs in the proliferation of multi-purpose access devices. At one time, purchasing an Internet access device meant buying a single-purpose device designed to deliver Internet connections to individual computers or entire networks. Especially for broadband Internet services, a whole new class of devices has emerged in the past two to three years that typically integrates Internet access, security capabilities, network services and connections, and more in a single inexpensive box. Manufacturers like Belkin, D-Link and SMC (among others) all offer devices for less than $100 that include some or all of the following capabilities:

- Cable modem Internet connection: Some support both DSL and cable connections.
- Four or more ports of switched 10/100 Ethernet: These devices also come in wireless implementations, with support for 802.11b typical and 802.11g increasingly evident. For wireless networks, devices can usually handle eight or more simultaneous connections. One or more USB connections for direct PC link-ups is also typical.
- IP services: Most of these boxes include built-in routing, DHCP and network address translation (NAT). Some even include higher-level content filtering or print server functionality as well.
- Internet security capabilities: Most of these devices offer built-in firewalls with both inbound and outbound traffic screening. Many also offer stateful inspection and limited intrusion detection and prevention capabilities.

That’s a lot of functionality for little money, but where convergence really comes into play is with an increasing number of offerings that include telephone jacks and support for one or more VoIP-capable or conventional telephone handsets. Other evidence of this kind of convergence includes another class of similar products built around telephone handsets. These integrate the same kind of functionality just described—Internet link-ups for DSL and/or cable, wired or wireless local network links, IP services and security capabilities, along with VoIP support—inside a telephone-shaped device. A broad range of vendors, from giants like Cisco Systems to upstarts like Zultys, are charging into this space with compact, affordable device offerings. Though these cost more than multi-purpose Internet access devices, they integrate powerful telephone service capabilities along with wired or wireless handsets and headsets, along with everything else.

Making Networks Ready for Real-Time

Of course, introducing real-time protocols and services such as voice or conferencing onto conventional data networks can be an interesting process. In fact, most experts recommend that some planning, modeling and analysis be performed before setting such things loose on any network, and this goes double when implementing plans to add voice or conferencing services. Ultimately, it’s all about tolerable delay and quality of service: the packets must not take too long in transit, must not experience unusual degrees of loss or rejection along the way, and must arrive in some reasonable semblance of the sequence in which they left—at least within the levels of tolerance for delay that these forms of communication can reasonably support.

To facilitate the introduction of voice and conferencing services onto networks, technology vendors have developed all kinds of tools. These are usually divided into two major classes:

- Planning and modeling tools: These tools depend primarily on simulating events and traffic on modeled versions of real (or planned) networks to help network professionals decide whether those networks can handle planned levels of real-time traffic and activity, along with whatever other more conventional uses they may support.
- Analysis and characterization tools: These tools can measure delays, monitor quality-of-service behavior and, in general, probe and measure network performance to see how well or poorly networks handle real-time traffic, what kinds of loads are workable, and where things start to break down or become unworkable. These tools not only measure actual traffic and network characteristics, but also can simulate traffic loads to see how networks behave when real-time traffic is introduced onto them. A sketch of the kind of measurement involved appears below.
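To make the delay and quality-of-service discussion concrete, here is a minimal sketch of the sort of measurement such analysis tools perform: packet loss and a smoothed interarrival jitter figure (the estimator described in RFC 3550) computed from send and receive timestamps. The timestamps are invented sample data; a real tool would work from live packet captures rather than hard-coded lists.

```python
# Toy loss and jitter calculation from per-packet timestamps (seconds).
sent =     [0.000, 0.020, 0.040, 0.060, 0.080, 0.100]
received = [0.045, 0.066, None,  0.108, 0.126, 0.149]   # None = packet lost

lost = sum(1 for r in received if r is None)
print(f"loss: {lost}/{len(sent)} packets")

jitter = 0.0
prev_transit = None
for s, r in zip(sent, received):
    if r is None:
        continue
    transit = r - s                      # one-way transit time of this packet
    if prev_transit is not None:
        d = abs(transit - prev_transit)  # variation between successive packets
        jitter += (d - jitter) / 16.0    # RFC 3550 smoothed jitter estimator
    prev_transit = transit
print(f"smoothed jitter estimate: {jitter * 1000:.2f} ms")
```

Real-time voice typically tolerates only small values of delay, loss and jitter, which is why these figures are the ones planning and analysis tools tend to report.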
It’s eminently possible to step into a situation where management’s and users’ high expectations about the usability and productivity gains they will get from convergent technologies and communications are dashed by network problems. Thus, it’s essential to make sure not only that initial usage scenarios are workable, but also that enough room for growth and increased capacity is built into initial designs so that constant upgrades don’t become the norm. Once users become accustomed to services, no matter how advanced, they soon come to depend on them, so the last thing you want to do is put them in a situation where those services degrade or become temporarily unavailable on a noticeably regular basis.

What Convergence Really Means

Introduction of voice and conferencing services often necessitates upgrades of network infrastructure to accommodate performance requirements.
Having your website users’ financial information intercepted outranks the vast majority of eavesdropping consequences, at school or in the workplace, for example. And yet, are website owners taking the necessary precautions to protect against man-in-the-middle and IP spoofing attacks, like sequence prediction attacks, on their websites? No. No they aren’t.

According to the internet security and CDN provider Imperva Incapsula’s IP spoofing definition, IP spoofing is the process of disguising the origin IP of internet traffic. This is accomplished by changing the source IP address header, which is one of the headers that contains the information necessary for routing and transmission continuity. IP spoofing has legitimate purposes, such as skirting censorship laws in authoritarian regimes, but it is often used in cyber attacks, typically to disguise the location of a botnet in order to pull off a DDoS attack, or to impersonate a different user or device on the internet, as in a sequence prediction attack.

A sequence prediction attack is a two-headed monster in that it’s an IP spoofing attack as well as a man-in-the-middle attack (a man-in-the-middle attack being one where the attacker positions him or herself between a user and a website in order to eavesdrop on the communications being exchanged).

Sequence numbers explained

When a person visits a website, his or her browser connects to the site’s server using the transmission control protocol (TCP) handshake. In this handshake, the browser sends a connection request to the server, the server responds with an acknowledgement, and the browser acknowledges the acknowledgement by sending an acknowledgement of its own. Boom, session activated.

TCP stipulates that each byte of data exchanged between a browser and a server has a sequence number, which is used to identify the order of the bytes so the data can be reconstructed in the proper order. The sequence number of the first byte is determined during the initial handshake when the browser first sends the connection request to the server. The server responds with an acknowledgment number, which is the initial sequence number + 1. The sequence and acknowledgment numbers continue like so throughout the session (a minimal sketch of this numbering appears below).

The sequence prediction attack issues begin when an attacker is able to position him or herself between the browser and the server, monitoring the data being exchanged. By spoofing the IP of either the browser or the server and then predicting the next sequence number in the exchange, a man-in-the-middle attacker can take the place of the browser or server and insert him or herself into the trusted connection. Once the attacker takes the place of the browser or the server, he or she has a grab bag of malicious tricks to choose from: terminating the connection, accessing information (including potentially sensitive data), or even running malicious commands or scripts.

Rendering the man in the middle ineffective

So what can a website owner do to prevent these sequence prediction attacks? It’s actually a pretty simple solution: encryption. By using the secure sockets layer (SSL) protocol, a website encrypts all communications going back and forth between its server and users’ browsers, so all a man-in-the-middle attacker would gain from eavesdropping is cryptographic code that can’t be cracked. No sequence numbers.
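Returning to the handshake described under “Sequence numbers explained,” the following minimal sketch shows how the sequence and acknowledgment numbers progress during connection setup. The initial sequence numbers here are simply random stand-ins, not values from a real packet capture.

```python
# Illustrative sketch of TCP three-way handshake numbering; no real
# packets are sent, and the initial sequence numbers are random stand-ins.
import random

client_isn = random.getrandbits(32)   # browser's initial sequence number (ISN)
server_isn = random.getrandbits(32)   # server's initial sequence number

# 1. SYN: browser -> server, announcing the client ISN.
print(f"SYN      seq={client_isn}")
# 2. SYN-ACK: server -> browser, acknowledging client_isn + 1.
print(f"SYN-ACK  seq={server_isn}  ack={client_isn + 1}")
# 3. ACK: browser -> server, acknowledging server_isn + 1. Session activated.
print(f"ACK      seq={client_isn + 1}  ack={server_isn + 1}")

# The rest of the session simply keeps counting bytes from these values,
# which is why an attacker who can predict the next expected number can
# forge segments that appear to belong to the trusted connection.
```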
SSL is an absolute necessity for website owners concerned about these attacks, as well as for website owners who run websites that deal with any kind of sensitive or confidential information, including login names, passwords, email addresses, home addresses and financial information.

The drawback to additional security

There is an inherent drawback to SSL, which is that it slows down page load times. However, this lag can be offset by the use of a content delivery network (CDN), which is a global network of servers that work to serve up website content as quickly as possible by reducing the physical distance between users and servers and caching all cacheable content. Advanced CDNs will also provide protection against DDoS attacks.

When considering the next investment you’re going to make for the good of your website (and your users), think back to all of those times you had the name of your crush spread around school or your secret weekend plans blabbed to your parents, and banish eavesdroppers for good with SSL encryption. And then banish site lag with a CDN.
The DOE said the goal of its solicitation is to identify and prove new concepts for applied research in materials chemistry, battery components, battery designs and any technologies that will lead to breakthroughs in grid energy storage. Such technology will be focused on novel materials, electrodes, electrolytes, membranes and other components, along with new concepts for ultra-low-cost, high-efficiency and long-lasting energy storage systems. Emphasis is placed on highly innovative research proposals in areas that have the potential for strong impact on large-scale energy storage in the future, the DOE stated.

The DOE went on to say the variable and stochastic nature of renewable sources makes solar and wind power difficult to manage. To effectively use intermittent renewable energy and enable its delivery, large-scale electrical energy storage is required. For example, storage systems operating near an intermittent, renewable wind energy source can smooth out wind variability and, if of sufficient scale, store off-peak wind energy, the DOE stated.

Big energy storage is an effective tool to improve the reliability, stability and efficiency of the envisioned electrical grid of the future. This grid will be significantly affected by new demands, such as plug-in electric vehicles, increased use of renewable energy and smart grid controls. Large-scale storage technology could shave the peaks from a user or utility load profile, increase asset utilization, delay utility upgrades, decrease fossil fuel use and provide high levels of power quality, while increasing grid stability. In addition, distributed energy storage near load centers can reduce congestion on both the distribution and transmission systems, the DOE stated.
For chips, the next step is a great leap
NIST explores the next dimension for chips
By William Jackson - Dec 07, 2007

As technologies go, CMOS is a tough act to follow. The complementary metal-oxide semiconductor, first used in digital watches almost 40 years ago, is the most widely used integrated circuit in information technology products and has enabled the rapid development of everything from modern PCs to increasingly powerful cell phones and other handheld devices.

'The reason CMOS has been valuable to us is because we have been able to improve it every year,' putting more and faster circuits and transistors into each square inch of chip, said Jeff Welser, director of the Semiconductor Research Corp.'s Nanoelectronics Research Initiative.

This progress in processor performance has allowed industry to follow Moore's Law so far. Postulated by Intel cofounder and engineer Gordon Moore in 1965, Moore's Law states that, because of steady improvements in chip fabrication, the number of transistors on a chip will double roughly every 18 months, with computer processing power increasing correspondingly. This observation has held remarkably true across the decades. But nothing lasts forever.

'We've always been worried about the physical limits of how small CMOS can get,' Welser said. SRC, an industry consortium created to fund university research into semiconductors, established the Focus Center Research Program in the 1990s to support research to advance CMOS semiconductors. In 2004, with the end of the CMOS road coming into view, SRC established the Nanoelectronics Research Initiative (NRI) to develop a new generation of technology to replace CMOS by 2020. 'Getting something by 2020 is a challenge, but reasonably attainable,' Welser said. 'We think we probably have another good decade for advancing CMOS' through scaling techniques such as multicore processors before a replacement technology will be needed.

SRC's NRI recently acquired a partner in its quest, one with deep pockets and broad expertise in nanotechnology research. The National Institute of Standards and Technology announced in September that it would provide $2.76 million in research grants to NRI projects this year, the first step in a projected five-year program to provide more than $18 million in semiconductor research funding. NIST scientists also will be collaborating with industry and university researchers.

This is the kind of long-term basic research that NIST wants to be involved in, said David Seiler, chief of NIST's semiconductor electronics division. 'They are coming up against the limits of what can be done with current semiconductors, so that there are serious concerns about what happens 10 or 15 years from now,' Seiler said. If the IT industry cannot sustain current progress, there could be serious repercussions on the industry and U.S. economy as a whole, he said. 'The entire semiconductor industry has highlighted the problem as one that needs to be solved,' said Jason Boehm, senior analyst at the NIST program office.

Joaquin Martinez, senior scientist at NIST's Office of Microelectronics Programs, said now is the right time for NIST to get involved. The program is at a precompetitive stage, when the basic research needed is too expensive for any one company or university to undertake by itself. This gives NIST a chance to advance the state of the entire industry. Welser called NIST's participation in the program absolutely crucial.
The research requires the long-term vision and funding available from a government program, and turning a theory into a commercial product requires expertise in extremely refined testing and measurement. 'This is very much what NIST is good at,' he said. 'NIST is all about measurement,' Seiler said.

Positive and negative

CMOS semiconductors enable information processing and calculations by shepherding electrons along circuits and through gates or switches. The complementary part of CMOS refers to the fact that a CMOS chip has an equal number of transistors that switch from positive and negative charges. Switches can be either on or off, which allows digital processing of ones and zeroes. The smaller and more compact the circuits and switches can be made, the more powerful the processors are.

One advantage of CMOS technology has always been its low static power drain; that is, the processor uses power only when switching between off and on, reducing both power consumption and the amount of heat generated. But as the circuits approach the molecular and atomic scale, they are reaching the limits of miniaturization, and the power advantages are beginning to disappear too. The current leaking from the switches when not in use almost equals that needed to operate the switches, a situation Welser likened to a car that uses as much gas when parked as when running. Additional heat accompanies the wasted power. 'We are reaching the limits of air cooling,' Welser said. Other techniques, such as water cooling, can work for large pieces of equipment but are not feasible in small devices such as the laptop and handheld computers that CMOS has enabled. 'So we need to find a new way to extend scaling. We need to find something that is better than CMOS.'

This is not the government's first involvement in processor development. The Defense Advanced Research Projects Agency is the largest single financial supporter of SRC's Focus Center Research Program, which is advancing CMOS technology. And NIST has a long history of working with the semiconductor industry to develop testing and measurement techniques. Understanding a technology and being able to reproduce it commercially requires the ability to measure accurately, Martinez said.

Although a goal and a deadline for the program are in place, nobody knows yet what they are looking for. 'This is a long-term, basic research goal,' Boehm said. 'We are starting from scratch, at least in the realm of nanoelectronics.' That is not to say there are no ideas on how to approach the problem. 'There are some good leads,' Martinez said, such as carbon nanotubes. 'But nobody knows how to use them to make good transistors consistently.'

One of the first questions to be worked out is how to represent the ones and zeroes used in digital processing. CMOS processors use on and off switches for this. Researchers are considering ideas such as electron spin and molecular conformational technology, in which atoms are moved to key positions within a molecule to produce different shapes or behaviors. There still is a lot of work to be done in selecting a method and coupling it with a technology to produce a product, but the 2020 goal is not unreasonable if money and resources are devoted to the project, Seiler said. 'People are very creative,' he said. 'Breakthroughs do happen. Putting the best minds on it will allow us to come up with breakthroughs.' With the time left before foreseeable improvements in CMOS are exhausted, 'the timing is good,' Seiler said.
'We can be there, hopefully, in 10 years.' Moving a technology of this complexity from prototype to production typically takes about 10 years, Welser said. NRI hopes to cull some of the theories now being proposed and develop some feasible ideas to work on by 2010. To accomplish this, NRI in 2006 established three virtual regional research centers made up of groups of cooperating universities. The Western Institute of Nanoelectronics is based in California, the Institute for Nanoelectronic Discovery and Exploration is in New York, and the Southwest Academy for Nanoelectronics is in Texas.

NRI has just completed its first annual review of work at these centers. 'I'm surprised at how well we've done' in the first year, he said. There has been good progress in understanding electron spin, but the greatest advance has been getting physicists and chemists to move from theory and experimental science to talking about practical requirements. 'Our biggest challenge has been bridging the communications gap' between scientists and engineers, Welser said. Theories are essential to practice, but schemes that work at a few degrees above absolute zero are a long way from being useful to engineers who have to come up with a process to manufacture products. Fortunately, the scientists see the hurdle as an interesting challenge and are coming up with ideas, he said.

Although NIST intends to spend $18.5 million during the next five years, the money for the next four years of grants has not yet been appropriated. It also has not yet been determined how the initial $2.76 million will be spent, whether it will go to a few big projects or a lot of smaller ones. But the first round of money should be available quickly. A call for proposals is expected to be released this fall, and NIST will assist NRI in evaluating the proposals. The grants should be in the hands of the researchers by spring.

Researchers turn to nature for help in constructing nanoscale circuits

Nanotechnology, which involves creating and using tools measured in billionths of a meter, holds great promise for applications such as medicine and quantum computing, but producing the devices in usable quantities in reasonable time remains a challenge. Researchers at the University of Maryland's A. James Clark School of Engineering are working to enlist nature's help to produce nanocircuits economically.

'While we understand how to make working nanoscale devices, making things out of a countable number of atoms takes a long time,' said Ray Phaneuf, associate professor of materials science and engineering. 'Industry needs to be able to mass-produce them on a practical time scale.' That's where nature comes in. 'Nature is very good at making many copies of an object' through self-assembly, Phaneuf said. But nature knows how to make only a limited range of patterns for these complex structures, such as shells or crystals. Phaneuf's work focuses on the use of templates to teach nature some new tricks. 'The idea of using templates is not new,' Phaneuf said. 'What is new is the idea of trying to convince nature, based on the topography of the template, that it should assemble objects in a particular place,' atom by atom.

One application for the process could be quantum computing. A host of schemes propose harnessing the quantum states of atomic particles to do complex calculations.
One involves assembling pairs of quantum dots (tiny semiconductors containing from one to 100 particles with elementary electric charges) to create the qubits used in quantum calculations. Assembling the billions of dots in the precise patterns needed for massively parallel computing may be possible, but, Phaneuf said, 'it may not be doable within the age of the universe' with current techniques.

'Nature already knows how to assemble quantum dots,' he said. 'We are working on the step before self-assembly, the self-organization of the substrate,' which will act as the template for the dots. The silicon substrate is etched into steps using lithography, but it is difficult to reach the level of precision required at the atomic scale using lithography alone. Heat and cold can be used to add or subtract atoms on the surfaces and precisely shape the step patterns. The step patterns, which are stiff, tend to straighten out under heat but are limited by the surrounding patterns in how much they can straighten. 'We play this stiffness off with the repulsive interaction between steps' to create the sizes and shapes needed, Phaneuf said.

The result is a substrate that can be reused many times as a template for growing nanostructures with silicon and gallium arsenide for computer and cell phone components. 'It still is in the development stages,' he said. 'There is still quite a lot to do before we make practical devices out of it. I don't think we're quite ready to make transistors on the chips.' And the market for the end products has not yet developed. You are not likely to find any deals on quantum computers from Dell or HP in the ads of your Sunday supplements this weekend. More-immediate applications for this technology are likely to be biochips used in biology and medicine.
Article by Paul Simoneau

Systems like telephones and computers are good at looking at numbers to identify a destination address. Most people are less skilled at using numeric addresses and prefer easy-to-remember names. When users begin applications such as e-mail or Web browsing, they find it easier to supply the name of the target system rather than that machine’s IP address, especially as IP addresses expand to four times their current size. This often involves a request to a local server that provides the DNS (Domain Name System) service.

The DNS makes it easier to use applications without having to remember multiple IP addresses. The DNS takes advantage of the context-based memory clues that names provide and translates those names into IP addresses. Network managers can also take advantage of this name-to-address mapping to control traffic to various servers in their networks.

DNS uses a distributed database that applications access to convert names into IP addresses. The DNS system is distributed among multiple DNS servers, each knowing about its own networks and having pointers to other servers. No single Internet site or DNS server needs to know all the information. The application receives a name in an application request and turns to its name resolver to find the IP address to use for that name. DNS becomes involved when the name is absent from the source system’s files. The resolver contacts the local domain name server to find the matching IP address. The resolver continues following the trail, and contacts as many DNS servers as it needs, to locate the correct IP address. It stops only when it has a matching FQDN (fully qualified domain name) that ends in a period, such as unix.class.globalknowledge.com.

The root or core of the DNS distributed server tree is an artificial point at which the DNS comes together. It is never named, but assumed to be the information after the final period (.) in an FQDN. From that point, the DNS server tree spreads to the TLDs (top-level domains) listed below:

- .aero Air transport industry
- .biz Business
- .com Commercial organizations
- .coop Cooperative associations
- .edu Educational institutions
- .gov Civilian government
- .info Information
- .int International organizations
- .jobs Human resource managers
- .mil United States military
- .museum Museums
- .name Individuals
- .net Networks
- .org Nonprofit organizations
- .pro Credentialed professionals
- .travel Travel industry

All organizations fall under one of the top-level domains or the two-character country domains (see the sample below). The number of domains within each organization may vary, though each must be labeled with a unique name at each level. Each of these labels or names is limited to 63 characters, though most are much shorter. Each case-insensitive label must start with a letter or number and may contain only letters, numbers, and the hyphen (-). No other characters are allowed. The labels are separated by a period or dot. Every node must have a unique domain name, though labels may be used more than once in the tree as long as they are at different levels. Each domain name becomes more specific toward its left side, so the first (leftmost) label is the system name.

The exception is the arpa domain, which offers a reverse lookup capability: address-to-name mapping. For example, the resolver searches for the name that matches 18.104.22.168 by looking up 168.22.104.18.in-addr.arpa. in a DNS server.
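As a small illustration of the forward (name-to-address) and reverse (address-to-name) lookups just described, the sketch below uses Python’s standard resolver interface. The domain name and address are placeholders (example.com and an address from the documentation range), so the reverse query may simply report that no PTR record exists.

```python
# Forward and reverse DNS lookups via the operating system's resolver.
import socket

# Forward lookup: resolve an FQDN to one or more IP addresses.
addresses = {info[4][0] for info in socket.getaddrinfo("www.example.com", None)}
print("www.example.com resolves to:", ", ".join(sorted(addresses)))

# Reverse lookup: for IPv4, the octets are reversed and queried under
# in-addr.arpa (e.g. 192.0.2.10 is looked up as 10.2.0.192.in-addr.arpa).
try:
    name, _aliases, _addrs = socket.gethostbyaddr("192.0.2.10")
    print("192.0.2.10 maps back to:", name)
except socket.herror:
    print("no PTR record found for 192.0.2.10")
```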
Country Domain Sample

- au Australia
- ca Canada
- de Germany
- es Spain
- eu European Union
- fr France
- it Italy
- jp Japan
- sg Singapore
- tv Tuvalu
- uk United Kingdom
- us United States

Each of the 248 two-character codes is ISO’s abbreviated name for a sovereign nation-level body. One of the recent additions is .eu for the European Union. Many countries also form second-level domains inside their country code, similar to the generic codes referenced above. For example, the UK uses .co for commercial organizations and .ac for academic institutions, which gives colleges and universities domain names that end with .ac.uk and companies domain names that end in .co.uk. For more information on ccTLDs (country code top-level domains), visit IANA.

Although many organizations in the United States use the three-character domains, other organizations have chosen to use the .us country domain. State governmental agencies are among those who have made this choice (at times under pressure). The only restricted generic domains in the United States are .gov and .mil. (See RFC 1480 for more detail on the .us domain.)

Root Servers

The root server operators include:

- Autonomica/NORDUnet
- Cogent Communications
- ICANN
- Information Sciences Institute
- NASA Ames Research Center
- Réseaux IP Européens
- U.S. Army Research Lab
- U.S. DoD Network
- University of Maryland
- VeriSign
- VeriSign Naming and Directory Services
- WIDE Project

The DNS root servers’ job is to reliably publish the root zone file (http://www.isoc.org/briefings/020/zonefile.shtml). The root zone file contains the names and IP addresses of the authoritative DNS servers for all top-level domains, including generic, sponsored and country codes. As of the last change to the file (12-Dec-2004) there were 258 TLDs and 773 different authoritative servers for the listed TLDs. When other name servers do not have information about a query, they send the query to the root name servers. The root name server responds by referring the request to the appropriate authoritative server or with an answer that shows no such TLD exists.

The organizations that handle Internet domain name administration are:

- ICANN: The Internet Corporation for Assigned Names and Numbers coordinates IP address and name registration worldwide.
- IANA: The Internet Assigned Numbers Authority administers IP address and name registration for ICANN.
- AfriNIC: The African Network Information Centre is the Regional Registry for Internet Number Resources for Africa.
- APNIC: The Asia-Pacific Network Information Centre assigns names and numbers in Asia and the Pacific.
- ARIN: The American Registry for Internet Numbers serves North America.
- LACNIC: The Latin American and Caribbean Internet Addresses Registry administers IP address space, reverse resolution and other Internet resources of the Latin American and Caribbean region.
- RIPE: The Réseaux IP Européens assigns names and numbers in Europe.

AfriNIC, APNIC, ARIN, LACNIC, and RIPE are Regional Internet Registries (RIRs). The RIRs delegate the domain name transaction process to organizations such as Internet Service Providers (ISPs), who work with organizations and individuals to help them get the desired domain name(s) assigned.

Cross-posted from Global Knowledge
by Geoff Huston, APNIC

Throughout its relatively brief history, the Internet has continually challenged our preconceptions about networking and communications architectures. For example, the concepts that the network itself has no role in management of its own resources, and that resource allocation is the result of interaction between competing end-to-end data flows, were certainly novel innovations, and for many they have been very confrontational. The approach of designing a network that is unaware of services and service provisioning and is not attuned to any particular service whatsoever—leaving the role of service support to end-to-end overlays—was again a radical concept in network design. The Internet has never represented the conservative option for this industry, and has managed to define a path that continues to present significant challenges.

From such a perspective it should not be surprising that the next phase of the Internet story—that of the transition of the underlying version of the IP protocol from IPv4 to IPv6—refuses to follow the intended script. Where we are now, in late 2008, with IPv4 unallocated address pool exhaustion looming within the next 18 to 36 months, and IPv6 still largely not deployed in the public Internet, is a situation that was entirely uncontemplated and, even in hindsight, entirely surprising. The topic examined here is why this situation has arisen, and in examining this question we analyze the options available to the Internet to resolve the problem of IPv4 address exhaustion. We examine the timing of the IPv4 address exhaustion and the nature of the intended transition to IPv6. We consider the shortfalls in the implementation of this transition, and identify their underlying causes. And finally, we consider the options available at this stage and identify some likely consequences of such options.

The question of when the IPv4 address pool would be exhausted was first asked on the TCP/IP list in November 1988, and the responses included foreshadowing a new version of IP with longer addresses and undertaking an exercise to reclaim unused addresses. The exercise of measuring the rate of consumption of IPv4 addresses has been undertaken many times in the past two decades, with estimates of exhaustion ranging from the late 1990s to beyond 2030. One of the earliest exercises in predicting IPv4 address exhaustion was undertaken by Frank Solensky and presented at IETF 18 in August 1990. His findings are reproduced in Figure 1. At that time the concern was primarily the rate of consumption of Class B network addresses (or of /16 prefixes from the address block 128.0.0.0/2, to use current terminology). Only 16,384 such Class B network addresses were within the class-based IPv4 address plan, and the rate of consumption was such that the Class B networks would be fully consumed within 4 years, or by 1994. The prediction was strongly influenced by a significant number of international research networks connecting to the Internet in the late 1980s, with the rapid influx of new connections to the Internet creating a surge in demand for Class B networks.

Figure 1: Report on IPv4 Address Depletion

Successive predictions were made in the context of the Internet Engineering Task Force (IETF) in the Address Lifetime Expectancy (ALE) Working Group, where the predictive model was refined from an exponential growth model to a logistical saturation function, attempting to predict the level at which all address demands would be met.
The predictive technique described here is broadly similar, using a statistical fit of historical data concerning address consumption into a mathematical model, and then using this model to predict future address consumption rates and thereby predict the exhaustion date of the address pool.

The predictive technique models the IP address distribution framework. Within this framework the pool of unallocated /8 address blocks is distributed by the Internet Assigned Numbers Authority (IANA) to the five Regional Internet Registries (RIRs). (A "/8 address block" refers to a block of addresses where the first 8 bits of the address values are constant. In IPv4 a /8 address block corresponds to 16,777,216 individual addresses.) Within the framework of the prevailing address distribution policies, each RIR can request a further address allocation from IANA when the remaining RIR-managed unallocated address pool falls below a level required to meet the next 9 months of allocation activity. The amount allocated is the number of /8 address blocks required to augment the RIR's local address pool to meet the anticipated needs of the regional registry for the next 18 months. However, in practice, the RIRs currently request a maximum of 2 /8 address blocks in any single transaction, and do so when the RIR-managed address pool falls below a threshold of the equivalent of 2 /8 address blocks. As of August 2008 some 39 /8 address blocks are left in IANA's unallocated address pool.

A predictive exercise has been undertaken using a statistical modeling of historical address consumption rates, using data gathered from the RIRs' records of address allocations and the time series of the total span of address space announced in the Internet interdomain default-free routing table as basic inputs to the model. The predictive technique is based on a least-squares best fit of a linear function applied to the first-order differential of a smoothed copy of the address consumption data series, as applied to the most recent 1,000 days' data. The linear function, which is a best fit to the first-order differential of the data series, is integrated to provide a quadratic time-series function to match the original data series. The projection model is further modified by analyzing the day-of-year variations from the smoothed data model, averaged across the past 3 years, and applying this daily variation to the projection data to account for the level of seasonal variations in the total address consumption rate that has been observed in the historical data.

The anticipated rate of consumption of addresses from this central pool of unallocated IPv4 addresses is expected to be about 15 /8s in 2009, and slightly more in 2010. RIR behaviors are modeled using the current RIR operational practices and associated address policies, which are used to predict the times when each RIR will be allocated a further 2 /8s from IANA. This RIR consumption model, in turn, allows the IANA address pool to be modeled. This anticipated rate of increasing address consumption will see the remaining unallocated addresses that are held by IANA reach the point of exhaustion in February 2011. The most active RIRs are anticipated to exhaust their locally managed unallocated address pools in the months following the time of IANA exhaustion.
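The projection method just described can be sketched in a few lines of Python. The data series below is synthetic, and the constants (noise level, growth coefficients, the 39 /8s of remaining pool) are stand-ins chosen only to make the sketch self-contained and runnable; the real model works from the RIR allocation records and routing-table data.

```python
# Minimal sketch of the projection technique: fit a line to the daily
# consumption rate, integrate it to a quadratic, project forward.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(1000.0)                      # the most recent 1,000 days
# Stand-in for the smoothed cumulative consumption series, in units of /8s.
consumed = 0.03 * days + 1.5e-5 * days**2 + rng.normal(0, 0.02, days.size)

# First-order differential of the (already smoothed) series: daily rate.
daily_rate = np.diff(consumed)

# Least-squares best fit of a linear function to the daily rate...
lin = np.polyfit(days[1:], daily_rate, 1)

# ...integrated to give a quadratic model of cumulative consumption,
# anchored so the model agrees with the last observed data point.
quad = np.polyint(np.poly1d(lin))
quad = quad + (consumed[-1] - quad(days[-1]))

# Project forward until the remaining unallocated pool (39 /8s) is consumed.
target = consumed[-1] + 39.0
future = np.arange(days[-1] + 1, days[-1] + 5000)
crossing = future[quad(future) >= target][0]
print(f"Projected exhaustion roughly {crossing - days[-1]:.0f} days from now")
```

The published model adds a seasonal (day-of-year) correction on top of this quadratic trend, which the sketch omits for brevity.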
A number of assumptions lie behind this form of prediction. Although the statistical model is based on a complete data set of address allocations and a detailed hourly snapshot of the address span advertised in the Internet routing table, a considerable level of uncertainty is still associated with this prediction.

First, the Internet Service Provider (ISP) industry and the other entities that are the direct recipients of RIR address allocations and assignments are not ignorant of the impending exhaustion condition, and there is some level of expectation of some form of last-minute rush or panic on the part of such address applicants when exhaustion of this address pool is imminent. The predictive model described here does not include such a last-minute acceleration of demand.

The second factor is the skewed distribution of addresses in this model. From 1 January 2007 until 20 July 2008, 10,402 allocation or assignment transactions were recorded in the RIRs' daily statistics files. These transactions accounted for a total of 324,022,704 individual IPv4 addresses, or the equivalent of 19.3 /8s. Precisely one-half of this address space was allocated or assigned in just 107 such transactions. In other words, some 1 percent of the recipients of address space in the past 18 months have received some 50 percent of all the allocated address space. The reason why this distribution is relevant here is that this predictive exercise assumes that although individual actions are hard to predict with any certainty, the aggregate outcome of many individuals' actions assumes a much greater level of predictability. This observation about aggregate behavior does not apply in this situation, however, and the predictive exercise is very sensitive to the individual actions of a very small number of recipients of address space because of this skewed distribution of allocations. Any change in the motivations of these larger-sized actors that results in an acceleration of demand for IPv4 will significantly affect the predictions of the longevity of the remaining unallocated IPv4 address pool.

The third factor is that this model assumes that the policy framework remains unaltered, and that all unallocated addresses are allocated or assigned under the current policy framework, rather than under a policy regime that is substantially different from today's framework. The related assumption here is that the cost of obtaining and holding addresses remains unchanged, and that the perceptions of future scarcity of addresses do not affect the policy framework of address distribution of the remaining unallocated IPv4 addresses.

Given this potential for variation within this set of assumptions, a more accurate summary of the current expectations of address consumption would be that the exhaustion of the IANA unallocated IPv4 address pool will occur sometime between July 2009 and July 2011, and that the first RIR will exhaust all its usable address space within 3 to 12 months from that date, or between October 2009 and July 2012.

Apart from the exact date of exhaustion that is predicted by this modeling exercise, none of the information relating to exhaustion of the unallocated IPv4 address pool should be viewed as particularly novel information. The IETF Routing and Addressing (ROAD) study of 1991 recognized that the IPv4 address space was always going to be completely consumed at some point in the future of the Internet.
Such predictions of the potential for exhaustion of the IPv4 address space were the primary motivation for the adoption of Classless Inter-Domain Routing (CIDR) in the Border Gateway Protocol (BGP), and the corresponding revision of the address allocation policies to craft a more exact match between planned network size and the allocated address block. These predictions also motivated the protracted design exercise of what was to become the IPv6 protocol across the 1990s within the IETF. The prospect of address scarcity engendered a conservative attitude to address management that, in turn, was a contributory factor in accelerating the widespread use of Network Address Translation (NAT) in the Internet during the past decade.

By any reasonable metric this industry has had ample time to study this problem, ample time to devise various strategies, and ample time to make plans and execute them. And this reality has been true for the adoption of classless address allocations, the adoption of CIDR in BGP, and the extremely widespread use of NAT. But all of these measures were short-term, whereas the longer-term measure, that of the transition to IPv6, was what was intended to come after IPv4. But IPv6 has not been the subject of widespread adoption so far, while the time of anticipated exhaustion of IPv4 has been drawing closer.

Given almost two decades of advance warning of IPv4 address exhaustion, and a decade since the first stable implementations of IPv6 were released, we could reasonably expect that this industry—and each actor within this industry—is aware of the problem and the need for a stable and scalable long-term solution as represented by IPv6. We could reasonably anticipate that the industry has already planned the actions it will take with respect to IPv6 transition, and is aware of the triggers that will invoke such actions, and approximately when they will occur. However, such an expectation appears to be ill-founded when considering the broad extent of the actors in this industry, and there is little in the way of a common commitment as to what will happen after IPv4 address exhaustion, nor even any coherent view of plans that industry actors are making in this area. This lack of planning makes the exercise of predicting the actions within this industry following address exhaustion somewhat challenging, so instead of immediately describing future scenarios, it may be useful to first describe the original plan for the response of the Internet to IPv4 address exhaustion.

What Was Intended?

The original plan, devised in the early 1990s by the IETF to address the IPv4 address shortfall, was the adoption of CIDR as a short-term measure to slow down the consumption of IPv4 addresses by reducing the inefficiency of the address plan, and the longer-term plan of the specification of a new version of the Internet Protocol that would allow for adoption well before the IPv4 address pool was exhausted. The industry also adopted the use of NAT as an additional measure to increase the efficiency of address use, although the IETF did not strongly support this protocol. For many years the IETF did not undertake the standardization of NAT behaviors, presumably because NAT was not consistent with the IETF's advocacy of end-to-end coherence of the Internet at the IP level of the protocol stack.
Over the 1990s the IETF undertook the exercise of the specification of a successor IP protocol to Version 4, and the IETF's view of the longer-term response was refined to be advocacy of the adoption of the IPv6 protocol and the use of this protocol as the replacement for IPv4 across all parts of the network.

In terms of what has happened in the past 15 years, the adoption of CIDR was extremely effective, and most parts of the network were transitioned to use CIDR within 2 years, with the transition declared to be complete by the IETF in June 1996. And, as noted already, NAT has been adopted across many, if not most, parts of the network. The most common point of deployment of NAT has not been at an internal point of demarcation between provider networks, but at the administrative boundary between the local customer network and the ISP, so that the common configuration of Customer Premises Equipment (CPE) includes NAT functions. Customers effectively own and operate NAT devices as a commonplace aspect of today's deployed Internet. CIDR and NAT have been around for more than a decade now, and the address consumption rates have been held at very conservative levels in that period, particularly so when considering that the bulk of the population of the Internet was added well after the advent of CIDR and NAT.

The longer-term measure—the transition to IPv6—has not proved to be as effective in terms of adoption in the Internet. There was never going to be a "flag-day" transition where, in a single day, simultaneously across all parts of every network the IP protocol changed to using IPv6 instead of IPv4. The Internet is too decentralized, too large, too disparate, and too critical for such actions to be orchestrated, let alone completed with any chance of success. A flag day, or any such form of coordinated switchover, was never a realistic option for the Internet.

If there was no possibility of a single, coordinated switchover to IPv6, the problem is that there was never going to be an effective piecemeal switchover either. In other words, there was never going to be a switchover where host by host, and network by network, IPv6 is substituted for IPv4 on a piecemeal and essentially uncoordinated basis. The problem here is that IPv6 is not "backward-compatible" with IPv4. When a host uses IPv6 exclusively, then that host has no direct connectivity to any part of the IPv4 network. If an IPv6-only host is connected to an IPv4-only network, then the host is effectively isolated. This situation does not bode well for a piecemeal switchover, where individual components of the network are switched over from IPv4 to IPv6 on a piecemeal basis. Each host that switches over to IPv6 essentially disconnects itself from the IPv4 Internet at that point.

Given this inability to support backward compatibility, what was planned for the transition to IPv6 was a "dual-stack" transition. Rather than switching over from IPv4 to IPv6 in one operation on both hosts and networks, a two-step process has been proposed: first switching from IPv4 only to a "dual-stack" mode of operation that supports both IPv4 and IPv6 simultaneously, and second—and at a much later date—switching from dual-stack IPv4 and IPv6 to IPv6 only. During the transition more and more hosts are configured with dual stack. The idea is that dual-stack hosts prefer to use IPv6 to communicate with other dual-stack hosts, and revert to use IPv4 only when an IPv6-based end-to-end conversation is not possible.
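The dual-stack preference just described (use IPv6 where it works, fall back to IPv4 otherwise) can be sketched roughly as follows. This is a simplified illustration rather than a full "Happy Eyeballs" implementation, and the host name is only a placeholder.

```python
# Minimal sketch of dual-stack connection preference: try IPv6 candidate
# addresses first, then fall back to IPv4 if no IPv6 connection succeeds.
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Order candidate addresses so IPv6 (AF_INET6) is attempted first.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _name, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(5.0)
            sock.connect(addr)
            return sock                  # first successful connection wins
        except OSError as err:
            last_error = err
    raise OSError(f"could not reach {host}") from last_error

if __name__ == "__main__":
    conn = connect_dual_stack("www.example.com", 80)   # placeholder host
    print("connected via", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
    conn.close()
```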
As more and more of the Internet converts to dual stack, it is anticipated that use of IPv4 will decline, until support for IPv4 is no longer necessary. In this dual-stack transition scenario, no single flag day is required and the dual-stack deployment can be undertaken in a piecemeal fashion. There is no requirement to coordinate hosts with networks, and as dual-stack capability is supported in networks the attached dual-stack hosts can use IPv6. This scenario still makes some optimistic assumptions, particularly relating to the achievement of universal deployment of dual stack, at which point no IPv4 functions are used, and support for IPv4 can be terminated. Knowing when this point is reached is unclear, of course, but in principle there is no particular timetable for the duration of the dual-stack phase of operation.

There are always variations, and in this case it is not necessarily the case that each host must operate in dual-stack mode for such a transition. A variant of the NAT approach can perform a rudimentary form of protocol translation, where a Protocol-Translating NAT (or NAT-PT) essentially transforms an incoming IPv4 packet to an outgoing IPv6 packet, and conversely, using algorithmic binding patterns to map between IPv4 and IPv6 addresses. Although this process relieves the IPv6-only host of some additional complexity of operation at the expense of some added complexity in Domain Name System (DNS) transformations and service fragility, the essential property still remains that in order to speak to an IPv4-only remote host, the combination of the local IPv6 host and the NAT-PT have to generate an equivalent IPv4 packet. In this case the complexity of the dual stack is now replaced by complexity in a shared state across the IPv6 host and the NAT-PT unit. Of course this solution does not necessarily operate correctly in the context of all potential application interactions, and concerns with the integrity of operation of NAT-PT devices are significant, a factor that motivated the IETF to deprecate the existing NAT-PT specification. On the other hand, the lack of any practical alternatives has led the IETF to subsequently reopen this work, and once again look at specifying the standard behavior of such devices.

The detailed progress of a dual-stack transition is somewhat uncertain, because it involves the individual judgment of many actors as to when it may be appropriate to discontinue all support for IPv4 and rely solely on IPv6 for all connectivity requirements. However, one factor is constant in this envisaged transition scenario: whether it is dual stack in hosts or dual stack through NAT-PT, or various combinations thereof, there must be sufficient IPv4 addresses to span the addressing needs of the entire Internet across the complete duration of the dual-stack transition process. Under this dual-stack regime every new host on the Internet is envisaged to need access to both IPv6 and IPv4 addresses in order to converse with any other host using IPv6 or IPv4. Of course this approach works as long as there is a continuing supply of IPv4 addresses, implying that the envisioned timing of the transition was meant to have been completed by the time that IPv4 address exhaustion happens. If this transition were to commence in earnest at the present time, in late 2008, and take an optimistic 5 years to complete, then at the current address consumption rate we will require a further 90 to 100 /8 address blocks to span this 5-year period.
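One way to see roughly where the five-year figure comes from, assuming the consumption rate keeps climbing from the roughly 15 /8s projected for 2009 by about two /8s per year (that increment is an illustrative assumption, not a figure from the model):

```latex
15 + 17 + 19 + 21 + 23 \approx 95 \ \text{/8 address blocks over five years}
```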
A more conservative estimate of a 10-year transition will require a further 200 to 250 /8 address blocks, or the entire IPv4 address space again, assuming that we will use IPv4 addresses in the future in precisely the same manner as we have used them in the past and with precisely the same level of usage efficiency as we have managed to date.

Clearly, waiting for the time of IPv4 unallocated address pool exhaustion to act as the signal to industry to commence the deployment of IPv6 in a dual-stack transition framework is a totally flawed implementation of the original dual-stack transition plan. Either the entire process of dual-stack transition will need to be undertaken across a far faster time span than has been envisaged, or the manner of use of IPv4 addresses, and, in particular, their usage efficiency in the context of dual-stack transition support, will need to differ markedly from the current manner of address use. Numerous forms of response may be required, posing some challenging questions because there is no agreed precise picture of what markedly different and significantly more efficient form of address use is required here. To paraphrase the situation, it is clear that we need to do "something" differently, and do so as a matter of some urgency, but we have no clear agreement on what that something is that we should be doing differently.

This situation obviously is not an optimal one. What was intended as a transition mechanism for IPv6 is still the only feasible approach that we are aware of, but the forthcoming exhaustion of the unallocated IPv4 address pool now calls for novel forms of use of IPv4 addresses within this transitional framework, and these novel forms may well entail the deployment of various forms of address translation technologies that we have not yet defined, let alone standardized. The transition may also call for scaling capabilities from the interdomain routing system that also head into unknown areas of technology and deployment feasibility.

At this point it may be useful to consider how and why this situation has arisen. If the industry needed an abundant supply of IPv4 addresses to underpin the entire duration of the dual-stack transition to IPv6, then why didn't the industry follow the lead of the IETF and commence this transition while there was still an abundant supply of IPv4 addresses on hand? If network operators, service providers, equipment vendors, component suppliers, application developers, and every other part of the Internet supply chain were aware of the need to commence a transition to IPv6 well before effective exhaustion of the remaining pool of IPv4 addresses, then why didn't the industry make a move earlier? Why was the only clear signal for a change in Internet operation to commence a dual-stack transition to IPv6 one that has been activated too late to be useful for the industry to act on efficiently?

One possible reason may lie in a perception of the technical immaturity of IPv6 as compared to IPv4. It is certainly the case that many network operators in the Internet are highly risk-averse and tend to operate their networks in a mainstream path of technologies rather than constantly using leading-edge advance releases of hardware and software solutions. Does IPv6 represent some form of unacceptable technical risk of failure that has prevented its adoption? This reasoning does not appear to be valid in terms of either observed testing or observation of perceptions about the technical capability of IPv6.
The IPv6 protocol is functionally complete and internally consistent, and it can be used in almost all contexts where IPv4 is used today. IPv6 works as a platform for all forms of transport protocols, and is fully functional as an internetwork layer protocol that is functionally equivalent to IPv4. IPv6 NAT exists, Dynamic Host Configuration Protocol Version 6 (DHCPv6) provides dynamic host configuration for IPv6 nodes, and the DNS can be completely equipped with IPv6 resource records and operate using IPv6 transport for queries and responses. Perhaps the only notable difference between the two protocols lies in the ability to perform host scans, where probe packets are sent to successive addresses. In IPv6 the address density is extremely low, because the low-order 64-bit interface address of each host is more or less unique, and within a single network the various interface addresses are not clustered sequentially in the number space. The only known use of address probing to date has been in various forms of hostile attack tools, so the lack of such a capability in IPv6 is generally seen as a feature rather than an impediment. IPv6 deployment has been undertaken on a small scale for many years, and although the size of the deployed IPv6 base remains small, the level of experience gained with the technology has been significant. It is possible to draw the conclusion that IPv6 is technically capable, and this capability has been broadly tested in almost every scenario except that of universal use across the Internet.

It also does not appear that the reason was a lack of information or awareness of IPv6. The efforts to promote IPv6 adoption have been under way in earnest for almost a decade now. All regions and many of the larger economies have instigated programs to promote the adoption of IPv6 and have provided information to local industry actors about the need to commence a dual-stack transition to IPv6 as soon as possible. In many cases these promotional programs have enjoyed broad support from both public and industry funding sources. The coverage of these promotional efforts has been widespread in industry press reports. Indeed, perhaps the only criticism of this effort is that there has possibly been too much promotion, with the result that the effectiveness of the message has been diluted through constant repetition.

A more likely area to examine in terms of possible reasons why industry has not engaged in dual-stack transition deployment is that of the business landscape of the Internet. The Internet can be viewed as a product of the wave of progressive deregulation in the telecommunications sector in the 1980s and early 1990s. New players in the deregulated industry, searching for a competitive edge to unseat the dominant position of the traditional incumbents, found their competitive lever in the Internet. The result was perhaps unexpected, because it was not one that replaced one vertically integrated operator with a collection of similarly structured operators whose primary means of competition was in terms of price efficiency across an otherwise undifferentiated service market, as we saw in the mobile telephony industry. In the case of the Internet, the result was not one that attempted to impose convergence on this industry, but one that stressed divergence at all levels, accompanied by branching role specialization at every level in the protocol stack and at every point in the supply chain process.
In the framework of the Internet, consumers are exposed to all parts of the supply process, and do not rely on an integrator to package and supply a single, all-embracing solution. Consumers make independent purchases of their platform technology, their software, their applications, their access provider, and their means of advertising their own capabilities to provide goods and services to others, all as independent decisions, all as a result of this direct exposure of every element in the supply chain to the consumer. What we have today is an industry structure that is highly diverse, broadly distributed, strongly competitive, and intensely focused on meeting specific customer needs in a price-sensitive market, operating on a quarter-by-quarter basis. Bundling and vertical integration of services have been placed under intense competitive pressure, and each part of the network has been exposed to specialized competition in its own right.

For consumers this situation has generated significant benefits. For the same benchmark price of around US$15 to US$30 per month, or its effective equivalent in purchasing power of a local currency, today's Internet user enjoys multimegabit-per-second access to a richly populated world of goods and services. The price of this industry restructuring has been a certain loss of breadth and depth on the supply side of the market. If consumers do not value a service, or even a particular element of a service, then there is no benefit in incurring marginal additional cost in providing the service. In other words, if the need for a service is not immediate, then it is not provided. For all service providers right through the supply side the focus is on current customer needs, and this focus on current needs, as distinct from continued support of old products or anticipatory support of possible new products, excludes all other considerations.

Why is this change in the form of communications industry operation an important factor in the adoption of IPv6? The relevant question in this context is that of placing IPv6 deployment and dual-stack transition into a viable business model. IPv6 was never intended to be a technology visible to the end user. It offers no additional functions to the end user, nor any direct cost savings to the customer or the supplier. Current customers of ISPs do not need IPv6 today, and neither current nor future customers are aware that they may need it tomorrow. For end users of Internet services, e-mail is e-mail and Web-based delivery of services is just the Web. Nothing will change that perspective in an IPv6 world, so in that respect customers do not have a particular requirement for IPv6, as opposed to a generic requirement for IP access, and will not value an IPv6-based access service today in addition to an existing IPv4 service. For an existing customer IPv6 and dual stack simply offer no visible value. So if the existing customer base places no value on the deployment of IPv6 and dual stack, then the industry has little incentive to commit to the expenditure to provide it. Any IPv6 deployment across an existing network is essentially an unfunded expenditure exercise that erodes the revenue margins of the existing IPv4-based product.
And as long as sufficient IPv4 address space remains to cover the immediate future needs, looking at this situation on the basis of a quarter-by-quarter business cycle, the decision to commit to additional expenditure and lower product margins to meet the needs of future customers using IPv6 and dual-stack deployments is a decision that can comfortably be deferred for another quarter. This business structure of today's Internet appears to represent the major reason why the industry has been incapable of making moves on dual-stack transition within a reasonable timeframe as it relates to the timeframe of IPv4 address pool exhaustion.

What of the strident calls for IPv6 deployment? Surely there is substance to the arguments to deploy IPv6 as a contingency plan for the established service providers in the face of impending IPv4 address exhaustion, and if that is the case, why have service providers discounted the value of such contingency motivations? The problem to date is that IPv4 address exhaustion is by now not a novel message, and, so far, NAT usage has neutralized the urgency of the message. NAT is well understood, it appears to work reliably, applications work with it, and it has influenced the application environment to such an extent that now no popular application can be fielded unless it can operate across NATs. For conventional client-server applications, NAT represents no particular problem. For peer-to-peer–based applications, the rendezvous problem with NAT has been addressed through application gateways and rendezvous servers. Even the variability of NAT behavior is not a service provider liability, and it is left to applications to load additional functions to detect specific NAT behavior and make appropriate adjustments to the behavior of the application. The conventional industry understanding to date is that NAT can work acceptably well within the application and service environment.

In addition, NAT usage for an ISP represents an externalized cost, because it is essentially funded and operated by the customer and not the ISP. The service provider's perspective is that, considering this mechanism has been so effective in externalizing the costs of IPv4 address scarcity from the ISP for the past 5 years, surely it will continue to be effective for the next quarter. To date the costs of IPv4 address scarcity have been passed to the customer in the form of NAT-equipped customer premises equipment (CPE) and to the application in the form of higher complexity in certain forms of application rendezvous. ISPs have not had to absorb these costs into their own costs of operation. From this perspective, IPv6 does not offer any marginal benefits to ISPs. For an ISP today, NATs are purchased and operated by customers as part of their CPE. To say that IPv6 will eliminate NATs and reduce the complexities and vulnerabilities in the NAT service model is not directly relevant to the ISP. The more general observation is that, for the service provider industry currently, IPv6 has all the negative properties of revenue margin erosion with no immediate positive benefits. This observation lies at the heart of why the service provider industry has been so resistant to the call for widespread deployment of IPv6 services to date.

It appears that the current situation is not the outcome of a lack of information about IPv6, nor a lack of information about the forthcoming exhaustion of the IPv4 unallocated address pool.
Nor is it the outcome of concerns over technical shortfalls or uncertainties in IPv6, because there is no evidence of any such technical shortcomings in IPv6 that prevent its deployment in any meaningful fashion. A more likely explanation for the current situation is the inability of a highly competitive, deregulated industry to factor longer-term requirements into short-term business logistics.

Now we consider some questions relating to IPv4 address exhaustion. Will the exhaustion of the current framework that supplies IP addresses to service providers cause all further demand for addresses to cease at that point? Or will exhaustion increase the demand for addresses in response to various forms of panic and hoarding behaviors, in addition to continued demand from growth?

The size and value of the installed base of the Internet using IPv4 is now very much larger than the size and value of the incremental growth of the network. In address terms the routed Internet currently (as of 14 August 2008) spans 1,893,725,831 IPv4 addresses, or the equivalent of 112.2 /8 address blocks. Some 12 months ago the routed Internet spanned 1,741,837,080 IPv4 addresses, or the equivalent of 103.8 /8 address blocks, representing a net annual growth of some 10 percent in terms of advertised address space. These facts lead to the observation that, even in the hypothetical scenario where all further growth of the Internet is forced to use IPv6 exclusively while the installed base still uses IPv4, it is highly unlikely that the core value of the Internet will shift away from its predominant IPv4 installed base in the short term. Moving away from the hypothetical scenario, the implication is that the relative size and value of new Internet deployments will be such that these new deployments may not have sufficient critical mass, by virtue of their volume and value, to be in a position to force the installed base to underwrite the incremental cost to deploy IPv6 and convert the existing network assets to dual-stack operation in this timeframe.

The corollary of this observation is that new Internet network deployments will need to communicate with a significantly larger and more valuable IPv4-only network, at least initially. The fact that IPv6 is not backward-compatible with IPv4 further implies that hosts in these new deployments will need to send and receive IPv4 packets with public addresses in their packet headers, either by direct deployment of dual stack or by proxies in the form of protocol-translating NATs. In either case the new network will require some form of access to public IPv4 addresses. In other words, after exhaustion of the unallocated address pools, new network deployments will continue to need to use IPv4 addresses. From this observation it appears highly likely that the demand for IPv4 addresses will continue at rates comparable to current rates, both up to the point of IPv4 unallocated address pool exhaustion and after it. The exhaustion of the current framework of supply of IPv4 addresses will not trigger an abrupt cessation of demand for IPv4 addresses, and this event will not cause the deployment of IPv6-only networks, at least in the short term of the initial years following IPv4 address pool exhaustion.
It is therefore possible to indicate that immediately following this exhaustion event there will be a continuing market need for IPv4 addresses for deployment in new networks. Although a conventional view is that this market need is likely to occur in a scenario of dual-stacked environments, where the hosts are configured with both IPv4 and IPv6 and the networks are configured to also support the host operation of both protocols, it is also conceivable to envisage deployments where hosts are configured in an IPv6-only mode and network equipment undertakes a protocol-translating NAT function. In either case the common observation is that we apparently will have a continuing need for IPv4 addresses well after the event of IPv4 unallocated pool exhaustion, and IPv6 alone is no longer a sufficient response to this problem.

If demand continues, then what is the source of supply in an environment where the current supply channel, namely the unallocated pool of addresses, is exhausted? The options for the supply of such IPv4 addresses are limited. In the case of established network operators, some IPv4 addresses may be recovered through the more intensive use of NAT in existing networks. A typical scenario of current deployment for ISPs involves the use of private address space in the customer's network and NAT performed at the interface between the customer network and the service provider infrastructure (the CPE). One option for increasing the IPv4 address usage efficiency could involve the use of a second level of NAT within the service provider's network, the so-called "carrier-grade" NAT option. This option has some attraction in terms of increasing the port density use of public IPv4 addresses, by effectively sharing the port address space of a public IPv4 address across multiple CPE NAT devices, allowing the same number of public IPv4 addresses to be used across a larger number of end-customer networks.

The potential drawback of this approach is that of added complexity in NAT behavior for applications, given that an application may have to traverse multiple NATs, and the behavior of the compound NAT scenario becomes in effect the behavior of the most conservative of the NATs in the path in terms of binding times and access. Another potential drawback is that some applications have started to use multiple simultaneous transport sessions in order to improve the performance of the download of multipart objects. For single-level CPE NATs, with more than 60,000 ports available to the customer network, this application behavior has little effect, but the presence of a carrier NAT servicing a large number of CPE NATs may well restrict the number of available ports per connection, in turn affecting the utility of various forms of applications that operate in this highly parallel mode. Allowing for a peak simultaneous demand level of 500 ports per customer provides a potential use factor of some 100 customers per IP address. Given a large enough common address pool, this factor may be further improved by statistical multiplexing by a factor of 2 or 3, allowing for between 200 and 300 customers per NAT address. Of course such approximations are very coarse, and the engineering requirement to achieve such a high level of NAT usage would be significant.
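The port arithmetic behind these carrier-grade NAT estimates is simple enough to check. The short sketch below only reproduces the rough figures quoted in the text; the port count and multiplexing factors are the assumptions stated above, not measurements of any real deployment.

```python
# Rough carrier-grade NAT capacity estimate, using the assumptions in the text.
USABLE_PORTS_PER_ADDRESS = 60_000   # of the 65,536 TCP or UDP ports on one public IPv4 address
PEAK_PORTS_PER_CUSTOMER = 500       # assumed peak simultaneous demand per customer

base = USABLE_PORTS_PER_ADDRESS // PEAK_PORTS_PER_CUSTOMER
print(f"Without statistical multiplexing: about {base} customers per public address")

# The text rounds this to roughly 100 customers per address, then applies a
# statistical multiplexing gain of 2 to 3 across a shared pool of addresses.
for factor in (2, 3):
    print(f"With a multiplexing factor of {factor}: roughly {100 * factor} customers per address")
```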
Variations on this engineering approach are possible in terms of the internal engineering of the ISP network and the control interface between the CPE NATs and the ISP equipment, but the maximal ratio of 200 to 300 customers per public IP address appears to be a reasonable upper bound without unduly affecting application behaviors. Another option is based on the observation that, of the currently allocated addresses, some 42 percent of them, or the equivalent of some 49 /8 address blocks, are not advertised in the interdomain routing table, and are presumed to be either used in purely private contexts, or currently unused. This pool of addresses could also be used as a supply stream for future address requirements, and although it may be overly optimistic to assume that the entirety of this unadvertised address space could be used in the public Internet, it is possible to speculate that a significant amount of this address pool could be used in such a manner, given the appropriate incentives. Speculating even further, if this address pool were used in the context of intensive carrier-grade NATs with an achieved average deployment level of, say, 10 customers per address, an address pool of 40 /8s would be capable of sustaining some 7 billion customer attachments. Of course, no such recovery option exists for new entrants, and in the absence of any other supply option, this situation will act as an effective barrier to entry into the ISP market. In cases where the barriers to entry effectively shut out new entrants, there is a strong trend for the incumbents to form cartels or monopolies and extract monopoly rentals from their clients. However, it is unlikely that the lack of supply will be absolute, and a more likely scenario is that addresses will change hands in exchange for money. Or, in other words, it is likely that such a situation will encourage the emergence of markets in addresses. Existing holders of addresses have the option to monetize all or part of their held assets, and new entrants, and others, have the option to bid against each other for the right to use these addresses. In such an open market, the most efficient usage application would tend to be able to offer the highest bid, in an environment dominated by scarcity tending to provide strong incentives for deployment scenarios that offer high levels of address usage efficiency. It would therefore appear that options are available to this industry to increase the usage efficiency of deployed address space, and thereby generate pools of available addresses for new network deployments. However, the motive for so doing will probably not be phrased in terms of altruism or alignment to some perception of the common good. Such motives sit uncomfortably within the commercial world of the deregulated communications sector. Nor will it be phrased in terms of regulatory impositions. It will take many years to halt and reverse the ponderous process of public policy and its expression in terms of regulatory measures, and the "common-good" objective here transcends the borders of regulatory regimes. This consideration tends to leave this argument with one remaining mechanism that will motivate the industry to significantly increase the address usage efficiency: monetizing addresses and exposing the costs of scarcity of addresses to the address users. The corollary of this approach is the use of markets to perform the address distribution function, creating a natural pricing function based on levels of address supply and demand.
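As a small practical footnote to the dual-stack discussion above, it is easy to observe from any host which services are already published over both protocols. The sketch below uses only Python's standard socket library; the hostname is a placeholder, and a host with no working IPv6 connectivity may still see AAAA records resolved even though it cannot actually reach them.

```python
import socket

def address_families(hostname: str) -> dict:
    """Report whether a name resolves to IPv4, IPv6 or both (i.e. a dual-stack service)."""
    families = {info[0] for info in socket.getaddrinfo(hostname, None)}
    return {
        "ipv4": socket.AF_INET in families,
        "ipv6": socket.AF_INET6 in families,
    }

# "example.com" is a placeholder; substitute the service you want to check.
print(address_families("example.com"))
```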
With every new Transparency Report that Google has released biannually since 2009, new information about data requests from government agencies is included. The latest report, which spans July to December 2012, contains vague data about National Security Letters. NSLs are a form of request for information that the FBI can make when it or other U.S. agencies are conducting national security investigations. NSLs are an alternative to court-ordered warrants and subpoenas, and require only that the FBI director or another senior designee provide a written certification that the information requested is "relevant to an authorized investigation to protect against international terrorism or clandestine intelligence activities."

Via NSLs, the FBI can request information such as the name, address, length of service, and local and long distance toll billing records of a subscriber, but cannot ask for things like Gmail content, search queries, YouTube videos or user IP addresses, as explained in Google's User Data Requests FAQ. Also, the existence of an NSL can be hidden from the person under investigation. The FBI only has to state that disclosure of the NSL may result in "a danger to the national security of the United States, interference with a criminal, counterterrorism, or counterintelligence investigation, interference with diplomatic relations, or danger to the life or physical safety of any person," and Google (or any other provider) is forbidden to talk about the request.

This is why, in this latest Transparency Report, Google may share only the numerical range within which the actual number of NSLs it received, and the number of users/accounts they referred to, falls. In 2012 – and in all years since 2009 except for 2010 – the NSLs received by Google numbered between 0 and 999, and the users/accounts they applied to numbered between 1000 and 1999.
HTTPS Vulnerable To Crypto Attack

Security researchers have built a tool that exploits weaknesses in the SSL and TLS encryption protocols, used by millions of websites to secure communications.

The secure sockets layer (SSL) and transport layer security (TLS) encryption protocols, used by millions of websites to secure Web communications via HTTPS, are vulnerable to being decrypted by attackers. In particular, security researchers Juliano Rizzo and Thai Duong have built a tool that is capable of decrypting and obtaining the authentication tokens and cookies used in many websites' HTTPS requests. "Our exploit abuses a vulnerability present in the SSL/TLS implementation of major Web browsers at the time of writing," they said.

The duo plan to detail their findings, which they characterize as a "fast block-wise chosen-plaintext attack against SSL/TLS," on Friday at the Ekoparty Security Conference in Argentina. They said websites using SSL version 3 and TLS version 1.0 and earlier are vulnerable. Although newer versions of TLS are available--and apparently not vulnerable to this attack--most sites still use TLS 1.0.

The researchers plan to use their tool, dubbed BEAST (Browser Exploit Against SSL/TLS), during their Ekoparty presentation to decrypt PayPal authentication cookies and access a PayPal account, according to The Register. While full details of the vulnerability haven't been publicly disclosed, browser developers don't appear to be running scared. "The researchers disclosed BEAST to browsers so I'm not going to comment in detail until public," said Google Chrome engineer Adam Langley in a Twitter post. "It's neat, but not something to worry about." Opera, however, has already released a related patch, and the researchers said they expect other browser makers to follow suit.

The HTTPS vulnerability is likely to accelerate calls for an overhaul of today's fragile SSL ecosystem. Such calls have intensified after the July 2011 exploit--not revealed publicly until last month--of Dutch certificate authority DigiNotar. As a result of that exploit, attackers were able to issue false credentials for hundreds of legitimate websites, including Gmail and Windows Update.

Interestingly, Rizzo and Duong are no strangers to vulnerability research. Rizzo is one of the founders and designers behind the open source network security tool platform Netifera, while Duong is chief security officer for a large Vietnamese bank and has led Black Hat workshops detailing practical attacks against cryptography. Last year, notably, the pair detailed a previously unknown "padding oracle attack" (referring not to Oracle, but rather to a cryptographic concept) against ASP.NET Web applications that could be used to "decrypt cookies, view states, form authentication tickets, membership password, user data, and anything else encrypted using the framework's API," they said. Exploiting the vulnerability, present in 25% of ASP.NET Web applications, could allow attackers to access information or even compromise systems. The vulnerability stemmed from how Microsoft implemented AES in ASP.NET. Notably, if an attacker altered the encrypted data contained in a cookie, ASP.NET returned semi-detailed error messages. After amassing enough of these, an attacker could make an educated guess about the encryption key being used. That vulnerability disclosure led Microsoft to issue an emergency patch.
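Because the attack only matters where a server will still negotiate the older protocol versions, it can be useful to check what a given server actually accepts. The sketch below uses Python's standard ssl module (Python 3.7 or later); the hostname is a placeholder, and note that a modern OpenSSL build may itself refuse to offer TLS 1.0, so a failed handshake here is not conclusive proof that the server has disabled it.

```python
import socket
import ssl

HOST, PORT = "example.com", 443   # placeholder; substitute the server you are auditing

def negotiated_version(host, port):
    """Connect with default client settings and report the negotiated TLS version."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()          # e.g. 'TLSv1.2' or 'TLSv1.3'

def accepts_tls10(host, port):
    """Return True if the server will still negotiate TLS 1.0 (the BEAST-era protocol)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE       # we only care about protocol support here
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

print("Negotiated by default:", negotiated_version(HOST, PORT))
print("Still accepts TLS 1.0:", accepts_tls10(HOST, PORT))
```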
Long a staple of open source computing, MySQL serves as the database back end to a massive array of applications, from network monitoring frameworks to Facebook. To those uninitiated in how databases work, setting up MySQL for the first time can be somewhat daunting. Nevertheless, with a few pointers and concepts, you can quickly get a new MySQL instance up and running, ready to deploy your application. For the purposes of this guide, we'll assume that the reader has little or no experience with MySQL on Linux, and we'll concentrate on getting MySQL installed and configured to the point where an application can be connected to the database and begin operation. Advanced elements of MySQL, such as database programming and the SQL language itself, are beyond the scope of this effort.
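The guide itself is truncated here, but the end state it describes (a running server that an application can connect to) is easy to verify with a few lines of Python. This sketch assumes the third-party mysql-connector-python driver is installed and that the host, user and password shown are replaced with your own; none of these values come from the original article.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder credentials for a freshly configured local server.
conn = mysql.connector.connect(host="127.0.0.1", user="appuser", password="change-me")
cur = conn.cursor()

cur.execute("SELECT VERSION()")
print("Connected to MySQL server version:", cur.fetchone()[0])

# Create an application database if it does not already exist, then list databases.
cur.execute("CREATE DATABASE IF NOT EXISTS appdb")
cur.execute("SHOW DATABASES")
print([name for (name,) in cur.fetchall()])

cur.close()
conn.close()
```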
I am working in a Natural/Adabas environment. I have only a very brief idea about these statements (in the subject line). Can someone elaborate on these three statements and also explain when to use which statement? Which is preferable in terms of resource consumption and Adabas calls?

The READ statement is used to read records from a database. The records can be retrieved from the database in the order in which they are physically stored in the database (READ IN PHYSICAL SEQUENCE), in the order of Adabas Internal Sequence Numbers (READ BY ISN), or in the order of the values of a descriptor field (READ IN LOGICAL SEQUENCE). The FIND statement is used to select from a database those records which meet a specified search criterion. The HISTOGRAM statement is used to either read only the values of one database field, or determine the number of records which meet a specified search criterion. The HISTOGRAM statement does not provide access to any database fields other than the one specified in the statement itself.

You usually use READ to read a large number of records sequentially by some descriptor. FIND is rather for direct access to a small number of records in the database. HISTOGRAM uses only the index part of the database, so it works quickly.
There is nothing more to broadband than dial-up except that it's digital, while dial-up is analog. A positive voltage over the telephone line was a 1 and a negative voltage on the line was a 0. That's how dial-up modems used to work, and traditional phone lines couldn't handle more than 2000 such fluctuations per second, since the line was analog. The analog signal was then converted to digital, so that the voltage variations were just 0 to 5 volts or slightly higher, and therefore greater frequencies were possible with suitable equipment at the service provider's end. With broadband, you don't need to dial a number, since all lines are always connected to the internet server and your digital modem does the handshaking automatically as it boots up; you can still use the phone while you are connected to the internet, unlike in the old analog dial-up system. Since broadband is digital, the band of data that can be transferred is a bit broader, and nothing more.

A broadband connection usually shares a very high-end leased line. Therefore, a lower-bandwidth and a higher-bandwidth broadband connection just differ in packet size, and not in the number of packets. This shows when you download something from a website or an FTP server. If there is only a single source of data, the download appears to be faster. When you download the same thing with a torrent over multiple connections, the efficiency of the download will be rather poor. This is because broadband cannot handle multiple connections as well as it handles larger packets. If you connect a high-bandwidth broadband line to a network with about 20 PCs, the output efficiency will be very poor. Everything will seem to be lagging. When you connect a leased line to the same network, you will find great performance. This is because the leased line can handle more connections while transferring bigger packets. A broadband service is just acting as a local network that distributes the data from a super jumbo leased line within its area.

Virtual Leased Lines Using MPLS

A question that might pop up in the reader's mind is, why MPLS for VPNs? The answer is quite simple; MPLS seems like an attractive technology for the following reasons.
- MPLS Label Switched Paths (LSPs) inherently provide tunneling of traffic from one point to the other.
- Since MPLS switches packets based on their labels, it inherently masks the IP address, and hence could be used to isolate the IP addresses on the subscriber side from those on the service provider side. In other words, it wouldn't matter, then, whether the subscriber is using global or private IP addresses; MPLS is capable of supporting both.
- The overhead of MPLS encapsulation is small when compared to other encapsulation technologies. MPLS labels are only four octets long.
- The EXP bits in the MPLS shim header could be used to prioritize MPLS frames, a feature that is available in most MPLS implementations today.
- Traffic-engineered Label Switched Paths (LSPs) could be deployed in order to offer multiple service levels to subscribers or to avoid network congestion points.
- The MPLS approach allows the creation of highly scalable VPNs.
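For concreteness, the four-octet shim header mentioned above packs a 20-bit label, the 3 EXP (traffic-class) bits, a bottom-of-stack flag and an 8-bit TTL. The short sketch below simply decodes those fields from raw bytes; the example label and EXP values are arbitrary.

```python
def parse_mpls_shim(header: bytes) -> dict:
    """Decode a 4-octet MPLS shim header into label, EXP, bottom-of-stack and TTL fields."""
    word = int.from_bytes(header, "big")
    return {
        "label": (word >> 12) & 0xFFFFF,       # 20-bit label used for switching
        "exp": (word >> 9) & 0x7,              # 3 EXP/traffic-class bits used for prioritization
        "bottom_of_stack": (word >> 8) & 0x1,  # 1 = last label in the stack
        "ttl": word & 0xFF,                    # 8-bit time to live
    }

# Build an example header: label 100, EXP 5, bottom of stack, TTL 64.
example = ((100 << 12) | (5 << 9) | (1 << 8) | 64).to_bytes(4, "big")
print(parse_mpls_shim(example))   # {'label': 100, 'exp': 5, 'bottom_of_stack': 1, 'ttl': 64}
```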
Encryption: The Last Line of Defense

Encryption is probably the last barrier to prevent an attacker from gaining access to sensitive business information. Security practitioners must seriously review using encryption as an important component of the overall business security strategy. Numerous factors are driving the urgency for businesses to encrypt data, including increases in the number of mobile users, storage of sensitive information on portable devices, compliance legislation, outsourcing, wireless transmission and the need to protect confidential, sensitive information.

In the area of legislation, the U.S. Health Insurance Portability and Accountability Act (HIPAA) establishes encryption as an addressable implementation specification for both data at rest and data in motion. The U.S. Sarbanes-Oxley legislation, Section 404, will result in technology solutions that provide assurance that tampering has not occurred. California Senate Bill (SB) 1386 requires companies to inform California customers of security breaches involving the compromise of their names in combination with their Social Security, driver's license or credit card numbers. This legislation will result in encryption being seriously considered to secure sensitive customer or client information.

There are two basic types of cryptography: symmetric and asymmetric. Symmetric cryptography is an encryption system that uses the same key to encrypt and decrypt. The secrecy of encrypted data depends on the secret key, or "private key." The private key may be stored on a computer's hard disk or a specialized cryptographic device. The message is encrypted and decrypted with the same key. Examples of symmetric key algorithms are the Data Encryption Standard (DES), Triple DES (3DES), Blowfish and the Advanced Encryption Standard (AES). The challenge for symmetric key encryption is how to "secretly" distribute the "secret" key.

Asymmetric key encryption is an encryption system that uses a linked pair of keys: what one key of the pair encrypts, the other key decrypts. The public key is publicly available and is usually embedded in digital certificates. The public and private keys are mathematically related but cannot be derived from each other. The challenge for asymmetric key encryption is performance—these algorithms tend to be much slower than symmetric key encryption. The advantage of asymmetric key encryption is that key distribution is not a challenge. Examples of asymmetric key algorithms are RSA, the Elliptic Curve Cryptosystem (ECC) and Diffie-Hellman.

To address the challenge of data integrity, businesses should consider the application of message digests. A message digest takes a message of any size as input and outputs a short, fixed-length code. The message digest is unique to the message and depends on every bit of the message and its attachments. A message digest is like a fingerprint of the message, and it cannot be used to restore the original message. Message digests are also referred to as digital fingerprints, cryptographic hashes or cryptographic checksums. Commonly used message digests include MD4 and MD5 from RSA Security and SHA-1 (Secure Hash Algorithm) from the National Institute of Standards and Technology (NIST).

Don't even start thinking about encryption products and technologies until you first develop your organization's encryption policy. This policy provides the framework for the deployment of encryption.
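As a brief aside, the fingerprint behavior of a message digest described a few paragraphs above is easy to see with the Python standard-library hashlib module. The sample messages are invented for illustration, and SHA-256 is used here rather than the MD5/SHA-1 algorithms named in the text, which are no longer recommended for new designs.

```python
import hashlib

original = b"Transfer $100 to account 12345"
tampered = b"Transfer $900 to account 12345"

for message in (original, tampered):
    digest = hashlib.sha256(message).hexdigest()
    print(f"{message!r} -> {digest}")

# Both digests have the same fixed length, a one-character change to the message
# produces a completely different fingerprint, and the original message cannot
# be recovered from the digest alone.
```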
Businesses need to review their requirements for securing data at rest and data in motion through a formal risk analysis. The objective is to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity and availability of sensitive information. The results will provide valuable information for the business to determine its encryption requirements. For example, the encryption policy might state that the organization will deploy encryption solutions that support 2048 bits for asymmetric encryption and 128 bits for symmetric encryption. Further, the encryption policy could state that all mobile devices that store any sensitive business data must store all such data in encrypted files and directories. The encryption policy further addresses issues such as:
- Identification of sensitive data at rest or data in motion that needs to be encrypted.
- Identification of standards that guide the selection of safeguards to implement encryption requirements.
- Guidelines for key management and security.

Organizations should seriously consider the encryption of all critical files, directories or media on server systems as well as mobile devices, such as PDAs and laptops. Mobile devices typically hold sensitive business information and may be stolen. Software encryption solutions need to be strongly considered for all such mobile devices. For example, in the Microsoft Windows environment you can use the Encrypting File System (EFS) to store encrypted files and folders on NTFS file system volumes. When a folder is encrypted, all the folders and subfolders created in the encrypted folder are automatically encrypted. This may be something to consider for encryption of information at the operating system level on server systems.

To address the challenge of securing data in motion (transmission) you need to consider the application. For example, to secure Web server/browser communication you can use Secure Sockets Layer (SSL) to establish an encrypted tunnel for the transmission of sensitive data such as Web-based electronic transactions. To secure the transmission of data over a public network such as the Internet you must consider establishing a virtual private network (VPN). A VPN is the use of an encrypted tunnel over a public network to provide privacy on par with a private network—either site-to-site (router-to-router) or for secure remote access (client-to-server). The emerging standard for site-to-site tunneling is the IPSec protocol.

Organizations that need to encrypt a large volume of data may consider using a network appliance, which resides between the server or storage system and the network. The devices operate at LAN speeds, so performance, which is typically an issue with encryption, is not an issue with encryption network devices. Vendors that specialize in this area include Decru, NeoScale Systems, nCipher, Ingrian Networks and Vormetric.

Businesses must be serious about encrypting sensitive data. Encryption adds a critical layer of defense in an organization's security strategy. The sensitive information stored in critical server systems as well as mobile devices must be secure so data isn't compromised, even if the systems are. Further, sensitive information that is transmitted must be encrypted. Security practitioners as well as the security officer must ensure the business has developed a formal encryption policy and must be familiar with practical encryption solution options.
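As one small example of such a practical option, here is a minimal sketch of file-level symmetric encryption in Python using the third-party cryptography package (not EFS or any product named in the article). The file names are placeholders, and in practice the key would be protected by a key-management process rather than stored beside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()      # in a real deployment, store and protect this key separately
fernet = Fernet(key)

# Encrypt a sensitive file before it travels on a laptop or portable device.
with open("customer-list.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("customer-list.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, with access to the key, recover the plaintext.
with open("customer-list.csv.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```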
Follow the principle of defense-in-depth and selectively deploy encryption technologies, at a minimum, to secure sensitive data on critical server systems, mobile devices and data transmitted over the Internet. This last line of defense is often not deployed, and that is something security professionals need to seriously review to secure the electronic "crown jewels" of the enterprise.

Uday O. Ali Pabrai, Security+, CISSP, CHSS, chief executive of ecfirst.com, consults extensively in the areas of enterprise security and HIPAA. Pabrai, author of the best-selling "Getting Started with HIPAA," is the co-creator of the Security Certified Program (www.securitycertified.net). E-mail him at firstname.lastname@example.org.
OS X 10.x (Client)

When you start your Macintosh investigation it is important to know what version of the operating system is installed on the computer. The version of OS X (10.4, 10.5, 10.6) can shape and direct the analysis, as each version has certain unique characteristics for other artifacts as well as their locations on the disk. Macintosh operating systems use plist files (.plist) as repositories for system and program settings/information. Plist files can either be in a binary-encoded format (bplist file header) or in XML. To get the operating system version, the first plist file you will want to examine is "SystemVersion.plist", located in the "/System/Library/CoreServices/" folder. With this knowledge you can be aware of other plists and system artifacts that are unique to the OS under inspection.

Forensic Programs of Use
plist Edit Pro (Mac)
plist Editor Pro (Win)

Installed Printers (Mac)
Mac OS X

This property list (plist) on a Mac OS X machine will tell you what types of printers have been installed on that system. Be advised, though, that a printer may have been uninstalled/removed by the user, and if they have not restarted their computer, that printer's entry will persist until the computer is rebooted. This plist will then be overwritten to reflect the change.

Apple Developer Tools: http://developer.apple.com/technologies/tools/xcode.html

Forensic Programs of Use
plist Editor that is provided with XCode

Safari Browsing History (Mac)

Safari is the default browser on the Mac OS X operating system. As with most browsers, there is a plethora of information to be found, and browsing history is one example. If you are looking into the Safari browsing history on an Apple computer, you will have to find the History.plist to get that information. For those that don't know, a plist is a preference file for an application on an Apple computer. They usually contain user settings for that particular application. They also hold information regarding that application. The default setting for browsing history in Safari 4 and 5 is one month.

Now, locate the Safari History plist by navigating to /Users/username/Library/Safari/History.plist on the suspect machine. Then export it out of your case. If you are working in a Windows-based forensics lab, you can download a copy of WOWSoft's free plist Editor and install it. Once installed, find the exported copy of the History.plist file and open it to view its contents.

If you are using a Mac as your forensics platform, I would suggest heading over to the Apple Developers site and registering there to get a free copy of XCode 3. XCode comes with a plist Editor included. Once installed, it becomes your default viewer for plists. Locate the History.plist file that you wish to view and double click on it; it will open in the plist Editor.

Now let's say I want to find out the last visit date and time for a particular site. I would locate the site in the History, look for the lastVisitedDate row and look across to the right to the third column. The value recorded there is Mac Absolute Time. You are going to want to decode that into a readable format. In Windows, you can download a copy of R. Craig Wilson's DCode to do that. For example, you would take the number shown in the lastVisitedDate row, enter all of the digits up to the decimal point into DCode, choose Mac Absolute Time, make sure to adjust for the suspect machine's Time Zone Settings and click on Decode.
I have used the lastVisitedDate string from the example screenshots provided above and received the decoded date and time as expected.

AUTHOR NOTE— As of this post, I am unfamiliar with a tool/utility that works in Mac OS X that has the same functionality. If someone can point me in the right direction, I will be more than happy to edit this post and give full credit.

Forensic Tools of Use
Apple Developer Tools (XCode): http://developer.apple.com/programs/mac/
WOWSoft's Free plist editor for Windows: http://www.icopybot.com/blog/free-plist-editor-for-windows-10-released.htm
DCode by R. Craig Wilson (Digital Detective UK): http://www.digital-detective.co.uk/freetools/decode.asp

Copyright 2012, ForensicArtifacts.com. All rights reserved.
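In response to the author's note above, one cross-platform option is to script the conversion rather than rely on a GUI utility. The sketch below uses only the Python standard library: plistlib reads both XML and binary plists, and Mac Absolute Time is simply seconds since 2001-01-01 00:00:00 UTC. The file path is a placeholder for the exported History.plist, and the Safari key names other than lastVisitedDate are recalled from Safari 4/5-era files and should be verified against the evidence.

```python
import plistlib
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def mac_absolute_to_utc(seconds) -> datetime:
    """Convert a Mac Absolute Time value (seconds since 2001-01-01 UTC) to a datetime."""
    return MAC_EPOCH + timedelta(seconds=float(seconds))

with open("History.plist", "rb") as f:   # exported copy from the suspect machine
    history = plistlib.load(f)           # handles XML and binary plists alike

for entry in history.get("WebHistoryDates", []):
    visited = entry.get("lastVisitedDate")
    url = entry.get("")                  # Safari stores the URL under an empty-string key
    if visited is not None:
        print(mac_absolute_to_utc(visited).isoformat(), url)
```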
Bruce Springsteen said it best in 1992 when he lamented the empty promise of entertainment choices for the masses. Now a communications explosion of multimedia options competes for our attention; you can even get those 57 channels (plus much more) on your phone. With all of the multimedia choices today, learning professionals are challenged with getting and keeping the attention of prospective and committed learners alike. We need to ensure that we offer the right mix of media to reach all students without falling prey to the same shallow edutainment of countless online learning options.

Since the firestorm of multimedia options began to gain ground in the 1990s, learning theory has expanded dramatically. Several lines of research evolved (e.g., cognitive load, multimedia learning). The possibilities for learning and instruction are limitless. Developed by educational psychologist Richard E. Mayer, multimedia learning theory states that optimal learning occurs when both visual and verbal materials are presented simultaneously. Mayer's studies found that students did significantly better when it came to applying what they had learned and committing it to long-term memory after receiving multimedia rather than mono-media (visual only or audio only) instruction.

Combining our understanding of multimedia education with the adult student's need to learn socially as well as construct his or her own meaning provides the balance needed to create the rich experience enjoyed in a traditional classroom setting while still providing the flexibility to learn when you want to learn. Successfully translating this understanding into tangible instructional products helps take the "distance" out of distance learning. By combining text, audio, still images, animation and video with "hands-on" interactivity, and content forms like live labs and collaborative learning activities, we can more effectively appeal to varying learning styles. This not only offers the necessary balance of choice, but (frankly) it works and it's just right.

If these techniques are properly implemented, we no longer have to scan the various channels of similar edutainment only to find that there is "nothing on". We can dive into a rich learning experience where interaction with the environment, other students, and subject-matter experts keeps us engaged and helps us reach our learning goals in the most efficient and productive manner possible.
Sorting is one of the fundamental aspects of computer science. Throughout the short history of computer science, sorting algorithms matured at a rapid pace, and from the early days computers started using sophisticated methods to sort the elements in a collection data structure. Before we start, note that this is not about finding the best sorting algorithm. As with any other aspect of life, there is no clear best way here. A lot of things affect our choice of sorting algorithm, such as the number of elements, the available space, the budget, the priorities in our application, and so on.

Selection sort is an exception in our list. This is considered an academic sorting algorithm. Why? Because the time efficiency is always O(n²), which is not acceptable. There is no real-world usage for selection sort except passing the data structure course exam.
- Always runs in O(n²), even in the best case
- Helps students get some credits towards their degree; nothing more, to be precise

Bubble sort is the other exception in the list because it is too slow to be practical. Unless the sequence is almost sorted, the feasibility of bubble sort is zero and the running time is O(n²). This is one of the three simple sorting algorithms, alongside selection sort and insertion sort, but like selection sort it falls short of insertion sort in terms of efficiency, even for small sequences.
- Again nothing, maybe just a catchy name
- With its polynomial O(n²) running time it is too slow
- Implementing it makes for an interesting programming exercise

Insertion sort is definitely not the most efficient algorithm out there, but its power lies in its simplicity. Since it is very easy to implement and adequately efficient for a small number of elements, it is useful for small or trivial applications. The definition of small is vague and depends on a lot of things, but a safe bet is that if the sequence has under 50 elements, insertion sort is fast enough. Another situation where insertion sort is useful is when the sequence is almost sorted. Such sequences may seem like exceptions, but in real-world applications you often encounter almost sorted elements. The run time of insertion sort is O(n²) in the worst case, so far making it look like just another useless alternative to selection sort. But if implemented well, the run time can be reduced to O(n+k), where n is the number of elements and k is the number of inversions (the number of pairs of elements out of order). With this run time in mind you can see that if the sequence is almost sorted (k is small), the run time can be almost linear, which is a huge improvement over the polynomial n².
- Easy to implement
- The more ordered the sequence is, the closer the run time is to linear O(n)
- Not suitable for large data sets
- Still polynomial in the worst case
- For small applications when the sequence is small (fewer than 50 elements)
- When the sequence is going to be almost sorted

Heap sort is the first general-purpose sorting algorithm we are introducing here. It runs in O(n log n), which is optimal for comparison-based sorting algorithms. Though heap sort has the same asymptotic run time as quick sort and merge sort, it is usually outperformed by them in real-world scenarios. If you are asking why anyone should use it, the answer lies in space efficiency. Nowadays computers come with huge amounts of memory, enough for many applications. Does this mean heap sort is losing its shine? No: when writing programs for environments with limited memory, such as embedded systems, space efficiency is much more important than time efficiency.
A rule of thumb is that if the sequence is small enough to fit easily in main memory, then heap sort is a good choice.
- Runs in O(n log n)
- Can easily be implemented to execute in place
- Not as fast as other comparison-based algorithms on large data sets
- Does not provide a stable sort
- The natural choice for small and medium-sized sequences
- If main memory size is a concern, heap sort is the best option

Quick sort is one of the most widely used sorting algorithms in the computer industry. Surprisingly, quick sort has a worst-case running time of O(n²), which makes it risky for real-time applications. Despite the polynomial worst case, quick sort usually outperforms both heap sort and merge sort (coming next). The reason behind the popularity of quick sort despite this shortcoming is that it is both fast in real-world scenarios (if not necessarily in the worst case) and able to be implemented as an in-place algorithm.
- More often than not runs in O(n log n)
- Quick sort is tried and true; it has been used for many years in industry, so you can be assured it is not going to fail you
- High space efficiency by executing in place
- The polynomial worst case makes it risky for time-critical applications
- Provides a non-stable sort due to the swapping of elements in the partitioning step
- Best choice for general-purpose, in-memory sorting
- Used to be the standard algorithm for sorting arrays of primitive types in Java
- The qsort function in the C programming language is powered by quick sort

Merge sort's O(n log n) worst-case run time makes it a powerful sorting algorithm. The main drawback of this algorithm is its space inefficiency: in the process of sorting, lots of temporary arrays have to be created and much copying of elements is involved. This doesn't mean merge sort is not useful. When the data to be sorted is distributed across different locations, such as cache, main memory and external storage, copying data is inevitable anyway. Merge sort mainly owes its popularity to Tim Peters, who designed a variant of it that is in essence a bottom-up merge sort and is known as Timsort.
- Excellent choice when data is fetched from resources other than main memory
- Worst-case run time of O(n log n), which is optimal
- The Timsort variant is really powerful
- Lots of overhead in copying data between arrays and making new arrays
- Extremely difficult to implement in place for arrays
- Space inefficiency
- When data is in different locations, such as cache, main memory and external memory
- A multi-way merge sort variant is used in the GNU sort utility
- The Timsort variant has been the standard sorting algorithm in the Python programming language since 2003
- The default sorting algorithm for arrays of object type in Java since version 7
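For readers who want to see the earlier recommendations in runnable form, here is a minimal Python sketch (Python is chosen only to keep the example short, not because it reflects the original article) of the two algorithms singled out above: insertion sort for small or nearly sorted inputs, and an in-place heap sort for memory-constrained settings.

```python
def insertion_sort(a):
    """O(n^2) worst case, but close to O(n) when the input is almost sorted."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = key
    return a


def heap_sort(a):
    """O(n log n) in every case, sorting in place with O(1) auxiliary space."""
    def sift_down(root, end):
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1          # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build a max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # move the max to the end, restore the heap
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a


print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
print(heap_sort([5, 2, 4, 6, 1, 3]))        # [1, 2, 3, 4, 5, 6]
```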
- They can run faster than nlogn - Cannot be used for every type of data - Not necessarily always run faster than general purpose algorithms - When the prerequisites of data types is met then they are the definitive choice |worst case time||average case time||best case time||worst case space| - The art of computer programming by Donald Knuth - Not necessarily auxiliary optimizations are considered
I recently read an article that talked about the possibility of completely secure data transfer using quantum entanglement. Essentially, what that means in terms of computer data and packets is that the data become correlated with each other and share properties. Hypothetically speaking, if you send 50 packets, all of the packets could take on the same properties as the first packet, therefore making it impossible to see what the total outcome of all 50 packets contains. Until now, the entanglement was only controllable for up to a second, but a recent advancement at the University of Copenhagen's Niels Bohr Institute has made it possible to keep this entanglement active for up to an hour. This could enable you to make direct connections between two systems, where a change you make on one end is known at the other end, and it can all be transferred directly over the internet. Scientists are currently working on ways to incorporate this into both networking and the internet. Although this is cutting edge and, in my honest opinion, pretty cool, it may be some time before it ever reaches a PC near you.
So much of what is written these days about FirstNet has to do with the delays that have dogged the initiative since the very beginning and continue five years later. With all of the controversy, it’s easy to lose sight of why FirstNet was created in the first place and what it means for first responders. Given the latest delays and the possibility for future uncertainty, it’s helpful to take a virtual step back to understand exactly what’s at stake for the country’s public safety communications. The First Days of FirstNet Congress created the First Responder Network Authority, or FirstNet, in February 2012 “to establish a nationwide broadband network for public safety.” Creation of the network was the final recommendation of the 9/11 Commission to connect police officers, firefighters and EMS providers and ensure as close to universal interoperability as possible. The authority is led by a 15-member board with representatives from government, public safety and the wireless industry.
Forwarded from: Jay D. Dyson <jdysonat_private>
Courtesy of Cryptography List.

Exploding chips could foil laptop thieves
19:00 16 January 02
Duncan Graham-Rowe

A new way of making silicon explode could mean anyone trying to use a stolen laptop or mobile will be confronted by this message: "This machine is stolen and will self-destruct in ten seconds ...".

Until now scientists have only managed to make silicon go bang by mixing it with either liquid oxygen or nitric acid. But Michael Sailor and his colleagues at the University of California in San Diego have found a way to blow up silicon chips using an electrical signal. They say their method could be used to fry circuitry in devices that fall into the wrong hands. For instance, the American spy plane impounded by China last year could have used it to destroy its secret electronics systems.

Sailor's team hit upon this new way of exploding silicon when they applied the oxidising chemical gadolinium nitrate to a porous silicon wafer. As colleague Fred Mikulec used a diamond scribe to split the wafer, it blew up in his face, giving Mikulec the shock of his life. Luckily, only a minute quantity of silicon was involved so it was a small bang. "It's a bit like a cap in a cap gun going off," says Sailor.

Fast burn

The gadolinium nitrate used the energy from the diamond scribe to oxidise the silicon fuel, which burns fast because its crystals have a large surface area. "The faster the burn, the bigger the bang," explains Sailor. You would only need a tiny quantity of the chemical to do irreparable damage to delicate transistors, so it would be cheap and easy to add when the chips are being made.

In a stolen mobile phone, the network would send a trigger signal to the part of the chip containing the gadolinium nitrate "detonator", triggering the explosion. "We have shown that you can store this stuff and detonate it at will," says Sailor.

Other applications suggested for the technology include testing for toxic substances in groundwater. The device could be used on the spot to burn minute samples on a disposable chip and analyse their chemical composition. Alternatively, it could be used as a fuel supply for microscopic machines etched onto silicon wafers, says Sailor.

Journal reference: Advanced Materials (vol 14, p 38)
Construction is under way on Maine's seventh, and largest, high-tech bridge, replacing standard concrete and steel construction with a lightweight and portable carbon-fiber tube structure. The new technology is designed to ward off corrosion, double a bridge's structural lifespan and significantly reduce construction time and repair costs.

At an offsite location, carbon-fiber tubes are inflated, shaped into arches and infused with resin to harden them. The tubes are then moved to the foundation's location and filled with concrete, producing arches as strong as steel. The arches are then covered with a fiber-reinforced decking and buried under several feet of sand. The carbon fiber protects the resin from harsh weather and extreme climates, which safety experts say is the greatest cause of bridge corrosion. In standard steel bridge construction, de-icing road salts and saltwater infiltrate the concrete and corrode the steel bar, which causes it to expand and crack the concrete, weakening the bridge.

The new, environmentally safe design was developed by the University of Maine Advanced Structures and Composites Center and has been named "bridge in a backpack" technology because its components are lightweight and easily transportable. "It changes the entire logistics of the construction," said Habib Dagher, a University of Maine engineering professor and director of the Advanced Structures and Composites Center, which brings new technology to construction sites. A 70-foot arch weighs about 200 pounds, compared to a steel girder, which weighs between 40,000 and 50,000 pounds, he said.

The new technology could be one way to help rebuild the country's bridge infrastructure, which has scored poorly in recent years on an annual report card of its integrity. A national conversation about bridge safety continues, a debate first sparked in 2007 when the I-35W bridge in Minneapolis collapsed into the Mississippi River, killing 13.

Although the concept sounds simple and quick — in two months, seven bridges have been built throughout Maine, including one bridge that was completed in 12 days — the design took eight years of development and testing. "There was a lot of homework done before we got out there," Dagher said. Inside an 85,000-square-foot lab, University of Maine researchers built five bridges and simulated traffic using a computer system that can "add" hundreds of thousands of pounds at a fast pace onto the bridge, similar to the weight of real-life big rigs. "We tested the strength of the bridge after 50 or 75 years of aging versus the strength of the bridge without being aged, and we saw essentially no degradation over the system," said Dagher. The testing and design development were funded by the Federal Highway Administration and the Army Corps of Engineers.

After the I-35W Minneapolis bridge collapsed in 2007, Maine Gov. John Baldacci signed a bill to increase the state's bridge funding, which is spent on rehabilitating corroded bridges that have been worn down by various environmental factors. "It's hundreds of billions of dollars a year in infrastructure issues that we have. So by going to these new technologies, we are preventing a lot of these corrosion issues that we see out there," said Dagher.

The costs to build the bridge are competitive with standard bridge construction, but Dagher said those costs will drop in the next year because the company has been "overbuilding" the bridges to make sure they are safe.
Dagher said it's predicted that 20 to 30 more "bridge in a backpack" spans will be built next year, both domestically and internationally. Potentially more could be in the works, since the American Association of State Highway and Transportation Officials, the agency that puts out U.S. bridge codes, decided to promote the technology nationally by aiding at least eight states in building these bridges in the next couple of years. Currently the Maine Department of Transportation has a contract for six bridges to be built in two years, and the rest of the projects have been funded by private companies. Dagher's team is currently in talks with Russia about using the "bridge in a backpack" technology as the country builds the infrastructure for the 2014 Winter Olympics. "It's a huge deal," said Dagher. "With this technology, you can ship a dozen bridges in a 20-foot container."
Online visitors usually want to know their IP address for a number of reasons: gaming, technical support, remote connections, proxy detection and so on. They may also want to check whether their address has changed or whether they are still using their personal e-mail server. If you want to run your own e-mail or web server, a static IP address is the best option.

An IP address (Internet Protocol address) is a numerical label assigned to each device participating in a network, whether a computer or a printer. These devices communicate with one another using the Internet protocols. An IP address serves two main functions:
- Identifying a host or a network interface
- Location addressing

What is IPv6? Why do we need IPv6 addresses? What is all the buzz about? When the IP protocol and its addressing scheme first came out, there were nowhere near as many devices in the world that could connect to the Internet. It was hard to imagine that we would ever need IP addresses longer than 32 bits, so addresses were made 32 bits long and called IPv4 addresses. Nobody expected that we would one day have close to 4,294,967,296 devices online. That figure is simply 2^32 = 4,294,967,296, the number of different addresses that can be generated with a 32-bit number.
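A quick back-of-the-envelope check (a small Python sketch, not part of the original post) shows why the 32-bit IPv4 space ran out while the 128-bit IPv6 space is, for practical purposes, inexhaustible:

```python
# Size of the IPv4 and IPv6 address spaces.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")   # 4,294,967,296
print(f"IPv6: {ipv6_addresses:,} addresses")   # roughly 3.4 x 10^38

# Rough illustration: IPv6 addresses per person, assuming 8 billion people.
print(f"Per person: {ipv6_addresses // 8_000_000_000:,}")
```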
Great news: 1 out of 10 Web sites you visit may actually be secure.

According to a report published yesterday by Web security firm WhiteHat Security, nine out of 10 Web sites have significant vulnerabilities that can be exploited by attackers. The report also finds that Web sites, on average, have seven flaws that make them vulnerable to attack.

Not surprisingly, cross-site scripting (XSS) vulnerabilities are still the most prevalent Web security problem out there, with 70% of Web sites vulnerable to this form of attack. Simply put, XSS is a vulnerability that allows attackers (or anyone who feels like it, really) to inject code, such as HTML, into a site. These types of vulnerabilities can be used to bypass Web site access controls, and have been used by phishers to conduct mass-scale fraud.

Now, fraudsters are likely to turn to an emerging type of attack, similar to XSS, known as cross-site request forgery, or CSRF. Beyond being yet another acronym we need to remember, this type of attack doesn't require code to be injected into a Web site. Rather, users authenticated, or logged in, to a Web site can be attacked while the session is active. This means, if you're logged into your bank, it could be possible for someone to use your active session to transfer money out of your account without you being aware. Until it's too late, of course.

Here's what WhiteHat Security had to say about CSRF in a statement: However, CSRF, while known in the public domain for years, has recently garnered more attention from malicious hackers. Attackers using CSRF can easily force a user's Web browser to send unintended HTTP requests such as fraudulent wire transfers, change passwords and download illegal content. Effective automated CSRF detection techniques have eluded all technology scanning vendors in the space, making identification a largely manual process.

That means it's tough to find this flaw in existing code. Your best defense may be making sure you pick the one site (out of 10) that is secure. There may be a couple more practical steps you can take to avoid CSRF attacks on your sessions: Don't have multiple Web sites open in your browser while doing any type of banking, or other types of high-value transactions. Always remember to properly log out from credit card and banking-related Web sites.

Anyone else have any useful suggestions?
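For readers wondering what a defence looks like in practice, the standard mitigation (not something the article prescribes) is a per-session anti-CSRF token that the server embeds in its own forms and verifies on every state-changing request. A minimal sketch using Flask, with a hypothetical money-transfer endpoint:

```python
import secrets
from flask import Flask, session, request, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder value

def issue_csrf_token():
    # Generate one unpredictable token per session and reuse it in every form.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

@app.route("/transfer", methods=["POST"])
def transfer_funds():  # hypothetical state-changing endpoint
    submitted = request.form.get("csrf_token", "")
    if not secrets.compare_digest(submitted, session.get("csrf_token", "")):
        abort(403)  # the request did not originate from our own form
    # ... perform the transfer only after the token check passes ...
    return "ok"
```

Because a forged request launched from an attacker's page cannot read the victim's token, the comparison fails and the request is rejected even though the victim's session cookie is sent along.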
NASA laughs off comet fears A comet that some doomsayers had warned was going to cause big changes on Earth will in fact pass harmlessly by. So says NASA anyway, which may not ease the fears of comet believers. Geekosystem reports that comet Elenin, which is predicted to have its closest approach to Earth on Oct. 16, stirred apocalyptic end-times rumors, including the theory that NASA has engaged in a coverup about a potential "brown dwarf" effect from its passage that could change the course of celestial objects. Some believe that Elenin has already caused major earthquakes, which have been progressively stronger as the comet nears our world. The space agency set out to debunk such theories by issuing a press release. In it, NASA noted that the relatively small size of Comet Elenin and its distance from the Earth — it won't get any closer than 22 million miles away — make its upcoming passage pretty much a nonevent that will have "immeasurably minuscule influence on our planet." Connect with the FCW staff on Twitter @FCWnow.
The Economic and Social Research Council (ESRC) has just run a week-long Festival of Social Science. One of the events was an afternoon discussion on signing avatars (avatars that have been animated to produce British Sign Language (BSL) to help communication with the deaf community). The discussion split into two parts:
- Presentations on the state of the art and some examples of its use
- Discussion on the quality of the signing and what it could be usefully used for

It provided a fascinating insight into the challenges faced by developers of assistive technology (AT) in making sure that what is produced is really useful for the target audience. It is essential to understand that there is a distinct difference between the deaf and other disability groups. The Deaf are a 'community' because they communicate in their own language (BSL). I have been chastised, quite rightly, for talking about 'disabled communities', but deaf people undoubtedly see themselves as part of a community with clubs, societies, a language (with dialects) and a unique and vibrant culture.

The main research on signing avatars has been done by Prof. John Glauert at the University of East Anglia. This research has now been used by the BBC to produce educational materials for deaf children. The conversion of written English into a signing avatar has two separate challenges:
- The first is that the syntax of BSL is not the same as English, so word-by-word conversion is not appropriate.
- The second is to create an avatar that reflects the complexity of the movements used to 'speak' good quality BSL.

Most of the research has been into the second of these. Last year IBM ran an Extreme Blue project, called Say it See it (SiSi), that showed the possibilities of converting spoken English into BSL. The project used voice recognition technology, the output of which was analysed and converted into BSL syntax. This syntax was then used to instruct the University of East Anglia avatars to sign in BSL. As yet there is no indication from IBM as to whether this will be turned into a publicly available product.

The concentration on the avatar is important because, before they will be accepted by deaf people, their signing needs to be of high quality and easily understandable. The most obvious example of the improvements that have been made in avatars is the inclusion of facial expressions, which are essential to the full appreciation of the language. The level of the challenge was graphically described by one participant who used his whole body to describe a horse race in BSL. He agreed that he did this to be provocative and make a point, but it was obviously similar to a hearing person listening to a novice read word-by-word from a teleprompter versus listening to a professional Shakespearian actor. Having said that, the avatars on display obviously could do the job, and do it better than a novice reading from a teleprompter. The best examples were for children, where cartoon characters signed. These examples were interactive, the children could develop their own stories, but were also immersive as the signing was not an adjunct to the action but was an integral part of the story.

The discussion that followed was heated and a challenge for the organisers, who had to ensure that the interpreters (BSL to/from English) only had to interpret one speaker at a time. The main arguments against the avatars were:
- The quality was not good enough and there was a concern that the deaf community was being fobbed off with something sub-standard.
- Using avatars might create a standard BSL which would be less expressive and complete. - The money spent on the research could be better spent on more live interpreters. - Videos of real signers would be more useful. - Deaf people can read English (although it is a second language) and therefore using avatars for short messages might not be very effective. The main arguments for the signing avatars were: - This is still early research and the technology will improve. - The research identifies the real challenges and requirements. - The children exposed to the comic avatars really liked them and responded well to them—this could particularly be useful in adding signing to video game characters. - In an interactive context, the avatars can do what video of real signers cannot—it is possible to sequence signs from an avatar together to present information that is changing, but this is not possible via sequencing video clips. Based on these views the final question was what, if anything, signing avatars could or should be used for. The consensus of the deaf audience considered: - Interpreting (no): this is currently not possible, due to the complexities of interpretation between English and BSL—think of the "Franglais" of Google's translation. Also interpreting is normally two-way and there have been no attempts to get computers to understand BSL. - In-Vision (no): this is currently easier, cheaper, and better quality to do with interpreters than avatars. - Short information clips (probably not): examples suggested were weather forecasts and train announcements, where the ability of avatars to sequence changing information may be useful. The problem in both these cases is that written English is cheaper, useful to a wider audience, and just as easy for most deaf people. - Translation of web sites (possibly): certain web sites might lend themselves to information of this sort and the deaf community would appreciate not having to read large sections of English, although this could also be done through video. - Embedded (yes): if an avatar is an integral part of the environment of a game, cartoon or learning experience then enabling the avatar to sign as well as speak could be very attractive. Having listened to these arguments I tried to compare it with the growth of other assistive technologies, especially voice recognition and text to voice. In the early stages of these technologies quality was poor but there was a significant take up, whereas there is not a great take up of signing avatars. It appears that the difference is that, however poor the voice technologies were, they provided a significant benefit to the user. Early screen readers gave people with vision impairments access to a mass of electronic text even if it was tedious and painful to listen to. Signing avatars, if considered to be just an assistive technology, do not provide a similar benefit. So, will we be seeing more signing avatars? I am sure the answer is yes. The research will continue and the quality will improve so making more scenarios worthwhile. Embedding signing avatars into situations with other avatars will be an area of continuing growth. Signing in Second Life could be an attractive option but, even more so, having signing avatars in educational environments has potential benefits. One idea I had was to have a version of the in-flight safety video cartoons where the characters sign. 
Finally I felt that the conference showed the importance of really understanding the requirements of any special set of users and ensuring that the products fit their desires.
It could be argued that the ease with which patches can be distributed has fostered an environment of features, as opposed to an emphasis on mature development practices that inherently promoted stability and security. With today's constant stream of patches coming from so many sources, however, and the tandem needs for security and stability applying pressure, organizations can no longer afford to patch and pray.

Why Are There So Many Patches?

The point is this: different teams using different personal styles, methodologies, tools, and assumptions generate all of this code, often with little to no interaction. When you combine the various pieces of software (i.e., all compiled or interpreted code, be it embedded in firmware or run in an OS environment), the results aren't always readily predictable due to the tremendous number of independent variables. As a result, issues arise; and when development groups attempt to fix the issues, they generate software patches with all of the best intentions.

If we return to our basic principle — as software becomes increasingly complex, the number of errors in the code will rise as well — this also means that potential errors with the patches themselves will correspondingly rise. Furthermore, patches often contain third-party code, or ancillary libraries, that is not directly designed, coded, compiled, and tested by the development team in question. Simply put, there are many variables introduced with patches.

To be explicit, for the purpose of this article, patches are defined as a focused subset of code that is released in a targeted manner as opposed to the release of an entire application through a major or minor version code drop. The patch may fix a bug, improve security, or even update from one version of the application to another in order to address issues and provide new features. These days, of course, the security patch issues get the lion's share of media attention, but correcting security isn't the only reason patches are released. Regardless of the intent of a patch, the problem is that the introduction of a patch into an existing system introduces unknown variables that can adversely affect the very systems that the patches were, in good faith, supposed to help.

Organizations that apply patches in an ad hoc manner (i.e., with little or no planning taking place prior to deployment) are known to "patch and pray." This slang reflects that when patches are applied, IT must hope for the best. Interestingly, in reaction to the often-unknown impact of patching, there appears to be one school of thought wherein all patches should be applied and another that argues that patches should never be applied. It is unrealistic to view the application of patches as a bipolar issue. What groups need to focus on is the managed introduction of patches to production systems based on sound risk analysis.

It's All About Risk Management

In a perfect world, everyone would have the exact same hardware and software. This way, any new patch would install perfectly without issues. This perfect view is nearly impossible to attain on a macro/global scale, but it does serve as an interesting thought experiment. The fact is that organizations will almost always have different environments than their vendors, peers, competitors and so on. Thus, any patch applied to existing systems carries a degree of risk. Likewise, there are risks associated with not patching.
What organizations need to do is assess the level of risk of each patch, define mitigation strategies to manage the identified risks, and formally decide whether or not the risk is acceptable. To put this in the proper context, let's define a basic process for patching, because risk management is a pervasive concern throughout the whole process, but risk management by itself does not define a process.

A Basic Software Patch Process

The patching process does not need to be complicated, but it must be effective for the organization and its adoption must be formalized. Furthermore, it is absolutely critical that people be made aware that the process is mandatory. The intent is to codify a process that manages risk while allowing systems to evolve. By creating a standard process that everyone follows, best practices can also be developed over time and the process refined. With all of this in mind, here is a simple high-level process that organizations can use as a starting point in discussions over their own patch management process:

There must be active mechanisms that alert administrators that new patches exist. These methods can range from monitoring e-mails from vendors and talking to support groups, all the way to using automated tools, such as the Microsoft Baseline Security Analyzer, to actively scan systems for missing patches. These patches must be identified and added to a list of potential patches for each system.
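Purely as an illustration of the risk-analysis step (the article does not prescribe any particular tooling, and the field names and scoring below are assumptions, not an established standard), the list of candidate patches per system could be modelled with a simple record and two toy risk scores:

```python
from dataclasses import dataclass

@dataclass
class CandidatePatch:
    patch_id: str
    system: str
    fixes_security_issue: bool
    vendor_tested: bool
    rollback_available: bool

    def risk_of_applying(self) -> int:
        """Toy score: higher means riskier to apply the patch."""
        score = 0
        if not self.vendor_tested:
            score += 2      # unknown interactions with our environment
        if not self.rollback_available:
            score += 3      # no easy way back if it breaks something
        return score

    def risk_of_waiting(self) -> int:
        """Toy score: higher means riskier to defer the patch."""
        return 4 if self.fixes_security_issue else 1

pending = [
    CandidatePatch("KB000001", "mail-server", True, True, True),
    CandidatePatch("fw-2.1.7", "edge-firewall", False, False, False),
]

for p in pending:
    decision = "apply" if p.risk_of_waiting() >= p.risk_of_applying() else "defer and mitigate"
    print(p.patch_id, p.system, "->", decision)
```

The point of even a toy model like this is that the apply/defer decision is recorded and comparable across systems, rather than made ad hoc by whoever happens to be on duty.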
Researchers Build Prototype Ion Pump to Cool Chips
Published: September 7, 2006, by Timothy Prickett Morgan

Chips are too hot these days, and fans are not a terribly efficient means of cooling them. While having air-cooled computers is much preferred compared to the water-cooled past of mainframes (or the present of extreme PC users who over-clock their boxes and who have revived a variant of water cooling for the PC), air is nonetheless a relatively poor conductor of heat, and you have to use a lot of energy and make a lot of noise to move cold air to computers and hot air away from them.

Researchers at the University of Washington, Intel, and Kronos Advanced Technologies have collaborated to create a nanoscopic ion pump that can be integrated on the surfaces of chips to electrically and directly create air currents that can take heat away from chips as they run. Alexander Mamishev, an associate professor of electrical engineering at the university and the principal investigator behind the miniaturized ion pump, says that the idea has been around for years, but no one had built a working prototype.

The pump has two basic parts: an emitter, a 1 micron wire that ionizes air, and a collector, a device that, as its name suggests, is a wire a few microns away that collects the ionized air. The electric field between these two devices causes the air to move, at speeds comparable to the air moved by a whirring fan. If you want to move a lot of air, you create a chip package with a lot of emitters and collectors on it, much as you pack a lot of transistors on a chip to make a memory cell, and you program the wires to move air in stages across the chip, or you aim air flow at hot spots on the chip as different chip features are being used. (This is one of the tough problems that researchers are still trying to work out.) If you want to move a lot more air, you can also jack up the voltage. The important thing is that an ion pump with an area of a few tens of square microns can cool an area of several square millimeters. The prototype cooled a chip that was running hot on just 0.6 watts.
Every device connected to the Internet—such as your smartphone, computer, tablet, even certain high-tech household devices—must be assigned an IP address for identification and location addressing in order to communicate with other devices. With the number of new devices being connected to the Internet rapidly increasing, it's no wonder IPv4 addresses were predicted to run out. We've been talking about the cut-over from IPv4 to IPv6 for years now, and it was expected that IPv4 would run out of addresses in late 2011. With that in mind, the federal government set two deadlines for migrating to IPv6:

• All federal organizations must be IPv6-compliant by Sept. 30, 2012. This included public/external facing servers and services, such as Webmail, Domain Name Server (DNS), and Internet Service Provider (ISP) services.
• Internal client enterprise networks were given until the end of fiscal year 2014.

Well, it appears the first date has been largely ignored, much as it was back in 2005, with only a small percentage of Internet traffic adopting the new protocol. And there don't seem to be any consequences or penalties as a result of not meeting the first mandate.

So, Why Haven't Sites Migrated?

IPv6 was formally announced in 1996. There have been various initiatives to encourage people to move to it, such as the World IPv6 Launch on June 6, 2012, and announcements of IPv4 addresses running out throughout 2011. However, many companies don't see the benefits of migrating, as they already have the IP addresses they need to conduct business. Other organizations have avoided the need to migrate by using techniques such as Network Address Translation (NAT), which lets ISPs and enterprises hide their private network addresses behind a single, publicly routable, Internet-facing IPv4 address.

There are other issues associated with the use of IPv6 addressing. Not many ISPs currently support it, nor do routers or internal routing at phone companies. Even though routers, switches and other networking hardware bought within the last two years are most likely IPv6-capable, ISPs, phone companies and businesses will likely incur huge costs replacing older hardware. According to network testing specialists Ixia, ISPs and enterprises upgrading their networks to IPv6 are likely to incur costs running into hundreds of millions of dollars.

Managers may have mixed feelings about the fact that since their smartphone and tablet will have an IP address, the Internet will need to know where those devices are in order to deliver push messages. And that means there will be a record of wherever their phone has been! On the other hand, devices can roam among different networks without losing their network connectivity. In addition, sites may feel that having a public-facing IP address makes them less secure and more open to cyber attack. They may feel that any kind of migration is going to impact their everyday activities, which in turn could impact their bottom line. There's a natural reluctance to embrace change unless there's an obvious and immediate benefit. As is often the case, when weighing the benefits of a migration, more weight is given to short-term benefits and less weight is given to longer-term ones. This explains why so many sites have yet to migrate to IPv6.

The Benefits of IPv6

The first obvious benefit is that there are far more addresses available. IPv6 provides roughly 340 undecillion addresses (about 3.4 x 10^38), whereas IPv4 provides only about 4 billion.
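To put that difference in perspective, here is a short Python sketch (illustrative only, using the IPv6 documentation prefix as an example): even a single /48 site allocation contains 65,536 /64 subnets, and each /64 holds vastly more addresses than the entire IPv4 Internet.

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")   # documentation prefix, used only as an example

subnets = site.subnets(new_prefix=64)
print(f"/64 subnets in one /48: {2 ** (64 - 48):,}")   # 65,536
print(f"Addresses in one /64:   {2 ** 64:,}")           # 18,446,744,073,709,551,616
print(f"Entire IPv4 space:      {2 ** 32:,}")           # 4,294,967,296
print("First subnet:", next(subnets))
```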
Migrating to IPv6 provides these additional benefits: • IPv6 networks are easier to manage; they provide auto-configuration capabilities and are simpler and flatter. Auto configuration offers the benefit of true out-of-the-box, plug-and-play connectivity. This removes much of the burden currently felt by IPv4 network managers. IPv4 networks must be configured manually or with DHCP. Being simpler and flatter means they’re easier to manage, particularly across large installations. Also, a flat network provides more paths through the network and can maximize bandwidth and promote lower latency, which enhances performance. • Direct addressing is possible, providing end-to-end connective integrity. The large number of addresses available means there’s virtually no need for NAT. • Security with IPv6 is so much better than IPv4. Internet Protocol Security (IPSec) is built into the IPv6 protocol and is usable with a suitable key infrastructure. IPSec support was an optional feature with IPv4. IPsec allows authentication, encryption and integrity protection at the network layer. • IPv6 enables more efficient routing, because routing tables can be so much smaller, and more efficient packet processing, because of IPv6’s simplified packet header. • IPv6 provides integrated interoperability and mobility capabilities that are already widely used in network devices. • Multicasting, which allows organizations to send messages to multiple devices at once • Organizations can push information to their users. For example, your bank could push out a message telling you that your last transaction caused your account to be overdrawn.
In this course, you will gain a better understanding of what Service-Oriented Architecture (SOA) is, the impact of SOA, what it means in terms of today's systems and architectures, and how to apply the concepts in designing distributed architectures. You will explore what services and SOAs are, and what best practices and design patterns to use in designing SOA-based applications. This course presents a strong perspective on services as an essential and important part of enterprise systems, as well as how to identify, design, and develop complex services using sound analysis and design techniques and best programming practices. You will get a clear picture of how a service orientation can fundamentally change the dynamics of how software is developed and "lives" within your enterprise. You will leave the course armed with the required skills to design and lead the implementation of realistic SOA-based business application projects. You will cover advanced SOA concepts and practices for enterprise applications, and examine Enterprise Service Bus (ESB), the Business Process Execution Language (BPEL), SOAP, Web Services Description Language (WSDL), and web services.
BGP is a beautiful, simple protocol, but it's a miracle this whole thing we call the internet works in the first place. It is based ultimately on gossipy routers which freely share information in a trust-based system. There is no central authority, so internet operators have to go on what their peers tell them. Unfortunately, sometimes that information is wrong, or at least not what we intended.

In the worst case, a network can pass itself off as your AS and hijack your traffic. This could have disastrous security impacts, as traffic could be affected by a man-in-the-middle scenario, or even terminated at the hijacker, where they might mimic your destination. Think about that: everything matched. Right domain, even DNSSEC, but the IP you were using was stolen. In less malicious scenarios, you can find your traffic gets "leaked" to networks that shouldn't have a direct route to you. This can cause misdirection, impacting performance, but also has its own security implications with traffic now freely passing through unfriendly waters.

What do you do about this? The first thing, like anything, is to monitor it closely and be alerted as soon as something appears. OK, then what? If you were hijacked by someone announcing a more specific route, you can match or raise them. Otherwise you might want to swap out the prefix altogether for something not under attack. Then have a conversation with your upstream provider. Were they the one who leaked the route? Could they use their own leverage in the space to sanction the bad actor? And this isn't just you; this can and does happen to entire countries and major brands.
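The monitoring step can be sketched in a few lines of code. The example below is illustrative only: the AS number, prefixes and observations are made up, and a real deployment would consume a live BGP feed (for example from RIPE RIS or RouteViews) rather than a hard-coded list. The core check is the same, though: flag any announcement for your address space that is more specific than expected or originated by a foreign AS.

```python
import ipaddress

# Prefixes we legitimately originate, and our AS number (hypothetical values).
OUR_ASN = 64500
EXPECTED = [ipaddress.ip_network("203.0.113.0/24")]

# What a route collector reports seeing (hypothetical observations).
observed = [
    {"prefix": "203.0.113.0/24", "origin_as": 64500},    # normal announcement
    {"prefix": "203.0.113.128/25", "origin_as": 65001},  # more-specific hijack
]

def check(announcement):
    prefix = ipaddress.ip_network(announcement["prefix"])
    for ours in EXPECTED:
        if prefix.subnet_of(ours):
            if announcement["origin_as"] != OUR_ASN:
                return "ALERT: foreign origin AS"
            if prefix.prefixlen > ours.prefixlen:
                return "ALERT: unexpected more-specific announcement"
            return "ok"
    return "not our address space"

for a in observed:
    print(a["prefix"], "->", check(a))
```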
Exploring the seas, via IP Okeanos Explorer uses a satellite link to provide scientists with ocean-going data - By William Jackson - Sep 28, 2008 THE OKEANOS EXPLORER is something of a throwback to the Age of Discovery, when ships sailed the seas on missions of exploration rather than dedicated research. But it has a twist: The Okeanos will use the latest communications technology to explore the oceans and confer with scientists onshore in near-real time. The National Oceanic and Atmospheric Administration commissioned the Okeanos, which is the Greek word for ocean and pronounced with a hard 'k,' in August. It is a converted Navy surveillance vessel and the first U.S. ship dedicated to ocean exploration. 'IP plays a very important part,' said Webb Pinner, a systems engineer at NOAA's Office of Ocean Exploration and Research. 'That's the medium through which telepresence works.' Telepresence refers to the ship's system for live, near-real-time audio, video and data transmission via satellite and Internet2 to five Exploration Command Centers that will give onshore scientists the ability to participate in the ship's mission. 'With telepresence, we're no longer crowding the ship with scientists,' said Fred Gorell, a spokesman at the Office of Ocean Exploration and Research. 'There will be scientists on the ship, but they will be more technicians running the equipment.' Telepresence will enable shore-side scientists to be available as needed at the following command centers: - The Inner Space Center at the University of Rhode Island. - The Center for Coastal and Ocean Mapping/ Joint Hydrographic Center at the University of New Hampshire. - The Mystic Aquarium and Institute for Exploration in Connecticut. - NOAA's Pacific Marine Environmental Laboratory in Seattle. - NOAA's Science Center in Silver Spring, Md. NOAA's Office of Marine and Aviation Operations has a sea and air fleet that includes a number of research vessels. But exploration missions are distinct from research, which is conducted with a specific goal, tests hypotheses and builds on data already discovered. Grants typically fund research missions. Exploration is more open-ended and encompasses searches for anomalies and new information that researchers can follow up on later. Okeanos will gather a representative sampling of data from interesting discoveries and leave the exhaustive studies to researchers. Because of the nonspecific nature of the mission ' there is no way of knowing what the ship will encounter ' it is crucial to have scientists on call rather than onboard. When NOAA discovered tube worms in hydrothermal vents, a new form of marine life, off the Galapagos Islands in 1977, no marine biologists were aboard the vessel. Today, biologists will be able to examine new discoveries remotely via a video feed and advise personnel aboard the ship on their significance. 'You have the best scientists at the right time in the right place,' Gorell said. Another difference between exploration and research is that grant-funded research data typically belongs to the scientists who create it. By contrast, the Okeanos' discoveries will be public information. Okeanos was launched in 1988 as USNS Capable; it performed submarine and air surveillance and drug interdiction. Its designation as a U.S. naval ship meant that civilians, rather than military personnel, largely staffed and ran it. The vessel was transferred to NOAA in 2004, and during the next four years, the agency refitted it for ocean exploration. 
It was re-commissioned last month, and after testing at sea, the ship is expected to begin its first full field season in 2009, during which it will spend two years in the Pacific Ocean. Its equipment includes a first-of-its-kind hull-mounted multibeam sonar mapping system that can produce detailed 3-D maps of the ocean floor at depths of 4,000 meters. It will use a remote operated vehicle (ROV) to examine anomalies. That tethered system includes a high-definition camera sled that watches the main vehicle below it. The main vehicle also carries high-definition cameras and lights, sensors, manipulators, and a small 60-pound xBot that will be sent into areas where it would be difficult or unwise to send the main vehicle. "The 'x' in xBot stands for expendable," Pinner said.

To keep the ship in place while the ROV is deployed, a dynamic positioning system integrates satellite data with the ship's engines and thrusters to keep it stationary to within about five meters. Data collected by the systems will be sent to shore-based facilities via a 3.7-meter very-small-aperture terminal satellite link. SeaMobile Enterprises' MTN Satellite Services is providing satellite communications under a three-year contract for the NOAA fleet. The link provides up to 45 megabits/sec of throughput, although NOAA is paying for 20 megabits/sec for Okeanos. The signal is beamed to a ground station in California that is connected to the NOAANet Multiprotocol Label Switching network. Because the five Exploration Command Centers established by NOAA are not on the agency's network, data is delivered to them via Internet2, the United States' high-performance research and education network.

The link will offer 16 audio channels and three video channels with embedded audio. Each command center has three large plasma screens for simultaneous video feeds from Okeanos: two for video from the ROV and camera sled and a third for video from the ship's other cameras and sensors. Data from the ship's computer screens can also be sent as video so scientists can see data generated by the mapping sonar and navigation systems. The command centers use Tandberg video decoders.

Multicast system

The telepresence system will capitalize on Internet2's multicast capability to reduce bandwidth requirements. Unlike the Internet's standard unicast technology, multicast allows a single data stream to be sent to multiple endpoints. "With multicast, it doesn't matter whether you have one user or 100, it uses the same amount of bandwidth," Pinner said.

Although the command centers are interoperable, the Inner Space Center acts as the system's hub. It is unlikely that the appropriate specialists will be available at all facilities, so centers will typically take the lead in specific areas. For example, researchers at the University of New Hampshire center will likely be primarily involved in mapping activities. Although scientists at the onshore centers will be able to call the shots, or at least make suggestions, for the ROV, physical control of the vehicle will be in the hands of engineers aboard the ship because of the 0.75-second delay in video between ship and shore. As many as 360 hours of video can be archived onboard Okeanos.

The ROV control room has 16 video screens, not including computer monitors. The control room is divided into an engineering side and a science side because "what science wants and what engineering needs to safely run the vehicle sometimes clash," Pinner said.
Audio communication in the ship's control room and with the command centers onshore will take place via the 16 channels of the RTS digital intercom system from Telex Communications. Having everyone use intercom headsets reduces the amount of cross-chatter and makes it easier to converse, Pinner said. An unexpected benefit of the intercom system is a more peaceful work environment, he added. 'It helps calm everyone by not having to raise your voice to be heard.' It is impossible to predict just where Okeanos will go and what it will find. 'It's a big ocean,' Gorell said. 'It's 95 percent unknown.'
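The multicast capability described above is straightforward to demonstrate in code. Here is a minimal UDP multicast sender sketch (illustrative only; the group address and port are arbitrary values from the administratively scoped range, not anything NOAA actually uses):

```python
import socket

MCAST_GROUP = "239.1.1.1"   # administratively scoped multicast address (example)
MCAST_PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# Keep the datagrams on the local network segment for this demo.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# One send reaches every receiver that has joined the group --
# the bandwidth cost does not grow with the number of viewers.
sock.sendto(b"telemetry frame 42", (MCAST_GROUP, MCAST_PORT))
sock.close()
```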
Tatton-Brown K.,Institute of Cancer Research | Murray A.,Institute of Cancer Research | Hanks S.,Institute of Cancer Research | Douglas J.,Institute of Cancer Research | And 29 more authors. American Journal of Medical Genetics, Part A | Year: 2013 Weaver syndrome, first described in 1974, is characterized by tall stature, a typical facial appearance, and variable intellectual disability. In 2011, mutations in the histone methyltransferase, EZH2, were shown to cause Weaver syndrome. To date, we have identified 48 individuals with EZH2 mutations. The mutations were primarily missense mutations occurring throughout the gene, with some clustering in the SET domain (12/48). Truncating mutations were uncommon (4/48) and only identified in the final exon, after the SET domain. Through analyses of clinical data and facial photographs of EZH2 mutation-positive individuals, we have shown that the facial features can be subtle and the clinical diagnosis of Weaver syndrome is thus challenging, especially in older individuals. However, tall stature is very common, reported in >90% of affected individuals. Intellectual disability is also common, present in ~80%, but is highly variable and frequently mild. Additional clinical features which may help in stratifying individuals to EZH2 mutation testing include camptodactyly, soft, doughy skin, umbilical hernia, and a low, hoarse cry. Considerable phenotypic overlap between Sotos and Weaver syndromes is also evident. The identification of an EZH2 mutation can therefore provide an objective means of confirming a subtle presentation of Weaver syndrome and/or distinguishing Weaver and Sotos syndromes. As mutation testing becomes increasingly accessible and larger numbers of EZH2 mutation-positive individuals are identified, knowledge of the clinical spectrum and prognostic implications of EZH2 mutations should improve. © 2013 Wiley Periodicals, Inc. Source Schuck P.F.,Federal University of Rio Grande do Sul | Schuck P.F.,University of the Extreme South of Santa Catarina | Busanello E.N.B.,Federal University of Rio Grande do Sul | Moura A.P.,Federal University of Rio Grande do Sul | And 6 more authors. Neurochemical Research | Year: 2010 High concentrations of ethylmalonic acid are found in tissues and biological fluids of patients affected by ethylmalonic encephalopathy, deficiency of short-chain acyl-CoA dehydrogenase activity and other illnesses characterized by developmental delay and neuromuscular symptoms. The pathophysiological mechanisms responsible for the brain damage in these patients are virtually unknown. Therefore, in the present work we investigated the in vitro effect of EMA on oxidative stress parameters in rat cerebral cortex. EMA significantly increased chemiluminescence and thiobarbituric acid-reactive species levels (lipoperoxidation), as well as carbonyl content and oxidation of sulfhydryl groups (protein oxidative damage) and DCFH. EMA also significantly decreased the levels of reduced glutathione (non-enzymatic antioxidant defenses). In contrast, nitrate and nitrite levels were not affected by this short organic acid. It is therefore presumed that oxidative stress may represent a pathomechanism involved in the pathophysiology of the neurologic symptoms manifested by patients affected by disorders in which EMA accumulates. © 2009 Springer Science+Business Media, LLC. 
Comparison of the performance of polymerase chain reaction and pp65 antigenemia for the detection of human cytomegalovirus in immunosuppressed patients [Comparação do desempenho da reação em cadeia da polimerase e antigenemia pp65 para detecção de citomegalovírus humano em pacientes imunossuprimidos] Martiny P.B., Unidade de Microbiologia e Biologia Molecular | de-Paris F., Unidade de Microbiologia e Biologia Molecular | Machado A.B.M.P., Unidade de Microbiologia e Biologia Molecular | de Mello R.O., Unidade de Microbiologia e Biologia Molecular | And 4 more authors. Revista da Sociedade Brasileira de Medicina Tropical | Year: 2011

Introduction: Human cytomegalovirus (HCMV) is often reactive in latently infected immunosuppressed patients. Accordingly, HCMV remains one of the most common infections following solid organ and hemopoietic stem cell transplantations, resulting in significant morbidity, graft loss and occasional mortality. The early diagnosis of HCMV disease is important in immunosuppressed patients, since in these individuals preemptive treatment is useful. The objective of this study was to compare the performance of the in-house qualitative polymerase chain reaction (PCR) and pp65 antigenemia for detecting HCMV infection in immunosuppressed patients in the Hospital de Clínicas of Porto Alegre (HCPA). Methods: A total of 216 blood samples collected between August 2006 and January 2007 were investigated. Results: Among the samples analyzed, 81 (37.5%) were HCMV-positive by PCR, while 48 (22.2%) were positive for antigenemia. Considering antigenemia as the gold standard, the sensitivity, specificity, positive predictive value and negative predictive value for PCR were 87.5%, 76.8%, 51.8% and 95.5%, respectively. Conclusions: These results demonstrated that qualitative PCR has high sensitivity and negative predictive value (NPV). Consequently PCR is especially indicated for the initial diagnosis of HCMV infection. In the case of a preemptive treatment strategy, identification of patients at high risk for HCMV disease is fundamental and PCR can be a useful tool.

de Souza L.T., Laboratorio Of Medicina Genomica | de Souza L.T., Federal University of Rio Grande do Sul | Kowalski T.W., Federal University of Rio Grande do Sul | Ferrari J., Laboratorio Of Medicina Genomica | And 9 more authors. Oral Diseases | Year: 2016

Objectives: We investigated the association between non-syndromic oral cleft and variants in IRF6 (rs2235371 and rs642961) and the 8q24 region (rs987525) according to the ancestry contribution of the Brazilian population. Subjects and methods: Subjects with oral cleft (CL, CLP, or CP) and their parents were selected from different geographic regions of Brazil. Polymorphisms were genotyped using a TaqMan assay and genomic ancestry was estimated using a panel of 48 INDEL polymorphisms. Results: A total of 259 probands were analyzed. A TDT detected overtransmission of the rs2235371 G allele (P = 0.0008) in the total sample. A significant association of this allele was also observed in CLP (P = 0.0343) and CLP + CL (P = 0.0027). IRF6 haplotype analysis showed that the G/A haplotype increased the risk for cleft in children (single dose: P = 0.0038, double dose: P = 0.0022) and in mothers (single dose: P = 0.0016). The rs987525 (8q24) variant also exhibited an association between the A allele and the CLP + CL group (P = 0.0462). These results were confirmed in the probands with European ancestry.
Conclusions: The 8q24 region plays a role in CL/P and the IRF6 G/A haplotype (rs2235371/rs642961) increases the risk for oral cleft in the Brazilian population. © 2016 John Wiley & Sons A/S. Source Alders M.,University of Amsterdam | Al-Gazali L.,United Arab Emirates University | Cordeiro I.,Servico de Genetica | Dallapiccola B.,Ospedale Pediatrico Bambino Gesu | And 8 more authors. Human Genetics | Year: 2014 The Hennekam lymphangiectasia–lymphedema syndrome is a genetically heterogeneous disorder. It can be caused by mutations in CCBE1 which are found in approximately 25 % of cases. We used homozygosity mapping and whole-exome sequencing in the original HS family with multiple affected individuals in whom no CCBE1 mutation had been detected, and identified a homozygous mutation in the FAT4 gene. Subsequent targeted mutation analysis of FAT4 in a cohort of 24 CCBE1 mutation-negative Hennekam syndrome patients identified homozygous or compound heterozygous mutations in four additional families. Mutations in FAT4 have been previously associated with Van Maldergem syndrome. Detailed clinical comparison between van Maldergem syndrome and Hennekam syndrome patients shows that there is a substantial overlap in phenotype, especially in facial appearance. We conclude that Hennekam syndrome can be caused by mutations in FAT4 and be allelic to Van Maldergem syndrome. © 2014, Springer-Verlag Berlin Heidelberg. Source
In February last year, one of the leading internet service providers in Slovakia suffered the largest DDoS attack in the history of the country. The total volume of the attack exceeded 400 Gbps. Servers of its customers were down for tens of minutes… and not only the targeted ones. The attack wasn't identified by automated tools, and a few hours passed from its start to the successful resolution of the situation and restoration of the services. In this case, the sources of the attack were thousands of servers and various networks, including unsecured household devices, computers, printers and other "things" connected to the Internet.

With the rise of IoE/IoT, the increasing capacity of networks, massive digitization, automation and robotization, we will be facing these types of attacks more often and their consequences will be much more damaging. Attackers will make use of sophisticated, automated tools (the availability of professional attacks on the internet for several tens of EUR is well known even today). This is not marketing from security vendors. Independent authorities such as the World Economic Forum also highlight the importance of effective cyber defence. Needless to say, the WEF points out that cyber-attacks have a bigger impact and likelihood than many other risks, including terrorism. And the market is starting to react.

European law obliges ISPs to immediately eliminate a DDoS attack or neutralize the particular infrastructure segment which behaves as a source of the attack. But how do you identify it in near real time? What happens in the first tens of minutes when an important part of the critical infrastructure is down does not need an explanation – hospitals, power engineering plants, ISPs, water management and the like will become the targets of various types of cyber-attacks in the near future. Nowadays we are living in a lull before the storm. That is also the reason why European legislation is taking steps towards more directives on cybersecurity.

Early automated detection and mitigation of an attack should be one of the top priorities for ISPs. Many of them, especially the larger ones, consider using the services of both internal and external scrubbing centres, in-line tools, and out-of-path solutions for comprehensive DDoS protection. These tools are directly dependent on the early detection of DDoS.

DDoS attacks can be identified by using network behavior analysis that supports automatic flow data analysis. This way we can smartly identify long-term DDoS attacks that use different system vulnerabilities, spoofing, reflectors, amplification and similar techniques. DDoS attacks are different in nature, but their identification based on the analysis of flow data is highly feasible. Without interfering with the infrastructure, operating in an out-of-band mode, it is possible to use resources that are actually available to providers. With a suitable combination of a scrubbing centre, an out-of-path solution and support for protocols such as PBR, RTBH and BGP Flowspec, these attacks can be identified early and mitigated.

Last year, Flowmon Networks introduced a new tool, the DDoS Defender, which is a part of the Flowmon solution. It enables near real-time detection, and therefore fast mitigation, of volumetric DDoS attacks precisely on the basis of flow statistics. DDoS Defender complements the attack detection in the Flowmon ADS system. By using Flowmon you can prevent service outages caused by DDoS and avoid the unpleasant experience of the Slovak ISP.
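The underlying detection idea, flagging a volumetric anomaly from flow statistics rather than from packet payloads, can be sketched simply. The toy example below uses a moving baseline and a fixed threshold; it illustrates the principle only and is not how Flowmon's detector actually works.

```python
from collections import deque

def detect_volumetric_anomaly(bps_samples, window=12, factor=5.0):
    """Flag intervals whose traffic exceeds `factor` times the recent average.

    bps_samples: per-interval traffic volumes (bits per second) toward one target.
    """
    baseline = deque(maxlen=window)
    alerts = []
    for i, bps in enumerate(bps_samples):
        if len(baseline) == window:
            avg = sum(baseline) / window
            if avg > 0 and bps > factor * avg:
                alerts.append((i, bps))
        baseline.append(bps)
    return alerts

# Normal traffic around 2 Gbps, then a sudden 400 Gbps flood.
samples = [2e9] * 20 + [400e9] * 3
print(detect_volumetric_anomaly(samples))   # flags the flood intervals
```

Production detectors refine this with per-protocol baselines, time-of-day profiles and signatures for reflection and amplification patterns, but the input is the same flow data the provider already collects.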
<urn:uuid:8c240298-75c8-4e81-bab2-22827a299dab>
CC-MAIN-2017-04
https://www.flowmon.com/en/blog/ddos-attacks-protection
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00222-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94527
695
2.578125
3
Imagine if you had developed, built and flown a spacecraft that successfully traversed the cosmos but upon landing, spun out of control or hit something that destroyed the ship. Such nightmare scenarios are exactly what NASA engineers are developing sophisticated software technology to avoid. [RELATED: What is so infinitely cool about Mars?] NASA is currently testing one of the more important components of such software - the algorithms that incorporate the spacecraft's trajectory, speed and landing information to guide a ship to a safe arrival. The latest algorithm, known as the Fuel Optimal Large Divert Guidance algorithm (G-FOLD), is being flight-tested in conjunction with Masten Space Systems at the Mojave Air and Space Port in California. According to NASA, G-FOLD, invented at the agency's Jet Propulsion Laboratory, "autonomously generates fuel optimal landing trajectories in real time and provides a key new technology required for planetary pinpoint landing. Pinpoint landing capability will allow robotic missions to access currently inaccessible science targets. For manned missions, it will allow increased precision with minimal fuel requirements to enable landing larger payloads in close proximity to predetermined targets," NASA said. G-FOLD incorporates key information such as the maximum and minimum thrust magnitude; thrust pointing direction; glide slope to avoid surface contact during flight; and the maximum velocity to avoid supersonic flight. "Spacecraft accumulate large position and velocity errors during the atmospheric entry phase of a planetary mission due to atmospheric uncertainties and limited control authority. The powered descent phase, which is the last phase of Entry, Descent, and Landing, is when the lander makes a controlled maneuver to correct for these errors. This maneuver must be computed onboard in real-time because the state of the lander cannot be predicted at the start of powered descent phase," NASA stated. Current powered-descent guidance algorithms used for spacecraft landings are inherited from the Apollo era. These algorithms do not optimize fuel usage and significantly limit how far the landing craft can be diverted during descent, NASA said. For the current test, NASA said it used Masten's XA-0.1B Xombie vertical-launch, vertical-landing experimental rocket. To simulate a course correction during a descent to Mars, the Xombie was given a vertical descent profile to an incorrect landing point. About 90 feet into the profile, the G-FOLD flight control software was automatically triggered to calculate a new flight profile in real-time and the rocket was successfully diverted to the "correct" landing point some 2,460 feet away.
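The article lists the constraints G-FOLD handles (thrust bounds, glide slope, velocity limits). The sketch below shows, in heavily hedged form, the general shape of such a fuel-optimal powered-descent problem posed as a convex program. The vehicle numbers, the simple Euler discretization, the constant mass, the omitted minimum-thrust bound and the cvxpy formulation are all illustrative assumptions, not NASA's implementation.

```python
# Illustrative only: a generic fuel-optimal powered-descent problem of the
# kind G-FOLD solves, written as a small convex program. All parameters and
# the simplified model are assumptions, not the actual JPL formulation.
import numpy as np
import cvxpy as cp

N, dt = 60, 1.0                      # time steps and step length [s]
g = np.array([0.0, 0.0, -3.71])      # Mars gravity [m/s^2]
m = 1900.0                           # vehicle mass, held constant here [kg]
T_max = 24000.0                      # maximum thrust magnitude [N]
tan_cone = np.tan(np.radians(30.0))  # glide-slope cone half-angle from vertical

r = cp.Variable((N + 1, 3))          # position [m]
v = cp.Variable((N + 1, 3))          # velocity [m/s]
T = cp.Variable((N, 3))              # thrust vector at each step [N]

cons = [r[0] == np.array([450.0, 300.0, 1500.0]),
        v[0] == np.array([-30.0, 10.0, -75.0]),
        r[N] == np.zeros(3), v[N] == np.zeros(3)]
for k in range(N):
    cons += [r[k + 1] == r[k] + dt * v[k],                 # simple Euler dynamics
             v[k + 1] == v[k] + dt * (g + T[k] / m),
             cp.norm(T[k]) <= T_max,                       # thrust magnitude bound
             cp.norm(r[k][0:2]) <= r[k][2] * tan_cone]     # stay inside glide-slope cone

# Fuel use grows with total thrust impulse, so minimize the summed magnitudes.
prob = cp.Problem(cp.Minimize(sum(cp.norm(T[k]) for k in range(N))), cons)
prob.solve()
print("status:", prob.status, "objective:", round(prob.value, 1))
```

The published approach additionally handles mass depletion and a minimum-thrust bound through a change of variables so that the problem remains convex and can be solved quickly onboard.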
<urn:uuid:67315298-55fb-45c3-8f8a-5d3a1caad0e7>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225165/applications/nasa-details-software-algorithm-that-could-precisely-guide-future-spacecraft-landings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921701
524
3.578125
4
We explain what IaaS is and what it can do for your business. Infrastructure as a Service (IaaS) is a service model of cloud computing, alongside Platform as a Service (PaaS) and Software as a Service (SaaS). It allows businesses to rent cloud infrastructure on a pay-as-you-go service. IaaS gives businesses access to the cloud, allowing them to outsource the equipment they need to support their operations, keeping track of it using an Internet connection. This means that networks and storage for organisations can be off-premise, allowing businesses comfort in the knowledge that the complexity of managing hardware and storage falls to the cloud provider. The service provider will host and maintain the hardware, servers and storage, and this is offered to customers on an on-demand, pay per-use basis. This allows enterprise customers to create more cost efficient and scalable IT solutions. Infrastructure as a Service is sometimes called Hardware as a Service (HaaS). Businesses are given access to the virtualised components, allowing them to self-provision the infrastructure. They can then build their own IT platforms and use a web-based visual user interface that serves as an IT operations management service. Businesses are able to access cloud resource as and when they need them rather than buy, install and run hardware themselves. Major companies such as IBM and Oracle provide IaaS to clients in a number of ways. IBM offers self-service IaaS and a fully managed IaaS solution, that allows users to deploy and scale virtual servers and dedicated bare metal servers, whilst developing applications. Their self-service IaaS is called SoftLayer Infrastructure and their managed IaaS is known as the IBM SmartCloud Enterprise+. The SoftLayer Infrastructure allows businesses to have complete control over their servers, whereas IBM SmartCloud Enterprise+ is an IaaS cloud designed for critical enterprise workloads that is managed by IBM rather than the business itself. Oracle’s IaaS solution aims to provide flexibility for customers by allowing them to pay for peak CPU capacity only when necessary, and combining the security and control of on-premise solutions and the features of cloud computing. One of the main features of IaaS is scalability. Growing companies are able to scale their infrastructure according to their expansion, allowing them to be more flexible. This means that equipment and resources are available when the business needs it. It is also a cost-effective option for businesses as the company only pays for the services that it uses. There is also an aspect of security, as cloud services hosted externally with the cloud provider can benefit from physically secure data centres. One of the great things about IaaS is that it is ideal for start-ups and small businesses, as they can test their company’s solutions without investing in expensive hardware.
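Since provider APIs differ, the short sketch below only illustrates the self-service, pay-per-use style of provisioning described above; the endpoint, payload fields and token are hypothetical placeholders, not any particular vendor's API.

```python
# Hypothetical example only: the endpoint, payload fields and token are
# placeholders, not a real provider's API. It illustrates the self-service
# style of IaaS provisioning: request a virtual server, then poll until ready.
import time
import requests

API = "https://iaas.example.com/v1"            # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def provision_server(name, cpus=2, ram_gb=4, disk_gb=50):
    spec = {"name": name, "cpus": cpus, "ram_gb": ram_gb, "disk_gb": disk_gb}
    resp = requests.post(f"{API}/servers", json=spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    server_id = resp.json()["id"]

    # Poll until the provider reports the instance as running (billing starts here).
    while True:
        state = requests.get(f"{API}/servers/{server_id}", headers=HEADERS,
                             timeout=30).json()["status"]
        if state == "running":
            return server_id
        time.sleep(5)

if __name__ == "__main__":
    print("provisioned:", provision_server("web-01"))
```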
<urn:uuid:c37b45b0-8b61-4fd7-bb26-04f84173de81>
CC-MAIN-2017-04
http://www.cbronline.com/news/cloud/aas/everything-you-need-to-know-about-infrastructure-as-a-service-iaas-4172982
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961633
590
2.546875
3
We're seeing a surge in successful experiments with alternative, atom-thin materials that are going to speed up and reduce the size of computer chips. Black phosphorus is the latest super-material that promises efficiency in electronics. This one promises speed gains too. Adding the substance, commonly found in match heads and tracer bullets, to optical circuits made out of silicon increases data speeds, according to a University of Minnesota research team, as reported by Dexter Johnson in the Institute of Electrical and Electronics Engineers' IEEE Spectrum publication. Over fiber, the scientists claim they have obtained data transfer speeds of up to 3 billion bits per second—or about one high-definition movie in 30 seconds. For comparison, a two-hour HD movie can currently take around an hour to download over a commonly available residential 5 Mbits per second Internet connection. Black phosphorus is a two-dimensional material, like graphene, another few-atoms-thin material. Graphene is an ultra-thin, very strong, carbon-based miracle conductor. It's the best conductor ever found. However, black phosphorus differs from graphene in an important way: it has a band gap. Band gaps, caused by the structure of the atoms within the material, allow tuning and switching. That on-and-off switching is good in electronics; it's an essential part of a semiconductor. A band gap would allow for use in chips. I recently wrote about another band gap-enabled, atoms-thin material called silicene in an article titled, "Materials breakthrough promises smaller chips." Silicene could be used in chips, but there are some drawbacks, including that the transistors disintegrate when exposed to air. Where black phosphorus excels is that it can be used to detect light very efficiently. And as we know, light can be used to communicate. So the idea is that if you create an optical circuit to allow on-chip processor cores to communicate with each other, and can keep the circuit small and efficient, as phosphorus promises to help do, you can create more, ever more powerful and smaller processor cores on a chip. That means smaller and faster electronics overall. This on-chip photo-detection is an important element in the bet on future miniaturization of electronics. Another material called germanium has been heralded as a good way to photo-detect on chips, in other words, to do the same fast communications as black phosphorus. However, it's harder to make, or "grow," on silicon optical circuits. Black phosphorus can be grown separately and then transferred onto any material. But it's the dual use that makes the substance particularly interesting. The computer chip may not be the only beneficiary. High-speed data over fiber, sent optically, might be able to be recovered by black phosphorus photo-detectors. That's where the 3 billion bits per second HD movie experiment came from. In any case, graphene, silicene, and black phosphorus will be battling it out soon for the title of best newcomer.
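The bandwidth comparison is easy to sanity-check. The snippet below just does the arithmetic; the assumed movie sizes are illustrative, and the article's two comparisons imply somewhat different file sizes.

```python
# Back-of-the-envelope check of the figures above. Movie sizes are assumptions.
def download_minutes(size_gb, rate_bps):
    return size_gb * 8e9 / rate_bps / 60

for size_gb in (2, 10):   # nominal HD movie sizes [GB]
    print(f"{size_gb} GB movie: "
          f"{download_minutes(size_gb, 5e6):6.1f} min at 5 Mbit/s, "
          f"{download_minutes(size_gb, 3e9) * 60:5.1f} s at 3 Gbit/s")
```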
<urn:uuid:bf35f728-6bff-471b-b1e8-3f411acf6c43>
CC-MAIN-2017-04
http://www.networkworld.com/article/2895768/data-center/optical-fiber-soon-to-see-performance-gains.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933196
643
3.609375
4
Microsoft has significantly decreased its water usage in four of its data centers, leading to zero waste-water production. The primary reason for this achievement is the work done by the Data Center Advanced Development (DCAD) group, which works to reduce the resource requirements of data centers. These resources include power and water. DCAD also works to reduce carbon emissions. One of the ways in which Microsoft has reduced water wastage is by adopting air cooling. For its facilities in Dublin, Iowa, Washington and Virginia, the company has opted for air cooling, which uses a mere 1-3% of the water consumed by conventional data centers. Microsoft is targeting its backup diesel generators too. The effort to reduce these generators will come as the company develops its Data Plants, which will be data centers with on-site power generation capacity. Microsoft hopes that its Data Plants will also help it realize better reliability, lower emissions and reduced costs.
<urn:uuid:761f80b3-8114-45c2-bb65-c86d6b5a6ea1>
CC-MAIN-2017-04
http://www.datacenterjournal.com/no-waste-water-in-microsoft-data-center/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00463-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964524
194
3.328125
3
Martin Vuagnoux and Sylvain Pasini from the Security and Cryptography Laboratory (LASEC) demonstrated a way of compromising the electromagnetic emanations of wired keyboards. Wired keyboards emit electromagnetic waves because they contain electronic components. This electromagnetic radiation can reveal sensitive information such as keystrokes. Although Kuhn already tagged keyboards as risky, we did not find any experiment or evidence proving or refuting the practical feasibility of remotely eavesdropping on keystrokes, especially on modern keyboards. To determine if wired keyboards generate compromising emanations, we measured the electromagnetic radiation emitted when keys are pressed. To analyze compromising radiation, we generally use a receiver tuned to a specific frequency. However, this method may not be optimal: the signal does not contain the maximal entropy, since a significant amount of information is lost. Our approach was to acquire the signal directly from the antenna and to work on the whole captured electromagnetic spectrum. We found 4 different ways (including the Kuhn attack) to fully or partially recover keystrokes from wired keyboards at a distance of up to 20 meters, even through walls. We tested 11 different wired keyboard models bought between 2001 and 2008 (PS/2, USB and laptop). They are all vulnerable to at least one of our 4 attacks. We conclude that wired computer keyboards sold in stores generate compromising emanations (mainly because of the cost pressures in the design). Hence they are not safe for transmitting sensitive information. No doubt our attacks can be significantly improved, since we used relatively inexpensive equipment. Here are two video examples:
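The researchers work on the full captured spectrum with far more sophisticated processing than can be shown here. As a heavily hedged toy illustration of just one ingredient, the sketch below picks out keystroke-like energy bursts in a captured trace; the sample rate, window length and threshold are arbitrary assumptions.

```python
# Toy illustration only: locate keystroke-like bursts in a captured trace by
# short-time energy thresholding. Sample rate, window and threshold are
# arbitrary assumptions; the real attacks analyse the whole spectrum.
import numpy as np

def find_bursts(signal, fs, window_ms=2.0, threshold_factor=5.0):
    win = max(1, int(fs * window_ms / 1000))
    n = len(signal) // win
    # Short-time energy over non-overlapping windows.
    energy = (signal[:n * win].reshape(n, win) ** 2).sum(axis=1)
    threshold = threshold_factor * np.median(energy)
    hits = np.flatnonzero(energy > threshold)
    return hits * win / fs          # burst start times in seconds

if __name__ == "__main__":
    fs = 1_000_000                              # 1 MS/s capture (assumed)
    t = np.arange(fs) / fs                      # one second of background noise
    trace = 0.01 * np.random.randn(fs)
    for start in (0.2, 0.5, 0.8):               # inject three fake "keystrokes"
        i = int(start * fs)
        trace[i:i + 2000] += 0.2 * np.sin(2 * np.pi * 80e3 * t[:2000])
    print(np.round(find_bursts(trace, fs), 3))
```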
<urn:uuid:2ab214fd-4c20-4746-b681-42d0296b4768>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2008/10/20/video-compromising-electromagnetic-emanations-of-wired-keyboards/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937638
324
3
3
DELL EMC Glossary

Reference Architecture is a pre-engineered infrastructure solution that uses proven configurations of compute, storage, networking and virtualization. Technology is integrated, as well as standardized, to ensure better performance, greater reliability and streamlined implementation. Many organizations are now implementing Converged Infrastructure and Hyper-converged Infrastructure instead of Reference Architecture for improved simplicity that delivers faster business outcomes.

Why Should I Consider a Reference Architecture?

Businesses that require applications to be deployed efficiently often choose Reference Architectures. While it is possible to build IT infrastructure without Reference Architectures, the process requires a significant investment of time and expertise. Reference Architectures provide businesses with solutions that incorporate current best practices, sized to meet current business needs.

How Does a Reference Architecture Work?

Reference Architectures are designed and validated through extensive testing before being documented by expert engineers. Because Reference Architectures are deployed in many situations, experts incorporate adjustments from real-world experience into later versions of the reference architecture. Thus, a business benefits from the wisdom accumulated through many iterations of the reference architecture. Reference architectures are not highly packaged, and in many cases are descriptive in their approach. This means that they are flexible, allowing customers to build a solution that includes components from their preferred IT vendors.

Benefits of a Reference Architecture

Deploying a new data center solution using a do-it-yourself approach provides the ultimate in flexibility, but can be complex, costly and time-consuming. A reference architecture reduces or eliminates many of the steps required to deploy new hardware. Users can customize a solution to fit their needs and realize the benefits of predictable deployments, reduced complexity, and faster time to value.
<urn:uuid:d8503a64-dc7d-44ca-b6fd-94c732414eab>
CC-MAIN-2017-04
https://www.emc.com/corporate/glossary/reference-architecture.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923424
337
2.671875
3
Bad space weather -- everything from solar flares to geomagnetic storms and electromagnetic radiation -- has wreaked havoc on a number of satellites over the years, and much more needs to be done to protect these systems, which provide so much of our communications capability. Those were a couple of conclusions discussed by a team of Massachusetts Institute of Technology researchers looking into the impact of space weather on satellites in a paper they published in the journal Space Weather. [MORE MIT NEWS: MIT's inflatable antennae could boost small satellite communications] The MIT team analyzed space weather conditions at the time of 26 failures in eight geostationary satellites between 1996 and 2012. The researchers found that most of the failures occurred at times of high-energy electron activity during declining phases of the solar cycle. This particle flux may have accumulated in the satellites over time, creating internal charging that damaged their amplifiers -- key components responsible for strengthening and relaying a signal back to Earth, the MIT researchers stated. To undertake the study, the MIT team partnered with the London-based Inmarsat telecommunication firm to analyze some 665,000 hours of telemetry data from eight of the company's satellites, including temperature and electric-current measurements from the satellites' solid-state amplifiers. From these data, the researchers analyzed scientific space-weather data coinciding with 26 anomalies from 1996 to 2012, the majority of which were considered "hard failures" -- unrecoverable failures that may lead to a temporary shutdown of the spacecraft, MIT stated. [MORE SPACE NEWS: Colliding, exploding stars may have created all the gold on Earth] Specifically, the researchers said they analyzed what's known as the Kp index, a measurement of geomagnetic activity that is represented along a scale from zero to nine. Satellite engineers incorporate the Kp index into radiation models to anticipate space conditions for a particular spacecraft's orbit. However, as the team found, most of the amplifier failures occurred during times of low geomagnetic activity, with a Kp index of three or less -- a measurement that engineers would normally consider safe. The finding suggests that the Kp index may not be the most reliable metric for radiation exposure. The team said they found that many amplifiers broke down during times of high-energy electron activity, a phenomenon that occurs during the solar cycle, in which the Sun's activity fluctuates over an 11-year period. The flux of high-energy electrons is highest during the declining phase of the solar cycle -- a period during which most amplifier failures occurred. Over time, such high-energy electron activity may penetrate and accumulate inside a satellite, causing internal charging that damages amplifiers and other electronics. While most satellites carry back-up amplifiers, these amplifiers may also fail, MIT said. "Once you get into a 15-year mission, you may run out of redundant amplifiers," said Whitney Lohmeyer, a graduate student in MIT's Department of Aeronautics and Astronautics, in a statement. "If a company has invested over $200 million in a satellite, they need to be able to assure that it works for that period of time. We really need to improve our method of quantifying and understanding the space environment, so we can better improve design."
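The Kp analysis described above amounts to tallying anomalies by the geomagnetic conditions in effect when they occurred. The sketch below shows the shape of such a tally; the records are invented, since the real study used proprietary Inmarsat telemetry.

```python
# Illustrative sketch of the kind of tally described above: counting amplifier
# anomalies by the Kp index in effect when they occurred. The records are made up.
import pandas as pd

anomalies = pd.DataFrame({
    "satellite": ["sat1", "sat2", "sat1", "sat3", "sat2", "sat4"],
    "kp_index":  [1, 2, 3, 2, 6, 1],
})

# Bucket Kp <= 3 as "quiet" conditions, which engineers would normally treat as safe.
anomalies["conditions"] = pd.cut(anomalies["kp_index"], bins=[-1, 3, 9],
                                 labels=["quiet (Kp <= 3)", "active (Kp > 3)"])
print(anomalies.groupby("conditions", observed=False).size())
```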
<urn:uuid:37149a2e-e2bd-429f-a770-760fdd0ce959>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225383/security/mit-team-says-space-weather-has-taken-out-satellites.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00425-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943997
668
3.484375
3
NAS, which stands for "network-attached storage", refers to storage devices you connect to over a network rather than directly to a single computer. NAS storage devices see use both in homes and small businesses, meeting a wide variety of data storage needs. Like any other storage device, though, these devices can and do break down, leaving their users without access to their data. There are all sorts of components in a NAS device that can fail and cut you off from your data, including the hard drive (or hard drives) within the device. If you have a broken or malfunctioning NAS device, Gillware Data Recovery's world-class experts can assist you with all of your NAS data recovery needs.

There are many different brands and models of NAS device on the market. Many NAS device manufacturers also sell external USB hard drives and servers for businesses of all sizes. NAS devices manufactured for personal use include Western Digital World Books, which Western Digital sells alongside desktop-sized My Book external hard drives. NAS devices manufactured for use by small businesses, freelancers, or hobbyists, such as the Buffalo Terastation or Drobo 5N, are more like miniature and low-cost servers, often with four or five hard drives arranged in a RAID array.

What Makes NAS Devices Different from External Hard Drives?

The most striking difference between a NAS device such as Western Digital's World Book and a My Book external hard drive by the same manufacturer is the lack of a USB port. A normal external hard drive plugs into your computer via a USB cable, allowing you to read data from and write data to the device. Instead, a NAS device plugs into your Internet router via an Ethernet cable. You can access the data stored on the device from your computer just like any other external hard drive, but your computer doesn't have to be tethered to it. In addition, any other computer connected to your wireless router can also access the NAS device. Small NAS devices will often have a single hard drive inside them, like most external drives. But other models will have two hard drives, combined in a mirrored RAID-1 array for data redundancy. Hobbyist or freelance musicians, photographers, and video editors whose data takes up a lot of space can purchase NAS devices with four or five drives arranged in higher-level RAID configurations.

Components of a two-drive Netgear ReadyNAS, including the daughterboard, SATA expansion board, and fan. Hard drives not included.

Inside the typical external hard drive is usually a normal and unmodified hard disk drive, along with a small USB bridging dongle that plugs into the drive's SATA port. This is usually the only other piece of hardware in the device. A NAS device, by comparison, has a lot more. Connecting to your computer over the network is more complicated than connecting via USB and requires more than a simple bridging dongle. A NAS device has a CPU, RAM, and sometimes even a small fan, especially if the device has two or more hard drives in it. It's actually a miniature computer all on its own—in terms of hardware, NAS devices are in fact very similar to servers.

What Makes NAS Devices Different from Servers?

NAS devices see a lot of use in small businesses as a small and low-cost alternative to large enterprise-grade servers. Generally, NAS devices have many of the same components as a server and can perform the same function, only scaled down.
For a small business that only needs a four- or five-drive RAID-5 array to store and manage its Exchange email database or SQL database, a cheap NAS device is much more cost-effective and less budget-busting than a Dell PowerVault or PowerEdge server with a dozen hard drive bays. There is, of course, a downside to using a NAS device in lieu of a more powerful (and expensive) server. These devices are made to be cost-effective. The components they use, such as their RAM and CPU, are cheaper and less powerful than those found inside full-fledged servers, and can often fail and break down more easily than the components found in enterprise-grade server setups. This also includes the hard drives used inside the device as well. A full-fledged server will usually use enterprise-grade hard drives, which often have high-performance SCSI or SAS ports instead of the SATA ports found on consumer-grade drives. These drives also have specially-optimized firmware designed to make the hard drive better at doing the kinds of things it has to do in a server (as opposed to the hard drive in your computer). NAS devices, however, will have the same kinds of hard drives you can find inside your computer. They aren’t the best of the best, they’re not designed to run 24/7, and they likely all came off of the assembly line within minutes of each other. When one hard drive fails (which the device will often be fault-tolerant enough to withstand), a second or third hard drive failure might not be far behind. The Dangers of NAS Devices Multiple-drive NAS devices, from the personal-use devices with two-drive RAID-1 mirrors, to the mini-servers with five-drive RAID-5 arrays, have a degree of fault tolerance. When one drive fails, the array keeps going without any data loss. But if you don’t replace the failed drive, it only takes one more drive failure to cut you off from your data. A NAS device can notify its owner via email when one hard drive has failed and needs to be replaced, and can even inform the user how to contact the manufacturer to receive a replacement hard drive. However, since it’s so easy to just start using a NAS device right out of the box, many users forget to set up these alerts. When one drive fails, another can fail days, weeks, or months later, without the user even realizing that the first drive had failed at all. When the next drive fails, the whole NAS device goes belly-up. NAS devices can also fail as a result of any of their other components going bad. Think of the NAS device like a miniature computer, then imagine how many ways your computer can fail. The CPU can break down, the RAM can go bad, the operating system can become corrupted, something on the motherboard could short out… In addition to physical failure, accidents, such as an accidental quick format, can also spell doom for the data on a NAS device. Some NAS device manufacturers, such as Drobo, use proprietary filesystems in their devices, which the user doesn’t notice but can confound freely-available data recovery software tools. Regardless of the reason, when you’ve lost data from your NAS device, sometimes you really need to get it back. In these NAS data loss situations, Gillware is here for you with our NAS data recovery experts. NAS Data Recovery by Gillware When your NAS device fails, Gillware’s NAS data recovery experts can retrieve your valuable data. 
With highly-skilled hard drive repair engineers in our certified cleanroom data recovery lab and our expert logical data recovery technicians, the experts at Gillware Data Recovery can handle whatever your failed NAS device can throw at us. Hard drives have very sensitive and delicate moving parts inside them. When they fail, one or more of these delicate components are often to blame. Gillware's professional engineers can repair and replace failed hard drive components, such as read/write heads, control boards, and spindle motors. When hard disk platters become scratched or coated with debris, our state-of-the-art burnishing tools can clean the platters and save your data. Our logical technicians can recover your data after an accidental reformat or reset erases the contents of your NAS device. Even if your NAS device uses a proprietary filesystem or twists the data you write to it in an unusual way, our NAS data recovery experts can recover and piece your data back together. No matter the problem, Gillware's data recovery experts are in your corner when you need data recovered from your NAS device.

Why Choose Gillware for NAS Data Recovery?

Gillware's NAS data recovery services are completely financially risk-free. We charge no evaluation fees. We can even provide an inbound UPS shipping label for you on our dime. We don't charge you unless we successfully fulfill your data recovery goals. If we cannot meet your needs or the price quote for our NAS data recovery services just doesn't work for you, you owe us nothing. We only ask you to cover the cost of return shipping if you want your device returned to you (otherwise, we recycle your device free of charge). With prices 40-50% lower than other professional data recovery labs, Gillware Data Recovery makes our NAS data recovery services affordable as well as world-class in quality. If your small business needs your data recovered from a crashed NAS server on the double, have no fear. Gillware Data Recovery is here. With our expedited emergency data recovery services, we can complete your data recovery case and send your data back to you in as little as one or two business days.

Ready to Have Gillware Assist You with Your NAS Data Recovery Needs?
- Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.
- Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
- RAID Array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large-capacity, enterprise-grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
- Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
- SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be completely secure.
- Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel.
  All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
- We are a GSA contract holder: We meet the criteria to be approved for use by government agencies (GSA Contract No.: GS-35F-0547W).
- Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
- No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
- Our pricing is 40-50% less than our competition: By using cutting-edge engineering techniques, we are able to control costs and keep data recovery prices low.
- Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
- We only charge for successful data recovery efforts: We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
- Gillware is trusted, reviewed and certified: Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
<urn:uuid:e348a006-e08d-42f9-a24e-62e8d62599a0>
CC-MAIN-2017-04
https://www.gillware.com/nas-data-recovery-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00269-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940413
2,481
2.515625
3
In the late nineties and early part of this decade there was a marketing push around the concept of "centralization". Companies like IBM, Oracle and Sun focused on creating hardware and software platforms with single points of deployment and administration in a vain attempt to make it easier to manage your infrastructure. It quickly became apparent that, for all its marketing hype, centralization has created more problems than it has solved. In nature, most things are not centralized; they are almost always decentralized. Centralization is a human construct used to bring structure to an unstructured world. Whether an ant hill or a human body, the Sun or a galaxy, decentralization and chaos are all around us. Some may see decentralization as anarchy or chaos, but out of the chaos comes the ability to build systems whose states can evolve and adapt over time. These adaptive systems can exhibit dynamics that are highly sensitive to initial conditions and may adjust to demands placed on them. To build scalable cloud platforms, the use of decentralized architectures and systems may be our best option. The cloud must run like a decentralized organism, one without a single person or organization managing it. Like the Internet, it should allow 99 percent of its day-to-day operations to be coordinated without a central authority. The Internet is in itself the best example of a scalable decentralized system and should serve as our model. The general concept of decentralization is to remove the central structure of a network so that each object can communicate as an equal to any other object. The main benefits of decentralization are that applications deployed in this fashion tend to be more adaptive and fault-tolerant, because a single point of failure is eliminated. On the flip side, they are also harder to shut down and can be slower. For a wide variety of applications, decentralization appears to be an ideal model for an adaptive computing environment. For me, cloud computing is a metaphor for Internet-based computing, and the Internet should therefore be the basis for any cloud reference architecture. In the case of the creation of cloud computing platforms, we need to look at decentralization as a way of autonomously coordinating a global network of unprecedented scale and complexity with little or no human management. Through the chaos of decentralization will emerge our best hope for truly scalable cloud environments. This has been a random thought brought to you on a random night.
<urn:uuid:142de581-647b-4eec-822e-d352836a1051>
CC-MAIN-2017-04
http://www.elasticvapor.com/2008/11/cloud-chaos-decentralization.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00113-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937362
458
2.75
3
OData (see the Resources section for more information) is a specification for a Web API for data access to enable resources such as tables in databases to be accessed from Web browsers and mobile devices. OData specifies create, read, update, delete (CRUD), and query operations over HTTP on resources (data or applications). It also specifies the way the results are formatted in ATOM (XML) and JSON. OData is like a mini-ODBC or JDBC for the Web. More precisely, OData allows clients to construct URIs that name an entity set, filter among the entities it contains, and traverse relationships to related entities and collections of entities.

Figure 1 shows how DB2 or Informix can be exposed on the Web through their ADO.Net enablement. Microsoft Visual Studio provides the tooling to enable the database data to be exposed via HTTP on the Web. The database data can be created, updated, deleted, and queried via the OData syntax from Web browsers and other OData consumers (see the Resources section for more information).

Figure 1. OData overview

Figure 2 shows CSDL (Conceptual Schema Definition Language), which is an XML notation that describes underlying resources using an entity relationship model that can be accessed through OData.

Figure 2. CSDL (Conceptual Schema Definition Language)

CSDL is often used at development time, for example in tools or model mappers. CSDL is optional and is generated by Visual Studio to help consumer applications understand the structure of the data being exposed. CSDL is like metadata in JDBC and ODBC, helping client applications understand what they are accessing.

Exposing tables in the sample database on the web using OData

The next sections describe in detail how you can do the following.
- Perform initial setup, such as defining the connection to the database.
- Create an ADO.Net entity model.
- Select the DB2 tables that will be exposed via OData.
- Create an OData service (WCF service) for the selected tables.
- Test the OData service.
The description uses the OData runtime that is incorporated into Microsoft Visual Studio. However, once the support has been tested in Visual Studio, other OData runtimes can be used. See the Resources section for more information.

Creating a new web site
- Start Visual Studio. From the File menu, click New Web Site. The New Web Site dialog box is displayed.
- In the left pane under Installed Templates, select Visual C#.
- In the center pane, select ASP.NET Dynamic Entities Web Site.
- In the Web location box, select File System and then enter the name of the folder where you want to store the pages of the web site. For example, type the folder name C:\WebSites\DB2OData and click OK. Visual Studio creates the web site, as shown in Figure 3.
Figure 3. A new web site

Adding a data connection to Server Explorer
- From the Tools menu, select Connect to Database. The Add Connection dialog box is displayed.
- Click IBM DB2 and IDS Servers, as shown in Figure 4, and then click Continue.
Figure 4. Adding data connection to Server Explorer
- As shown in Figure 5, in the Select or enter server name box, enter 127.0.0.1/localhost, or the hostname. In the User ID box, type db2admin. In the Password box, enter your password. Select the Save my password check box.
Figure 5. Connection information
- In the Select or enter a Database Name box, type a name for the database, such as SAMPLE.
- Click Test Connection, and then click OK.
- As shown in Figure 6, in Server Explorer, you can optionally expand Data Connections and view the database tables.
Figure 6. Database tables Adding data to the web site and creating a new ADO.NET entity data model An entity data model is required to expose DB2 data using WCF data services. Perform the following steps to add data to the site and create the ADO.NET entity data model. - In Solution Explorer, right-click the project and then click New Item. The Add New Item dialog box is displayed. - In the left pane under Installed Templates, select Visual C#. In the center pane, select ADO.NET Entity Data Model. - In the Name box, type a name for the database model. For example, enter the name DB2.edmx, and then click Add, as shown in Figure 7. Figure 7. Create entity data model - When prompted, click Yes. - From the Entity Data Model Wizard, select Generate from database, and then click Next, as shown in Figure 8. Figure 8. Entity data model wizard The Choose Your Data Connection dialog box is displayed. - In the drop-down list, select the connection that you configured previously. For example, db2admin@SAMPLE. Click Yes to include the sensitive data (user name and password) in the connection string, and then click Next, as shown in Figure 9. Figure 9. Choose your data connection The Choose Your Database Objects dialog box appears. - Click the triangle to expand the Tables node. Select the check box for DEPARTMENT and EMPLOYEE tables, and then click Finish, as shown in Figure 10. Figure 10. Choose your data objects - The DB2.edmx page will appear with the new Entity Data Model, as shown in Figure 11. Figure 11. DB2.edmx page Registering the data content - As shown in Figures 12 and 13, from the Solution Explorer, open the Global.asax file. - Uncomment the line that contains the DefaultModel.RegisterContext method. - Set the context type to SAMPLEModel.SAMPLEEntities, and set the variable ScaffoldAllTables to True. Figure 12. Global.asax (DefaultModel.RegisterContext method - commented) Figure 13. Global.asax (DefaultModel.RegisterContext method - uncommented) - Save the Global.asax file. Adding WCF data service - From the Solution Explorer, right-click the project name DB2OData. The Add New Item dialog box is displayed. - Under Installed Templates, in the left pane, select Visual C# and in the center pane, and then click WCF Data Service. - In the Name box, enter a name for the data service, such as WcfDataService.svc, and then click Add, as shown in Figure 14. Figure 14. Adding WCF Data Service Configure the WCF data service - As shown in Figure 15, in the WcfDataService.cs file, replace the code comments /* TODO: put your data source class name here */ with SAMPLEModel.SAMPLEEntities. Figure 15. Data source class name and MyEntitySet - Uncomment the code containing the config.SetEntitySetAccessRule. Replace MyEntitySet with an asterisk "*", as shown in Figure 16. Figure 16. Change code for WcfDataService.cs Testing the WCF data service - To run the application, from the Debug menu, click Start Debugging. If prompted to enable debugging, click OK. - In the web browser, enter a URI to return all of the records from a DB2 table through the data service. - Enter http://localhost:15452/DB2OData/WcfDataService.svc/ to view the Entities that have been included, as shown in Figure 17. Figure 17. WcfDataService.svc To see the text of this file, view the WcfDataService.svc.txt file in the download with this article. to view the Department table, as shown in Figure 18. Figure 18. Department table (View a larger version of Figure 18.)See the download for the text of this figure. 
- To search for Employee ID = 000010, paste in the URL, as shown in Figure 19. Figure 19. Employee table (View a larger version of Figure 19.)See the download for the text of this figure. - You can use Add-Ons (like Bamboo for Firefox) to view the formatted XML data, as shown in Figures 20 and 21. Figure 20. Formatted employee table Figure 21. Formatted department Table This article showed you how to expose DB2 data over HTTP via OData using Microsoft Visual Studio, making it possible to access DB2 from mobile devices and Web browsers. OData libraries exist for a number of mobile devices (see the Resources section for more information). Similar OData support is available for Informix and DB2 on z/OS. Currently, temporal and XML data are not supported through Visual Studio. - Learn more about DB2 and .Net. - Visit the Data Developer Center to learn more about OData. - Learn more about Susan Malaika and her publications by visiting her blog. - Read the "OData" developerWorks article to learn more about OData. - Learn more about OData Specifications. - Learn more about OData Consumers. - Learn more about Deploying WCF Services. - Visit the developerWorks Information Management zone: Find more resources for DB2 developers and administrators. - Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics. - Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools as well as IT industry trends. - Follow developerWorks on Twitter. - Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers. Get products and technologies - Build your next development project with IBM trial software, available for download directly from developerWorks. - Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently. - Participate in the discussion forum. - Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
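As a hedged illustration of the OData URI conventions exercised in the testing steps above, the sketch below queries the example service from a script rather than a browser. The host and port match the article's test URL; the $filter and $top options and the EMPNO property are assumptions based on the standard DB2 SAMPLE schema, not a reproduction of the exact URL shown in the article's figures.

```python
# Hedged sketch: query the WCF Data Service created above using plain OData
# URI conventions. Query options and the EMPNO property are assumptions.
import requests

SERVICE = "http://localhost:15452/DB2OData/WcfDataService.svc"

# Service document: lists the entity sets exposed (DEPARTMENT, EMPLOYEE, ...).
print(requests.get(SERVICE + "/").text[:400])

# OData query options: filter one employee, take the first three departments.
emp = requests.get(SERVICE + "/EMPLOYEE", params={"$filter": "EMPNO eq '000010'"})
dep = requests.get(SERVICE + "/DEPARTMENT", params={"$top": "3"})
print(emp.status_code, dep.status_code)
```

By default the responses come back as ATOM (XML), as noted in the overview; JSON can be requested through content negotiation where the service supports it.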
<urn:uuid:d69d9a61-5f28-4042-96d9-de91013a9e9f>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/data/library/techarticle/dm-1205odata/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.735865
2,218
3.09375
3
The unmanned X-51A is expected to fly autonomously for five minutes, after being released from a B-52 Stratofortress off the southern coast of California. The Waverider is powered by a supersonic combustion scramjet engine, and will accelerate to about Mach 6 as it climbs to nearly 70,000 feet. Once flying, the X-51 will transmit vast amounts of data to ground stations about the flight, then splash down into the Pacific. There are no plans to recover the flight test vehicle, one of four built, the Air Force stated. "In those 300 seconds, we hope to learn more about hypersonic flight with a practical scramjet engine than all previous flight tests combined," said Charlie Brink, X-51A program manager with the Air Force Research Laboratory's Propulsion Directorate. Since scramjets are able to burn atmospheric oxygen, they don't need to carry large fuel tanks containing oxidizer like conventional rockets, and are being explored as a way to more efficiently launch payloads into orbit. The longest previous hypersonic scramjet flight test, performed by a NASA X-43 in 2004, was faster, but lasted only about 10 seconds and used less logistically supportable hydrogen fuel, the Air Force stated. Hypersonic combustion generates intense heat, and routing of the engine's own JP-7 fuel will help keep the engine operating properly, the Air Force stated. As the scramjet engine ignites, it will initially burn a mix of ethylene and JP-7 jet fuel before switching exclusively to JP-7. The Air Force describes the X-51 as virtually wingless, designed to ride its own shockwave. The heart of the system is its Pratt & Whitney Rocketdyne SJY61 scramjet engine, but other key technologies will be demonstrated including thermal protection system materials, airframe and engine integration, and high-speed stability and control. The Air Force said this will be the only hypersonic flight attempt this fiscal year, a change from the original test plan, which was to fly in December 2009 and then three more times in 2010. The X-51A WaveRider program is a joint effort by the Air Force, Defense Advanced Research Projects Agency (DARPA), Pratt & Whitney Rocketdyne, and Boeing. The X-51 isn't the only hypersonic research going on. Just last week, the Air Force said it was looking to develop what it called the Next Generation Thermal Protection System (TPS). The project looks to develop all manner of advanced thermal protection technology, from ceramics to hybrid materials, that, when combined with vehicle designs, will enable efficient supersonic and hypersonic systems, the Air Force stated. Advanced materials and concepts which are highly durable, highly capable, highly supportable/maintainable, structurally efficient, extremely lightweight, and affordable are sought, the Air Force said. NASA and the Air Force said they would be offering up to $35 million to help fund research that could ultimately develop aircraft that can fly at five times the speed of sound or faster. Such hypersonic aircraft face myriad trajectory control, propulsion and heat-related issues akin to what a spacecraft would endure, experts say. NASA is also looking into developing hypersonic air or spacecraft that could travel in the Earth's atmosphere or between here and other planets. The space agency recently announced a $45 million contract with longtime partner ERC Inc. for just such space vehicle research.
<urn:uuid:5c7a1386-7728-4e0b-a4a0-b9c73ff91fb6>
CC-MAIN-2017-04
http://www.networkworld.com/article/2230794/security/air-force-set-to-fly-mach-6-scramjet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945603
730
3.1875
3
Kaspersky Lab invites contributors to help solve the mystery of Gauss’s encrypted payload 14 Aug 2012 Kaspersky Lab recently announced the discovery of Gauss, a complex, nation-state sponsored cyber-espionage toolkit. Gauss contains many info-stealing capabilities, with a specific focus on browser passwords, online banking account credentials, and system configurations of infected machines. Kaspersky Lab’s experts discovered Gauss by identifying the commonalities the malicious program shares with Flame. Since late May 2012, more than 2,500 infections have been recorded by Kaspersky Lab’s cloud-based security system, with the majority of infections found in the Middle East. Kaspersky Lab’s experts published a research paper about Gauss that analyzed its primary functions and characteristics, in addition to its architecture, the malware’s unique modules, communication methods, and its infection statistics. However, several mysteries and unanswered questions about Gauss still remain. One of the most intriguing aspects is related to Gauss’s encrypted payload. The encrypted payload is located in Gauss’s USB data-stealing modules and is designed to surgically target a certain system (or systems) which have a specific program installed. Once an infected USB stick is plugged into a vulnerable computer, the malware is executed and tries to decrypt the payload by creating a key to unlock it. The key is derived from specific system configurations on the machine. For instance, it includes the name of a folder in Program Files which must have its first character written into an extended character set such as Arabic or Hebrew. If the malware identifies the appropriate system configurations, it will successfully unlock and execute the payload. “The purpose and functions of the encrypted payload currently remain a mystery,” said Aleks Gostev, Chief Security Expert, Global Research and Analysis Team, Kaspersky Lab. “The use of cryptography and the precautions the authors have used to hide this payload indicate its targets are high profile. The size of the payload is also a concern. It’s big enough to contain coding that could be used for cyber-sabotage, similar to Stuxnet’s SCADA code. Decrypting the payload will provide a better understanding of its overall objective and the nature of this threat.” Kaspersky Lab would like to invite anyone with an interest in cryptography, reverse engineering or mathematics to help find the decryption keys and unlock the hidden payload. More details and a technical description of the problem are available in our blogpost at Securelist.com
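The key-derivation scheme described above can be illustrated with a hedged sketch. It is not Gauss's actual algorithm, only the general idea: derive a candidate key from machine-specific strings (such as a Program Files folder name), validate it against a stored hash, and decrypt only on a match. The iteration count, salt handling and validation step here are assumptions.

```python
# Illustrative sketch only, NOT Gauss's actual scheme: derive candidate keys
# from machine-specific strings plus a salt, and unlock only when a hash of
# the candidate matches a stored check value. All constants are placeholders.
import hashlib

STORED_CHECK = bytes.fromhex("00" * 32)   # placeholder validation hash
SALT = b"example-salt"                    # placeholder salt

def derive_key(config_string: str, iterations: int = 10_000) -> bytes:
    digest = config_string.encode("utf-8") + SALT
    for _ in range(iterations):           # repeated hashing slows brute force
        digest = hashlib.md5(digest).digest()
    return digest

def try_unlock(program_files_entries):
    for name in program_files_entries:
        key = derive_key(name)
        if hashlib.sha256(key).digest() == STORED_CHECK:
            return key                     # only a matching system reaches decryption
    return None

print(try_unlock(["Common Files", "Some App", "\u0645\u062b\u0627\u0644"]))
```

Because the key never appears in the malware itself, analysts who lack a machine with the targeted configuration are left brute-forcing or crowdsourcing candidate inputs, which is exactly what Kaspersky Lab is asking for help with.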
<urn:uuid:2a0683ff-2613-4dc3-bd7c-a27aecec1685>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2012/Kaspersky_Lab_invites_contributors_to_help_solve_the_mystery_of_Gauss_encrypted_payload
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00233-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93135
531
2.640625
3
Business Continuity (BC) is an integrated, enterprisewide process that includes all activities—both internal and external to IT—that a business must perform to mitigate the impact of planned and unplanned downtime. This entails preparing for, responding to and recovering from a system outage that adversely impacts business operations. The goal of BC is to ensure the availability of information required to conduct essential business operations. Information Availability (IA) refers to the ability of an IT infrastructure to function according to business expectations during its specified period of operation. When discussing IA, we need to ensure that:
• Information is accessible at the right place to the right user (accessibility)
• Information is reliable and correct in all aspects (reliability)
• Information is available at the exact moment it is required (timeliness).
Various planned and unplanned incidents result in information unavailability. Planned outages may include installations, maintenance of hardware, software upgrades/patches, restores and facility upgrade operations. Unplanned outages include human error-induced failures, database corruption and failure of components. Other incidents that may cause information unavailability are natural and/or man-made disasters such as floods, hurricanes, fires, earthquakes and terrorist incidents. The majority of outages are planned; historically, statistics show that unforeseen disasters account for less than 1 percent of information unavailability. Information unavailability (downtime) results in loss of productivity and revenue, poor financial performance and damage to a business's reputation. The Business Impact (BI) of downtime is the sum of all losses sustained as a result of a given disruption. One common metric used to measure BI is the average cost of downtime per hour. This is often used as a key estimate in determining the appropriate BC solution for an enterprise. Figure 1 shows the average cost of downtime per hour for several key industries.

How Do We Measure IA?

IA relies on the availability of both physical and virtual components of a data center; failure of these components may disrupt IA. A failure is defined as the termination of a component's capability to perform a required function. The component's capability may be restored by performing some sort of manual, corrective action; for example, a reboot, repair or replacement of the failed component(s). By repair, we mean that a component is restored to a condition that enables it to perform its required function(s). Part of the BC planning process should include a proactive risk analysis that considers the component failure rate and average repair time:
• Mean Time Between Failure (MTBF) is the average time available for a system or component to perform its normal operations between failures. It's a measure of how reliable a hardware product, system or component is. For most components, the measure is typically in thousands or even tens of thousands of hours between failures.
• Mean Time To Repair (MTTR) is a basic measure of the maintainability of repairable items. It's the average time required to repair a failed component. Calculations of MTTR assume that the fault responsible for the failure is correctly identified, and the required spare parts and personnel are available.
We can formally define IA as the period during which a system is in a condition to perform its intended function upon demand.
IA can be expressed in terms of system uptime and system downtime, and measured as the amount or percentage of system uptime:

IA = system uptime / (system uptime + system downtime)

where system uptime is the period of time during which the system is in an accessible state; when it isn't accessible, it's termed system downtime. In terms of MTBF and MTTR, IA can be expressed as:

IA = MTBF / (MTBF + MTTR)
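A quick numeric illustration of these formulas follows; the MTBF and MTTR figures and the hourly downtime cost are example values only.

```python
# Numeric illustration of the availability formulas above; all inputs are examples.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

mtbf, mttr = 8760.0, 4.0          # e.g. one failure a year, four hours to repair
a = availability(mtbf, mttr)
downtime_h_per_year = (1 - a) * 8760
print(f"availability: {a:.5f} ({a * 100:.3f}%)")
print(f"expected downtime: {downtime_h_per_year:.1f} h/year")
print(f"business impact at $100,000/h: ${downtime_h_per_year * 100_000:,.0f}")
```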
<urn:uuid:65d4ebe3-af86-4e2a-bc9a-2fb148dbc47e>
CC-MAIN-2017-04
http://enterprisesystemsmedia.com/article/the-new-paradigm-of-business-continuity-disaster-recovery-and-continuous-av
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92678
763
3.171875
3
Social networking sites have taken the world by storm and continue to find their way into the workplace. While these sites can work as an effective promotional tool for any organization, they also pose a series of problems, including not only cyberslacking but also malware attacks.

Alarming results from the Google Safe Browsing Diagnostic Tool

The Google Safe Browsing Diagnostic Tool showed some alarming results when various popular social networking sites were diagnosed. In July 2010, Facebook.com was victim to malicious software including 133 scripting exploits, two trojans and one worm. Also, successful infection resulted in an average of 4 new processes on the target machine. Even worse, Twitter.com was also victim to malware including 4724 scripting exploits, 3727 trojans and 216 exploits in July 2010. Clearly, social networking sites are unsafe and organizations need to find an immediate solution to this problem.

What is at stake?

Attackers see social networking sites as an excellent opportunity to spread malware. Short URLs make it easy for malware creators to mask links to infected sites and send users to websites they would usually think twice before visiting. If the corporate network gets infected by any type of malware, it can disrupt a good share of the organization's productivity and hurt its bottom line. To mention a few of the consequences, sales and orders could be lost, the provision of products and service packages could be interrupted and a number of important processes might not be performed in time. This could also lead to legal repercussions and risks in cases where personal information was saved in the databases and was meant to be kept secure.

What can companies do?

Unfortunately, a number of SMBs (small and medium-size businesses) only recognize lost productivity – when employees spend a significant amount of time browsing non-work-related sites – as these social networking sites' primary flaw. Because of this, organizations end up either blocking them completely or setting up usage policies without any controls put in place. Blocking access has proven to be counterproductive, as studies, such as the one by ENGAGEMENTdp (2009), have shown that the most valuable brands in the world are experiencing a direct correlation between top financial performance and deep social media engagement. On the other hand, usage policies are essential but not enough to protect the corporate network from malware attacks. Businesses need to establish a good web filtering and security solution to protect their network from such risks. For instance, GFI WebMonitor offers multiple virus scanners which can scan for hidden downloads and prevent employees from inadvertently downloading malicious software, reducing the average time taken to obtain the latest virus signatures and decreasing the risk each new virus poses to the organization. In this way employees will not feel that their Internet access is restricted while safety measures are still being taken.
The human race is spending more and more time inputting information into electronic devices of all types. So it is important that we find easier, faster and more accurate ways of transferring the information from our heads to our electronic beasts. Using a keyboard has been the way to do this since the beginning of the computer age. More recently, voice recognition has taken off but still accounts for a small percentage of the information entered. Video cameras and audio recorders now account for most of the new content but do not displace much of the text content being produced. Gestures are the latest input method but are really only used for controlling the device, not for input, although we might see some simple gestures for: hello, goodbye, yes, no, etc. Thought transference is in the labs but it will be some considerable time before I can think this sentence and then see it on the screen. All of this suggests that typing is going to remain a major method of input to electronic devices for years to come. To make matters worse, devices are getting smaller so that a full size QWERTY keyboard becomes impractical. Tablets and smartphones with touch screens do not even give any tactile feedback, although this may change in the next few years. So, as typing is going to remain and the physical interface is not going to improve, how can we make it easier, faster and more accurate? Predictive text has been around, especially for 12-key telephone input, for some years but has been of limited use because the predictions were often not right and just got in the way. KeyPoint Technologies (KPT) have extended the concept of predictive text technology with new methods and greater intelligence, to such a degree that typing the 1,700-odd characters above should require fewer than 500 key presses. With that increase in speed we should all become more productive and on-screen keyboards would become an acceptable input device for more than just a quick note. Hence helping to narrow and bridge the gap, what KPT describe as 'the chasm of inutility', between the desires of the users and the capabilities of the input devices. To promote the technology KPT has announced the Open Adaptxt engine; this is an open source version of the engine freely available for a variety of mobile platforms. What does the engine do that makes it so much more productive than standard predictive text? There is a collection of techniques which includes:
- Intelligent prediction. As you type it will predict the word you are typing not just by the letters you typed but also by the context of the sentence and your personal word usage. This greatly increases the chance that the word you are trying to type will be in the prediction list and will require fewer characters to be typed. Further, it will predict the next word before you even start typing; it can also predict whole phrases when that would be helpful.
- Intelligent error processing. If you type a word that is not recognised it will provide a list of alternatives. If a QWERTY keyboard is being used these alternatives will include those that would occur because of typical typing errors; for example letters typed in the wrong order, or adjacent letters ('a' instead of 's'). It can also automatically correct the word when you press space and will deal with capitalisation of proper names and acronyms.
There are further methods for specific issues that complete the engine.
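To make the error-processing idea concrete, here is a deliberately small sketch (it is not the Adaptxt engine, whose internals are far more sophisticated; the neighbour map and word list are invented) of how a corrector might generate candidate words from adjacent-key substitutions and transposed letters, then keep only those found in a dictionary:

    # Illustrative toy model only.
    ADJACENT = {
        "a": "qwsz", "s": "awedxz", "d": "serfcx", "i": "ujko",
    }
    DICTIONARY = {"said", "sad", "dais", "aids"}

    def candidates(word):
        found = set()
        # Adjacent-key substitutions ('a' typed instead of 's', etc.)
        for i, ch in enumerate(word):
            for repl in ADJACENT.get(ch, ""):
                found.add(word[:i] + repl + word[i + 1:])
        # Neighbouring letters typed in the wrong order ('siad' for 'said')
        for i in range(len(word) - 1):
            found.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
        return found & DICTIONARY

    print(candidates("siad"))   # {'said'}

A real engine would rank such candidates by sentence context and personal usage, as the article describes, rather than simply filtering against a word list.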
Adaptxt is being marketed as a general purpose solution that should benefit all users by speeding up text entry from a keyboard. However, it should be of particular interest to users with limited dexterity who type slowly and are more likely to hit the wrong key. In fact it was originally developed to help a relative, who had lost an arm, to be able to type more easily. I am keen to see examples of Adaptxt being built-in to applications and will write about them and hopefully with them soon.
Part 68 of the FCC Rules contains detailed technical rules that were adopted to limit hazardous voltages that might be produced by customer-owned telephones, in order to protect the health and safety of telephone company employees. But it was quickly expanded at the request of telephone companies to prevent customers from bypassing the telephone network billing systems. Because these provisions are part of the Code of Federal Regulations, amending them requires a rulemaking proceeding to comply with the Administrative Procedures Act. It typically takes the FCC about two years or so to adopt rule changes from the time a Petition for Rulemaking is filed. This combination of detailed technical specifications and the administrative difficulty of making changes has constrained the introduction of beneficial new technologies into the telephone network. For example, many new private telephone switching systems use telephone sets that communicate with the switch using digital control channel signals; this permits a variety of new services, such as allowing a telephone set to change its identity so that a telephone number can follow an employee as he moves through the building. But these digital telephone sets cannot be connected directly to the telephone network, because their digital signals do not conform to Part 68. The private switch filters out the control channel and other digital signals before any calls are connected into the public telephone network. Part 68 has prevented residential subscribers from using these new digital phones, because we don't have switches to filter out the digital signals. But the FCC prefers to believe that Part 68 has brought nothing but benefits to telephone subscribers, in the form of increased competition in the supply of telephone sets. The FCC intends to impose a similar regime on the cable industry. The FCC's goal is to promote customer ownership of cable boxes, and one way to do this is to assure "portability." This means a set-top box that works in one cable system must also work if the subscriber moves to another city. The cable industry does use one standard connector, and it does use one standard channel plan (three of them, actually, but that's close enough for government work). But a cable box that works in one city won't work in another city because the security and system designs - scrambling methods, channel capacities and control channel specifications - are different. Descrambler authorization messages, channel tuning data and other commands and messages are transmitted from the headend to set-top boxes over a control channel. The frequency, bandwidth, modulation, data rate and internal structure of the control channel vary from manufacturer to manufacturer, and from one model to another from the same manufacturer. To thwart cable pirates, the structure of the data within the control channel is a closely held secret. Set-top box portability would require a standardized control channel, and the data structure of the channel would have to be published in Part 68 or its cable TV equivalent. This would simplify the pirate's attacks on the security of addressable cable systems. The FCC evidently wants more than just set-top portability, it wants a competitive supply of set-top boxes. This means no more proprietary boxes for services like the Sega Channel. A generic game box would have to be used. And I guess we will wind up with a single, generic, program guide service. 
Too bad, StarSight.

Signal leakage

Customer ownership of set-top boxes and inside wiring will lead to more signal leakage problems. Today, the cable operator is responsible for eliminating leakage, even if the customer uses a lamp cord to carry the video signal from one room to another. The operator must, as a last resort, disconnect the subscriber from the network, if that's what it takes to eliminate a leak. The FCC does recognize that telephone companies will soon be installing broadband networks that could create new leakage problems. Until now, telephone networks have not used frequencies that would cause interference if they leaked. Satellite master antenna systems have operated on critical frequencies, and informal information in the cable industry suggests that SMATV systems have been responsible for serious signal leaks. But the FCC has never considered SMATVs to be enough of a problem to impose the same stringent standards that cable systems must meet. Cable signal leakage can be a serious threat if it interferes with aeronautical communications. For this reason, the FCC would be expected to impose the same leakage rules on telco broadband systems that now apply to cable. But there is no indication that the FCC has thought about the signal leakage implications of customer ownership of set-top boxes. Maybe someone will point this out. Or maybe we should rely on Circuit City to send a crew around to track down leaks. The FCC is pursuing an industrial policy to change cable TV business practices. But it's trying to fix something that isn't broken.
The Tera-scale approach is a radical change from Intel's Xeon 5100, which uses two complex processor cores. But one of the driving forces behind the Tera-scale research is the fact that chip transistor counts, already in the billions, will continue to double over time. The rising transistor numbers give Intel the option to go with large numbers of smaller cores without radically increasing chip area. Thus far, Intel and others have used the extra transistors to create more complex chips with larger onboard memory caches. But while the current approach brings increases in instruction processing or work done per clock, that doesn't mean you're getting a commensurate increase in terms of overall efficiency, said Steve Pawlowski, chief technology officer for Intel's Digital Enterprise Group in Hillsboro, Ore. "One of the ways to get efficiency is you make the cores simpler and you do a lot of them and put them on the die. That's where Tera-scale is coming in. We're saying, 'Hey, for a certain class of workloads, you can take advantage of this parallelism. You can have extremely efficient architectures because you can use more of [the cores].'" Shifting toward lots of simple cores (trading two Woodcrest cores for tens of 386-style cores) would greatly increase a chip's parallel processing abilities, and thus offer more performance, analysts agreed. But it brings its own issues. "The bigger question is, how do you take advantage of such a system?" said Dean McCarron, principal analyst with Mercury Research, in Cave Creek, Ariz. "Not everything lends itself to that [many threads]. But, that said, everybody seems to be in agreement that this is the path we're pretty much forced to go down." But programming for Tera-scale chips will require a completely different approach that uses lots of different threads simultaneously. That's a concept only a few programmers are currently familiar with, Pawlowski said. Intel chips will approach 32 billion transistors by the end of the decade, researchers said.
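The kind of workload decomposition the article is pointing at can be sketched in a few lines (my illustration, not Intel's; the computation is a stand-in), splitting a large, independent-item job across many simple workers rather than relying on one or two heavyweight cores:

    from multiprocessing import Pool

    def work(item):
        # Stand-in for a small, independent unit of computation.
        return item * item

    if __name__ == "__main__":
        items = range(1_000_000)
        # Throughput comes from running many simple workers in
        # parallel, not from making any single worker faster.
        with Pool(processes=8) as pool:
            results = pool.map(work, items, chunksize=10_000)
        print(sum(results))

Workloads with heavy data dependencies do not split this cleanly, which is exactly the programming-model problem the analysts quoted above are worried about.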
What is a Linear Feedback Shift Register?

A Linear Feedback Shift Register (LFSR) is a mechanism for generating a sequence of binary bits. The register (see Figure 2.6) consists of a series of cells that are set by an initialization vector that is, most often, the secret key. The behavior of the register is regulated by a counter (in hardware this counter is often referred to as a "clock"). At each instant, the contents of the cells of the register are shifted right by one position, and the XOR of a subset of the cell contents is placed in the leftmost cell. One bit of output is usually derived during this update procedure. LFSRs are fast and easy to implement in both hardware and software. With a judicious choice of feedback taps (the particular bits that are used; in Figure 2.6, the first and fifth bits are "tapped") the sequences that are generated can have a good statistical appearance. However, the sequences generated by a single LFSR are not secure because a powerful mathematical framework has been developed over the years which allows for their straightforward analysis. However, LFSRs are useful as building blocks in more secure systems. A shift register cascade is a set of LFSRs connected together in such a way that the behavior of one particular LFSR depends on the behavior of the previous LFSRs in the cascade. This dependent behavior is usually achieved by using one LFSR to control the counter of the following LFSR. For instance, one register might be advanced by one step if the preceding register output is 1 and advanced by two steps otherwise. Many different configurations are possible and certain parameter choices appear to offer very good security. For more detail, see an excellent survey article by Gollman and Chambers [GC89]. The shrinking generator was developed by Coppersmith, Krawczyk, and Mansour [CKM94]. It is a stream cipher based on the simple interaction between the outputs from two LFSRs. The bits of one output are used to determine whether the corresponding bits of the second output will be used as part of the overall keystream. The shrinking generator is simple and scalable, and has good security properties. One drawback of the shrinking generator is that the output rate of the keystream will not be constant unless precautions are taken. A variant of the shrinking generator is the self-shrinking generator [MS95b], where instead of using one output from one LFSR to "shrink" the output of another (as in the shrinking generator), the output of a single LFSR is used to extract bits from the same output. There are as yet no results on the cryptanalysis of either technique.
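The shift-and-feedback update described above, and the way a shrinking generator combines two registers, can be illustrated with a short Python sketch (a generic toy, not any production cipher; the initial states are arbitrary examples, with taps on the first and fifth cells as in the figure the text refers to):

    def lfsr(state, taps):
        # state: list of 0/1 cell contents, leftmost cell first.
        # Each step: XOR the tapped cells, shift right by one position,
        # place the feedback bit in the leftmost cell, and output the
        # bit shifted out of the rightmost cell.
        while True:
            feedback = 0
            for t in taps:
                feedback ^= state[t]
            out = state[-1]
            state = [feedback] + state[:-1]
            yield out

    def shrink(data_bits, select_bits):
        # Shrinking generator: keep a data bit only when the
        # corresponding selection bit is 1.
        return [d for d, s in zip(data_bits, select_bits) if s == 1]

    a = lfsr([1, 0, 1, 1, 0], taps=(0, 4))
    s = lfsr([0, 1, 1, 0, 1], taps=(0, 4))
    a_bits = [next(a) for _ in range(32)]
    s_bits = [next(s) for _ in range(32)]
    print(shrink(a_bits, s_bits))

The irregular output rate mentioned in the text is visible here: how many keystream bits come out of 32 register steps depends entirely on how many 1s the selection register happens to produce.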
Security researchers at Kaspersky have come across a piece of cross-platform malware which is capable of running on Windows, Mac and Linux. The malware is completely written in Java. Even the exploit used for delivering the malware is a well-known Java exploit (CVE-2013-2465), which makes the campaign completely cross-platform. Once the bot has infected a system, it copies itself into the user's home directory and adds itself to the autostart programs list to ensure it gets executed whenever the user reboots the system. Once the configuration is done, the malware generates a unique identifier and informs its master. Cyber criminals later communicate with this bot through the IRC protocol. The main purpose of this bot appears to be to participate in distributed-denial-of-service (DDoS) attacks. The attacker can instruct the bot to attack a specific address and specify a duration for the attack. The malware uses a few techniques to make analysis and detection more difficult. It uses the Zelix KlassMaster obfuscator, which not only obfuscates the byte code but also encrypts string constants. All machines running Java 7 Update 21 and earlier versions are likely to be vulnerable to this attack.
The W3C publishes the first working draft of its XMLHttpRequest Object specification, which may have huge implications for AJAX and Atlas programmers. (Dev Source) The W3C is not sleeping. A few weeks ago, the first working draft of the XMLHttpRequest Object specification was published. The beginnings of this standard may have huge implications for AJAX and Atlas programmers. This is an important step for highly interactive Web applications to become mainstream, and it's part of a wider W3C initiative to standardize Web APIs. The XMLHttpRequest object is an interface exposed by a Web browser's scripting engine to perform HTTP client functionality. The term AJAX was coined in the article "Ajax: A New Approach to Web Applications," written in February 2005 by Jesse James Garrett. The W3C, or the World Wide Web Consortium, was founded by Tim Berners-Lee in 1994 at MIT. The W3C develops open technical specifications that can be used for free by anyone. These specifications are reached by a democratic process and any member can suggest a new project. If there is sufficient support within the consortium, the project proceeds. When it is finished, it is released by the consortium as a "recommendation." The W3C does not enforce its recommendations; it simply encourages everyone to adopt them. Read the full story on Dev Source: The Beginning of AJAX Standardization
The subject of this lecture is the Input/Output system of the IBM 370 series. To quote the textbook, input/output on the 370 series is complex; the reason for this complexity is a desire to achieve maximum efficiency from the computing system, especially when it is running multiple jobs. Basically there are two forms of Input/Output processing available for use in the design of a computer.
1. The Central Processing Unit can manage all of the I/O itself. This has the trouble of idling the CPU during long I/O operations.
2. The I/O can be separately managed by independent hardware that is directed by the CPU and communicates with the CPU. IBM calls this an "I/O Channel". This structure frees the CPU to do other tasks while the I/O is in process.
Those who have taken computer architecture will remember that:
1. The I/O Channel is a special form of Direct Memory Access I/O device, which is a device that interacts directly with the computer's main memory.
2. There are several classes of I/O channels: some for faster devices (such as disks), and some for slower devices (such as keyboard input).
The Input/Output Process
At the most fundamental level, all input and output is a cooperative process between the Central Processing Unit and the various I/O Channels of the computer. Input:
1. The CPU signals the I/O Channel with a request for input.
2. The I/O Channel reads its "channel program", which is a sequence of "channel commands" that indicate what to do.
3. The I/O Channel deposits data in a buffer in the main memory of the computer. This buffer exists in system memory and is accessed by the OS.
4. The I/O Channel signals the CPU that the input has been completed. If needed, it can send an error code.
Output:
1. The OS has received data from a user program and deposited the data in a buffer in the main system memory of the computer.
2. The OS signals the I/O Channel with a request for output.
3. The I/O Channel reads its channel program, which includes the address of the memory buffer containing the data.
4. The I/O Channel completes the output and signals the CPU on completion.
Input/Output in a Shared Computer
In a large mainframe system, the input and output for each program is managed by the operating system, subject to the requirement to share the resources of the computer among a number of programs. This process is called "Time Sharing"; it allows each user of the computer to believe that he or she is the only one accessing the computer at that time. In the Time Sharing model, we have:
1. A single computer with its CPU, memory, and sharable I/O resources,
2. A number of computer terminals attached to the CPU, and
3. A number of users, each of whom wants to use the computer.
In order to share this expensive computer more fairly, we establish two rules.
1. Each user process is allocated a "time slice", during which it can be run. At the end of this time, it must give up the CPU, go to the "back of the line" and await its turn for another time slice.
2. When a process is blocked and waiting on completion of either input or output, it must give up the CPU and cannot run until the I/O has been completed.
With this convention, each user typically gets reasonably efficient service from the computer. Thus the computer is "time shared".
Users and Supervisors
In a modern computer system, users are commonly restricted from direct access to the input and output devices. There are a number of good reasons for this. TRUE STORY: While I was a Systems Analyst at a company in Cambridge, I was one of two programmers who had "superuser access" to a PDP-11 computer.
This meant that each of us could write and run programs with what IBM calls "supervisor privilege". As an experiment, each of us wrote a program that output directly to a shared printer. When run at the same time, the two programs produced almost nothing but illegible junk: some of his output followed by some of mine, etc. Neither of us was able to produce an entire line of uninterrupted text output. Most computers (including all variants of the PDP-11) solve this problem by a technique called SPOOLing (Simultaneous Peripheral Operation On Line). The user program attempting to write to a shared printer actually writes to a disk file. When the program closes its output, a SPOOL program operating with supervisor privilege queues up the disk file for output to the printer. The IBM terminology for the two states is supervisor and problem. The UNIX terminology is superuser and user.
Your User Program Requests Input
As noted above, your user program cannot communicate directly with an I/O device. Thus it has to request that I/O service be performed by the O/S on its behalf.
1. The user program creates a work area of size sufficient to hold the input data.
2. The user program passes a request, including the address of this work area, to the operating system. This might be called an SVC (Supervisor Call). NOTE: This is not the same as a subroutine call. A subroutine operates under the control of the user program. These I/O routines are controlled by the O/S.
3. If the Operating System determines that the user request is appropriate and can be granted within the security guidelines, it will start an Input/Output Control Program.
4. The I/O Control Program sets aside a buffer in system space for the data that will be input. It then generates a sequence of channel commands, which are directions to the I/O Channel about what is to be done. It then sends a signal to the I/O Channel, which begins the appropriate processing.
5. The I/O Control Program suspends itself and the Operating System marks the user program as "blocked, waiting for I/O". It then suspends the user program and grants the CPU to another user program that is ready to run.
The Classic Process Diagram
Here is the standard process state diagram associated with modern operating systems. When a process (think "user program") executes an I/O trap instruction (remember that it cannot execute the I/O directly), the O/S suspends its operation and starts the I/O on its behalf. The job is then marked as "blocked", awaiting completion of the I/O. Another job is run. When the I/O is complete, the O/S marks the process as "ready to run". It will be assigned to the CPU when it next becomes available.
Levels of I/O Commands
The goal of the I/O macros is to isolate the programmer from the detailed control of the Input/Output devices. Much of the detailed code is tedious and difficult. The Channel Command Level can be seen as the lowest level. Commands at this level are all privileged and not executable by user programs. The Physical I/O Level is the next lowest level. This level initiates the commands and handles I/O interrupts, all still low-level stuff. Commands include:
EXCP   Execute Channel Program
WAIT   Wait for completion of the channel program
CCB    Channel Control Block
The Logical I/O Level is the one that most user programs access to define the layout and start both input (GET) and output (PUT). Double buffering refers to a process that is built on the fact that the use of I/O Channels allows for simultaneous processing and input/output of data. The double buffering scheme calls for two data buffers; call them B1 and B2.
This assumes that each buffer can contain enough data to allow processing based only on the data in that buffer. The process basically has two phases: a startup phase and a processing phase. Here is an example, supposing that buffer B1 is used first.
1. Start reading data into buffer B1. Wait on the buffer to be filled.
2. Start reading data into buffer B2. While reading into that buffer, process the data in buffer B1.
3. Wait on buffer B2 to be filled. Start reading data into buffer B1. While reading into B1, process the data in buffer B2.
4. Go to step 2.
One needs to be careful in order to process all of the data in the last buffer; when the I/O completes, there is still more to do.
The IOCS and Its Macros
Here are the common macros used by the IBM IOCS (Input/Output Control System). We have mentioned a few of these in previous lectures.
DCB    Data Control Block, used to define files.
OPEN   This makes a file available to a program, for either input or output.
CLOSE  This terminates access to a file in an orderly way. For a buffered output approach, this ensures that all data have been output properly.
GET    This makes a record available for processing.
PUT    This writes a record to an output file. In a buffered output, this might write only to an output buffer for later writing to the file.
Usage of the General Purpose Registers
The operating system has special purposes for a number of these registers.
0 and 1    Logical IOCS macros, supervisor macros, and other IBM macros use these registers to pass addresses.
2          The TRT (Translate and Test) instruction uses this to store a generated value. This instruction is often used to translate other character code sets into EBCDIC.
13         Used by logical IOCS and other supervisory routines to hold the address of a save area. This area holds the contents of the user program's general purpose registers and restores them on return.
14 and 15  Logical IOCS uses these registers for linkage. A GET or PUT will load the address of the following instruction into register 14 and will load the address of the actual I/O routine into register 15.
The use of registers 13, 14, and 15 follows the IBM standard for subroutine linkage, which will be discussed in a later chapter. In general, only registers 3 through 12 can be considered to be truly general, with register 2 usable only if TRT is not used.
Magnetic Tape Units and Record Blocking
In order to understand the standard forms of record organization, one must recall that magnetic tape was often used to store data. The standard magnetic tape was 0.5 inches wide and 1200 or 2400 feet in length. The tape was wound on a removable reel that was about 10.5 inches in diameter. The IBM 727 and 729 were two early models. The IBM 727 was officially announced on September 25, 1963 and marketed until May 12, 1971. The figure at left was taken from the IBM archives, and is used by permission. It is important to remember that the tape drive is a mechanical unit. Specifically, the tape cannot be read unless it is moving across the read/write heads. This implies a certain amount of inertia; physical movement can be started and stopped quickly, but not instantaneously. The tape must have blank gaps between records. The problem with inter-record gaps is that they consume space that cannot be used to write data. This is an inefficient use of magnetic tape. Consider a sequence of small records, written one per physical block. Space on tape can be used more efficiently if the tape is written in blocks, each block comprising a number of independent logical records.
If the last physical record is not completely filled with logical records, it will be filled with dummy records to achieve its full size.
Record Blocking Example
Consider a set of 17 logical records written to a tape with a blocking factor of 5. There would be four physical records on the tape. Physical record 1 would contain logical records 1 – 5. Physical record 2 would contain logical records 6 – 10. Physical record 3 would contain logical records 11 – 15. Physical record 4 would contain logical records 16 and 17, as well as three dummy logical records to fill out the count to five. (A short arithmetic sketch of this example appears at the end of these notes.) Magnetic tape is little used today for large data storage. The idea of record blocking persists, however, and has been adapted to use on large disk structures.
Use of the I/O Facilities
In order to use the data management facilities offered by the I/O system, a few steps must be followed. The program must do the following:
1. Describe the physical characteristics of the data to be read or written with respect to data set organization, record sizes, record blocking, and buffering to be used.
2. Logically connect the data set to the program.
3. Access the records in the data set using the correct macros.
4. Properly terminate access to the data set so that buffered data (if any) can be properly handled before the connection is broken.
If the program accesses a magnetic tape, the programmer must do the following.
1. Either submit the magnetic tape with the job or tell the computer operator where to find the tape. The operator must mount the tape.
2. Give the operator any special instructions for mounting the tape on the tape drive.
While some of these steps might be handled automatically by the run-time system of a modern high-level language, each must be executed explicitly in an assembler program.
Standard Style for Writing Invocations of I/O Macros
Programmers have evolved a standard style for writing macro invocations in order to make them easier to read. Here is an example written in standard style.
FILEIN DCB DDNAME=FILEIN, X
Note the "X" in column 72 of each of the lines except the last one. This is a continuation character indicating that the next physical line is a continuation of the present physical line. Note that these seven physical lines of text form one logical line and should be read as a single logical statement.
The File Definition Macro
The DCB (Data Control Block) is the file definition macro that is most commonly used in the programs that we shall encounter. This is a keyword macro, meaning that every argument is passed in the form KEYWORD=value. While the parameters can be passed in any order, it is good practice to adopt a standard order and use that exclusively. Some other programmer might have to read your work. The example above shows a DCB invocation that has been shown to work on the particular mainframe system now being used by Columbus State University. It has the form:
Filename DCB DDNAME=Symbolic_Name, X
The DCB Macro Label
The name used as the label for the DCB is used by the other macros in order to identify the file that is being accessed. Consider the following pair of lines. Here we see that the file being opened for input is that defined by this specific DCB macro. As we shall see later, this macro expands into a large number of assembly language statements.
Parameters for the DCB Macro
Filename DCB DDNAME=Symbolic_Name, X
DDNAME identifies the file's symbolic name, such as SYSIN for the primary system input device and SYSPRINT for the primary listing device.
DSORG identifies the data set organization.
The most common value is PS (physical sequential), as in a set of cards with one record per card.
DEVD defines a particular I/O unit. The only value we shall use is DA, which indicates a direct access device, such as a disk. All of our I/O will be disk oriented; even our print copy will be sent to disk and not actually placed on paper.
RECFM specifies the format of the records. The two common values are F (fixed length and unblocked) and FB (fixed length and blocked).
More Parameters for the DCB Macro
Filename DCB DDNAME=Symbolic_Name, X
LRECL specifies the length (in bytes) of the logical record. A typical value would be a positive decimal number. Our programs will all assume the use of 80-column punched cards for input, so that we set LRECL=80.
EODAD is a parameter that is specified only for input operations. It specifies the symbolic address of the line of code to be executed when an end-of-file condition is encountered.
MACRF specifies the macros to be used to access the records in the data set. In the case of GET and PUT, it also specifies whether a work area is to be used for processing the data. The work area is a block of memory set aside by the user program and used by the program to manipulate the data. We use MACRF=(GM) to select the work area option. There are many other options.
Expansion of the DCB Macro
Here is a complete assembly language expansion of a single DCB. It is long.
    163 FILEIN DCB DSORG=PS,
    166+*                            DATA CONTROL BLOCK
    0000F8                   168+FILEIN DC    0F'0'          ORIGIN ON
    169+*                            DIRECT ACCESS DE
    0000F8 0000000000000000  170+       DC    BL16'0'        FDAD, DVTB
    000108 00000000          171+       DC    A(0)           KEYLEN, DE
    172+*                            COMMON ACCESS ME
    00010C 00                173+       DC    AL1(0)         BUFNO, NUM
    00010D 000001            174+       DC    AL3(1)         BUFCB, BUF
    000110 0000              175+       DC    AL2(0)         BUFL, BUFF
    000112 4000              176+       DC    BL2'0100000000000000' DSO
    000114 00000001          177+       DC    A(1)           IOBAD FOR
    178+*                            FOUNDATION EXTEN
    000118 00                179+       DC    BL1'00000000'  BFTEK, BFA
    000119 000074            180+       DC    AL3(A90END)    EODAD (END
    00011C 90                181+       DC    BL1'10010000'  RECFM (REC
    00011D 000000            182+       DC    AL3(0)         EXLST (EXI
    183+*                            FOUNDATION BLOCK
    000120 C6C9D3C5C9D54040  184+       DC    CL8'FILEIN'    DDNAME
    000128 02                185+       DC    BL1'00000010'  OFLGS (OPE
    000129 00                186+       DC    BL1'00000000'  IFLGS (IOS
    00012A 5000              187+       DC    BL2'0101000000000000' MAC
    188+*                            BSAM-BPAM-QSAM I
    00012C 00                189+       DC    BL1'00000000'  OPTCD, OPT
    00012D 000001            190+       DC    AL3(1)         CHECK OR I
    000130 00000001          191+       DC    A(1)           SYNAD, SYN
    000134 0000              192+       DC    H'0'           INTERNAL A
    000136 0000              193+       DC    AL2(0)         BLKSIZE, B
    000138 00000000          194+       DC    F'0'           INTERNAL A
    00013C 00000001          195+       DC    A(1)           INTERNAL A
    196+*                            QSAM INTERF
    000140 00000001          197+       DC    A(1)           EOBAD
    000144 00000001          198+       DC    A(1)           RECAD
    000148 0000              199+       DC    H'0'           QSWS (FLAG
    00014A 0050              200+       DC    AL2(80)        LRECL
    00014C 00                201+       DC    BL1'00000000'  EROPT, ERR
    00014D 000001            202+       DC    AL3(1)         CNTRL
    000150 00000000          203+       DC    H'0,0'         RESERVED A
    000154 00000001          204+       DC    A(1)           EOB, INTER
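Returning to the record-blocking example given earlier in these notes, the arithmetic is easy to check with a few lines of Python (my sketch, not part of the original lecture; the record names are invented):

    def block_records(records, blocking_factor, dummy="DUMMY"):
        # Group logical records into physical blocks, padding the last
        # block with dummy records so that every block is full.
        blocks = []
        for i in range(0, len(records), blocking_factor):
            block = records[i:i + blocking_factor]
            block += [dummy] * (blocking_factor - len(block))
            blocks.append(block)
        return blocks

    logical = ["REC%02d" % n for n in range(1, 18)]   # 17 logical records
    physical = block_records(logical, 5)
    print(len(physical))    # 4 physical records
    print(physical[-1])     # ['REC16', 'REC17', 'DUMMY', 'DUMMY', 'DUMMY']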
An A-Z guide to the technical terms used in Labs Adware is F-Secure's classification name for software that displays advertisements on the computer or device. The advertisements may be displayed on the desktop or during a web browsing session. Adware is often bundled with free software that provides some functionality to the user. Revenue from the advertising is used to offset the cost of developing the software, which is therefore known as 'ad-supported'. Most users on a computer system will log into a restricted 'user account', which only allows them to makes setting changes to the computer that would affect their own account. Changes made to one user account may not affect settings in another account. For system administration purposes, most computer operating systems have a special, restricted account for making critical changes that may affect all accounts on the machine. Depending on the operating system, this account may be known as root, administrator, admin or similar. A user with access to this account is said to have administrative rights, or essentially total control of the computer system. An alias is the name given by other antivirus vendor(s) for the same unique malware file or family. The differences in names for a given file or family is due to differences in naming procedures used by various antivirus vendors. In describing a malicious file or family, aliases are usually provided to indicate that the varying names identify the same malware. For example, the worm identified by F-Secure as 'Downadup' also has the aliases 'Conficker' or 'Kido', depending on the antivirus vendor in question. Alternate Data Stream An extension to Microsoft's Windows NT File System (NTFS) that provides compatibility with files created using Apple's Hierarchical File System (HFS). Applications must write special code if they want to access and manipulate data stored in an alternate stream. Some applications use these streams to evade detection. An anti-spyware program may be a standalone application, though nowadays many anti-virus programs also include anti-spyware functionality. A program that scans for and identifies malicious files on a computer system. An antivirus program's core is the scanning engine, the module responsible for scanning every file on the computer system to find supicious or malicious files. The scanning engine works in tandem with the program's antivirus database, a collection of virus signatures that identify known malicious files. During the scanning process, the scanning engine compares to each scanned file to those in its database. If a match is found between a virus signature and a scanned file, the file is considered malicious. A collection of virus detections or signatures used by an antivirus program during its scanning process to identify malware. When scanning a computer for malicious programs, an antivirus program compares each file inspected against the virus signatures in its database; if a match is found, this indicates that the file is shares enough similarities with a known malware to be flagged. Because this type of analysis depends on the antivirus program having an accurate signature with which to perform a comparison, it is known as signature-based detection. As new malware is constantly being created, new virus signatures must continually be added to antivirus databases to identify these new threats. An antivirus program is therefore most effective if its antivirus database contains the latest updates. 
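To make the description of signature-based detection a little more concrete, here is a deliberately simplified sketch (byte-pattern matching only; the patterns are invented, and real engines also use heuristics, unpacking and far richer signature formats):

    SIGNATURES = {
        # Hypothetical byte patterns standing in for database entries.
        b"MALWARE-MARKER-1": "Example.Trojan.A",
        b"MALWARE-MARKER-2": "Example.Worm.B",
    }

    def scan_file(path):
        with open(path, "rb") as handle:
            data = handle.read()
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

    # Hypothetical usage:
    # print(scan_file("suspect.bin") or "no known signatures found")

The point of the sketch is simply that detection quality depends on the signature set being current, which is why the database update cycle described above matters.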
Application Programming Interface (API) An Application Programming Interface (API) is a defined set of instructions, specifications or protocols used to transfer commands or requests between applications. There are many APIs available, and their use is usually dependant on the programming language or software(s) involved.
First proposed in 1991, the Square-Kilometer Array (SKA) project seeks to build and operate the largest radio telescope in the world to peer into the deepest recesses of the cosmos. Instead of seeing light waves, the SKA telescope will turn radio waves into images. The array will be 50 times more sensitive than any other radio instrument and will require extremely high performance computing to process and analyze its data. Over at the Cray blog, Bill Boas, Director of Business Development for SKA at Cray, discusses the project’s goals and addresses the computational demands of such an undertaking. Radio astronomy works because cosmic bodies like stars emit radio waves. As opposed to light-based astronomy, which is limited by obscuring entities like clouds or cosmic dust, radio telescopes are able to avoid these disruptions. In doing so, they are able to identify invisible gasses and other astronomic bodies that cannot be viewed through optical means. Signal processing by the SKA generates a wealth of data, which requires large-scale computational resources to make sense of. Data-intensive radio astronomy is already using supercomputing, notes Boas. For example, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Pawsey Centre in Perth, Australia, is installing a Cray XC30 Supercomputer and Sonexion storage system to support several key research areas in partnership with the Australian Square Kilometer Array Pathfinder (ASKAP) and Murchison Widefield Array (MWA) radio telescopes. The enormity of the SKA project is such that it requires the innovation of new technologies, including a supercomputing system that is three times more powerful than any in existence today, according to Boas. The project’s website states that the SKA central computer will have the processing power equivalent to one hundred million PCs. “The reality is that the storage and high-performance computing technologies required for the SKA Project do not currently exist,” writes Boas. “As such, this initiative serves as a vital benchmark showcasing where the industry needs to move to. The solutions to be used to meet these needs may well end up proving instrumental to guiding the supercomputing sector. The highlight functions that may come as part of this movement include an increased dependence on data streaming models that enable real-time analysis in a newly-realized HPC environment.” Boas makes the point that the supercomputing industry is part of the innovation engine that helps move humanity forward. The SKA Project is overseen by the SKA Organization, a 10-country consortium. With a budget of €1.5 billion (US$1.9-billion), the SKA will be built over two sites in Australia and Africa. Construction is scheduled to begin in 2016 and first observations could be made by the end of the decade. When completed, around 2024, the SKA will provide an unprecedented look at the early universe before the first stars and galaxies came into being. With its unprecedented power and scope, scientists anticipate that the project will shed light on issues such as Einstein’s theories about gravity, the evolution of galaxies and stars, and the origins of dark energy, magnetic forces, and black holes. The telescope could even detect signs of extraterrestrial life.
Malicious encodings attacks are a technique used to bypass a server's security filter using various types of character encodings (URL, Unicode, etc.). Application developers are increasingly aware of security problems and try to avoid them. Since most security risks arise as a result of user-manipulated input (e.g., Parameter Tampering, Directory Traversal), one solution is to verify or filter the received input. As a result, most modern applications have some sort of input filters. Most security filters operate on the input received from the users, and attempt to detect malicious input. Some filters may operate on outgoing data (mainly used to avoid sending malicious code to users). The most common technique attackers use to bypass these filters is encodings. The most common encoding format is the ASCII characters encoding, using 7-bit representation for each character. Additional encodings, however, are supported by different environments, and are often required when embedding free text in parsed protocols. The two major types of encoding used by attackers to bypass security filters are URL encoding and Unicode/UTF8 encoding. Data used in Web applications is not restricted, and may be encoded using any character set or binary data. URL encoding is a technique for mapping 8-bit data to the subset of the US-ASCII character set allowed in a URL. Without proper validation, URL-encoded input can be used to disguise malicious code for use in a variety of attacks. URL encoding can be used by the attacker to pass parameters to the application, bypassing URL filtering in the Web server or intrusion detection systems. It can also fool the application, bypassing filtering mechanisms. URL encoding of a character is performed by taking the 8-bit hexadecimal value of a character and prefixing it with a "%". For example, the US-ASCII character set represents a space with decimal code 32, or hexadecimal 20. Thus its URL-encoded representation is %20. When the attacker sends an encoded URL, the Web server passes the request to the application (bypassing all the security filters) and the disguised malicious code is executed. This method can be used by attackers in a number of attacks such as Parameter Tampering, Directory Traversal, source code disclosure and Cross-Site Scripting. Another encoding method that can be used to implement malicious encoding is the Unicode/UTF-8. Unicode is a method of referencing and storing characters with multiple bytes by providing a unique reference number for every character, regardless of the language or platform. It is designed to allow a Universal Character Set (UCS) to encompass most of the world's writing systems. Unfortunately, the extended referencing system is not completely compatible with many old (albeit common) protocols and applications, and this has led to the development of a few UCS transformation formats (UTF) with varying characteristics. One of the most commonly formats, UTF-8, has the characteristic of preserving the full US-ASCII range. UTF-8 has multiple character mappings of UCS, so the same character can have several representations. For example, The UTF-8 sequence for the "." (dot) character represented as 2E, C0 AE, E0 80 AE, F0 80 80 AE, F8 80 80 80 AE, or FC 80 80 80 80 AE.
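The practical consequence of all this is that a filter must decode input before it inspects it. The fragment below is a generic illustration (not taken from any particular product) of a naive check being bypassed by URL encoding, and of the same check applied after decoding:

    from urllib.parse import unquote

    def naive_filter(raw):
        # Inspects the raw request string without decoding it first.
        return "../" not in raw

    def decoding_filter(raw):
        # Normalises (URL-decodes) the input before applying the check.
        return "../" not in unquote(raw)

    payload = "%2e%2e%2fetc%2fpasswd"     # URL-encoded "../etc/passwd"
    print(naive_filter(payload))          # True  -- the attack slips through
    print(decoding_filter(payload))       # False -- caught after decoding

Overlong UTF-8 forms such as C0 AE for "." raise the same issue one layer down: a strict decoder (Python's own UTF-8 codec, for example) rejects them outright, while a lenient, hand-rolled decoder may silently map them to "." and so reopen the bypass.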
Originally published December 15, 2005 Proprietary software vendors sow fear, uncertainty and doubt (FUD) about free and open source software licenses because those licenses are new and different. More notably, open source software licenses threaten their business modes—and they want their customers to believe that those licenses will somehow "infect" their organizations. The truth is that open source software licenses will almost always be more beneficial for consumers than proprietary licenses. Furthermore, open source licenses rarely (if ever) impose more terms on software users that are any more onerous than those imposed by proprietary licenses. This article is about software licenses, which spell out the rules by which you must play to use the software. Agreeing to an End User License Agreement (EULA) by opening a shrink-wrap package, clicking "OK" for an installer or downloading software from a website means you agree to abide by the terms the software owner sets. If you have questions about the terms of those agreements, you can contact your attorney for definitive answers. In part one, I will introduce the concept of copyright protection and how it relates to software sales. I also explain why all types of software is licensed, not sold or given away. Using Microsoft's Windows XP Home Edition End User License Agreement as an example of a typical EULA, we can observe some of the restrictions that proprietary software vendors place on their products. Although proprietary software licenses restrict the rights granted to end users, pioneers of the free and open source software movements believe that there must be a better way to distribute software. In part two we'll start with an introduction to open source and free software licensing, after examining why releasing software without any license (putting it in the public domain) or publishing it as shareware does NOT answer the question "How do I release software free from the constraints imposed by proprietary software licenses?" Then, we'll look more deeply into the philosophical roots of the Free Software and Open Source Software movements. While most "Free Software" is also "Open Source Software" and vice versa, there are important distinctions between the two. Part two examines these distinctions, and discusses the extent to which F/OSS licenses are "viral," and how terms of F/OSS licenses affect the way you do business. Part two concludes with a look at different F/OSS licensing approaches including the Free Software Foundation's GNU General Public License (GPL), the MIT license and use of dual-licensing for marketing open source software commercially. Software and "Intellectual Property" Rights Copyright protections have evolved in a world of solid things: books, magazines, musical recordings and other media upon which narratives, ideas or other forms of expression are imprinted. Publishers manufacture books out of paper and ink, and immediately sell them. When you buy a book, you purchase the paper and ink on which those words are recorded, but not the words themselves. You can do whatever you want with the book: write in the margins, rip it up and paste the scraps into patterns, even sell it to someone else. This is known as the First Sale Doctrine: the publisher makes it, immediately sells it and has no further interest in it. But you cannot copy the book’s content and use it in any way that affects the ability of the copyright owner to profit. 
Any expression of ideas, fiction or non-fiction books, magazine articles, songs, photographs, motion pictures—or software programs—can get copyright protection. By default, the creator of that expression ("author") owns the copyright. In practice, content publishers purchase some or all of the rights protected by copyright from content creators. The publisher buys those rights so the organization can sell books (or other products) to individuals who may read (or listen or watch) it, loan it to their friends, throw it away, sell it, or give it away. Software is different because copyright isn't enough. When you buy software, you're not buying content: it's a set of instructions for your computer that are compiled into programs. You are essentially buying the ability to run those instructions on your hardware; you don't need to see the source, you just need to execute compiled code. Proprietary software vendors don't want to expose the internal workings of their software. This would let competitors buy their software and find a different way to express the same ideas, since copyright protects the expression of ideas, not the idea itself. Another problem is that proprietary software vendors may want to know who is using their software as well. If the First Sale Doctrine were applied to software, they would lose that control. Part of the concern is loss of revenue through the creation of used software markets, but it also complicates how support for old software is provided. Other challenges are distributing bug fixes and security patches. In addition to copyright, proprietary software vendors add a set of rules, the license, which comprehensively details what rights the vendor is willing to assign to its customers and what rights it reserves for itself. Thus, users are bound by the system of copyright law and the terms of the license agreement. A Typical Proprietary Software License With millions of installed copies, Microsoft's Windows XP Home Edition (retail) End User License Agreement (EULA) is one of the most popular software licenses available. This is a good example for various reasons: at about 3,000 words, it is not excessively long though far from a model of simplicity; it incorporates some terms that were initially considered excessive or controversial; and it is otherwise reasonable in the rights granted to the end user. The first two clauses contain most of the controversy: at the risk of oversimplifying a complex issue, this EULA gives Microsoft a significant degree of control over your system. You must activate your software by registering it with Microsoft; you may have to go through a reactivation process if you upgrade your hardware. You must also allow Microsoft to install Digital Rights Management (DRM) software to mediate your use of certain content. When you do this, you give Microsoft permission to collect certain information about your system. Otherwise, the license reads much like other contemporary licenses. Clause 3 states: "The Software is licensed, not sold." You only have rights to the software as defined by the license; you are specifically prohibited from examining the code, as spelled out in Clause 4: “LIMITATIONS ON REVERSE ENGINEERING, DECOMPILATION, AND DISASSEMBLY. You may not reverse engineer, decompile, or disassemble the Software, except and only to the extent that such activity is expressly permitted by applicable law notwithstanding this limitation." 
Other clauses address such issues as commercial hosting or software rental (you can't do it), internal software transfer (you can move it from one system to another, as long as you remove it from the first system), and external transfer (you can transfer it once to another user, but you must remove it from your system). A large portion of the EULA addresses what Microsoft warrants (the software distribution only) and which remedies you have (refund or replacement of the media). Under terms of the EULA, Microsoft has absolutely no liability for any damages incurred as a result of using their software. The only exceptions to this are replacing or refunding the cost of faulty distribution media. All software licenses, including free and open source licenses, include similar disclaimers of warranty. No software developer will accept responsibility if something goes wrong with their software; virtually all risk in using the software is vested with the user. Remarkably, Microsoft's (or any vendor's) EULA disclaimers undercut vendor claims that proprietary software is better than open source software because there is a corporate entity "standing behind the product." Also, many Microsoft products may have other restrictions. For example, the Microsoft SQL Server 2000 EULA forbids disclosing results of any benchmark testing to third parties without Microsoft's "prior written approval." According to the PowerPoint 2003 EULA terms, "You may not create obscene or scandalous works, as defined by federal law at the time the work is created" using PowerPoint 2003 Media Elements. This goes on to state, "You must (a) indemnify and defend Microsoft from and against any claims or lawsuits, including attorneys' fees that arise from or result from the licensing, use or distribution of Media Elements as modified by you, and (b) include a valid copyright notice on your products and services that include the Media Elements." This type of license term may place both an individual and an entire organization under significant legal obligations. For example, the PowerPoint 2003 user must be able to identify works that could be construed as “scandalous.” It is easy, yet uninformative, to view free or open source software as software you can download for free, and proprietary software as software you can buy. Whether the software is free, open or proprietary, its use is governed by copyright and the license terms under which the software is published. Proprietary software licenses are often complex legal documents that sometimes contain frighteningly restrictive terms (for the end user). Similarly, free and open source software licenses incorporate clauses that put restrictions on how software can be used. In all cases, the license dictates restrictions; if you want to benefit from the licensed software, you must agree to the license terms. Next month, I’ll closely examine the emergence of free and open source licenses, and how they differ from proprietary licenses. I will discuss how software freedom tends to encourage better software as free and open source software licenses differ. I’ll also examine how they are used, whether alone or in conjunction with other licenses, to produce software that is both high quality and profitable. Recent articles by Pete Loshin
NASA today said it will officially begin seeking the company or companies that it will contract with to begin the mission to capture an asteroid and move it near the moon, where it could be studied and perhaps mined. A Broad Agency Announcement, or BAA, on the Asteroid Redirect Mission (ARM) will be published March 21, NASA said.

"NASA is developing concepts for ARM, which would use a robotic spacecraft to capture a small near-Earth asteroid, or remove a boulder from the surface of a larger asteroid, and redirect the asteroid mass into a stable orbit around the moon. Astronauts aboard the Orion spacecraft launched on the Space Launch System would rendezvous with the asteroid mass in lunar orbit, and collect samples for return to Earth."

NASA said that, in order to support mission formulation and reduce risk and cost, the BAA will solicit proposals for studies and related technology development activities in the following areas:

- Asteroid capture system concepts using deployable structures and autonomous robotic manipulators
- Rendezvous sensors that can be used for a wide range of mission applications, including automated rendezvous and docking, and asteroid characterization and proximity operations
- Commercial spacecraft design, manufacture, integration, and testing capabilities that could be adapted for development of the Asteroid Redirect Vehicle (ARV) or its main components, such as the Solar Electric Propulsion (SEP) tug
- Partnership opportunities for secondary payloads on either the ARV or the Space Launch System (SLS)
- Partnership opportunities for the Asteroid Redirect Crewed Mission or future missions in areas such as advancing science and in-situ resource utilization, enabling commercial activities, and enhancing U.S. exploration activities in cis-lunar space after the first crewed mission to an asteroid

The duration of the studies is anticipated to be 180 days, with an interim report due in October. NASA may use the data resulting from this effort to inform the Asteroid Redirect Mission Concept Review (MCR), which is planned for late 2014 or early 2015. An Asteroid Initiative Opportunities Forum will be held at NASA Headquarters on March 26, 2014. The meeting agenda and registration information are posted on the BAA website at http://www.nasa.gov/asteroidinitiative

In February, NASA said it was assessing two concepts to robotically capture and redirect an asteroid mass into a stable orbit around the moon. In the first proposed concept, NASA would capture and redirect an entire very small asteroid. In the alternative concept, NASA would retrieve a large, boulder-like mass from a larger asteroid and return it to this same lunar orbit. In both cases, astronauts aboard an Orion spacecraft would then study the redirected asteroid mass in the vicinity of the moon and bring back samples.

Very few known near-Earth objects are ARM candidates. Most known asteroids are too big to be fully captured and have orbits unsuitable for a spacecraft to redirect them into orbit around the moon. Some are so distant when discovered that their size and makeup are difficult for even our most powerful telescopes to discern. Still others could be potential targets, but go from newly discovered to out of range of our telescopes so quickly there is not enough time to observe them adequately, NASA stated.
"There are other elements involved, but if size were the only factor, we'd be looking for an asteroid smaller than about 40 feet (12 meters) across," said Paul Chodas, a senior scientist in the Near-Earth Object Program Office at NASA's Jet Propulsion Laboratory, in a statement. "There are hundreds of millions of objects out there in this size range, but they are small and don't reflect a lot of sunlight, so they can be hard to spot. The best time to discover them is when they are brightest, when they are close to Earth." From NASA: Asteroids are discovered by small, dedicated teams of astronomers using optical telescopes that repeatedly scan the sky looking for star-like objects, which change location in the sky slightly over the course of an hour or so. Asteroid surveys detect hundreds of such moving objects in a single night, but only a fraction of these will turn out to be new discoveries. The coordinates of detected moving objects are passed along to the Minor Planet Center in Cambridge, Mass., which either identifies each as a previously known object or assigns it a new designation. The observations are collated and then electronically published, along with an estimate of the object's orbit and intrinsic brightness. Automatic systems at NASA's Near-Earth Object Program Office at JPL take the Minor Planet Center data, compute refined orbit and brightness estimates, and update its online small-body database. A new screening process for the asteroid redirect mission has been set up which regularly checks the small-body database, looking for potential new candidates for the ARM mission. "If an asteroid looks as if it could meet the criteria of size and orbit, our automated system sends us an email with the subject "'New ARM Candidate,'" said Chodas. "When that happens, and it has happened several dozen times since we implemented the system in March of 2013, I know we'll have a busy day." Check out these other hot stories:
On Thursday, the world learned that attackers were breaking into computers using a previously undocumented security hole in Java, a program that is installed on hundreds of millions of computers worldwide. This post aims to answer some of the most frequently asked questions about the vulnerability, and to outline simple steps that users can take to protect themselves. Update, Jan. 13, 8:14 p.m. ET: Oracle just released a patch to fix this vulnerability. Read more here. Q: What is Java, anyway? A: Java is a programming language and computing platform that powers programs including utilities, games, and business applications. According to Java maker Oracle Corp., Java runs on more than 850 million personal computers worldwide, and on billions of devices worldwide, including mobile and TV devices. It is required by some Web sites that use it to run interactive games and applications. Q: So what is all the fuss about? A: Researchers have discovered that cybercrooks are attacking a previously unknown security hole in Java 7 that can be used to seize control over a computer if a user visits a compromised or malicious Web site. Q: Yikes. How do I protect my computer? A: The version of Java that runs on most consumer PCs includes a browser plug-in. According to researchers at Carnegie Mellon University‘s CERT, unplugging the Java plugin from the browser essentially prevents exploitation of the vulnerability. Not long ago, disconnecting Java from the browser was not straightforward, but with the release of the latest version of Java 7 — Update 10 — Oracle included a very simple method for removing Java from the browser. You can find their instructions for doing this here. Q: How do I know if I have Java installed, and if so, which version? A: The simplest way is to visit this link and click the “Do I have Java” link, just below the big red “Download Java” button. Q: I’m using Java 6. Does that mean I don’t have to worry about this? A: There have been conflicting findings on this front. The description of this bug at the National Vulnerability Database (NVD), for example, states that the vulnerability is present in Java versions going back several years, including version 4 and 5. Analysts at vulnerability research firm Immunity say the bug could impact Java 6 and possibly earlier versions. But Will Dormann, a security expert who’s been examining this flaw closely for CERT, said the NVD’s advisory is incorrect: CERT maintains that this vulnerability stems from a component that Oracle introduced with Java 7. Dormann points to a detailed technical analysis of the Java flaw by Adam Gowdiak of Security Explorations, a security research team that has alerted Java maker Oracle about a large number of flaws in Java. Gowdiak says Oracle tried to fix this particular flaw in a previous update but failed to address it completely. Either way, it’s important not to get too hung up on which versions are affected, as this could become a moving target. Also, a new zero-day flaw is discovered in Java several times a year. That’s why I’ve urged readers to either uninstall Java completely or unplug it from the browser no matter what version you’re using. Q: A site I use often requires the Java plugin to be enabled. What should I do? A: You could downgrade to Java 6, but that is not a very good solution. Oracle will stop supporting Java 6 at the end of February 2013, and will soon be transitioning Java 6 users to Java 7 anyway. 
If you need Java for specific Web sites, a better solution is to adopt a two-browser approach. If you normally browse the Web with Firefox, for example, consider disabling the Java plugin in Firefox, and then using an alternative browser (Chrome, IE9, Safari, etc.) with Java enabled to browse only the site(s) that require(s) it. Q: I am using a Mac, so I should be okay, right? A: Not exactly. Experts have found that this flaw in Java 7 can be exploited to foist malware on Mac and Linux systems, in addition to Microsoft Windows machines. Java is made to run programs across multiple platforms, which makes it especially dangerous when new flaws in it are discovered. For instance, the Flashback worm that infected more than 600,000 Macs wiggled into OS X systems via a Java flaw. Oracle’s instructions include advice on how to unplug Java from Safari. I should note that Apple has not provided a version of Java for OS X beyond 6, but users can still download and install Java 7 on Mac systems. However, it appears that in response to this threat, Apple has taken steps to block Java from running on OS X systems. Q: I don’t browse random sites or visit dodgy porn sites, so I shouldn’t have to worry about this, correct? A: Wrong. This vulnerability is mainly being exploited by exploit packs, which are crimeware tools made to be stitched into Web sites so that when visitors come to the site with vulnerable/outdated browser plugins (like this Java bug), the site can silently install malware on the visitor’s PC. Exploit packs can be just as easily stitched into porn sites as they can be inserted into legitimate, hacked Web sites. All it takes is for the attackers to be able to insert one line of code into a compromised Web site. Q: I’ve read in several places that this is the first time that the U.S. government has urged computer users to remove or wholesale avoid using a particular piece of software because of a widespread threat. Is this true? A: Not really. During previous high-alert situations, CERT has advised Windows users to avoid using Internet Explorer. In this case, CERT is not really recommending that users uninstall Java: just that users unplug Java from their Web browser. Q: I’m pretty sure that my Windows PC has Java installed, but I can’t seem to locate the Java Control Panel from the Windows Start Menu or Windows Control Panel. What gives? A: According to CERT’s Dormann, due to what appears to potentially be a bug in the Java installer, the Java Control Panel applet may be missing on some Windows systems. In such cases, the Java Control Panel applet may be launched by finding and executing javacpl.exe manually. This file is likely to be found in C:\Program Files\Java\jre7\bin or C:\Program Files (x86)\Java\jre7\bin. Q: I can’t remember the last time I used Java, and it doesn’t look like I even need this program anymore. Should I keep it? A: Java is not as widely used as it once was, and most users probably can get by without having the program installed at all. I have long recommended that users remove Java unless they have a specific use for it. If you discover later that you really do need Java, it is trivial and free to reinstall it. Q: This is all well and good advice for consumers, but I manage many PCs in a business environment. Is there a way to deploy Java but keep the plugin disconnected from the browser? 
A: CERT advises that system administrators wishing to deploy Java 7 Update 10 or later with the “Enable Java content in the browser” feature disabled can invoke the Java installer with the WEB_JAVA=0 command-line option. More details are available in the Java documentation. Tags: Adam Gowdiak, CERT, chrome, CVE-2013-0422, do i have java, firefox, http://www.kb.cert.org/vuls/id/625617, ie, java, Java 6, Java 7 Update 10, Java exploit, National Vulnerability Database, noscript, NotScripts, Oracle, safari, Security Explorations, Will Dormann
Recently, a research project by the University of Amsterdam [PDF Alert] revealed that US law allows the US government to access information stored in the Cloud by (ab)using the PATRIOT Act. Multiple Dutch politicians have started asking questions of State Secretary Teeven of the Justice Department as to whether he knew about this before the research project, and whether he did anything to prevent this or to warn Dutch citizens about this potential breach of privacy. He has since sent in an official answer. Unsurprisingly, he confirms that the issue is real, but does not answer the question about whether he knew about this beforehand. He goes on to say that it is up to each individual to be careful with any information they publish online, be it to a cloud-based service or anywhere else.

What surprises me is that people still don't seem to understand what the Cloud is, what it does and how it works. The reach of the PATRIOT Act has long been known, and its effects have been hotly debated for years. How is this any surprise to anyone? Please follow this logic: The Cloud is the Internet. It really is that simple. Cloud services are simply applications that run on clustered computer systems. Maybe on two, ten, a hundred or a thousand systems at a time; it doesn't matter. Users (and data) are replicated to every system in this cloud regardless of where they are. There could be ten in your own country, twenty in the US and another fifty in Russia. This is (most often) invisible to the end user, and very often special effort is made to keep it invisible to the end user, and to make it one big system regardless of what server you are connecting to, or from where. To be on the safe side, you should assume that regardless of where you are located when you upload data, it is uploaded to the entire grid, not just the part in your country. And it matters where these systems are located geographically, because that is the deciding factor in which country's laws a given system (and, more importantly, the data on that system) is subject to.

For example: Google has servers dedicated to Google Docs in a number of countries, such as the Netherlands, Germany, Britain, the US and probably several countries in Asia. You upload a document to Google Docs while in the Netherlands. As soon as you do, it is replicated to either all the systems all over the globe, or replicated between central data stores all over the globe. It is generally safe to assume that your data will be everywhere, regardless of where you are. ANY country that has Google servers for Google Docs within its borders can, in theory (depending on what laws exist in that country), demand access to this data. The US is almost certainly not the only government that can do this, but even if no other country has such laws, you can rest assured that if the need ever arises (from a national security standpoint) to access your data, things tend to get very 'flexible' on very short notice in most countries.

Therefore you should assume that you cannot trust any online service with your data, regardless of its classification or nature. As has always been the case, in the end you, and only you, remain the person responsible for what happens to your data. If you absolutely do not want it leaked, don't put it on the internet.

Don Eijndhoven writes articles on the state of Cyber Warfare and related topics, and can often be found speaking at various conferences.
His articles have been published in places such as InfosecIsland.com, ICTTF.org, ITGRCForum.com, PenTest Magazine, PvIB Magazine and a variety of other magazines and websites. Don is an independent security researcher and entrepreneur and the founder of the Dutch Cyber Warfare Community, a platform for all people working in the Cyber Warfare industry in the Netherlands, and a founding board member of the Netherlands Cyber Doctrine Institute (NCDI). The NCDI is a foundation that aims to assist the Dutch Ministry of Defence with writing proper cyber doctrine. He is also the CEO of Argent Consulting, a Dutch firm that offers a full range of services in all areas of Cyber. He holds a Bachelor in Computer Science and is currently working on his MBA at Nyenrode Business University.
I was digging around the NASA archives when I stumbled upon the flight surgeon's report for the Mercury-Redstone 3 mission, otherwise known as the second flight by a human into space, and the first by an American. Alan Shepard was the man chosen by the United States to leave Earth. The astronauts were accompanied by doctors at all times. They were fed a strict diet. Their vitals were measured. They were monitored constantly. But while I've known this in the abstract, it wasn't until reading the surgeon's report that I realized that these flights, from a biomedical perspective, were experiments playing out in the astronauts' bodies. As such, as many variables as possible had to be controlled, while still allowing the pilots to function normally. Here are 13 tidbits I extracted from William K. Douglas' report detailing the pre-flight ritual. 1. For the three days before the flight, the pilot lived in the Crew Quarters of Hangar "S" at Cape Canaveral: "Here he is provided with a comfortable bed, pleasant surroundings, television, radio, reading materials and, above all, privacy. In addition to protection from the curious-minded public, the establishment of the pilot and the backup pilot in the Crew Quarters also provides a modicum of isolation from carriers of infectious disease organisms." 2. The pilot ate "in a special feeding facility" with a personal chef, "whose sole duty during this period is to prepare these meals."
A Vanguard Response Systems technician demonstrates a chemical bomb after simulated disarming by the company's robot. Photo by Blake Harris.

Given the changing face of disaster management, are we really prepared? That was the fundamental question addressed by the 14th World Conference on Disaster Management held in Toronto, June 20-23, attended by over 1,300 delegates from the U.S., Canada and overseas. Predictably, speaker after speaker from both the public and private sector answered that question in much the same way: "Of course we are better prepared than we were, but there is still much, much more to do to meet the challenges of a post-9/11 world."

Like most things, when it comes to preparedness, the devil is in the details. And those details span a broad array of threats and dangers that make preparedness an ever more complex issue involving foresight, organization, communication, cooperation, private-public partnerships and, of course, running through all of these, technology. The prominence of technology in disaster readiness was clearly illustrated through the accompanying trade show, which, as one might expect, featured mainly technology products. Vendor displays included bomb-disposal robots, emergency hospital camp facilities and biological safe tents, as well as disaster management and GIS emergency-coordination software.

Two high points that livened up the conference were elaborate demonstrations of emergency response. Vanguard Response Systems' remote-controlled robot took a chemical bomb off a bus and disarmed it. And Toronto firefighters demonstrated how they would decontaminate citizens exposed to dangerous chemicals such as might be found in a terrorist attack.

In a simulated chemical terrorist attack exercise, a Toronto firefighter checks for chemical contamination after decontamination has been completed. Photo by Blake Harris.

Yet the presentations that seemed to have the most lasting impact on attendees came from keynote speakers who spelled out not just the necessity of better preparedness for all types of disaster, but also better approaches to assess risk, develop resources and then respond to and manage disasters when they do occur. Disaster management is a field where best practices are avidly devoured. Professionals clearly recognize that there are new lessons to be learned from all the major disasters. Examining each in detail offers insights on how to better prepare for the next one. But in this there are liabilities. Karl Hofmann, executive secretary of the State Department and special assistant to the Secretary of State, asked one important question: "Are we preparing for the next disaster simply by reliving the last one? Or are we thinking ahead?" The implication was that in all likelihood, the next disaster, whether natural or man-made, would be different. And only preparing for disasters based on what has happened in the past could severely limit the ability to effectively respond to a new and very different event, whether that is a terrorist attack or a natural catastrophe.

What also was made abundantly clear is that there is indisputably a role for IT in disaster readiness. And while the most sophisticated satellite mobile communication systems for disaster response command posts might be wonderful, IT can also help to ensure better emergency coordination at very little cost.
One example is Seattle's Emergency Management program, which has established and tested the Business Emergency Network (BEN). BEN uses e-mail to provide emergency information to businesses and to coordinate needed citizen responses. At virtually no additional expense to the city, businesses no longer have to rely on the media for disaster information, which is often inaccurate and speculative, especially when disasters first occur. Instead, they get their information directly from the emergency command center. And it is exactly the same information that city officials are using to manage the crisis.

So at the very least, there was one lesson IT professionals could take away from a conference like this. The role that technology can play to increase disaster readiness does not just involve high-end satellite communication systems and the like. There is also much that can be done by using existing resources and simply thinking more creatively about how to use them to better prepare for disasters. Additionally, IT managers have to realize that disaster preparedness involves far more than designated emergency responders. Most major disasters affect everyone in the area: private and public institutions as well as the citizens at large. So disaster readiness is at least a small piece of everybody's business, whether they work inside or outside of government and no matter where they live.
How the Internet of Things is improving Formula One power unit performance

Formula One racing has changed dramatically during the last decade. The focus has shifted from raw power to using technology that can squeeze every possible bit of performance out of the cars. In 2014, the Fédération Internationale de l'Automobile (FIA), which governs Formula One racing, published new regulations that required all Formula One cars to use hybrid engines and limited fuel consumption during races. These regulations are driving the sport to focus on developing increasingly energy-efficient automotive technologies.

Power unit optimization

Today's Formula One engines are limited to a displacement of 1.6 liters, similar to the displacement found in a compact sedan. Yet they exceed 180 miles per hour and drive for more than 90 minutes on approximately 25 gallons of fuel. Formula One teams achieve this feat with help from a power unit and energy recovery system that consists of motors for producing added power from electricity generated using exhaust heat. In addition, an electronics control system manages the entire process. On today's Formula One circuit, the power unit is as important to a car's success as the engine itself; Honda discovered this in 2015, its first year back after a seven-year hiatus from the racing circuit.

Honda is implementing Internet of Things technology in 2016 to monitor and analyze data from more than 160 sensors on the power unit of its Formula One car. Race data from the sensors will help Honda optimize its power units by streamlining performance and improving fuel efficiency. Honda R&D developed a new cognitive-based system to analyze data from the power unit, quickly and efficiently check residual fuel levels and estimate the possibility of mechanical problems. Honda is using the IBM Internet of Things for Automotive solution to deliver data generated from power units directly to the cloud for near-real-time analysis.

"After a seven year hiatus from F1 racing, Honda R&D is excited to work with IBM to apply Internet of Things technology to improve our power units," said Satoru Nada, chief engineer and manager, Power Unit Development Division, Honda R&D Co., Ltd., HRD Sakura. "Our racing initiative is typically a gateway to extend advanced technology to produce highly efficient and more eco-friendly vehicles."

Results are shared by Honda's trackside team members, equipped with tablets and mobile technology, and analyzed in near real time by researchers at HRD Sakura, Honda's R&D facility in Japan. Transmitting this analysis as the race is taking place allows for adjustments to basic metrics such as temperature, pressure and power levels. In addition, IBM Cognos software helps the research team build very complex performance models to measure energy recovery of the power unit.

Impressive performance results

Honda noted the improved reliability of the power unit in trial runs that took place in Barcelona, Spain, in late February 2016. The 2016 Formula One season kicks off soon, with the Australian Grand Prix on 20 March 2016.
The traditional lecture model is the standard learning method in most American classrooms, but there is growing interest in new learning models that are encouraging students and teachers to “learn now, lecture later.” CDW-G’s new report, Learn Now, Lecture Later, looks at the different learning methods teachers and students are using and how technology is supporting the move to these new learning models. The report also examines the challenges that high schools and colleges must overcome to make a successful transition. To view an in-depth analysis of Learn Now, Lecture Later, please complete the information form at the link below. - Get to the heart of what students and faculty want: Understand the technology students and teachers already have, how they want to use it in class and how they best learn and teach - Consider how to incorporate different learning models: Work closely with faculty to meet their subject-area and curriculum needs and personal teaching styles - Explore how technology can support and enhance learn now, lecture later: Enable the community to consult with each other and share best practices - Support faculty with professional development and IT with infrastructure: Unless faculty are comfortable, the change will be slow; without IT, the change will not happen at all
Adaptive Learning Systems

What are adaptive learning systems? An adaptive learning system is a tool for transforming learning from the passive reception of information into an interactive process in which the learner collaborates in the education process. Adaptive learning systems are used in education as well as in business training, and various thick- and thin-client versions are available from a range of vendors.
Distributed denial of service is a type of DoS attack in which multiple compromised systems, often infected with a Trojan, are used to target a single system. The DDoS attack itself may be a bit more sinister, according to NSFOCUS IB. A DDoS attack is an attempt to exhaust resources so that legitimate users are denied access. "It has never been easier to launch a sustained attack designed to debilitate, humiliate or steal from any company or organization connected to the Internet. These attacks often threaten the availability of both network and application resources, and result in loss of revenue, loss of customers, damage to brand and theft of vital data," NSFOCUS Global wrote in a business white paper.

In a question-and-answer session, Dave Martin, director of product marketing at NSFOCUS IB, explained the different types of DDoS attacks and how to detect and respond to them.

What are some of the most common types of DDoS attacks?
There are actually three styles of attack that we see often: application-layer, volumetric, and hybrid.

Can you explain the differences in each method?
An application-layer attack is less volumetric but still tries to consume resources. An attacker connects to a website and is asked for a password. They send data and get a response from the server. Rather than send all the data at once, they send a character at a time. An attacker can create hundreds of thousands of such connections at a time, each opening a secure connection to the website that appears normal but consumes memory. A volumetric attack attempts to overwhelm the target with traffic. A hybrid attack often uses application-layer and volumetric methods in combination. The consequence is loss of revenue, loss of customers, and damage to reputation. Often these attacks are not even about denial of service; they are smoke screens for exfiltration of data. Because of the distraction, attackers are able to plant back doors in other areas of the network.

How can security teams detect these attacks?
Detecting the DDoS attack itself really requires specialized hardware that will send alerts, such as emails or management traps. The goal is to get these notifications before the resource becomes unavailable. If you don't have anti-DDoS detection, you won't know until the service goes down.

How do security teams respond once they identify these attacks?
It takes a while for service providers to identify and clean that traffic. A lot of service providers black hole the traffic, so that all of your traffic is offline.

How can security professionals differentiate when an attack is DDoS?
These attacks are advanced persistent threats. Often the bad actors install a back door and sit on a network, making them difficult to detect.

Why are these attacks so persistent?
These DDoS attacks are very easy to pull off. There are botnets available that criminals can rent for as little as $10 a month, and they require no technical expertise. These can generate a very large attack. Also, a lot of folks think they can handle these attacks with a firewall, but many people are finding that those types of general-purpose tools fall over in the face of an attack. People are starting to recognize that existing security equipment is not going to provide adequate protection. A firewall is great, and you have to have it, but it's not a panacea.

How do security teams determine what tools are best in mitigating the risks of these attacks?
They first have to ask, "Is it a good solution that fits in my budget?" Be sure that the technology has been battle tested. While enterprises like major banks have enormous budgets for their security strategy, small to midsize organizations are working with more limited resources. This article is published as part of the IDG Contributor Network.
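To make the application-layer pattern Martin describes above more concrete, here is a minimal, self-contained sketch of one heuristic a defender might apply: flag long-lived connections that are trickling data far below a normal request rate. It is a toy illustration only, not anything from NSFOCUS; real anti-DDoS equipment correlates many more signals, and every name and threshold below is invented for the example.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Connection:
        client_ip: str
        open_seconds: float   # how long the connection has been open
        bytes_received: int   # total request bytes received so far

    def flag_slow_connections(conns: List[Connection],
                              min_age: float = 30.0,
                              min_rate: float = 100.0) -> List[str]:
        """Return client IPs whose long-lived connections deliver data
        well below a normal request rate (bytes per second)."""
        suspects = []
        for conn in conns:
            if conn.open_seconds < min_age:
                continue  # too young to judge fairly
            rate = conn.bytes_received / conn.open_seconds
            if rate < min_rate:
                suspects.append(conn.client_ip)
        return suspects

    # One ordinary upload and one "a character at a time" connection.
    sample = [
        Connection("198.51.100.7", open_seconds=45.0, bytes_received=90000),
        Connection("203.0.113.9", open_seconds=45.0, bytes_received=45),
    ]
    print(flag_slow_connections(sample))   # ['203.0.113.9']

Scaled across hundreds of thousands of concurrent connections, the same arithmetic is what lets purpose-built appliances raise an alert before memory or connection tables are exhausted.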
Agencies prep for arctic oil spills - By Kathleen Hickey - Sep 04, 2014 The ability to detect and manage oil spills in the Arctic is becoming more of an issue as oil and gas exploration, tourism and fishing are expected to increase in the area. In August, teams of researchers started on a month-long voyage to study how to deal with potential oil spills in the Arctic. Dealing with Arctic oil spills presents unique challenges. Normally the National Oceanic and Atmospheric Administration or Coast Guard would survey the ocean for the oil’s precise location from the air to improve its model of the oil’s expected behavior. However, teams may be unable to get aircraft to the location, or flying in the area may be unsafe. Additionally, a major cleanup would require a massive number of boats, airplanes, equipment and personnel. But infrastructure to transport and support those resources is lacking in the Arctic, which has scarce numbers of roads, airports, hotels and other critical items. As an example, the BP oil spill in the Gulf of Mexico required 13,000 people, 520 vessels, 1.4 million feet of boom and a half million barrels of dispersants in the first three weeks after the incident, according to the Pew Charitable Trust. In total, the spill required 40,000 people, according to NOAA. If an oil spill occurs in the Beaufort Sea, north of Alaska, the nearest and largest community is Barrow, population 4,429. Spotty communications in remote locations adds to the difficulty of coordinating efforts and understanding what is happening when time is of the essence. The Arctic Shield 2014 expedition gave government, industry and university researchers the ability to study an oil spill under real world conditions. Today scientists have virtually no experience with Arctic oil spills – most research has been done in labs or in small scale trials. NOAA partnered with the U.S. Coast Guard Research and Development Center for the testing, travelling aboard the USCG Cutter Healy, an icebreaker, to remote Arctic locations. Once the Healy was far enough north, the Coast Guard simulated an oil spill using fluorescein dye, an inert tracer, to test oil spill detection and recovery technologies in ice conditions. The team also tested unmanned airborne and underwater vehicles and techniques to collect data on ocean conditions, such as temperature and currents in the areas where oil is mixing and spreading in the water column. The Coast Guard and Marine Exchange of Alaska teamed to test the capabilities of existing electronic Maritime Safety Information infrastructure in the Arctic as part of Arctic Shield 2014. And the Navy tested its Mobile User Objective System (MUOS) next-generation narrowband military satellite communications system, which is designed to provide smartphone-like communications to mobile forces. Other areas of research included boat operations, communications, navigational safety and engineering improvements for Coast Guard boats in a cold weather environment. One of NOAA’s goals was testing the Arctic Environmental Response Management Application (ERMA), a stand-alone computer model for use in remote locations. ERMA is a web-based GIS tool that helps emergency responders and environmental resource managers in dealing with potentially harmful environmental incidents. 
Arctic ERMA is an Arctic-specific version of the mapping tool that integrates data streams from the Arctic Shield technologies to provide a centralized visual representation, according to Zachary Winters-Staszak, a geographic information systems specialist with NOAA's Response and Restoration office. ERMA is normally an online tool, but scientists will be using an independent version, known as Stand-alone ERMA, for gathering scientific data during the voyage, according to NOAA's Office of Response and Restoration. Radio tower and satellite connectivity, needed for Internet access, is extremely limited in the Arctic. Above the 77th parallel, there are few radio towers, and satellites often drop calls and can only support basic text email, according to a blog post from NOAA's Office of Response and Restoration.

During Arctic Shield, NOAA tested the feasibility of using unmanned, remote-controlled aircraft to collect and report information back to responders on the ship, and then pulling this information into Arctic ERMA, which displays the data. In addition to NOAA and the USCG, there were 13 other agencies and universities participating in Arctic Shield 2014, including US NORTHCOM, US Space and Naval Warfare Systems Command, and the U.S. Army Corps of Engineers Cold Region Research and Engineering Laboratory.

ERMA is available at the NOAA ERMA website, and additional information on how to use the tool is available from NOAA.
With private-sector backing, California’s state government and the state’s flagship public university are teaming up to develop an intelligent transportation solution that will help drivers avoid congested traffic. The California Department of Transportation (Caltrans), the University of California, Berkeley’s California Center for Innovative Transportation, and IBM Research hope to improve the reliability of estimated commute times, and give drivers personalized travel recommendations that save time and fuel. U.S. commuters waste 28 gallons of gas and $808 each year because they are stuck in traffic, according to the IBM announcement of the intelligent transportation project. Traffic snarls are notoriously acute in California. The average person in Los Angeles wasted 38 hours per year in highway traffic jams, according to 2007 U.S. Bureau of Transportation Statistics. In San Francisco, it was 30 hours; in San Diego, it was 29 hours. “As the number of cars and drivers in the Bay Area continue to grow, so too has road traffic. However, it’s unrealistic to think we can solve this congestion problem simply by adding more lanes to roadways, so we need to proactively address these problems before they pile up,” said Greg Larson, chief of the Office of Traffic Operations Research for Caltrans. The collaborative research team hopes to give California reliable real-time traffic information before drivers get behind the wheel. “Even with advances in GPS navigation, real-time traffic alerts and mapping, daily commute times are often unreliable, and relevant updates on how to avoid congestion often reach commuters when they are already stuck in traffic and it is too late to change course,” according to IBM. The company said its researchers have developed a new traffic modeling tool for travelers that continuously analyzes existing congestion data, commuter locations and expected travel start and arrival times throughout a metropolitan region for a variety of transportation modes, including mass transit. The tool could someday recommend the most efficient travel route and also integrate parking information.
AAA Secures IBM i Server
July 21, 2010 Timothy Prickett Morgan

The IBM HTTP Server for i, powered by Apache, has three distinct ways to handle whether a particular request for a resource will result in that resource actually being returned. These three techniques are access control, authentication, and authorization, or AAA. In this article, I'll share how AAA works within IBM HTTP Server for i.

First A: Access Control

Access control refers to any means of controlling access to any resource. This A is distinct from authentication and authorization. IBM HTTP Server for i uses Allow and Deny directives to implement access control, and the Order directive specifies the order in which those filters are applied. Let's see how access control works.

First, you need to create an HTTP server. With IBM Web Administration for i, you can quickly create an HTTP server; for the details, see the section on "Create HTTP Server" in the IBM i information center. After your HTTP server is created, check the configuration file, which should now appear as /www/conf/<instancename>/httpd.conf. In your case, <instancename> will be the HTTP server name that you created. You will see a default protection block (reconstructed in the sketch at the end of this section) indicating that the HTTP server, by default, prevents any client from seeing the entire file system. Whether these clients are valid or not does not matter. This illustrates that access control is a separate item from authentication and authorization.

Second A: Authentication

Authentication is any process by which you verify that someone really is who they claim to be. This usually involves a user name and a password. IBM i uses validation lists to implement authentication. A validation list is an IBM i object of type *VLDL. Each validation list contains a list of Internet users and their passwords, and each Internet user has one valid password defined for it.

To see how authentication works, we continue our example based on the HTTP server we created. We must follow three steps: create a validation list, add Internet users to it, and point the server configuration at it. Validation lists can be created and deleted with CL commands (CRTVLDL and DLTVLDL). After a validation list is created, you can add an Internet user by using IBM Web Administration for i. Figure 1 shows how to use IBM Web Administration for i to add an Internet user to the validation list; the Group File and Group fields shown there will be covered in the Authorization section.

After creating the validation list and adding Internet users, the next action is to set the configuration to use this validation list. In our example, the HTTP server we created is pigm, the particular resource we need to protect is the directory /www/pigm/protected, and basic authentication, the simplest method of authentication, is adopted. The validation list we specify is QGPL/PIGM. Edit the protection directives for this directory in the HTTP server configuration file /www/conf/<instancename>/httpd.conf (a reconstructed example appears in the sketch at the end of this section).

Now, let's take a look at how basic authentication works. When a particular resource has been protected using basic authentication, HTTP Server sends a 401 Authentication Required header with the response to the request, in order to notify the client that user credentials must be supplied for the resource to be returned as requested. Upon receiving a 401 response header, the client's browser, if it supports basic authentication as IE and Firefox do, will pop up a box to ask the user to supply a user name and password to be sent back to the server.
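The httpd.conf excerpts this walkthrough refers to did not survive in this copy of the article, so the following is a reconstructed sketch rather than the author's original listing. The <Directory> container and the Order, Deny, Allow, AuthType, AuthName and Require directives are standard Apache syntax; the PasswdFile directive naming a validation list is specific to IBM HTTP Server for i, and its exact spelling and placement should be confirmed against IBM's documentation.

    # Default access control: every client is refused access to the
    # root of the file system, whether or not it could authenticate.
    <Directory />
        Order Deny,Allow
        Deny From all
    </Directory>

    # Basic authentication for the protected directory, checked against
    # the QGPL/PIGM validation list (PasswdFile is the IBM-specific part
    # of this sketch; verify it in the IBM i documentation).
    <Directory /www/pigm/protected>
        Order Allow,Deny
        Allow From all
        AuthType Basic
        AuthName "Protected area"
        PasswdFile QGPL/PIGM
        Require valid-user
    </Directory>

With a block like the second one in place, any request under /www/pigm/protected triggers the 401 challenge described above, and only a user name and password pair present in the validation list will satisfy it.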
If the user name is in the validation list, and if the password supplied is correct, the resource will be returned to the client. Apart from validation list authentication, the IBM HTTP Server for i also provides other authentication methods. IBM i user profile authentication is one of them. You can specify IBM i user profile authentication by just replacing the following line: The new line is: Using this value indicates that the server should use the IBM i User Profile support to validate user name and password. Third A: Authorization Authorization is any process by which someone, once identified, is permitted to use the resource. In the example above, all of the valid users specified in the validation list have authority to access a protected resource, but can we only allow the specific person or group to access it? The answer is yes. The IBM HTTP Server for i uses validation lists in conjunction with other resources, like group files, to limit access to server resources. You can use validation lists in conjunction with group file to manage a group of people that have access to that resource. You can add and remove members, without having to edit the server configuration file and restart IBM HTTP Server for i each time. Next, we combine authentication and authorization by executing the following steps: The first step is the same as above. The second step is optional. You can use the group file API to create the group file. For the third step, remember to specify the group file and group when you try to add Internet users. Figure 2 shows how to use IBM Web Administrator for i to add an Internet user to a group and a group file. If you enter a group file that does not exist, the system will create it for you. I create a sample group file /home/pigm/groupfile, in which two groups–g1 and g2–are defined. Then I add three Internet users: PIGM, Bob (who belongs to group g1), and James (who belongs to g2). Here are the contents of the lists: g1: PIGM, Bob g2: James The last step is to set the configuration to use this validation list and group file. Once this file has been created, we can require that someone be in a particular group, say g1 in our example, in order to get the requested resource. This is done with the GroupFile directive, as shown in the following example. Again, edit the following lines in the HTTP server configuration file: /www/con/<instancename>/httpd.conf. The directives are defined as follows: In this example, we can see all of three users are defined in the validation list. However, only the user PIGM and Bob, both of whom belong to group g1, have authority to access the protected area, whereas the user James will be denied even though he also exists in the validation list. Here these two criteria, Authentication and Authorization, work together to limit access to server resources. Now, you are armed with the knowledge of how to leverages Access control, Authentication, and Authorization, the AAA techniques to provide a powerful security module for IBM HTTP Server for i. Pi Guang Ming is a software engineer for IBM’s i Web integration development team at the China System and Technology Lab. The i Web integration development team’s focus is on the Web-based management of middleware running on i, including WebSphere Application Server, WebSphere Portal Server, Integrated Web Services Server, Integrated Application Server, and the i HTTP server. Send your questions or comments for Jon to Ted Holt via the IT Jungle Contact page.
Level of Govt: State
Problem/situation: To preserve wetlands, regulatory agencies need to ascertain the location and extent of the protected areas.
Solution: GIS database accurate to within an acre.
Jurisdiction: New Jersey
Vendors: ESRI, Markhurd, Greenhorne & O'Mara.
By Bill McGarigle
Special to Government Technology

Few areas in the nation have lost more wetlands than the state of New Jersey. The major causes are a growing population, expanding development and, until recently, a lack of effective resource management tools. The problem is compounded by New Jersey's geography. According to the state's Land Use Regulation Program administrator, Ernest Hahn, "If you get any size parcel in New Jersey, you'll probably have some wetlands associated with them - there are no vast areas of the state that do not have wetlands resources, even in the hilly country of North Jersey."

The state's first attempt to stem the loss of coastal wetlands was the Wetlands Act of 1970. "They passed that act," explained Hahn, "because we were having wholesale destruction of those areas as a result of houses built out on the marshes. [Developers] would just fill in the marsh and push out a new road. Everybody would get a lagoon and a waterfront lot." The Wetlands Act of 1970 required the state to completely map all coastal wetlands and create maps that could be used as a regulatory tool in screening applications. However, wetlands and their associated wildlife habitats continued to disappear. Checking the decline required more personnel and state-of-the-art resource-management tools than budget-strapped states and the federal government could provide. Despite the National Clean Water Act, cutbacks in the federal budget had left the Army Corps of Engineers (ACOE), Environmental Protection Agency (EPA) and National Fish and Wildlife (NF&WL) Service with insufficient resources to adequately protect and enforce regulation of New Jersey's freshwater wetlands. In 1987, the only graphic resources available were the National Fish and Wildlife Service inventory (NWI) maps, at a scale of 1:24,000 - excellent for some applications, but not a large enough scale for regulatory purposes. For zoning, management and the protection of wildlife habitats, planning boards need maps with linear features down to an acre.

In 1986, the accelerating loss of wetlands prompted then-Gov. Thomas Kean to put a moratorium on building in the state's wetlands until the Legislature came up with a bill that gave New Jersey the resources and authority needed to regulate and protect its own wetlands resources. New Jersey lawmakers responded with the Freshwater Wetlands Protection Act (FWWPA) of 1987. Under the Clean Water Act, the federal government has the authority to regulate and permit wetlands throughout the nation. However, Section 404 of the act grants states with sufficient resources the right to assume these responsibilities. In view of the limited resources of the ACOE, EPA and USF&WL services, New Jersey lawmakers felt the state could do a better job regulating and monitoring its own wetlands. By the mid-1980s, the New Jersey Department of Environmental Protection (NJDEP) had an ArcInfo Geographic Information System (GIS) from Redlands, Calif.-based Environmental Systems Research Institute (ESRI) in place, and strict regulations protecting and permitting wetlands. The state then assumed self-regulation of its wetlands under a mandate from the FWWPA, referred to as "Assumption 404."
Another mandate under the FWWPA charged NJDEP with developing a comprehensive inventory of the state's freshwater wetlands. The purpose: to provide agencies with a statewide planning tool for early detection and assessment of changes in wetlands. The project called for mapping and classifying over 620,000 acres in the state. Begun in 1988 as a three-year project, the effort saw state administrators and contractors struggle through delays caused by budget cutbacks, years of unstable funding and unusually heavy snowfall that delayed the color infrared aerial photography. The project was finally completed last spring at a cost of $3.7 million.

The prime contractor for the freshwater wetlands mapping project was Markhurd of Minneapolis. The company was responsible for developing digital ortho quarter-quad (DOQQ) basemaps and delineation overlays in ArcInfo format, and conventional photo basemaps on Mylar with superimposed wetlands delineations. To date, New Jersey is only one of two states to have complete DOQQ coverage of the entire state. The subcontractor, Greenhorne & O'Mara, of Greenbelt, Md., provided photo analysis and interpretation, field verification, and a signature key to identify and classify ground features.

The 1:58,000-scale aerial photography, taken by the National Aerial Photographic Program, was shot in March and April of 1986 and 1992, during the wettest ground conditions and before the spring leaf-out. Color infrared (CIR) imagery was chosen specifically for its ability to discriminate vegetation types and various levels of soil saturation. The aerial photos provided the images from which the DOQQ basemaps were developed. Each quarter quad covers an area of roughly 12.5 square miles and has a minimum mapping unit of one acre, enabling analysts to delineate all wetland features down to 10 feet in width.

NJDEP's contract manager for the project, Bob Cubberly, said that since the quarter-quad basemaps were digitized for GIS applications, they had to fit together like puzzle pieces. "We had to have a cartographic base that produced a seamless edge match. That's where the prime contractor, Markhurd, came in. They considered the whole state in the solution, instead of in blocks or strips, so all the map edges match; the corner coordinates of all the quarter quads are shared between adjacent maps." A total of 624 quarter quads were required to cover the entire state. Following extensive quality-control processes, digitized composites of the basemaps with wetland delineations were delivered to the NJDEP in ArcInfo format, along with a database of all field information, including post-processed GPS data recorded by field verification teams. Other deliverables included hard copies of basemaps, Mylar composites of the basemaps and delineations, and acetate copies of the wetlands delineations only.
The databases are also being used in emergency preparedness; officials can model the probable extent of an oil spill in a waterway, identify areas that will be affected, and determine appropriate responses. As for the overall cost of the mapping project, some of that will be recovered through map sales to developers, local Realtors and property owners. Ernest Hahn stresses that the GIS is not a regulatory tool. "There are some people who would like to see us going into a GIS terminal, bringing up a map and regulate from the computer. While the GIS has valuable applications in resource management, the bottom line is that most of the calls we make just can't be made off the computer screen. You're looking at things like endangered and threatened species habitats, whether or not they are connected to surface-water features," he said. "Those are really the things that need to be looked at in the field." The reason, he said, is that connections between habitats are analyzed. "You can only get to a certain level of detail with remote sensing," he said. Comparing the present program with previous federal and state efforts, Hahn said "we have a much more comprehensive program. We've thrown a lot more people at it than the Army Corps of Engineers did. They didn't have the resources to adequately enforce the program. We believe that we do. "As a result, other [state] resource agencies will get a better handle on what's going on because we do a more thorough investigation of all the permits that come in front of us," Hahn said. "So the advantage is twofold: one, it streamlines the process, and two, we're more effective in protecting the resources of the state." Bill McGarigle is a freelance writer residing in Santa Cruz, Calif.
Big data is the buzzword in the IT industry these days. While traditional data warehousing involves terabytes of human-generated transactional data to record facts, big data involves petabytes of human- and machine-generated data to harvest facts. Big data becomes supremely valuable when it is captured, stored, searched, shared, transferred, deeply analyzed and visualized. The platform that is frequently cited as the enabler for all of these things is Hadoop, the open source project from Apache that has become the major technology movement for big data. Hadoop has emerged as the preferred way to handle massive amounts of not only structured data, but also the complex petabytes of semi-structured and unstructured data generated daily by humans and machines.

The major components of Hadoop include the Hadoop Distributed File System (HDFS) as well as an implementation of MapReduce. HDFS distributes and replicates files across a cluster of standardized computers/servers. MapReduce parses the data into workable portions across the cluster so they can be processed concurrently, based on a map function configured by the user. Hadoop relies on each compute node to process its own chunk of data, allowing for efficient "scaling out" without degrading performance. Hadoop's popularity is largely due to its ability to store, analyze and access large amounts of data quickly and cost-effectively across these clusters of commodity hardware. Some use cases include digital marketing automation, fraud detection and prevention, social network and relationship analysis, predictive modeling for new drugs, retail in-store behavior analysis, and mobile-device location-based marketing, across an almost endless variety of verticals. Although Hadoop is not considered a direct replacement for traditional data warehouses, it enhances enterprise data architectures with the potential for deep analytics to attain the true value of big data.

When building and deploying big data solutions with scale-out architecture, cloud is a natural consideration. The value of a virtualized IaaS solution, like our own AgileCLOUD, is clear: configuration options are extensive, provisioning is fast and easy, and the use cases are wide-ranging. When considering hosting solutions for Hadoop deployments, shared public cloud architectures usually have performance trade-offs to reach scale, such as I/O bottlenecks that can arise when MapReduce workloads scale. Moreover, virtualization and shared tenancy can impact CPU and RAM performance. Purchasing larger and larger virtual instances or additional services to reach higher IOPS to compensate for those bottlenecks can get expensive and/or lack the desired results.

Hence the beauty of on-demand bare-metal cloud solutions for many resource-intensive use cases: disks are local and can be configured with SSDs to achieve higher IOPS. RAM and storage are fully dedicated, and server nodes can be provisioned and deprovisioned programmatically depending on demand. Depending on the application and use case, a single bare-metal server can support greater workloads than multiple similarly sized VMs. Under the right circumstances, the use of both virtualized and bare-metal server nodes can yield significant cost savings and better performance.
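To make the map-and-reduce flow concrete, here is a minimal, illustrative Python sketch of the classic word-count job. It is written in the spirit of Hadoop Streaming (a mapper and reducer working on key/value pairs), but the tiny driver at the bottom simulates the shuffle locally so the example runs on its own; the sample input and function names are assumptions for illustration, not taken from the article.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map step: emit (word, 1) for every word in the input lines."""
    for line in lines:
        for word in line.strip().lower().split():
            yield word, 1

def reducer(pairs):
    """Reduce step: sum the counts for each word.
    On a real cluster, the shuffle phase delivers the pairs grouped by key."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for the shuffle/sort a Hadoop cluster would perform.
    sample = ["big data needs hadoop", "hadoop stores big data"]
    shuffled = sorted(mapper(sample), key=itemgetter(0))
    for word, total in reducer(shuffled):
        print(f"{word}\t{total}")
```

Because each mapper works only on its own chunk of input and each reducer only on one group of keys, adding nodes scales the job out horizontally, which is exactly the property the paragraph above describes.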
RIP (raster image processor)

A raster image processor (RIP) is a hardware or combination hardware/software product that converts images described in the form of vector graphics statements into raster graphics images or bitmaps. For example, laser printers use RIPs to convert images that arrive in vector form (for example, text in a specified font) into rasterized and therefore printable form. RIPs are also used to enlarge images for printing. They use special algorithms (such as error diffusion and stochastic screening) to provide large blow-ups without loss of clarity. (NOTE: This information comes from www.whatis.com.)

In a Lexmark context, a 900 RIP message means there is corrupted data being presented to the printer. This will generally be due to one of the following:

A hardware issue with the printer. To check this:
1. With the data cables detached, turn the printer off/on.
2. If there are any extra options installed (flash SIMM, extra memory, ImageQuick SIMM, etc.), take them out and repeat step 1.
3. Print a settings page from the printer. If this works, the printer is fine and you should investigate the points below.

A hardware issue with your network (bad cables, hub, switch, router, etc.). To resolve this:
- Try a different LAN drop, if possible.

Corrupted basecode software. To check this:
- Investigate from the macro to the micro level (for example, a particular workstation, print driver, program, document).
- Try printing a different document to the printer in question.

NOTE: If the RIP error message occurs intermittently, you should create an ERROR LOG before calling Technical Support.
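To illustrate the core idea of a RIP, turning a vector statement into pixels, here is a small, purely conceptual Python sketch that rasterizes a single line segment into a tiny bitmap using Bresenham's line algorithm. It is a toy illustration of the vector-to-raster conversion described above, not a representation of how any printer firmware is implemented.

```python
def rasterize_line(x0, y0, x1, y1, width=20, height=10):
    """Convert the vector statement 'line from (x0, y0) to (x1, y1)'
    into a bitmap, using Bresenham's integer line algorithm."""
    bitmap = [[0] * width for _ in range(height)]
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        bitmap[y0][x0] = 1          # set the pixel the line passes through
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return bitmap

if __name__ == "__main__":
    for row in rasterize_line(1, 1, 18, 8):
        print("".join("#" if px else "." for px in row))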
NASA today said its Wide-field Infrared Survey Explorer satellite has unearthed a "bonanza of newfound supermassive black holes and extreme galaxies called hot DOGs, or dust-obscured galaxies."

NASA said the latest discoveries help astronomers better understand how galaxies and the behemoth black holes at their centers grow and evolve together. "For example, the giant black hole at the center of our Milky Way galaxy, called Sagittarius A, has 4 million times the mass of our sun and has gone through periodic feeding frenzies where material falls towards the black hole, heats up, and irradiates its surroundings. Bigger central black holes, up to a billion times the mass of our sun, even may shut down star formation in galaxies," NASA said.

NASA defines a black hole as an object whose gravitational pull is so intense that nothing, not even light, can escape it once inside a certain region called the event horizon. "As gas and dust (or even entire stars) are sucked in, the material is accelerated and heated to very high temperatures. This in turn results in the emission of X-ray light. Black holes containing lots of nearby gas and dust such as this Perseus cluster galaxy produce tremendous amounts of X-ray light. Still more X-ray light is generated when some of the material swirling into the black hole doesn't fall in but rather is spit out at incredibly fast speeds (close to the speed of light). To understand why some material is spit out, think of the analogy of someone trying to eat too much food at once. Such a messy eater will have food fall from their mouth."

WISE, which was launched in 2009 and concluded its mission in 2011, scanned the sky twice in infrared light, capturing millions of images. All the data from the mission have been released publicly, allowing astronomers to dig in and make new discoveries like the ones announced this week, NASA said.

NASA said WISE images have revealed millions of dusty black hole candidates across the universe and about 1,000 even dustier objects thought to be among the brightest galaxies ever found. These powerful galaxies that burn brightly with infrared light are nicknamed hot DOGs. WISE easily picks out these monsters because their powerful, accreting black holes warm the dust, causing it to glow in infrared light, NASA stated. In one study, astronomers used WISE to identify about 2.5 million actively feeding supermassive black holes across the full sky, stretching back to distances more than 10 billion light-years away, NASA said.

WISE observations were also part of another key report NASA issued in May, which offered a better idea of the number of asteroids buzzing around in space. That report found that there are roughly 4,700 potentially hazardous asteroids, or, as NASA calls them, PHAs. NASA says these PHAs are a subset of a larger group of near-Earth asteroids but have the closest orbits to Earth's, passing within five million miles (about eight million kilometers), and are big enough to survive passing through Earth's atmosphere and cause damage on a regional, or greater, scale. NASA points out, too, that "potential" to make close Earth approaches does not mean a PHA will impact the Earth. It only means there is a possibility for such a threat. WISE looked at the objects that orbit within 120 million miles of the sun, into Earth's orbital vicinity, NASA said.
WISE scanned the celestial sky twice in infrared light between January 2010 and February 2011, continuously snapping pictures of everything from distant galaxies to near-Earth asteroids and comets. The asteroid-hunting portion of the WISE mission, called NEOWISE, has seen more than 100,000 asteroids in the main belt between Mars and Jupiter, in addition to at least 585 near-Earth objects, NASA noted. Specifically, NASA said WISE sampled 107 PHAs to make predictions about the population as a whole. Findings indicate there are roughly 4,700 PHAs, plus or minus 1,500, with diameters larger than 330 feet (about 100 meters). So far, an estimated 20 to 30% of these objects have been found, NASA stated. While previous estimates of PHAs predicted similar numbers, they were rough approximations, NASA said.
A scientist with a team of investigators for the Department of Homeland Security explains how a simulated chemical attack will take place throughout the subway system in Boston as part of a test on airflow.

Through the tunnels
The team is investigating how chemical or biological contaminants released into the air would travel through the subway system's underground tunnel network. Understanding how substances travel through the subway's five lines will help the MBTA Transit Police fine-tune evacuation plans to protect the subway's more than 1.3 million daily riders.

About 40 gas samplers and more than 25 particle counters placed throughout the underground system monitored the concentration of the tracer gases and particles. Equipment measures how hot, humid summer weather impacts the movement of airborne material. Tests were also conducted in winter to gauge temperature, humidity and other weather factors.

Analyzing contaminant travel
Monitoring and tracking equipment analyzes gas and particle concentrations, giving scientists an idea of how the chemicals travel and how quickly they disperse.

New plans may save lives
Data will be shared with first responders, who can use them in devising evacuation plans for riders, as well as to adjust ventilation and modify train movements after an attack or accidental release.
Dynamic NAT, which could allow several hosts to use the same public IP address at different times of the day, still translates on a "one-to-one" address basis. That is, each inside local address (usually private) being actively translated requires one global address (usually public). In PAT (Port Address Translation, also known as "overloading"), many inside local addresses are simultaneously mapped to one inside global address (that is, the global address is "overloaded"). Thus, PAT is a "many-to-one" translation scheme. To configure PAT, the syntax is:

- Router(config)#ip nat inside source list 1 interface serial 0/0 overload

The translation tells the router that if a packet with source address matching a permit in ACL 1 hits the inside interface, and it is bound for the outside interface, translate the source address to the address of the Serial0/0 interface. Thus, all translated traffic has the same source address (note the keyword "overload"), and no pool is required.

What happens, then, when the return traffic hits the router? If it's all destined for the same address, how does the router know to which local address the destination address of the returning packet should be translated? The key to PAT is that, unlike with dynamic NAT, the port numbers are also tracked, and, if necessary, manipulated. Remember that when an application using TCP or UDP starts, it is assigned a port number by the IP stack. Specifically, server-side apps are assigned "well-known" ports below 1024. Examples are TCP 23 for Telnet, TCP 80 for HTTP (web service), and UDP 69 for TFTP. Client-side apps are assigned random port numbers in the range 1024 and above.

For example, let's say that host 10.0.0.1 (the client) initiates a Telnet session with a host at address 126.96.36.199 (the server). The client process on host 10.0.0.1 will be assigned a TCP port by host 10.0.0.1's IP stack, which we'll assume is 2000 (and, of course, the Telnet server at 126.96.36.199 is using TCP port 23). When the traffic from 10.0.0.1 hits the inside interface and is bound for the outside interface, it's checked against the ACL. Let's assume that 10.0.0.1 is permitted by the ACL, so the translation occurs. Since no corresponding entry yet exists in the translation table, the inside local address and port number will be entered (10.0.0.1:2000). For the corresponding inside global address, that of Serial 0/0 will be used (let's assume that it's 184.108.40.206), and the port number will be unchanged (184.108.40.206:2000), unless that port number already appears in the inside global list. If the port number is already in use by another host, the port will be changed to a value that is not already in use (the algorithm for this is implementation-specific).

As with dynamic NAT, timeouts are used to free up unused translation entries for ICMP and UDP. The defaults are 300 seconds for UDP and 60 seconds for ICMP, but they can be configured. TCP also has a fail-safe timeout of 24 hours.

Finally, what if you want to use a pool (dynamic NAT), but switch to PAT if the addresses in the available pool are exhausted? In this case, you combine the "pool" and "overload" options, like this:

- Router(config)#ip nat inside source list 1 pool test overload

The effect of this is implementation-specific, but in my experience a Cisco router will allocate the pool addresses in ascending order, and then overload on the last address if necessary.

Author: Al Friebe
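The port bookkeeping described above can be pictured as a simple two-way lookup table. The following Python sketch is purely illustrative (it is not Cisco IOS code, and the class name, fallback port range and addresses are invented for the example); it shows how an overloaded router could map inside sockets to global ports on the way out and translate returning packets back by destination port.

```python
import itertools

GLOBAL_IP = "184.108.40.206"   # the single inside global address (Serial0/0 in the example above)

class PatTable:
    def __init__(self):
        self._out = {}                          # (inside_ip, inside_port) -> global_port
        self._back = {}                         # global_port -> (inside_ip, inside_port)
        self._spare = itertools.count(20000)    # fallback ports when a collision occurs

    def translate_outbound(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self._out:
            # Keep the original source port if it is free, otherwise pick an unused one.
            port = inside_port if inside_port not in self._back else next(self._spare)
            self._out[key] = port
            self._back[port] = key
        return GLOBAL_IP, self._out[key]

    def translate_inbound(self, global_port):
        # Return traffic is mapped back to the inside host by destination port alone.
        return self._back.get(global_port)

if __name__ == "__main__":
    table = PatTable()
    print(table.translate_outbound("10.0.0.1", 2000))   # ('184.108.40.206', 2000)
    print(table.translate_outbound("10.0.0.2", 2000))   # port collision, so ('184.108.40.206', 20000)
    print(table.translate_inbound(20000))                # ('10.0.0.2', 2000)
```

The second outbound call shows the collision case the article mentions: the original port 2000 is already in use by another host, so the translation picks a different, unused global port, and the reverse map is what lets the router deliver the reply to the right inside host.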
Multihoming ISP Links

Today, problems associated with ISP link availability continue to cause organizations to lose millions of dollars each year. However, deploying a solution that is cost effective and operationally efficient can also be a challenge. The following are four alternatives for facilitating multihomed networks.

1. Border Gateway Protocol
Typically, larger organizations multihome their sites with two links from two separate ISPs, using Border Gateway Protocol (BGP) to route across the links. While BGP can provide link availability in the case of a failure, it is a slow and complex routing protocol. It is costly to deploy because it requires special Autonomous System (AS) numbers from the ISPs and it requires router upgrades to be installed. BGP is also not well suited to providing multihoming and intelligent link load balancing. In the case of a failure, ISP cooperation is often required for link recovery. In general, BGP causes long and unpredictable failover times, which will not meet high-availability requirements.

2. WAN link load balancing
Also known as multihoming, WAN link load balancing is a session-based process of directing Internet traffic among multiple and varied network connections. It requires a single WAN link controller located at the main site between the gateway modems/routers and the internal network. It intelligently load balances and provides failover for both inbound and outbound traffic among the network connections. Assuming there are two ISP connections, both network connections can be used at the same time. The benefit here is that you don't pay for bandwidth that is only used as a backup for when an outage occurs. For example, traffic will go through network connection number one. If the WAN link controller detects that connection number one is overtaxed or has failed, it will direct users across the second ISP connection. Intelligent WAN link controllers will continuously spread the traffic across the network connections based on the available resources. For example, with two T1s, the controller will not wait until the first T1 is overutilized before sending traffic out the second WAN link; it will make use of both lines evenly. Note, however, that having two 1.5Mbps network connections does not give users 3Mbps of bandwidth for a single session. You would have 3Mbps of total available bandwidth, but only 1.5Mbps of throughput could be dedicated to any individual session, because with WAN link load balancing each session uses only one ISP connection at a time.

3. Site-to-site channel bonding
Site-to-site channel bonding is a form of WAN link load balancing with a different approach that can increase the total combined network bandwidth of multiple network connections between two locations. This approach requires a WAN link controller at the main site and also at the remote site. Unlike WAN link load balancing, site-to-site channel bonding conducts continuous health checks (up and down status) of the network connections in use, and uses packet-based load balancing to distribute traffic across all network connections. However, with site-to-site channel bonding, two 1.5Mbps network connections will equal approximately 3Mbps, providing all traffic with the combined throughput from the multiple network connections.

4. Multiple links from a single ISP
Organizations can also multihome their sites with two WAN links from the same ISP.
While this solution may cost less to deploy, it is not a very efficient one, as an outage at the ISP will still cause a network failure, or at least create a bottleneck when both links are unavailable or oversubscribed. For greater WAN redundancy, it is best to use two or more different ISPs and load balance and provide failover for traffic among them.
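To picture the session-based behavior described in option 2, here is a small, illustrative Python sketch. The link names, the health-check flags and the hashing scheme are invented for the example and are not taken from any vendor's controller; the point is simply that each session is pinned to one healthy link, so a single flow never exceeds that link's capacity, while different sessions spread across all available links.

```python
import hashlib

class WanLinkController:
    def __init__(self, links):
        self.links = links                           # e.g. ["isp_a", "isp_b"]
        self.healthy = {link: True for link in links}

    def mark_down(self, link):
        """Health check failed: stop assigning new sessions to this link."""
        self.healthy[link] = False

    def pick_link(self, src_ip, dst_ip, dst_port):
        """Pin a session (identified by its address tuple) to one healthy link."""
        candidates = [l for l in self.links if self.healthy[l]] or self.links
        key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
        index = int(hashlib.sha256(key).hexdigest(), 16) % len(candidates)
        return candidates[index]

if __name__ == "__main__":
    ctrl = WanLinkController(["isp_a", "isp_b"])
    print(ctrl.pick_link("10.0.0.5", "203.0.113.9", 443))   # sessions spread by hash
    ctrl.mark_down("isp_a")                                   # simulate an outage
    print(ctrl.pick_link("10.0.0.5", "203.0.113.9", 443))   # new sessions fail over to isp_b
```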
Public key encryption was invented 30 years ago, but even now most email is not encrypted. The Josephson junction was an inevitable technology in the early 1980s. Is quantum computing inevitable in the same way? This isn't to say that none of these projects had an impact. However, what they delivered was far less than what was promised. It's easy to get sidetracked by the reasons why these technologies failed (consumer resistance, the economics didn't make sense, technical difficulties, and so on), but these reasons are beside the point. These technologies were all big in their day. There was lots of buzz. The experts predicted great things for them. But history shows that most "inevitable" technologies aren't. In fact, most new products fail. The moral of the story? Be skeptical next time you hear about the "Next Big Thing."

Bob Seidensticker is an engineer who writes and speaks on the topic of technology change. A graduate of MIT, Bob has more than 25 years of experience in the computer industry. He is author of Future Hype: The Myths of Technology Change and holds 13 software patents.
In this formatted SD card recovery case study, the client lost critical files from their SD card. The photos and videos on their SD card were irreplaceable—one could say they were "crucial". When the client had gone to transfer the files over to their computer, the computer had failed to recognize the device and prompted them to reformat it. A simple accidental keystroke was all it took to completely reformat the little 2 GB Crucial memory card. By the time the client had realized what had happened, it was too late. The client came to us for our SD card data recovery services.

Formatted SD Card Recovery Case Study: Crucial SD Memory Card
- Device Manufacturer: Crucial
- Device Capacity: 2 GB
- File System: FAT32
- Situation: SD card was accidentally reformatted
- Type of Data Recovered: Photos and videos
- Binary Read: 99.9%
- Gillware Data Recovery Case Rating: 9

In our data recovery technicians' evaluation of the 2 GB Crucial SD card, we found a handful of bad cells on the card. SD cards do not have physical "sectors" in the same sense that hard disk drives do, since they have no platters. Inside an SD card is a small NAND chip on a circuit board, along with a controller chip. Instead of storing data on tiny sectors of a magnetic disk, NAND flash memory stores it in tiny NAND transistor cells. Essentially, the data is stored electronically, instead of magnetically. However, to the end user, flash memory devices and hard drives appear to store their data in the same way. A few of these bad cells had prevented the client's computer from recognizing the device. When the client accidentally reformatted the SD card, those cells got skipped over, and the client was left with a pristine—and empty—memory card. This was not what the client had intended. But fortunately, Gillware could help.

Formatted SD Card Recovery – Small Change, Big Consequences
A reformat is a small change with big consequences. Whether you do it to a hard drive, an SSD, or an SD card, it tends to play out in the same way. There are only a relative handful of sectors or cells on a data storage device that govern how its filesystem behaves. A partition table on the "front" of the device tells the reader how many partitions to expect. SD cards and other small devices tend to have just one partition. Each partition is further defined by a superblock, which lays out the rest of the device's architecture. This SD card used FAT32, as do many small portable storage devices, since it is compatible with just about every computer operating system and has seen wide use for 20 years. The ground rules for this partition, what its directory structure looks like, and where its files live can all be found using the data in the superblock.

When the SD card was accidentally reformatted, the old partition table and superblock for its FAT32 partition were completely overwritten. When you reformat most devices, only the parts that govern the device's architecture change. In most cases, everything else is left alone. Reformats can cause file corruption, and any data subsequently written to the device can overwrite the old data. But for the most part, the old data tends to be intact, especially if the user was prudent enough to immediately stop using the device.

Crucial SD Card Reformat Data Recovery – Conclusion
Because of the few bad cells, our engineers couldn't get a full read of the SD card's flash memory chip. We had to settle for a 99.9% read. The client's Crucial SD card had lost some of its filesystem information due to these bad cells.
The rest had been wiped out by the reformat. This had an additional effect of destroying the file directory structure on the client’s memory card. Every file system uses a directory structure to organize itself. For example, the folders on your hard drive, which help you navigate through your files, are a part of your drive’s directory structure. Without the directory structure, files can appear to be in a jumbled and disorganized mess, or simply nonexistent. To recover the client’s data, our data recovery technicians had to sift through the SD card’s binary data, on the prowl for common file headers. This is our last resort technique for data recovery if the directory structure is missing or heavily damaged. The raw analysis proved fruitful, and we were able to find over 1,000 photos and videos for the client. We showed the client the results of our labor, and they were very happy to look through the recovered files. The vast majority of the client’s crucial photos and videos on their Crucial memory card had been perfectly recovered. We rated this formatted SD card recovery case a 9 on our ten-point case rating scale.
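Header-based carving of the kind described above can be illustrated in a few lines. The following Python sketch is a simplified illustration only (the input file name, the in-memory read and the fixed marker handling are assumptions, and real recovery tools deal with fragmentation and many more formats); it scans a raw dump for JPEG start-of-image markers and writes out each candidate file up to the matching end-of-image marker.

```python
SOI = b"\xff\xd8\xff"      # JPEG start-of-image marker (common file header)
EOI = b"\xff\xd9"          # JPEG end-of-image marker

def carve_jpegs(raw_image_path):
    """Scan a raw dump for JPEG headers and write out candidate files."""
    with open(raw_image_path, "rb") as f:
        data = f.read()

    count = 0
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start + len(SOI))
        if end == -1:
            break
        with open(f"carved_{count:04d}.jpg", "wb") as out:
            out.write(data[start:end + len(EOI)])
        count += 1
        start = data.find(SOI, end + len(EOI))
    return count

if __name__ == "__main__":
    print(carve_jpegs("sdcard.img"), "candidate JPEGs carved")
```

Because this approach ignores the (destroyed) directory structure entirely and relies only on the file contents themselves, it can recover files even after a reformat, which is exactly why it works as a last resort.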
TURNING GRASS INTO CAR PARTS COVENTRY, England -- Researchers at the University of Warwick are working with elephant-grass farmers to study methods for using the hardy plant to make biodegradable car parts. The farmers and researchers already use elephant grass -- which produces bamboo-like canes up to three feet tall -- as structural filler in plastic car parts such as wheel rims. Short pieces of the cane are used to strengthen biodegradable plastics that would otherwise be too weak for automotive applications. The group of farmers has formed a company, Biomass Industrial Crops Limited, to market the plants potential and sell products. The organization believes that auto manufacturers will be interested in elephant grass as part of an environmentally friendly solution for disposing of vehicles when they go kaput. The car parts can simply be composted at the end of their life rather than taking up space in landfills. A ROBOT FOR ANY OCCASION PALO ALTO, Calif. -- Researchers at Xeroxs Palo Alto Research Center (PARC) have finished work on a modular robot that changes shape -- transforming itself to crawl like a spider or slither like a snake -- to match its environment. PARCs work on modular robotics is based on building robots out of identical, single modules. Twenty modules make up the first version of the robot, known as a "Polybot," and PARC researchers foresee modular robots one day being built with hundreds or thousands of such modules. The cube-shaped modules are five centimeters on a side, said Mark Yim, Polybot project leader, noting that the seven-member research team anticipates modules getting as small as a grain of rice in the future. According to PARC, if the robot has to move across a level surface, it configures itself to move like a tractor tread; for going down stairs or climbing over an obstacle, it configures itself like a snake; and for moving across rough terrain, it changes into a shape that resembles a four-legged spider. "The whole idea with these systems is that they can do a lot of different things," said Yim. "Search and rescue is one potential application, and in this instance, you dont know what the rubble pile might look like so you dont necessarily know what type of robot will be the best. Other applications are planetary exploration and undersea mining exploration." For now, a human operator controls the Polybots configuration, Yim said, although the research team is working on programming the Polybot with a degree of autonomy so the robot can reconfigure itself. Yim anticipates achieving that capability in about one year, noting that his team is exploring the autonomous reconfiguration as much as possible because its impractical to have a human operator telling every single module of a 200-module Polybot what to do. "Were pushing toward the autonomous side, where you could give the Polybot higher-level commands," he explained. "In the search-and-rescue aspect, you might say, Go look through this rubble pile and see if you can find someone. The robot would go by itself, figure out where to go and when to reconfigure." At left: Palo Alto Research Centers module robot reconfigures itself to different forms for navigating different kinds of terrain. BROADBAND LASERS MAY COVER "LAST MILE" SEATTLE, Wash. -- Although many people covet broadband Internet access for their homes, hassling with telecommunications companies for DSL or cable-modem service is proving to be a battle neither for the faint-hearted, nor the impatient. 
A smattering of companies in the United States and Europe have hit on what they think is the solution: a technology known as free-space lasers. The companies use laser beams, instead of radio waves, to transmit data, video or sound. Various reports peg transmission speeds at anywhere from 5Mbps to 10Mbps, at prices that vary depending on what market a consumer lives in. Terabeam, based in Seattle, is one of the companies offering Internet services via free-space lasers, and The Wall Street Journal reported in late February that the company had three customers signed up in the city. Another company, Dallas-based Tellaire, is delivering 10Mbps LAN services in several cities, including Houston and Austin, Texas; Washington, D.C.; and New York. Laser companies contend they can offer data services much more quickly to customers than other telecommunications providers because they dont need to dig trenches, lay cable or even buy spectrum. VIRTUAL ASSISTED LIVING ST. PAUL, Minn. -- The states Department of Human Services (DHS) is taking technology to senior citizens, though not in the digital divide sense. DHS received a $1 million grant from the Bush Foundation last year to fund organizations that devise and evaluate new methods of combining health services and housing for senior citizens, with the ultimate goal of duplicating successful methods in communities across the state. Seven organizations will receive start-up funding from DHS this year to develop "virtual assisted living" programs to deliver services to seniors where they live, said Maria Gomez, assistant commissioner of continuing care. "We will begin to create a system where innovation is valued and innovation can be disseminated," Gomez said. "The idea is that if you do this consistently, in a number of years you will have a very dynamic and efficient system out there, rather than these sort of encrusted systems that are inflexible and cant change with time. Thats the philosophy behind doing it this way." She said the idea came from discussions about separating assisted-living services from dedicated physical locations that provide those services. "By putting together a package of supporting services, we can bring those services to any house, and then we have an assisted-living program without having to build a big building," she said. The state has plenty of senior housing, Gomez said, so senior citizens have a place to live. What they need is more health services. "Our seniors have a place to stay, and they want to stay where they are," she said. "Its just that they need more services to stay where they are. It makes sense for us to go this route, rather than building more and more buildings." One funding recipient, Good Samaritan Home Health Care in Windom, Minn., will use the money to provide assisted-living services via technology. The organization will use a TV monitor and camera connected to a telephone line to check on senior citizens in their homes. During the day, home health aides can use the system to make sure senior citizens are healthy or to pass along reminders of medication schedules. ONE FAST ELECTRIC CAR SAN DIMAS, Calif. -- When people consider buying an electric car, their motivation probably is not racing. But the tZero, an electric car manufactured by AC Propulsion , will give a Porsche a run for its money. In fact, the vehicle goes from zero to 60 in a remarkable 4.1 seconds and covers the quarter mile in 13.2 seconds, the company claims. 
The tZeros top speed is 90 mph -- which falls far short of what a Porsche can muster, but still should give speed junkies reason to take a look. The tZero sports a 200-horsepower, premium copper-rotor AC induction motor. It also has a 100-mile range at 60 mph, although the company notes that "making frequent use of the performance capability of the car can drop the range to 50 miles." Key to broadening the appeal of electric vehicles is making it easy for drivers to charge their cars batteries, the company says. AC Propulsions "reductive charger" allows owners to use the existing electric power infrastructure to recharge the tZero. Motorists can plug into any existing outlet, from common 110V/15A household sockets, to existing electric-vehicle conductive wall boxes, to 240V/80A commercial welding plugs. At its maximum power rating, the reductive charger provides a full charge in just one hour. While that zero-to-60 acceleration is impressive, the tZero really smokes when it accelerates from 30 mph to 50 mph -- which takes a mere 1.4 seconds. This sort of performance wont come cheap, though. Initial plans call for the car to be ready for delivery in 2002, and the sticker may indeed shock some people: The car will sell for $80,000, plus tax. E-VILLAGE PROMOTES TELECOMMUTING AND TECHNOLOGY FRESNO, Calif. -- A diverse team of experts is designing an electronic village as part of a master-planned community being built in the heart of the Central Valley. The premise of the e-village is to create a community for telecommuters and their families. Representatives from the California State University at Fresno, Chawanakee School District, Edison Utility Services, Sierra Foothills Public Utilities District, Caltrans, Nortel Networks and the Property Development Group are working together on the project. CSU Fresno is applying for a grant from Caltrans to create a definition of a telecommuter village. The university also will study how a telecommuting community affects road congestion and traffic trips, said Tom Wielicki, a CSUF professor of information systems. In addition, the university will use the e-village as a pilot to test online delivery of CSUF courses, he said. Planners want to build a community-learning center that will provide coursework from the preschool level to the university level. While the learning center and the e-villages attention to telecommuters are both beneficial, Wielicki believes other considerations may be more important. "If I were to describe how important the e-village is for the overall lifestyle and economy of the 21st century, I would say the concept of telecommuting -- making location irrelevant to what you do -- is going to revolutionize our lives, much more than the Internet and all the dot-coms put together," he said. "This will change the lifestyle of people in that work isnt going to be where you go. Its going to be what you do, regardless of where you are. E-village is a vehicle that will make it possible, more than all of the telecommuting programs weve had in this country." The task of wiring the village will fall to Nortel, which will design, install and support a high-speed voice, video and data network for an initial 500 housing units. UK GOVT. GIVES AWAY 12,000 PCs LONDON -- The British government launched in mid-March a 10-million pound plan (US$15 million) to give away 12,000 computers and peripherals to low-income households. 
The giveaway builds on an "alpha pilot" project launched last year in the Kensington area of Liverpool, in which several hundred families were offered a refurbished computer, printer and modem free of charge. The project, operated under the auspices of the governments Wired Up Communities program, aims to tackle the digital divide and will be paralleled by an initiative to give free PCs to needy schools. Citing research showing that professionals are three times more likely to use the Internet than those from semi-skilled or unskilled family backgrounds, Michael Wills, the governments learning and technology minister, said the gap between these citizens must be narrowed if Britain is to create a fair and prosperous society. The PC giveaway is not confined to inner-city areas; it also includes rural regions where low-income citizens need access to the Web. Computer distribution under the program will include Newham in east London, 750 households and a primary school; Framlingham in Suffolk, 1,500 homes and a school; east Manchester, 4,500 homes and several schools; Blackburn, 2,500 homes and five schools; Alston in Cumbria, 1,200 homes, isolated farms and three schools; and Brampton-upon-Dearne in South Yorkshire, 1,500 households and laptops for 265 primary school children. Wills said the government wants to avoid the development of a technology underclass. "That is why we are piloting innovative ways of getting technology to the most deprived sectors of society," he said. -- Newsbytes
You’ve probably heard the phrase about “canaries in a coal mine.” In the mid 1900s, a guy named John Haldane figured out that birds die pretty quickly when poisoned by carbon monoxide, after which coal miners started using them as early warning systems for toxic gas. We need the same for computer security. No defense is infallible, so organizations need digital canaries to warn us about poisoned networks. When you think about the layers of security your business needs, you probably think about firewalls, authentication systems, intrusion prevention, antivirus, and other common security controls. However, I suspect few think about honeypots. That’s a shame, as honeypots make perfect network security canaries, and can improve any organization’s defense. As an infosec professional, you’ve probably heard of a honeypot—a digital trap set to catch computer attacks in action. In essence, honeypots are systems that mimic resources that might entice an attacker, while in reality they’re fake systems designed to contain and monitor attacks. In the same vein, a honeynet is just a collection of different honeypots. There are many different varieties of honeypots, each designed to recognize and observe diverse types of attacks. Some catch network attacks (Honeyd), others catch web application attacks (Glastopf), and some are designed to collect and observe malware (Dionaea). You can check out The Honeynet Project for a fairly complete list of different kinds of honeypots. These different honeypots also have varying levels of depth. For instance, a low-interaction honeypot might just emulate basic network services, perhaps only presenting a service banner and command prompt, but not offering much interaction to potential attackers (making them easier for attackers to detect). Whereas, high-interaction honeypots can imitate full server systems, tricking hackers into carrying out their attacks further, allowing you to analyze them in depth. With all the different varieties to choose from, each with varying levels of capability, honeypots might sound a little over complicated and perhaps too cumbersome for a small organization. In fact, some of the research-focused ones are certainly overkill for anyone but security academics. However, you don’t need the most complex feature-packed honeypot for our simple purpose. A production honeypot is a relatively low maintenance system, primarily used to detect attacks (rather than fully emulate and analyze them). Production honeypots make great network canaries. Over the years, production honeypots have evolved and become much easier for the average Joe to deploy. While most honeypots began as command line Linux packages, requiring manual installation and configuration, new solutions have surfaced making these packages more user-friendly, even for Linux newbies. For instance, lately a number of Live CD distributions have come out specifically made for honeypots and honeynets. Rather than having to install a Linux distribution (distro) from scratch, and configuring everything yourself, these live honeypot distros have everything set up and ready to go. All you have to do is boot from a USB key or spin-up a virtual machine. Best of all, these honeypot distros are free. Three great examples include: HoneyDrive, Active Defense Harbinger Distribution (ADHD) and Stratagem. If the convenience of live honeypot distros wasn’t enough, newer honeynet projects have also made the older command line tools much easier to use. 
For instance, Project Nova adds a GUI, and many additional capabilities, to the trusty and popular Honeyd project. Nova makes Honeyd much more approachable to the average IT guy, making it dead simple for you to deploy a simple production honeynet in even the smallest organization. Better yet, Nova comes preinstalled in distros like ADHD, so all you have to do is boot ADHD, start Nova, and you are ready to experiment. With all these easy and free options, there’s little excuse not to at least try a honeypot. I suggest starting with the combination I mentioned above. Use the ADHD ISO to create either a bootable USB drive or virtual machine, spin it up, and give Nova a try. When you first boot ADHD, you’ll see a “Usage documentation” link on your desktop. Double-clicking it will bring up a file that shares all the information you need to know to get started with some of the honeypot packages, including Nova. Or just refer to this guide on how to get Nova started. If you run Nova with its default settings, it sets up three fake honeypot machines—a Linux server, Windows Server, and BSD Server—and it monitors them for network connections. These basic honeypots act like those canaries in coal mines, warning you of dangerous activity. If Nova sees unusual connections to these machines, you know someone might be snooping around your network. Nova will also monitor for other types of attack traffic too, and warn you when it finds any IP addresses that act suspiciously. Once you set up this simple honeynet, all you have to do is occasionally monitor it for weird activity. However, after seeing what this simple setup can do, you might find you’re intrigued by the capabilities of honeypots. If so, there’s a lot you can explore in ADHD and Nova. For example, rather than sticking with Nova’s default setup, you can add a bunch of fake nodes that emulate your actual server setup. You can also explore the other types of honeypots ADHD provides, such the web application honeypot, Weblabyrinth, or file system honeypots like Artillery. Whether or not you deeply explore all the available honeypots is up to you, but you really should consider installing at least a basic one. All the big public data breaches over the past few years have shown us that we’ll never have impermeable defenses. No matter how many walls you build around your information, attackers will find weakness, and you data will leak out. That’s why honeypots can play a crucial role in your organization’s security strategy as the digital canary warning you before impending disaster.
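For a sense of how little code a basic canary takes, here is an illustrative Python sketch of a bare-bones listener. It is not part of Nova, ADHD, or any of the projects named above, and the port number and log file are arbitrary choices for the example. The idea is simply that nothing legitimate should ever connect to this service, so every connection it logs is worth investigating.

```python
import socket
import datetime

PORT = 2222             # an otherwise unused port; no legitimate client should connect here
LOG_FILE = "canary.log"

def run_canary():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", PORT))
    server.listen(5)
    print(f"Canary listening on port {PORT}")
    while True:
        conn, (addr, src_port) = server.accept()
        entry = f"{datetime.datetime.now().isoformat()} connection from {addr}:{src_port}\n"
        with open(LOG_FILE, "a") as log:
            log.write(entry)        # this is the early warning; wire it to an alert
        conn.close()

if __name__ == "__main__":
    run_canary()
```

A real production honeypot such as Nova does far more (emulating services, profiling suspicious hosts, classifying traffic), but the detection principle is the same as this toy: attract a touch that should never happen, and raise the alarm when it does.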
Non verbal communication (body language) is the means by which animals convey information through conscious or subconscious gestures, bodily movements or facial expressions. Human body language is as old as our species. The first book on body language appeared more than 350 years ago: John Bulwer's Chirologia: or the Natural Language of the Hand (1644), a pioneering study of meaningful hand movements. Kinesics is the study of communication by the bodily movements used when people talk to one another. Proxemics is the study of how people use the space around them to convey informations nonverbally. Body reversing is the term I'll use meaning how to understand what is the HIDDEN meaning of the bodily movements that we can observe in our fellow humans. NOTE: As usual certain body language may be exhibited for quite DIFFERENT reasons. For example, a certain posture or attitude may be struck out of habit, for the sake of comfort or because of nervousness. Expert body reversers will learn how to interpretate correctly the various signals. There are many places where you can gather an incredible lot of "material": Crowded lifts, excalators, busses and underground trains are very useful for Proxemic studies; Parties and work meetings are extremely interesting for Kinesics studies; Doctors' waiting rooms, hairdressers' waiting "chairs", cinemas and all sort of human queues (busses, shops) hyde incredible treasures of information for the attentive eyes of us body reversers. Precision grips are used when we hold small things between our thumb and fingertips. Empty handed precision grips are displayed during speeches when the speaker wants to make a point with 'nicety'. Palm of the hand faces the body. 1. Thumb touching index finger, hand twenty cm from chin. A speaker using this gesture mimics the precise grip of a craftsman manipulating a fine tool. The speaker reinforces a statement with "precision" and "delicacy". 2. Thumb touching all fingers. Same as above, a signal for something that should be kept in mind. 3. Thumb almost touching index finger (~three cm between them, remaining fingers are closed) Speaker is asking a question or is uncertain about a point at issue Power grips are used when wielding objects, such as hammers, whit force. Can have a mild (hand bent) or forceful (closed fist) form. The grips show a speaker's wish to make a point with strength or to control the audience. Had the stupid Germans studied a bit body language in the thirties, they would probably never have believed the false promises of the nazist hystrionical little leader. 1. Fingers and thumb make a tightly closed fist. A forceful power grip. It signals convictiona nd determination. usually deliberately exploited by public speakers, priests and politicians who might in reality have neither. 2. Fingers and thumb curl inwards as if loosely grasping an object. A mild power grip, usually employed by a person saying something without great force of convinction, but who nevertheless wishes to be taken seriously. Men are clumsy at signalling interest in someone of the opposite sex, ande very slow to pick up unspoken responses, when comlpared with women. ~ Smoothing the hair down with a hand, making little grooming gestures involving collar and shirtsleeves, or straightening the tie, and sweeping fanciful specks off a shoulder with a hand are typical preening male gestures. ~ Three things that are done with a jacket that buttons or zips are crucial signs of courtship. 
Men begin by slowly buttoning and unbuttoning, or zipping and unzipping, indicating slight nervousness. They then open the jacket at the waste with both hands and hold it open in this position for a few minutes; indicating further discomfort and nervousness. The third and most crucial step is when they take the jacket off after having completed steps one and two. Taking the jacket off means you have them hook, line, and sinker. ~ Men show interest in women by playing with circular objects in the presenct of a woman. He may squeeze, then let go of, a Coca Cola can or a glass, then squeeze and let go again. ~ Glancing at a woman's body, and letting her see him do it is also a courtship jesture made by males. ~ Used all over the world is the sock-pulling gesture. When uncomfortable or nervous, in the presence of a woman, men tend to pull up their socks. ~ Lightly stroking either the outer or, less often, the inner thigh is an indication of sexual interest. ~ When seated in a chair or leaning against a wall, he may sometimes spread his legs to give a crotch display. ~ To accentuate physical size and show readiness to be involved with a female, men will often stand with their hands on their hips. ~ The most aggresive sexual display a man can make is the agressive thumbs-in-belt, "cowpoke" stance. This is accomplished when one or both thumbs are hooked into the belt of the pants with the downwardpointing fingers framing the groin area. This posture draws attention to the male's crotch. ~ Men in a courtship situation usually tend to have high muscle tone, that is, body sagging seems to disappear, stomachs are tucked in a little tighter and chests tend to protrude a little more. It seems that the body assumes a more erect posture than usual. ~ Sly winks, accidental touches beneath a business table, gentle rugging of the back and moving in closer are also considered courtship gestures. ~ Excited interest can be seen in a flushed appearance in the cheeks and unconscious pupil dilatation. NOTE: As I wrote above, certain body language may be exhibited for reasons other than sexual attraction. For example, a certain posture or attitude may be struck out of habit, for the sake of comfort or because of nervousness. real body reversers will learn how to interpretate correctly the various signals. In other words: don't start getting too hot if a girl gives you the "shoulder look", may be you'r just a pain in her neck :-) ~ Women toss their hair, whether short or long, briskly from side to side, over a shoulder, or away from the face to indicate preening. Hair is removed from face to leave it exposed for male admiration. ~ Sometimes with partially closed eyelids, the woman holds the man's gaze just long enough for him to notice, then she quickly looks away. This has the tauntilizing feeling of peeping or being peeped at, and can light the fires of most normal men. ~ Women also use the sideways glance to show interest. This glance involves looking at the man through partially closed eyelids, but dropping the gaze a moment after it has been noticed. ~ Licking the lips, slightly pouting the mouth, or applying cosmetics to moisten or redden the lips all are indicators of a courtship invitation. Unconsciously imitating the appearance of sexually stimulated and receptive female genitals. ~ Slight exposure of the shoulder from a partially fallen blouse is again an example of "flirting." Rae Dawn Cong said it best: "You can seduce a man without taking anything off...without even touching him." 
This revealed shoulder is one example. Also the "shoulder look": Looking at the man behind over a raised shoulder is typical self-mimicry: the shoulder resembles the brest and so is sexually inviting. ~ When women massage their necks or head with one hand, it has the effect of raising the breast on one side of the body intensifying cleavage. It also exposes the armpit, which, even when shaved, has an erotic significance. ~ A female interested in making a subtle courtship gesture might gradually expose the smooth, soft skin of her wrists. The wrist area has long been considered one of the highly erotic areas of the body. In this position, the palms of a woman are also made visible to the male. This is an inconscious invitation to caress. ~ Playing with any cylindrical object such as a pencil, pen, stem if wineglass or finger is a reflection of subconscious desires. ~ Sometimes women will even accentuate the roll in their hips when walking in front of a male they want to attract. ~ When a woman sits with one leg tucked under the other and points the folded leg toward the person whom she wants to attract, the message communicated is, "I feel very comfortable with you. I'd like to get to know you better." ~ Women tend to stand with their legs apart with weight on one foot, when displaying a sign of openness or availability. This draws attention to genital area. (Of course this may also be a feeling of superiority, agressivity or impatience as well, duh) ~ Slowly crossing or uncrossing the legs while being watched by an interested male is a strong attraction signal, especailly when the female is slightly stroking her thigh. ~ Women entwine their legs to draw attention. Most men agree that the leg twine, (one leg is pressed firmly against the other to give the appearance of high muscle tone which the body displays when it is ready for sexual intercourse) is the most appealing sitting position a woman can take. (Of course also: nervousness, shyness, defensiveness, duh) ~ Once the legs are crossed, sometimes a woman begins to slightly kick her top leg back and forth. this kicking or thrusting, again, displays a courtship signal. ~ Dangling one shoe while seated in a relaxed position, with one leg crossed over the other knee, is one of the most intense courtship signals woman use to indicate interest in a male. Phallic mimicry, as the foot makes tiny thrusting movements with the dangling shoe. ~ Even when a woman keeps time to music with her head and hands, leans forward towards a male, or even brushes the male's body with her hand or breast, she is still conveying effective courtship gestures. The palms of the hands face each other and the fingertips touch, forming a shape rather like a church steeple. This is a characteristic gesture that people make, usually while seated, when fdeeling especially confident duringa converstaion. There are several variants: 1. The high steeple Both elbows rest on a table or desk and teh forearms are raised,so taht the steepling fingers point upwards (Academics, Doctors, Lawyers while delivering an 'expert' opinion). 2. The low steeple Both elbows rest on the arms of a chair or the tops of the steepler's thights, with the forearm pointing forwards and the fingertips steepling between the thights or knees. Most women steeple this way: in their laps if seated, at wais level if standing. 3. The concentrated (poker player's) steeple The hands steeple while hidden under a table, for instance. 
This tends to occurr when an individual wants to hide his or her confident feelings. Poker players may betray that they have a good hand like this. 4. The semi-steeple When sitting, the steepler places the arms in the low steepling position and the hands in the lap. The fingers of one hand clasp the back of the other, which is CLOSED, and forming a fist, its knuckles opressed into the upper hand's palm. This is a far ýmore subtle indication of confidence than the full steepling gestures. Crossing the arms in front of the body is an almost istinctive attempt to protect the heart and lungs against threat (Remember "the countrary position" as well: the 'hands behind the back' walkabout by teachers or police on foot patrol, holding the head high and both hands clasped behind the back has a precise meaning: this leaves the body vulnerable front area unprotected and signals a combination of superiority and self-assurance). 1. Basic crossed arms Both arms are folded across the chest with one forearm crossing the other, so that one hand rests on an upper arm and the other arm is tucked between elbow and chest. We tend to do this whenever we feel slightly anxious, for instance standing in a crowded lift or in a queue.
<urn:uuid:9b75317a-7757-4577-b7d2-6816ea43705e>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/fravia.org/rebodila.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934994
2,588
3.5625
4
Watch the video companion to the tutorial here! One of the more frustrating experiences when using a computer is when you want to delete or rename a file or folder in Windows, but get an error stating that the file is in use, open, shared, or locked by a program currently using it. You start to shut down every program running on the computer hoping that you will get lucky and be able to delete the file, but it still won't delete. What do you do? This tutorial is designed to answer these questions and provide methods that will allow you to delete or remove practically any file in Windows. When attempting to delete or rename a file or folder you should follow these steps in the following order: If you are still reading this, then you must be at step 4 above. To help solve this problem, we introduce a program called IObit Unlocker. Unlocker is a program that kicks in when Windows can't delete or rename a file and provides a series of options for enabling you to do so. The first step is to download the Unlocker program. This program can be downloaded from the following link: Once the program is downloaded, save it to your desktop and double-click on the file called unlocker-setup.exe to start the setup program. When the installation program starts, keep pressing the Next button until you reach the Finish button. Now that the program has been installed, IObit Unlocker will have added an autorun statement to your registry to start the Unlocker Assistant when Windows starts. This background program stays resident and detects when you try to delete, copy, rename, or move a file that is in use. When it detects one of these operations, it automatically opens a window to assist you in working with the file. An example is shown below. In our example, we are trying to delete a file called C:\Test\Readme.doc. When we try to delete the file, Windows tells us the file is being used by another person or program and that it cannot delete it. It also suggests we close any programs that might be using the file and try again. Since we already tried the previous steps, we know this won't work, so we press OK. To unlock the file, we navigate to the folder C:\Test, where the file we wish to remove is located. Once we are in that folder, we right-click on the file and select the IObit Unlocker option as shown in the image below. When you click on the IObit Unlocker option, the program will start and display the locked file you selected. When it opens, if you receive a prompt asking whether you wish to allow the program to run, you can allow it to do so. To unlock the file, simply click on the Unlock button. If that doesn't work, you should try these steps again, but this time select the Force option before clicking the Unlock button. When you select the Force option, IObit Unlocker will terminate the process using the file in order to unlock it. Obviously, make sure you save any documents that the locking program currently has open before using this feature. If IObit Unlocker is able to unlock the file, it will change its status to unlocked, as shown in the image below. You can also use IObit Unlocker to perform an unlock and an immediate delete, rename, copy, or move action. To do this, click on the little drop-down arrow next to the Unlock button as shown in the image below. Depending on the option you choose, Unlocker will then ask where you would like to copy or move the file, or what file name you wish to rename it to. With this information you now know that living with stubborn files is not your only answer. (If you are curious how a tool like this identifies the offending program, a small script below sketches the idea.)
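For readers who are comfortable with a little scripting, here is a rough sketch of how you might identify the locking process yourself before reaching for a GUI tool. This is not part of IObit Unlocker; it assumes the third-party Python package psutil is installed, and the path used is just the example from this tutorial. Run it from an elevated (administrator) prompt for best coverage.

# Sketch only: list processes that currently have a given file open (psutil assumed installed).
import psutil

def find_lockers(target_path):
    """Return (pid, name) pairs for processes holding target_path open."""
    lockers = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            for f in proc.open_files():
                if f.path.lower() == target_path.lower():
                    lockers.append((proc.info["pid"], proc.info["name"]))
                    break
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            # Some system processes cannot be inspected; skip them.
            continue
    return lockers

if __name__ == "__main__":
    print(find_lockers(r"C:\Test\Readme.doc"))

Note that even this approach can miss handles held by the kernel or by memory-mapped files, which is exactly the situation where a dedicated unlocking tool earns its keep.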
Before, working with a truly stubborn file was the province of a more experienced computer user, but not anymore. Now with programs like Unlocker, even a casual or beginning computer user can take back control of their computer and its files. A very common question we see here at Bleeping Computer involves people concerned that there are too many SVCHOST.EXE processes running on their computer. The confusion typically stems from a lack of knowledge about SVCHOST.EXE, its purpose, and Windows services in general. This tutorial will clear up this confusion and provide information as to what these processes are and how to find out more ... In the past when you needed to resize a partition in Windows you had to use a 3rd party utility such as Partition Magic, Disk Director, or open source utilities such as Gparted and Ranish Partition Manager. These 3rd party programs, though, are no longer needed when using Windows as it has partition, or volume, resizing functionality built directly into the Windows Disk Management utility. Let's admit it, we have all at one time or another mistakenly deleted a directory or uninstalled a program incorrectly and are now left with entries in the Add/Remove Programs list for programs that no longer exist on our hard drives. When you click on these entries to remove them, Windows complains with an error or nothing happens. For some of the neat freaks out there, this can cause a ... Some programs provide the ability to add arguments when executing it in order to change a particular behavior or modify how the program operates. As an example lets look at the command line argument for Firefox called safe-mode. If you start Firefox with the command line firefox.exe -safe-mode Firefox will start without any extensions or themes. As you can see adding a command line argument to the ... One of the top questions I see on forums is "How do I know if I have been hacked?". When something strange occurs on a computer such as programs shutting down on their own, your mouse moving by itself, or your CD constantly opening and closing on its own, the first thing that people think is that they have been hacked. In the vast majority of cases there is a non-malicious explanation ...
<urn:uuid:3d9091d2-dbcf-4a18-ac43-93432be57ffc>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/delete-rename-locked-files-folders-in-windows/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00520-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942639
1,222
2.859375
3
A new hack for Google Glass enables users to use brainwaves to take photos through the device's camera and post them to social media without moving a muscle, the BBC reports. Google, however, has yet to review or approve the application, and a spokeswoman told the BBC that, for now at least, "Google Glass cannot read your mind." Developed by a London startup called This Place, the MindRDR software works with an electroencephalographic (EEG) headset that measures the user's brainwaves and reacts to spikes in activity. By attaching the EEG headset to Glass, the MindRDR software monitors the user's level of concentration and projects a horizontal white line on the Glass display, which rises as concentration increases. Once the white line reaches a certain point, the Glass unit automatically snaps a photo of the field of vision on which the wearer is concentrating. Immediately after the photo is taken, the white line returns to the screen. Bringing the white line back to the top of the screen automatically shares the photo to social media. So, yes, you can now share photos to social media without moving. I've always said we don't have enough ways to post photos online. While Google initially distanced itself from the app, as it has done in the past with independently developed Glass apps, the spokeswoman who spoke to the BBC didn't rule out the possibility of its approval in the future. "Of course, we are always interested in hearing about new applications of Glass and we've already seen some great research from a variety of medical fields from surgery to Parkinson's," she said. MindRDR's developers actually cited a handful of medical conditions in which brainwave control for Glass could help, including multiple sclerosis, quadriplegia, and locked-in syndrome. The medical applications of this technology are the most likely to help its case with Google, which has staunchly opposed certain apps, such as facial recognition, that early Glass adopters developed in spite of Glass's developer policies prohibiting them. Outside of the medical world, however, this kind of functionality is likely to draw serious privacy concerns. The concern that Glass wearers are recording those around them without their knowledge has damaged the technology's reputation, and has occasionally led to violence against those who wear it in public. If Glass wearers can take photos and share them without even moving, there will literally be no way of knowing whether a Glass user is recording and sharing the activity of those around them. In that scenario, just the presence of the technology could make some people uncomfortable.
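To make the trigger mechanism concrete, here is a toy sketch of the concentration-threshold logic described above. It is not MindRDR's code; every name and number in it is invented for illustration, and a real implementation would read live EEG values from the headset rather than a canned list.

# Toy illustration of the described trigger: fire the camera when a normalized
# "concentration" reading crosses a threshold, then share on a second spike.
CAPTURE_THRESHOLD = 0.8   # hypothetical level at which the white line "reaches the top"
SHARE_THRESHOLD = 0.8     # a second sustained spike shares the photo

def run_session(readings, take_photo, share_photo):
    photo_pending = False
    for level in readings:           # values between 0.0 and 1.0
        if not photo_pending and level >= CAPTURE_THRESHOLD:
            take_photo()
            photo_pending = True     # the white line resets after capture
        elif photo_pending and level >= SHARE_THRESHOLD:
            share_photo()
            photo_pending = False

if __name__ == "__main__":
    fake_readings = [0.2, 0.5, 0.85, 0.3, 0.4, 0.9, 0.1]   # stand-in for live EEG data
    run_session(fake_readings,
                take_photo=lambda: print("photo captured"),
                share_photo=lambda: print("photo shared"))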
<urn:uuid:88c949e3-ece2-48dd-9d70-b13cf16a891b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2452705/opensource-subnet/google-glass-cannot-read-your-mind-google-says-about-mind-reading-glass-app.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00336-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955755
538
2.53125
3
Getting Hotter? IBM Supercomputer Will Study Global Warming
IBM Corp. announced that the Oak Ridge National Laboratory (ORNL) will install and use an IBM eServer supercomputer for scientific research. ORNL's research focuses on improving the U.S. government's ability to predict long-range climate trends as well as tackling a wide spectrum of other scientific projects. ORNL hopes the supercomputer will help researchers understand how global warming may affect agricultural output and water supply levels. The machine will incorporate IBM eServer POWER4 technology to achieve a target peak performance of four trillion calculations per second. Nearly tripling the amount of processing power in ORNL's data centers, the IBM system is expected to rank among the world's five most powerful supercomputers when completed in early 2002. POWER4 is the advanced microprocessor that powers the next generation of IBM eServer Unix systems -- code-named "Regatta" -- which are scheduled to begin shipping later this year. The ORNL supercomputer will be used to investigate extremely sophisticated computer models that simulate the world's climate. These computer models -- containing hundreds of thousands of lines of code -- will predict the potential impact that increased greenhouse gases in the atmosphere could have on crop yields, public drinking water supplies and ocean levels. Other areas expected to benefit include computational chemistry, high energy and nuclear physics and fusion energy research. ORNL is a DOE multiprogram research facility managed by UT-Battelle. For more information, visit www.ibm.com/servers/hpc.
<urn:uuid:ae309779-60d2-40d1-a21e-03a68d9cd380>
CC-MAIN-2017-04
https://esj.com/articles/2001/08/31/getting-hotter-ibm-supercomputer-will-study-global-warming.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.890726
315
3.671875
4
4 Ways to Safeguard Mobile Devices
DHS: Cybercriminals Increasingly Target Mobile Devices
- Access the Internet over a secure network: Only browse the web through your service provider's network, such as 3G, or a secure Wi-Fi network.
- Be suspicious of unknown links or requests sent through email or text message: Do not click on unknown links or answer strange questions sent to your mobile device, regardless of who the sender appears to be.
- Download only trusted applications: Download "apps" from trusted sources or marketplaces that have positive reviews and feedback.
- Be vigilant about online security: Keep anti-virus and anti-malware software up to date, use varied and strong passwords, and never provide your personal or financial information without knowing who's asking and why they need it.
DHS says nearly half of Americans are expected to own a mobile device by year's end. Citing experts, DHS says smartphones and mobile devices will surpass computers as the primary target for cybercrime within three years. "If a hacker can gain access to a mobile device, they can easily find e-mail addresses, stored passwords, banking information, social media accounts and phone numbers, allowing them to steal your information, your money and even your identity," DHS says, as part of its Stop Think Connect cybersecurity campaign.
<urn:uuid:45fa166b-d8d3-412c-bee7-e9e3b9159e21>
CC-MAIN-2017-04
http://www.govinfosecurity.com/articles.php?art_id=3768
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00392-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920675
267
2.671875
3
Accuracy Proves Quality Analytics
January 21, 2013
Accuracy is key for analytics, because it validates work performed by a computer while the human user was away doing other business. The only way to measure accuracy is to compare human analysis to computer analysis. The Attensity Blog focuses on "How Accuracy In Analytics Matters For Businesses." The article explains that accuracy is measured by how well a computer can mimic a human brain: "Computers only do what we tell them to do. They have (almost) infinite computational power, and can apply any set of rules to any computational variables. This means that if we tell computers that a specific word or combination of words means something positive, then the computer cannot make it mean something negative. In other words, we are not really rating the computer's ability to determine a sentiment; we are rating whether humans did a good job, or not, in biasing the computer to pick that sentiment. This means we can accurately predict an outcome selected by the computer before the first variable is computed against the first rule." In other words, measured accuracy largely reflects human bias, and for better analytics that bias should be reduced. To reduce bias, analytics' core elements must be examined: what is analyzed and what it is compared to. The article outlines the steps taken to help reduce bias and how doing so can improve a company's standing, finances, and more. It looks as if accuracy means adding the extra ingredient of love that grandma puts in her cookies; in other words, you have to care about it. Whitney Grace, January 21, 2013 Sponsored by ArnoldIT.com, developer of Beyond Search
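As a minimal illustration of the comparison the article describes, the snippet below computes accuracy as simple agreement between human-assigned and machine-assigned sentiment labels; the labels themselves are made up.

# Accuracy as the fraction of items where the machine's label matches the human's.
def accuracy(human_labels, machine_labels):
    assert len(human_labels) == len(machine_labels)
    matches = sum(h == m for h, m in zip(human_labels, machine_labels))
    return matches / len(human_labels)

human   = ["positive", "negative", "neutral", "positive"]    # illustrative only
machine = ["positive", "negative", "positive", "positive"]
print(accuracy(human, machine))  # 0.75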
<urn:uuid:d8022135-b829-48de-b8b8-9eec8fd9ef91>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2013/01/21/accuracy-proves-quality-analytics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00024-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949729
341
2.546875
3
With over 1 billion users, the Internet has become a conduit for businesses and people to access information, do banking, go shopping, connect with people, and reach out to an audience through social media platforms. The downside of all this convenience, however, is its vulnerability to disruption. Cybercriminals have the means and the ability to steal information or halt normal system operations, with motives ranging from industrial espionage and financial gain to activism and advancing political agendas. Over the past few years, distributed denial-of-service (DDoS) attacks have become a growing security problem for private and public sector organizations. DDoS attacks continue to escalate in size and impact. Moreover, there has been a trend toward greater peak bandwidth, longer attack duration and the use of DDoS not only as a hacktivism tool but for extortion purposes as well. Incidents and trends observed between 2013 and 2015 revealed that average peak bandwidth had doubled. Towards the end of 2014, after the Occupy Central protests in Hong Kong, CloudFlare CEO Matthew Prince stated that the largest DDoS attack to date had been carried out against independent media sites in the territory. According to Prince, it was larger even than the previous record-holder, a 400Gbps attack in Europe in early 2014.
What is a DDoS attack?
A DDoS attack is designed to interrupt or shut down a network, service, or website. A DDoS attack happens when attackers utilize a large network of remote PCs, called a botnet, to overwhelm another system's connection or processor, causing it to deny service to the legitimate traffic it's receiving. The goal and end result of a successful DDoS attack is to make the website of the target server unavailable to legitimate traffic requests.
How does it work?
The logistics of a DDoS attack can best be explained by a figurative example. Let's say a user walks into a bank that only has one teller window open. As soon as the user approaches the teller, another person cuts in front of the user and begins making small talk with the teller, with no real intention of making any bank-related transactions. Even as a legitimate customer of the bank, the user is unable to deposit his check, and is forced to wait until the "malicious" user has finished his conversation. However, after this malicious user leaves, another person walks in front of the legitimate user, delaying the legitimate user all over again. This process can continue for hours, even days, preventing the user, or any other legitimate users, from performing bank transactions. A DDoS attack on a web server works similarly, because there is virtually no way to distinguish legitimate requests from attack traffic until the web server has processed each request. What actually happens when an organization is the victim of a DDoS attack? For starters, it immediately has to divert attention from running crucial operations to getting its website back in working order.
The DDoS Surge
An increasing number of perpetrators and groups have shown that they have the ability to launch successful DDoS attacks. In 2013, a 300Gbps attack on Spamhaus was listed as the largest ever. The attack was initiated by a teenager in London. At the same time, nation-states like Iran and China have been suspected of involvement in several DDoS incidents, namely a wave of attacks against US banks beginning in 2012 and the aforementioned Occupy Central cyberattack, respectively.
In 2015, a government may also have been involved in the DDoS attack on GitHub (a site for sharing code repositories), an attack that may have been larger than the Hong Kong one. In addition to GitHub and the Hong Kong media, video game properties such as "League of Legends" and Electronic Arts' Origin portal, public sector institutions including the Dutch government, and software companies like Evernote all dealt with sustained disruption from DDoS attacks that took their sites temporarily offline. In the second quarter of 2015, the number of DDoS attacks reached an all-time high. According to the Q3 2015 State of the Internet – Security Report from Akamai, DDoS attacks increased by 180 percent compared to the same quarter in 2014. The biggest DDoS attack recorded in the quarter lasted over thirteen hours at 240Gbps, notable because attacks typically last about one to two hours. Between them, the software and gaming industries accounted for more than 75 percent of all the DDoS attacks documented in the Akamai report. Game companies saw their share of the total surge from 35 to 50 percent in just one year. More recently, the BBC's websites and Republican presidential candidate Donald Trump's main campaign website were hit by the largest DDoS attacks to date. Of the two, the bigger attack was the one against the BBC, at over 600Gbps. According to reports, the BBC initially attributed the outage to a "technical" fault, but later acknowledged that a group called "New World Hacking" claimed responsibility for launching the DDoS attack. With the increased popularity of DDoS extortion campaigns, knowing the causes and characteristics of these attacks is essential for guiding investment in anti-DDoS tools and security software. Enterprise CIOs should ensure that encryption is in place in the analytics and other web tools that their organizations use, be aware of possible DDoS attack vectors, and invest in network security tools that spot traffic anomalies and issues.
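To make that last recommendation slightly more concrete, here is a deliberately naive sketch of one kind of traffic-anomaly check: counting requests per source over a sliding window and flagging sources that exceed a threshold. Real DDoS mitigation is far more sophisticated, and a true distributed attack spreads load across many sources, so treat this purely as an illustration; all thresholds are arbitrary.

# Naive per-source request-rate check: flag any source exceeding a fixed rate
# within a sliding time window.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

recent = defaultdict(deque)  # source_ip -> deque of request timestamps

def is_suspicious(source_ip, now):
    q = recent[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                       # drop timestamps outside the window
    return len(q) > MAX_REQUESTS_PER_WINDOW

if __name__ == "__main__":
    # Example: a single source sending 150 requests in under a second gets flagged.
    flagged = any(is_suspicious("203.0.113.7", t * 0.005) for t in range(150))
    print("suspicious" if flagged else "ok")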
<urn:uuid:4afe29ab-bc65-4ea1-acf9-5b79d5bf42f8>
CC-MAIN-2017-04
http://www.trendmicro.com.au/vinfo/au/security/news/cyber-attacks/security-101-distributed-denial-of-service-ddos-attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00144-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960615
1,162
3.140625
3
There seems to be an endless stream of vulnerabilities on Google's mobile OS, but is it really less secure than iOS or Windows? The security of a device can depend heavily on how it is used; however, Android devices are designed to offer users control over their mobile environment. Consequently, Android users can install whatever they want from wherever they want, which exposes them to a comparatively high level of risk. However, if a user only loads applications from official stores and keeps their device and its applications up to date, they will have a generally secure experience. That being said, a number of remote access vulnerabilities have been discovered this year for which the Android patches have not been made immediately available, if at all, by some vendors. At present, mobile devices are not the path of least resistance for gaining access to sensitive content and consequently are not as appealing a target as they otherwise would be. This is, in part, a result of the fact that it is not often possible to target users remotely. An attacker would normally have to pick their targets and focus specifically on those individuals. Most of the malware that exploits users indiscriminately attempts to trick them into sending premium-rate SMSes. These malicious applications are rarely available on the Google Play Store. If the current techniques preferred by the majority of attackers were to become less viable, we would likely see a change in the number and type of exploits developed for mobile phones and applications. As with traditional computers, the most secure system can still be breached if the people who use it do not operate in a secure manner. User education is vital, and will remain vital even if the patching policies of vendors improve. Android is a platform that is intended to provide users with a large degree of freedom. While this remains a core component of the Android platform, the security of individual Android devices will be subject to the practices of their owners.
<urn:uuid:e3edc25b-29b3-43ae-a190-b79a72a8214a>
CC-MAIN-2017-04
https://www.mwrinfosecurity.com/our-thinking/is-android-less-secure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00410-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964991
378
2.546875
3
Distributed computing has undergone many permutations, from its roots in grid computing to support large scientific endeavors to Sun-style utility computing, to the kind of public cloud computing popularized by Amazon Web Services. One of the newest projects of this type combines cycle sharing, think SETI@home, with cryptocurrency principles. Zennet, as it’s called, is being presented as a free-market alternative to Amazon Web Services. Founder and innovator Ohad Asor describes the service as a distributed and decentralized arbitrary computation grid. The open market platform connects providers, those who make computation power available for a negotiated fee, with publishers, users who require compute cycles to run arbitrary computational tasks. Participants are free to pay or charge any rate they want. The setup relies on lightweight OS virtualization technology to ensure a safe computing experience. “The network is 100% distributed and decentralized: there is no central entity of any kind, just like Bitcoin,” explains Asor. “All software will be open source. Publishers pay Providers directly, there is no middleman. Accordingly, there are no payable commissions, except for regular transaction fees which are being paid to XenCoin miners.” The system’s building blocks consist of the lightweight OS virtualization tool, Docker, benchmarking technologies, an improved Blockchain technology, and a novel protocol. Software components include a Linux distribution (zennet-os) with Zennet code to manage Docker containers and collect measurements; a client software (zennetd and zennet-qt) that configures and manages the zennet-vm according to the preferences of publishers and providers; and blockchain (xencoind and xencoin-qt), implementing XenCoin as a cryptocurrency, being used as tokens to use computing machines. Although XenCoin is not itself a currency, it can be monetized by using it to rent computational power used for mining cryptocurrencies (such as Bitcoin). The upstart sees big potential in the big data space. Target applications include number-crunching, MapReduce, text analytics, predictive, or molecular dynamics tasks. “Typical Zennet publishers do not need one virtual machine – they need thousands of them,” notes company literature. “Take, for example, protein folding related computations. A single researcher wants to fold a certain protein. It would take him days or weeks to run computations on the university’s computer labs, if he gets lucky enough to get access to the university’s computation resources. He may get even luckier and have F@H fold it for him. But for most researchers, AWS may be the only practical option.” Asor says that “presale” is expected and that “development is progressing at full pace.” Update: The article was modified on December 9, 2014, to reflect a name change from Xennet to Zennet.
<urn:uuid:0580be80-9fea-426d-8dde-329c50424a53>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/18/free-market-hpc-cloud-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00162-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911818
607
2.578125
3
One day there will be no more IPv4. Internet protocol version 6 (IPv6) is the latest generation of internet protocol and will eventually replace the current IPv4. Fun fact: The closest thing to a fifth version of Internet Protocol was something created to test the transition from v4 to v6. It never even got the official IPv5 name. The first three versions of internet protocol were also all experimental, and were never officially implemented. This transition to IPv6 is going to be massive, but why are we going ahead with it? We're going to take a look at where we are and where we are headed on the road to this new internet protocol.
How did we get here?
IPv4 is the most widely deployed internet protocol used to connect devices to the internet. IPv4 uses a 32-bit addressing scheme, written in decimal form as four octets separated by periods. Each number can be zero to 255. With this naming convention, a total of more than four billion addresses are possible (4,294,967,296, to be exact). Yes, four billion is a lot of addresses. But as the number of people connecting to the internet has grown, and more and more devices are being connected, this number becomes a lot less significant. Businesses use hundreds or thousands of computers: work computers, servers and network devices, smart equipment for board and conference rooms, even GPS devices for delivery trucks. All of these devices need their own unique IP address for IP-to-IP connectivity. Thankfully, the Internet Engineering Task Force (IETF) saw this vast expanse of numbers dwindling and determined it needed to do something. Starting the process all the way back in 1994, it began creating a more capable way to connect devices to the web, and an addressing convention that leaves a lot more room for growth. Unlike IPv4 addresses, IPv6 addresses are 128-bit IP addresses written in hexadecimal and separated by colons. Because of this jump from 32 bits to 128 bits, there is a lot more room for the internet to grow. The approximate number of possible addresses with an IPv6 system is 340 undecillion. That's exactly 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses, for anyone who is wondering (impress your friends at that next party by learning to say that number). The heavily increased pool of addresses is not the only benefit of switching to IPv6. With the newer internet protocol, the routing process is streamlined, addresses can be auto-configured, there's built-in authentication and privacy support, and administration is easier. IPv6 has been in existence for a long time. It was released in December 1998, and manufacturers started using the new conventions immediately. Having been designed as an evolutionary upgrade to the Internet Protocol, it has been coexisting alongside IPv4 for quite some time. All modern operating systems have supported IPv6 since 2011. However, the issue is not only that there are still a lot of routers and servers that don't support it; many internet service providers (ISPs) don't provide IPv6 service either. This means that a connection between a device with an IPv6 address and a router or server that only supports IPv4 would be impossible to achieve. With developments such as network address translation (NAT) and private network addressing, not every device needs its own public IP address, which has given the existing protocol even more room to grow.
What's the hold up?
With this upcoming switch, there are a few hang-ups preventing the change from taking full effect more quickly. A lack of security support for, and knowledge of, the new and upcoming IPv6 could prompt a wave of cyberattacks against underlying systems. All of the security products being used today, especially those converted from IPv4 to IPv6, have not matured enough and just don't know enough to match the expanding and unknown threat they're protecting against. Another concern with IPv6 is a lack of education within the IT world on this impending change. IPv6 is eventually coming to the networks that you control, whether you're ready for it or not. The transition will introduce reliability and security risks, and will bring additional functionality that will be useful in control system applications. Like any new technology, it's important to learn how to best utilize IPv6, especially the addressing scheme and protocols, in order to facilitate incident handling and related activities. Even with these concerns, IPv6 usage is growing steadily. As of November 2016, Google reported that, for the first time, about 15 percent of its users were reaching its services over IPv6. Now that you know about the change, here's what you have to do: nothing. Devices are already being designed with this switch in mind, and are being set up to slowly phase out IPv4 and usher in IPv6. Want to find out if your devices are ready for the future? Use this IPv6 test and see if your device can handle IPv6. Note: Don't worry if your systems aren't ready for the switch; nobody is going to shut down an entire internet protocol anytime soon. The internet is expanding and evolving. Making sure that there's enough room to facilitate this growth is a tough job. Thankfully, the IETF is moving all the pieces and setting up the world for full internet access. Nobody knows how much the transition will cost or how long it will take, but it has to be made so that the internet can continue to function. Want to learn more about IPv6? Well, we have you covered. Not a CBT Nuggets subscriber? Start your free week now.
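As a closing illustration of the address-space difference discussed above, the short snippet below uses Python's standard ipaddress module; the addresses shown are reserved documentation examples, not anything you need to configure.

# Comparing IPv4 and IPv6 addresses with the standard-library ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")       # dotted decimal, 32 bits
v6 = ipaddress.ip_address("2001:db8::1")     # hexadecimal groups, 128 bits

print(v4.version, v4.max_prefixlen)          # 4 32
print(v6.version, v6.max_prefixlen)          # 6 128

# Total address space of each protocol:
print(2 ** 32)    # 4294967296
print(2 ** 128)   # 340282366920938463463374607431768211456

# IPv6 also has a canonical compressed form:
print(ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001"))  # 2001:db8::1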
<urn:uuid:930172cd-977f-486e-aff2-dbdaafa487a4>
CC-MAIN-2017-04
https://blog.cbtnuggets.com/2016/12/what-to-expect-ipv4-to-ipv6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947972
1,216
3.28125
3
A new study by the Ponemon Institute takes a deep dive into consumer perceptions of how organizations are securing their access, and what they would consider the ideal steps and technologies for ensuring that their personal information is protected. The study includes results from more than 1,900 consumers between the ages of 18 and 65 in the United States, United Kingdom and Germany. Key findings include:
Failed authentication thwarts online business. Approximately 50 percent of respondents were "very frequently" or "frequently" unable to perform an online transaction, such as buying a product or obtaining a service, because of an authentication failure on the website.
Most authentication failures happen because of the use of usernames and passwords. The majority of authentication failures happen because of forgotten passwords, usernames or responses to a knowledge-based question (such as a mother's maiden name). Less than 50 percent of respondents said authentication failures occur because of glitches or inaccuracies within website systems or identity verification procedures.
Many consumers favor a single identity credential for a variety of authentication purposes. The majority of consumers (60 percent) would use a multi-purpose identity credential to verify who they are before being granted secure access to data, systems and physical locations. The benefits of a multi-purpose identity credential are convenience (US and UK consumers) and security (German consumers).
Most respondents are comfortable with using biometrics. The majority of respondents believe it is acceptable for a trusted organization such as their bank, credit card company, health care provider, telecom, email provider or governmental organization to use factors such as voice or fingerprints to verify their identity.
Financial institutions provide the best online validation. According to respondents, the top five organizations with the most secure authentication are (in order of best to worst): banking institutions, credit card and Internet payment providers, social media, retailers, and Internet service providers.
"It comes as no surprise that we continue to see an increase in dissatisfaction from consumers when it comes to traditional authentication schemes involving usernames and passwords," said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute. "The good news is that there is a new sense of willingness to try emerging technologies and more complex identity verification systems to fix this broken system."
<urn:uuid:4a57e6b9-0504-4290-abe2-2ac24d30374f>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/04/22/reliance-on-passwords-inhibits-online-business/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93369
469
2.640625
3
By Andrew Zonenberg @azonenberg In the post “Reading CMOS layout,” we discussed understanding CMOS layout in order to reverse-engineer photographs of a circuit to a transistor-level schematic. This was all well and good, but I glossed over an important (and often overlooked) part of the process: using the photos to observe and understand the circuit’s actual geometry. Let’s start with brightfield optical microscope imagery. (Darkfield microscopy is rarely used for semiconductor work.) Although reading lower metal layers on modern deep-submicron processes does usually require electron microscopy, optical microscopes still have their place in the reverse engineer’s toolbox. They are much easier to set up and run quickly, have a wider field of view at low magnifications, need less sophisticated sample preparation, and provide real-time full-color imagery. An optical microscope can also see through glass insulators, allowing inspection of some underlying structures without needing to deprocess the device. This can be both a blessing and a curse. If you can see underlying structures in upper-layer images, it can be much easier to align views of different layers. But it can also be much harder to tell what you’re actually looking at! Luckily, another effect comes to the rescue – depth of field. Depth of field When using an objective with 40x power or higher, a typical optical microscope has a useful focal plane of less than 1 µm. This means that it is critical to keep the sample stage extremely flat – a slope of only 100 nm per mm (0.005 degrees) can result in one side of a 10x10mm die being in razor-sharp focus while the other side is blurred beyond recognition. In the image below (from a Micrel KSZ9021RN gigabit Ethernet PHY) the top layer is in sharp focus but all of the features below are blurred—the deeper the layer, the less easy it is to see. We as reverse engineers can use this to our advantage. By sweeping the focus up or down, we can get a qualitative feel for which wires are above, below, or on the same layer as other wires. Although it can be useful in still photos, the effect is most intuitively understood when looking through the eyepiece and adjusting the focus knob by hand. Compare the previous image to this one, with the focal plane shifted to one of the lower metal layers. I also find that it’s sometimes beneficial to image a multi-layer IC using a higher magnification than strictly necessary, in order to deliberately limit the depth of field and blur out other wiring layers. This can provide a cleaner, more easily understood image, even if the additional resolution isn’t necessary. Another important piece of information the optical microscope provides is color. The color of a feature under an optical microscope is typically dependent on three factors: - Material color - Orientation of the surface relative to incident light - Thickness of the glass/transparent material over it Material color is the easiest to understand. A flat, smooth surface of a substance with nothing on top will have the same color as the bulk material. The octagonal bond pads in the image below (a Xilinx XC3S50A FPGA), for example, are made of bare aluminum and show up as a smooth silvery color, just as one would expect. Unfortunately, most materials used in integrated circuits are either silvery (silicon, polysilicon, aluminum, tungsten) or clear (silicon dioxide or nitride). Copper is the lone exception. Orientation is another factor to consider. 
If a feature is tilted relative to the incident light, it will be less brightly lit. The dark squares in the image below are vias in the upper metal layer which go down to the next layer; the “sag” in the top layer is not filled in this process so the resulting slopes show up as darker. This makes topography visible on an otherwise featureless surface. The third property affecting observed color of a feature is the glass thickness above it. When light hits a reflective surface under a transparent, reflective surface, some of the beam bounces off the lower surface and some bounces off the top of the glass. The two beams interfere with each other, producing constructive and destructive interference at wavelengths equal to multiples of the glass thickness. This is the same effect responsible for the colors seen in a film of oil floating on a puddle of water–the reflections from the oil’s surface and the oil-water interface interfere. Since the oil film is not exactly the same thickness across the entire puddle, the observed colors vary slightly. In the image above, the clear silicon nitride passivation is uniform in thickness, so the top layer wiring (aluminum, mostly for power distribution) shows up as a uniform tannish color. The next layer down has more glass over it and shows up as a slightly different pink color. Compare that to the image below (an Altera EPM3064A CPLD). The thickness of the top passivation layer varies significantly across the die surface, resulting in rainbow-colored fringes. The scanning electron microscope is the preferred tool for imaging finer pitch features (below about 250 nm). Due to the smaller wavelength of electron beams as compared to visible light, this tool can obtain significantly higher resolutions. The basic operating principle of a SEM is similar to an old-fashioned CRT display: electromagnets move a beam of electrons in a vacuum chamber in a raster-scan pattern over the sample. At each pixel, the beam interacts with the sample, producing several forms of radiation that the microscope can detect and use for imaging. Electron microscopy in general has an extremely high depth of field, making it very useful for imaging 3D structures. The image below (copper bond wires on a Microchip PIC12F683) has about the same field of view as the optical images from the beginning of this article, but even from a tilted perspective the entire loop of wire is in sharp focus. Secondary Electron Images The most common general-purpose image detector for the SEM is the secondary electron detector. When a high-energy electron from the scanning beam grazes an atom in the sample, it sometimes dislodges an electron from the outer shell. Secondary electrons have very low energy, and will slow to a stop after traveling a fairly short distance. As a result, only those generated very near the surface of the sample will escape and be detected. This makes secondary electron images very sensitive to topography. Outside edges, tilted surfaces, and small point features (dust and particulates) show up brighter than a flat surface because a high percentage of the secondary electrons are generated near exposed surfaces of the specimen. Inward-facing edges show up dimmer than a flat surface because a high percentage of the secondary electrons are absorbed in the material. The general appearance of a secondary electron image is similar to a surface lit up with a floodlight. 
The eye position is that of the objective lens, and the “light source” appears to come from the position of the secondary electron detector. In the image below (the polysilicon layer of a Microchip PIC12F683 before cleaning), the polysilicon word lines running horizontally across the memory array have bright edges, which shows that they are raised above the background. The diamond-shaped source/drain areas have dark “shadowed” edges, showing that they are lower than their surroundings (and thus many of the secondary electrons are being absorbed). The dust particles and loose tungsten via plugs scattered around the image show up very brightly because they have so much exposed surface area. Compare the above SEM view to the optical image of the same area below. Note that the SEM image has much higher resolution, but the optical image reveals (through color changes) thickness variations in the glass layer that are not obvious in the SEM. This can be very helpful when trying to gauge progress or uniformity of an etch/polish operation. In addition to the primary contrast mechanism discussed above, the efficiency of secondary electron emission is weakly dependent on the elemental composition of the material being observed. For example, at 20 kV the number of secondary electrons produced for a given beam current is about four times higher for tungsten than for silicon (see this paper). While this may lead to some visible contrast in a secondary electron image, if elemental information is desired, it would be preferable to use a less topography-sensitive imaging mode. Backscattered Electron Images Secondary electron imaging does not work well on flat specimens, such as a die that has been polished to remove upper metal layers or a cross section. Although it’s often possible to etch such a sample to produce topography for imaging in secondary electron mode, it's usually easier to image the flat sample using backscatter mode. When a high-energy beam electron directly impacts the nucleus of an atom in the sample, it will bounce back at high speed in the approximate direction it came from. The probability of such a “backscatter” event happening depends on the atomic number Z of the material being imaged. Since backscatters are very energetic, the surrounding material does not easily absorb them. As a result, the appearance of the resulting image is not significantly influenced by topography and contrast is primarily dependent on material (Z-contrast). In the image below (cross section of a Xilinx XC2C32A CPLD), the silicon substrate (bottom, Z=14) shows up as a medium gray. The silicon dioxide insulator between the wires is darker due to the lower average atomic number (Z=8 for oxygen). The aluminum wires (Z=13) are about the same color as the silicon, but the titanium barrier layer (Z=22) above and below is significantly brighter. The tungsten vias (Z=74) are extremely bright white. Looking at the bottom right where the via plugs touch the silicon, a thin layer of cobalt (Z=27) silicide is visible. Depending on the device you are analyzing, any or all of these three imaging techniques may be useful. Knowledge of the pros and cons of these techniques and the ability to interpret their results are key skills for the semiconductor reverse engineer.
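As a side note to the thin-film color effect described in the optical-microscopy section above, the standard textbook relationship (assuming near-normal incidence and ignoring any phase shift at the interfaces) links the strongly reflected wavelengths to the glass thickness t and refractive index n:

2 n t = m \lambda \quad\Rightarrow\quad \lambda_m = \frac{2 n t}{m}, \qquad m = 1, 2, 3, \dots

Because the passivation thickness t varies slightly across a die, the set of reinforced wavelengths shifts with position, which is why the fringes in the optical image of the Altera EPM3064A drift through the rainbow.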
<urn:uuid:a29b3453-89ad-4dce-aef5-0b9032fb5d14>
CC-MAIN-2017-04
http://blog.ioactive.com/2016/03/inside-ioactive-silicon-lab.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914075
2,153
3.234375
3
Introduction to Global Communications Learn best practices for global communication standards and the networks which support them. In this course, you will focus on topics that define worldwide communications standards and the networks that provide the services. Historically, telecom providers built and interconnected their networks with copper-based standards with a main purpose of providing voice services. Today, global communications have evolved and networks are now migrating solely to packet-switched services, leaving behind yesterday's circuit-switching and the copper those networks once depended on. You will begin with the basics of switching and routing and then cover the complex network topologies of today's service providers. You will then examine the fiber-optic links that provide the fabric for today's communications backbone. You will learn the mobile wireless standards as they transition through the various generations, review satellite networks, and cover the evolution of communications technology.
<urn:uuid:02a34bab-99a4-45c6-ad8b-feaed7a44667>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/116914/introduction-to-global-communications/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00365-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912875
173
3.625
4
A few days before Christmas and Hanukkah festivities began, Apple gave a little something to the artificial intelligence research community: its first research paper. The paper, authored by six of Apple's researchers, doesn't focus on AI that someone with an iPhone might interact with, but rather on how to create enough data to effectively train it. Specifically, the research focuses on making realistic fake images, mostly of humans, to train facial recognition AI. It addresses a core problem: training a machine takes a huge amount of data. Moreover, training a machine on matters like faces and body language can take a ton of personal data. The ability to manufacture this kind of training data and still achieve high results could allow Apple to build AI that understands how humans function (the way we move our hands or look around a screen) without needing to use any user data while building the software. Apple's published research focuses on those two examples: identifying hand gestures and detecting where people are looking, examples of basic image recognition problems that could be applied to anything from tracking user behavior to a wave-to-unlock iPhone feature. In both cases, the researchers took established datasets of synthetic images and used a neural network trained on real images to refine them to look more realistic. The system then compares the refined image to a real image, attempts to decide which picture is real, and then updates itself based on what it judged as fake compared to the real image. As the researchers write, the end result is "state-of-the-art results without any labeled real data." The work Apple decided to present first is interesting. It's not speech recognition for Siri, or a PR stunt for some new Maps feature. Rather, it's research that very much falls in line with an established trend of 2016: using neural networks to generate new data instead of just identifying it. The research also nods toward user data security, a drum that Apple beats loudly and often. While some companies like Google and Facebook use vast quantities of user data to train their algorithms, Apple's entire pitch has been that nobody has access to what's on an iPhone but the iPhone's owner. This kind of work makes the statement that Apple will keep up with other tech companies and still honor the privacy it promised users. The researchers write that a possible next avenue of research could be applying the same technique to videos.
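For readers who want to see the shape of that refine-and-discriminate loop in code, here is a heavily simplified sketch in PyTorch. It is not Apple's implementation: the tiny networks, the image size, the loss weighting and the extra term that keeps refined images close to the original synthetic ones are all placeholder choices made for illustration.

# Sketch of adversarial refinement: a "refiner" makes synthetic images look more
# realistic while a "discriminator" tries to tell refined images from real ones.
import torch
import torch.nn as nn

refiner = nn.Sequential(                 # maps a synthetic image to a refined image
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(           # scores how "real" an image looks
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 1),   # 28x28 input halved to 14x14
)

opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(synthetic, real):
    # 1) Update the discriminator: real images -> label 1, refined images -> label 0.
    refined = refiner(synthetic).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(refined), torch.zeros(refined.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update the refiner: fool the discriminator while staying close to the
    #    original synthetic image (illustrative weighting of 0.1).
    refined = refiner(synthetic)
    g_loss = bce(discriminator(refined), torch.ones(synthetic.size(0), 1)) + \
             0.1 * (refined - synthetic).abs().mean()
    opt_r.zero_grad(); g_loss.backward(); opt_r.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    # Random stand-in data in place of real face/gaze images (28x28 grayscale).
    synthetic = torch.rand(8, 1, 28, 28) * 2 - 1
    real = torch.rand(8, 1, 28, 28) * 2 - 1
    print(training_step(synthetic, real))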
<urn:uuid:6aefba9b-74bd-4a67-a5e9-764cbf0a29b8>
CC-MAIN-2017-04
http://www.nextgov.com/big-data/2016/12/apples-first-research-paper-tries-solve-problem-facing-every-company-working-ai/134197/?oref=ng-relatedstories
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00209-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940019
499
3.0625
3
A severe March weather system displaced residents and exhausted emergency managers, leaving 20 dead and historic flooding in its wake in parts of Arkansas, Missouri and Ohio. The steady rain hammered an already-saturated Missouri and Arkansas, closed 60 Ohio state roads, then turned to heavy snow in Illinois, forcing the cancellation of more than 450 flights at Chicago's O'Hare International Airport. The storm dumped more than a foot of rain in parts of Missouri in a 36-hour period, flooding rivers to the point that four crested at record levels between March 17 and March 19. Severe storms are nothing new to Missourians. Since August 2005, Missouri has received 14 presidential disaster declarations including strong summer storms, massive power outages and serious flooding. Still, this storm opened the eyes of emergency managers. "I have to admit being somewhat surprised by the scope of this flooding event," said Dante Gliniecki, statewide volunteer coordinator of the Missouri State Emergency Management Agency (SEMA). "This is one of the biggest flooding disasters in Missouri since the mid-1990s." The previous disasters and emergencies set the stage for a better, more-cooperative effort this time. During a recent ice storm, state emergency managers learned the value of a coordinated conference call system for state and local emergency managers, along with the National Weather Service, so that communities most in need are the first to get state resources. The system was established during the December 2006-January 2007 ice storms, when a lack of connectivity between state and local government left thousands of citizens without power for weeks. During the floods, state, local and federal officials and volunteers were summoned to a conference call, during which every jurisdiction aired its status and needs. Every agency was briefed by the National Weather Service on what to expect; volunteer organizations talked about shelter and food availability; and rescue agencies discussed the availability of rescue personnel like water rescue teams. A "situation report" was posted on SEMA's Web site, which compiled the conference call and subsequent efforts to find resources that were requested, such as generators and drinking water. It proved to be an invaluable way to communicate. Another lesson learned from previous floods was the establishment of a Multi-Agency Coordination Center in southeast Missouri to help manage swift water rescue requests. Though evacuation is voluntary in Missouri, hundreds were forced to leave their homes during the March floods, and police and other rescuers were busy aiding stranded residents. "The continuous rains saturated the ground and created additional flash flooding and rising backwaters, so many residents who normally would not evacuate found themselves in conditions where evacuation was necessary," said Susie Stonner, SEMA's public information officer. "More than 100 state employees and fire personnel with swift water rescue training responded in St. Louis, Cape Girardeau, Scott and Butler counties." The devastation could have been much worse if not for Missouri's long-standing effort to move citizens out of harm's way. After severe flooding during 1993, '94 and '95, Missouri began an aggressive buyout program, offering mitigation funds to remove families from floodplains. Since then, more than 5,000 homes have been purchased by local communities, which turn the land into open space, parks or low-maintenance recreational facilities. 
"If the earlier buyout program had not been implemented, many more Missourians would have suffered from floods," Stonner said via e-mail. Gliniecki said the state hopes to increase the number of buyouts in the near future to prevent more flooded residences. In another effort to improve the way Missourians respond to disasters, Gov. Matt Blunt launched a faith-based initiative in April 2008 for mass care and disaster outreach. The initiative provides coordination of nongovernmental, volunteer and faith-based organizations. These organizations will attend regional training sessions on how to set up and run a shelter in accordance with American Red Cross standards. Arkansas Lacking Resources Arkansas was also struck by heavy weather, which left emergency managers struggling to pay for cleanup and some officials contemplating whether housing should be limited in flood-prone areas. Heavy, consistent rains and floods followed the Feb. 5, 2008, tornadoes that had emergency managers scrambling through mid-April and prompted President George W. Bush to declare 35 counties in Arkansas federal disaster areas. "It's been front, after front, after front," said David Maxwell, Arkansas Department of Emergency Management director. "I have folks who have not been in the office since shortly after the Feb. 5 tornadoes." Maxwell is short of help and trying to keep track of various declarations, which have been taxing: keeping staff coordinated, accompanying FEMA representatives to assess damage, and continuing to monitor rising waterways. Being able to enlist out-of-state assistance would be helpful during emergencies, which is why the Emergency Management Assistant Compact (EMAC) was created. EMAC is a congressionally ratified organization that provides interstate mutual aid when requested. But Maxwell would need to pay for an EMAC team if he requested one because of a new FEMA policy, and he can't afford it, he said. The new FEMA Disaster Assistance Policy 9525.9 (Section 324 Management Costs and Direct Administrative Costs) went into effect March 12, 2008, and says that management costs reimbursed by FEMA won't exceed 3.34 percent. The state had already totaled management costs of about $16 million in mid-April and wouldn't be eligible for FEMA reimbursement for an EMAC under the new policy. "It's going to mean that states that are unable to bear the full cost of EMAC response would not be able to use EMAC," Maxwell said. "It will have the effect of damaging mutual aid in this country." Arkansas, like Missouri, is accustomed to this kind of havoc, and experts say to expect more of the same. "It is well documented that more and more people want to live near water," said Frank Richards, meteorologist for the National Weather Service's Hydrologic Services Program. "The resulting migration, along with increasing values of infrastructure like plumbing, heating and communications systems, increases the impact of flooding even if there is no enhancement due to El Niño/La Niña or global warming." "In my opinion, while emphasis on addressing possible anthropogenic impacts on global climate change is prudent, in reality, our ability to control climate is considerably less than our ability to manage growth and development in weather-sensitive areas," said Richards.
Why do we use databases? Because they're there, of course. Obviously, that isn't the whole story – but it's true that databases are everywhere. They tend to proliferate, and the speed of proliferation has increased dramatically since the Web got going. Nonetheless, many developers of Web-based applications ignore this rule: "The Database Must Be Kept Separate From The Application". Is that actually a problem? And indeed, why do we spend time and effort separating the database from the application? Embarking on a Cook's Tour of computer and database development might shed some light on basic database concepts you should keep in mind.

Let's start with some history. People argue endlessly over the identity of the first computer. But whatever it was, it was definitely created by a Brit. The heavyweight contender is the Difference Engine tabulating machine designed in the 1800s by Charles Babbage (born in December 1791 in England – there's some dispute over exactly where). There's also the Colossus system designed during World War II by Tommy Flowers (born 22 December 1905, London) to decipher encrypted German messages: this is considered the first programmable electronic computer. Then there's EDSAC, the first stored-program computer, completed at the University of Cambridge in 1949. Its design was expanded upon in 1951 by J. Lyons & Co. (of teashop fame) to produce LEO 1, purported to be the first computer used for a commercial business application. (Pay no attention to those upstart Yanks and their claims that ENIAC was the first general-purpose, all-electronic computer.)

The advent of database concepts and DBMS software

The applications that ran on these early machines mixed the data, data processing and application into a single program. Initially, that worked pretty well, but people rapidly found it useful to separate the data from the application. For a start, doing so allowed one application to process many different sets of data. Then people realised that all applications essentially manipulate data in some way – so there was massive duplication of effort going on because all applications were being written with built-in data storage and manipulation capabilities. It was natural, then, to create specialised applications that could hold and manipulate data – what we now know as database engines.

Once you have a database, you need a model of how it's going to hold the data. Various methods, such as the hierarchical and network database models, were tried with relative success. Then in the mid-1970s, Ted Codd and Chris Date developed the relational model. Relational database management systems (DBMS) came into being, along with a communication mechanism that has become a standard: the SQL language.

Those developments essentially gave us an 'abstraction layer' between the application and database. The abstraction layer enables us to do many things. For example, if we use standard SQL, it is theoretically possible to move data between databases – say, from Oracle to Microsoft's SQL Server to IBM's DB2 – without the application being aware of any difference. Admittedly, that scenario rarely works in practice – perhaps only in fairyland, in fact – for the simple reason that very few systems use only standard SQL. Almost all vendor-specific versions of SQL support a range of non-standard add-ons.
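One common way of living with those add-ons is to fence them off behind the abstraction layer, so that portable and vendor-specific statements never mix. The Python sketch below is purely illustrative: the query names, table layout and dialect variants are assumptions made for the sake of the example, not the recommended approach of any particular product, but it shows the shape of the idea.

```python
# A minimal sketch of keeping non-standard SQL separate from standard SQL.
# Table names, queries and dialect quirks are illustrative assumptions.

PORTABLE_QUERIES = {
    # Plain ANSI SQL: expected to behave the same against most engines.
    "recent_orders": "SELECT order_id, placed_on FROM orders WHERE placed_on >= ?",
}

VENDOR_QUERIES = {
    # Anything leaning on a vendor add-on lives here, keyed by dialect,
    # so a port only has to revisit this one table.
    "top_customers": {
        "sqlserver": "SELECT TOP 10 customer_id FROM sales ORDER BY total DESC",
        "mysql":     "SELECT customer_id FROM sales ORDER BY total DESC LIMIT 10",
        "oracle":    "SELECT customer_id FROM sales ORDER BY total DESC FETCH FIRST 10 ROWS ONLY",
    },
}

def get_query(name: str, dialect: str) -> str:
    """Return SQL for `name`, preferring the portable form when one exists."""
    if name in PORTABLE_QUERIES:
        return PORTABLE_QUERIES[name]
    try:
        return VENDOR_QUERIES[name][dialect]
    except KeyError:
        raise LookupError(f"No {dialect} variant of query '{name}'") from None

# The application asks for queries by name and never embeds dialect quirks
# directly, so the "attention list" for a clean port is just VENDOR_QUERIES.
print(get_query("top_customers", "mysql"))
```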
And even standard SQL may not behave in exactly the same way when run against different DBMS software. However, portability between databases is still achievable, thanks to the abstraction layer. A well-developed system can keep non-standard SQL separate from standard SQL to make it clear to programmers which parts of the code need attention in order to ensure a clean port. Not all applications require a database engine As you'll probably guess, I'm a fan of databases. So you might think I'd tell you that every application must hold its data in a separate database and that anyone who doesn't do so should be punished with a weekend in Watford – but I wouldn’t. In fact, I’ve even written apps without a database. If an application doesn’t use much data, will never need to be ported and has a limited life expectancy, you can just write your data directly to disk. In such cases, using a database is an extra step that you may simply not need. The message here is not that you must use a database – or else. If there are no benefits to be gained, then don’t use one. But if you understand the benefits that using a database engine can bring, you’re much better placed to make informed decisions about when and when not to employ one. One final point is that as applications become more complex, the tipping point between using and not using a database will shift. With a simple application, there is clearly less work to do if you don’t use DBMS software. But if the application requires extensive development or is mission-critical, the lack of a database can actually create more work and lead to serious data integrity issues. To avoid possible problems, consider building it with a database engine from the word ‘Go’. And lest anyone take offense, I have nothing against Watford – I just couldn’t resist the alliteration. I couldn’t hate the place: a great hero of mine, Chris Date, was born there in 1941. By the way, Ted Codd was also English (born 23 August 1929, Portland Bill). And did I mention that the first programmer was British as well? That would be Ada Lovelace (born 10 December 1815, London), who worked with Charles Babbage on the design of his Analytical Engine, a general-purpose follow-on to the Difference Engine. And then there’s... Dr. Mark Whitehorn specializes in the areas of data analysis, data modeling and business intelligence (BI). Based in the UK, he works as a consultant for a number of national and international companies and is a mentor with Solid Quality Mentors. In addition, he is a well-recognized commentator on the computer world, publishing articles, white papers and books. Whitehorn also is a senior lecturer in the School of Computing at the University of Dundee, where he teaches the masters course in BI. His academic interests include the application of BI to scientific research.
A small Japanese town, abandoned because of radiation concerns after the Fukushima nuclear plant disaster in 2011, is working with Google's map service to keep its memory alive. Google said it will map the streets of Namie, in Fukushima Prefecture northeast Japan using Street View. The town is about 20 kilometers from the nuclear power plant that suffered meltdowns and released radioactive materials after a powerful earthquake and tsunami struck the region two years ago. The Internet company said in a blog posting that the mapping will take several weeks, and the company aims to post the data online in a few months time. "All of the residents of our town, 21,000 people, are currently evacuated all over Japan. Everyone wants to know the state of the disaster area, there are a lot of people that need to see how things are," said Tamotsu Baba, town mayor. "I think there are many people all over the world that want to see images of the tragic conditions of the nuclear accident." Baba said the town is happy to cooperate with Google in the filming project. Namie was split between two evacuation zones established by the Japanese government after the Fukushima disaster. It is partly in the "security zone" where access is limited and partly in the "planned evacuation zone," where residents were told to leave within a month's time. Google said its staff is following recommended national and local guidelines for safety during filming. The company posted about the project on its Japanese blog, including a short video. "We hope that this project will also help protect against the fading of memories of the disaster, as we approach the two-year mark from when it occurred," wrote project manager Keiichi Kawai.
— By George Paul, Research Analyst, Industrial Automation and Electronics, Frost & Sullivan With various security subsystems, storage requirements would seem to be high for a physical security system. However, among the various security subsystems only the video surveillance system would require significant storage space, when compared to the other systems. The output from the other security systems such as access control, intrusion systems, and fire and safety systems would be typically textual data carrying status and control information. With the video surveillance system depending upon the resolution of the camera, the frame rate, and the total number of cameras, the storage requirements can range from a few gigabytes to terabytes per system. Prior to the advent of IP-based communication the storage systems for video surveillance was not overly complicated. The storage system would involve a tape recorder or a DVR. In the tape-based system, once the tape was full it was replaced by another tape. After the stipulated number of days required for backup, the tapes were overwritten using new video footage from the surveillance camera. Similarly, the DVR used to record until it was full and then overwrite when there was no more space. These were basic storage units that were built for the video surveillance system. These surveillance systems used a separate communication network and did not share resources with the computer network. The advent of IP-based physical security system shifted the focus from tape and DVR to IP-based storage systems such as NVRs, network attached storage (NAS), and storage area network (SAN). These IP-based storage systems were developed for IT systems and hence had developed technologies that were used to backup, retrieve, archive, and protect enterprise level data. With the shift of video surveillance systems from serial to IP-based communication networks these technologies became available to the security managers to maintain their storage systems. Besides, in situations where the storage resources are shared with the enterprise system, the security managers need to be aware of certain storage technologies that are used to maintain those systems. A few of the major storage technologies that are currently being used to maintain and operate current generation storage systems are discussed below. Data deduplication is one of an IP-based storage technology that is gaining traction in storage management, as it directly affects the amount of storage space available. In enterprise storage there might be situations where the same set of data would be stored multiple times by multiple users across the enterprises network. This uses up valuable space if the data stored are large in size such as a video file. Data deduplication is a technology that prevents this by deleting duplicates or copies of the same data set. The additional instances of the data would be deleted, keeping the indexing of those files. All these indexes of the duplicate files will point to the same data location, thereby reducing the space required for storage. This makes it easier to reduce the costs of storage, data protection, backup, and retrieval. There are various types of data deduplication methods and they all use hash calculations to return a value for each file that is compared with existing files to remove duplicate files. Data deduplication would be useful in distributed video surveillance systems, where the same video feed is stored in different locations. 
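To make the hash-comparison idea concrete, here is a minimal, hypothetical sketch of whole-file deduplication in Python. Real products typically hash fixed-size blocks or variable-length segments rather than whole files, and the file paths shown are invented, so treat this as an illustration of the principle rather than a description of how any particular storage system behaves.

```python
import hashlib
from pathlib import Path

def dedupe_index(paths):
    """Index files by content hash so identical content is stored only once."""
    store = {}       # content hash -> path of the single stored copy
    references = {}  # every logical path -> hash of the content it points to
    for path in paths:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if digest not in store:
            store[digest] = path        # first copy of this content: keep it
        references[path] = digest       # duplicates become pointers, not new copies
    return store, references

# Hypothetical example: three exported clips, two of them byte-identical,
# occupy the space of two files while all three names remain retrievable.
# store, refs = dedupe_index(["site1/clip.avi", "central/clip.avi", "training/clip.avi"])
```

In the distributed surveillance scenario just described, the same comparison simply runs across sites and archive tiers rather than within a single archive.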
There would be the source location and then there would be the central location, where select video is analyzed. There might also be secondary locations for both forensic and training purposes where the video data might have been copied. When all these data are backed up or archived, data deduplication automates the process and maximizes the capacity utilization. Storage virtualization is a technology that separates the physical storage hardware from the logical representation of that storage space in a computer network. Prior to virtualization if there is 100 GB disk drive, the computer or the server to which the storage unit is attached can show a maximum of only 100 GB or less if there are more that one logical drive. Hence, the logical size of the storage drive is limited by the storage space on the physical disc drive. Virtualization overcomes this by combining all the physical storage units or disk drives available into one large logical drive. This also makes it possible to locate all the storage units in one physical location and provide logical drives to the users connected over the IP network. This is usually used in SAN-based systems, where the data on the logical drive are mapped to its physical location and can be independent of the location of the computer or server on which the user is working. This allows the IT manger to allocate additional storage space to the users and applications in real time without the delay in physically adding a new disk drive. If the entire storage system is running low on space, the IT managers can add additional storage racks and the system will automatically balance the load on existing disk drives to take advantage of the new disk space. All these operations would be performed without affecting the productivity of the users that are logged on at that time. Virtualization would significantly improve the ease of maintaining a large video surveillance system. Systems that are continuously growing — such as a city, road network or rail network system — would benefit the most. As the number of video channels increases, the storage space required can be increased in steps for the entire system without being concerned about mapping the space to the individual channels. The system would then balance the newer storage racks to maintain equal availability of storage space over the entire system. Thin provisioning is a technology that is used along with storage virtualization to increase the capacity utilization of the storage system. In systems where virtualization has not been implemented, the storage is provided based on the expected requirements by the user. Hence, if there are ten users and on an average the space requirement is 10 GB, the total storage even at the beginning of the installation would be 100 GB. However, not all of them might use the entire 10 GB of space. Some may require 2 GB, some may use around 10 GB and a few may require more than 10 GB of space. Thus, the users that use only around 2 GB will have 8 GB of storage space that is not being used and for the few who use more than 10 GB will require an additional 10 GB disk drive for their requirement. Hence the capacity utilization of these systems is brought down due to this irregular distribution of storage space. This is also known as fat provisioning. 
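A quick back-of-the-envelope calculation shows why fat provisioning wastes capacity. The sketch below uses the ten-user example from the text; the per-user usage figures are invented purely for illustration.

```python
# Fat provisioning: every user is handed the full expected allocation up front.
# The per-user usage pattern below is an assumption made for this illustration.

allocated_per_user_gb = 10
actual_usage_gb = [2, 2, 3, 4, 5, 8, 10, 10, 12, 14]   # ten users, uneven usage

fat_allocated = allocated_per_user_gb * len(actual_usage_gb)                # 100 GB carved up in advance
used_within = sum(min(allocated_per_user_gb, u) for u in actual_usage_gb)   # 64 GB actually written to those disks
stranded = fat_allocated - used_within                                      # 36 GB allocated but sitting idle
overflow = sum(max(0, u - allocated_per_user_gb) for u in actual_usage_gb)  # 6 GB still needing extra drives

print(f"utilization of the pre-allocated {fat_allocated} GB: {used_within / fat_allocated:.0%}")
print(f"stranded space: {stranded} GB; additional capacity still required: {overflow} GB")
```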
In thin provisioning, a beginning storage of the minimum requirements by the users is provided and as and when more storage space is required, the IT manager allocates the additional space, thereby distributing the storage to achieve almost 100 percent capacity utilization. Just as virtualization helps improve data management, thin provisioning also helps the storage manager to make the best use of all the existing storage space. Even if there is a combination of cameras with different resolution, frame rate, and on-screen activity, the system uses nearly 100 percent of the capacity by distributing the available storage space in real time. Cloud Storage Solutions Cloud storage is a form of storage virtualization technology, where the physical storage location is not even within the same office building. The storage can be maintained by the same company in an offsite location connected through the Internet or it can be outsourced to a third-party data center. The SAN would be maintained in these offsite data centers, connected to the user servers over the Internet. The user provides the fee for whatever storage they use and does not have to worry about maintaining the hardware. Besides, as the data are accessed through the Internet the same data set can be accessed from anywhere through a Web-based interface. The only two concerns would be reliability in accessing the data and the security of the data being stored offsite. These can be mitigated by faster Internet connections and encryption technologies. At present, most of the enterprise-level cloud storage is used for backup and disaster recovery. These data sets are not used on a daily basis and hence, are encrypted and stored in the cloud. In case of an emergency the data sets are retrieved and reinstalled to the operational SAN. Encryption is a second level of data security to prevent the data being accessed by unauthorized users. The first level of security would be logical security systems that use passwords, smart cards and biometrics to provide access to the data. However, if the hacker gets access to the physical hard drive, he can directly copy the data by running third-party applications that extract the data stored on the hard drives. Encryption is a technology that prevents this by encoding the data being stored on the hard disks. Even if the hacker gets access to the physical hard drive, without the decryption keys the data retrieved would be encoded and would like gibberish. Hence, this is an important technology in securing data, especially over the cloud. Solid State Storage The technologies seen till now where mostly on the software level. The underlying hardware still use the magnetic disk drives that were used in DVR for individual storage units, albeit with greater density and retrieval speeds. There is a new form of storage system that use solid state or integrated circuits that do not have any moving part within them. Unlike magnetic drives that require the disks to spin at thousands of revolutions per minute, solid state devices (SSDs) store and retrieve information from IC chips. Before the advent of nanometer silicon manufacturing process, solid state devices were only able to store data in the kilobyte range. This was not feasible for data storage applications. However, with nanometer silicon manufacturing, the SSDs were able to store data in the gigabyte range in a small form factor. As it has no moving parts it used less power and had much lower wear and tear. 
In addition, the storage and retrieval speed was comparable to random access memories (RAM) in computers, which was significantly higher than magnetic disks. The only restraining factor with SSDs is the price when compared to an equal sized magnetic disk drive. However, prices are coming down and at present, the SSDs are used for storage of data that are used frequently and on a daily basis, with magnetic disks used for backup and optical disks or tape drives used for archival. Spin Down and Self-Healing Data Storage Not all of the disk drives would be in use all the time. The power required for running and cooling these disk drives is immense. Hence, the spin down technology was introduced to reduce the operating costs of these storage systems. The storage management application automatically switches off the disk drives that are not in use and puts them on a mode similar to hibernate on the PCs. This is called spinning down the disk drives and is being implemented by most of the storage solution providers as part of their green initiative to conserve and improve energy efficiency. One of the other problems with disk storage systems is the inherent nature of disk drives to sustain radial and spiral scratches due to continuous operation. This cannot be prevented, as the mechanical contact between the magnetic disks and the reader head is susceptible to this type of damage. These scratches lead to data corruption, which in most cases would lead to data loss. Only in some cases can the corrupted data be retrieved and in most cases it would lead to a complete loss of data. The self healing data storage system locates such scratches at the beginning stages and transfers the affected and adjacent data to a safer location to prevent data loss. Most of the technologies mentioned above can be used for stand-alone physical security storage systems. However, for systems that share the resources with enterprise IT systems, the adoption of these technologies would be based on the IT administrator. Apart from cloud computing, encryption, and SSDs, most of the technologies can even be adopted for stand-alone physical security storage systems and for providing immense savings in terms of both cost and time.
35 percent of the world’s websites are still using insecure SHA-1 certificates, according to Venafi. This is despite the fact that leading browser providers, such as Microsoft, Mozilla and Google, have publicly stated they will no longer trust sites that use SHA-1 from early 2017. By February 2017, Chrome, Firefox and Edge, will mark websites that still rely on certificates that use SHA-1 algorithms as insecure. As a result, web transactions and traffic may be disrupted in a variety of ways: - Browsers will display warnings to users that the site is insecure, prompting users to look for an alternative site. - Browsers will not display the ‘green padlock’ on the address line for HTTPS transactions; consumers rely on this icon as an indication that online transactions are secure and private. - Sites may experience performance problems; in some cases, access to websites may be completely blocked. In addition to the serious impact on user experience, websites that continue to use SHA-1 certificates should expect a significant increase in help desk calls and a reduction in revenue from online transactions. They may also suffer long-term reputation damage. Walter Goulet, cloud solutions product manager at Venafi, commented: “The results of our analysis clearly show that, while the most popular websites have done a good job of migrating away from SHA-1 certificates, a significant portion of the Internet continues to rely on them. According to Netcraft’s September 2016 Web Server Survey, there are over 173 million active websites on the Internet. Extrapolating from our results, as many as 61 million websites may still be using SHA-1 certificates.” Digital certificates are used to derive key material needed to encrypt traffic between users and websites. Encryption is required for private and secure communications and transactions. Digital certificates also verify that the website to which the user is connecting is legitimate. All web browsers use certificates to determine what can and can’t be trusted during online transactions. This is particularly critical in transactions that include sensitive data such as eCommerce and online banking. However, the SHA-1 encryption algorithm used by many website certificates is weak and can be easily manipulated. For example, SHA-1 certificates are vulnerable to collision attacks that allow cyber criminals to forge certificates and perform man-in-the-middle attacks on TLS connections. The SHA-2 algorithm solves these problems, but Venafi Labs’ research shows that many companies have still not made this update, leaving them open to security breaches, compliance problems and outages that can affect security, availability and reliability. “Our whole online world is predicated on the system of trust that is underpinned by these digital certificates used for authentication and authorization; organizations have an obligation to ensure that only secure certificates are used,” commented Kevin Bocek, chief security strategist at Venafi. “Leaving SHA-1 certificates in place is like putting up a welcome sign for hackers that says, ‘We don’t care about the security of our applications, data and customers.’” Bocek also points out, “The average organization has over 23,000 keys and certificates, according to Ponemon research. But most organizations don’t have the tools or visibility to find all of their SHA-1 certificates in their IT environment. 
This means migration to SHA-2 can be complex and chaotic, and, as a result, many businesses have just stuck their heads in the sand and not completed this migration. Unfortunately, in January there will be nowhere for these businesses to hide. My advice is to get a plan in place now, because it will be even more difficult to fix after the impending deprecation deadline when things start to break.”
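A first step toward the visibility Bocek describes can be scripted. The sketch below is a simplified, hypothetical check rather than an enterprise discovery tool: it fetches the leaf certificate a server presents and reports the hash algorithm used to sign it. It relies on Python's standard ssl module and recent versions of the third-party cryptography package, and the hostnames are placeholders.

```python
import ssl
from cryptography import x509   # third-party "cryptography" package

def leaf_signature_hash(hostname: str, port: int = 443) -> str:
    """Fetch the server's leaf certificate and name the hash used in its signature."""
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    algo = cert.signature_hash_algorithm
    return algo.name if algo is not None else "unknown"

# Placeholder inventory; in practice this list would come from asset records.
for host in ["example.com", "legacy-app.example.net"]:
    try:
        name = leaf_signature_hash(host)
        status = "needs migration" if name == "sha1" else "ok"
        print(f"{host}: signed with {name} ({status})")
    except OSError as exc:
        print(f"{host}: could not retrieve certificate ({exc})")
```

A check like this only covers the certificate a reachable endpoint happens to present; internal systems, intermediate certificates and keys on non-standard ports still require broader discovery, which is exactly the visibility gap the Ponemon figures point to.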
Supply and demand tend to move towards equilibrium in markets with prices serving as the mechanism by which that equilibrium comes about. The imbalance between supply and demand exerts a downward or upward pressure on prices, the results of which, in turn, modify amounts supplied and amounts demanded in opposite direction until a price emerges that makes supply and demand equal. Until recently, mathematical economists focused almost exclusively on proving the existence and mapping the structure of a market in equilibrium. The last few years, however, have witnessed an explosion of research into the algorithmic processes by which markets come into equilibrium (or by which game players settle on strategies – which comes to much the same thing.) This study, which usually goes by the name of Algorithmic Game Theory (AGT,) has already had significant commercial impact through its systemization and expansion of our understanding of optimal auction design. As AGT matures further, however, it will also profoundly impact Infrastructure and Application Management (IAM) in a world that revolves around the access of cloud-based services, across mobile interfaces, by users who are embedded within social networks. • First, it will allow enterprises to design better chargeback and internal pricing systems, both with regard to ensuring that pricing schemes do indeed achieve the resource allocation and behaviour modification effects an enterprise has hoped for and with regard to justifying the very principles on which a resource allocation goal is based. • Second, it will support the development of automated dynamic decentralized resource allocation systems by working out the principles by which software agents can coordinate local actions without requiring a powerful, centralized manager of managers to ensure that scarce resources are fairly distributed across multiple business needs. • Third, it will provide the industry with an understanding of how to extract a coherent end to end performance picture across multiple cloud service providers by providing them with a set of incentives to be open without compromising their individual interests. • Fourth, algorithmic markets are themselves a kind of distributing computing model which could be deployed for the purposes of IAM. It is interesting to note that one of the issues that bedevils the algorithms capable of driving markets towards equilibrium is computational complexity. The theory of computational complexity segregates algorithms into classes depending upon how the rate at which resources are consumed grows with the size of algorithm inputs. One famous class of algorithms is called P (for Polynomial) which contains those algorithms where the rate of resource consumption (expressed as time it takes for the algorithm to execute) grows according to a polynomial function. P algorithms are generally considered to be efficient. Another famous class is called NP (for Non-deterministic Polynomial – think of an algorithm with branching paths, each path of which can grow with input according to a polynomial function.) The NP class has many famous members like the algorithm for determining an optimal path through a network and is generally considered to indicate an high degree of inefficiency or resource consumptiveness. It turns out that the complexity or resource consumptiveness of equilibrium discovery algorithms fall somewhere between the level characteristic of P and the level characteristic of NP. 
So while such algorithms are not as hungry as general network path optimization algorithms, they could, in theory, start consuming a lot of compute resources very quickly and without much warning, potentially undermining the second and fourth scenarios mentioned above. On the other hand, markets occur in the real world and get to equilibrium pretty rapidly, so even if the complexity here is a theoretical problem, it could be the case that in practice (in other words, in the neighbourhood of the input sizes we are actually likely to encounter) the chances of a resource consumption blow-up are small.
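The price-adjustment dynamic described at the start of this piece, with excess demand pushing prices up and excess supply pushing them down, can be sketched as a simple tatonnement loop for a single good. The demand and supply curves and the step size below are invented for illustration; AGT's complexity results concern markets far richer than this toy example, but the sketch shows the kind of decentralized process the field analyses.

```python
# Toy tatonnement for one good: the price moves in proportion to excess demand.
# The linear curves and the step size are assumptions chosen only for illustration.

def demand(p):            # quantity buyers want at price p (assumed curve)
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):            # quantity sellers offer at price p (assumed curve)
    return 10.0 + 1.0 * p

def find_equilibrium(price=1.0, step=0.1, tol=1e-6, max_iters=10_000):
    for i in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            return price, i                      # market has (numerically) cleared
        price = max(0.0, price + step * excess)  # imbalance exerts pressure on the price
    raise RuntimeError("did not converge within the iteration budget")

p_star, iterations = find_equilibrium()
print(f"clearing price ~ {p_star:.2f} after {iterations} iterations")  # analytic answer is 30
```

For curves this well behaved the loop converges in a few dozen steps; the open question the complexity results speak to is whether anything like that speed can be guaranteed once goods, preferences and strategic behaviour multiply.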
So far there have been two World IPv6 Days, and two of the five Regional Internet Registries (NICs) have reached IPv4 exhaustion, so everyone has deployed IPv6. Wait, they haven't? Here's why.

IPv6 isn't Compatible

IPv4 is the primary protocol of the Internet. The Internet changed the way we interact with data, with the world around us, and with each other. It is a primary utility as real as water and power. The primary problem with IPv6 is that it is not directly compatible with IPv4. Transition mechanisms are required to enable IPv6-IPv4 communication, adding a further level of complication to the protocol transition.

It is a similar problem to adopting hydrogen-powered cars: people won't buy them until there are refueling stations, and the stations won't get built until enough people buy them. However, the problem is more complicated with IPv6: imagine if hydrogen-powered cars couldn't use the same roads as gas-powered cars. "Transition mechanisms" would consist of car-hauling trucks or changing cars partway through the trip. Even if such mechanisms were automated, there would still be hesitation about adoption.

Demand for IPv6 should be driven by a shortage of IPv4 addresses, but the "exhaustion" of IPv4 addresses has yet to affect most people, for a few reasons. First, "exhaustion" is not full depletion: it means that APNIC and RIPE have both reached a point where they're only giving out a maximum of about 1000 IPv4 addresses to any organization, and they still have almost 2 million IP addresses each (as of October 8, 2012). Compare that to ARIN, which has almost 10 million addresses available. Furthermore, these are "new" IP addresses. It's comparable to studies that cite new home construction as an indicator of economic health. Organizations are not losing their Internet connectivity due to the IPv4 shortage; the limitation is on the expansion of the number of hosts that weren't previously online. In preparation for the IPv4 shortage, many organizations like ISPs "stockpiled" IP addresses, requesting more than they thought they'd need, so it's still not that difficult for most users to find IPv4 addresses. In the APNIC region, there is also a second tier of national registries that may still have available addresses.

Additionally, widespread use of client NAT has greatly reduced the need for routable or public IPv4 addresses. NAT allows consumers of information, or clients, to share IP addresses when connecting to servers by using the Layer 4 address (such as the TCP port) as a unique flow indicator. While there are limits to the scalability of each routable address, the approach is being industrialized through a technique called Carrier-Grade NAT, in which each end node's address undergoes at least two layers of NAT for further address demand reduction. NAT can also be applied in a more limited fashion for servers through the use of load balancers, which require only a single routable IP address for multiple internal IP addresses. More advanced server-side NAT can be accomplished by using Layer 7 information, like the host field in an HTTP request. While NAT has certain disadvantages, like complexity of configuration and management, it has kept the Internet working on IPv4 for the last several years.

Who Needs IPv6?

The question is, if there are still plenty of IPv4 addresses available, even in places where common knowledge holds that there are none left, does anyone actually need to use IPv6? The answer is still yes.
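One quick, informal way to gauge that need from the client side is to ask the resolver whether the services you depend on publish IPv6 addresses at all. The sketch below uses Python's standard socket module; the hostnames are placeholders, and a AAAA record only shows that an operator has published an IPv6 address, not that end-to-end IPv6 connectivity actually works.

```python
import socket

def address_families(hostname: str, port: int = 443):
    """Return which address families the resolver offers for a hostname."""
    families = set()
    for family, _type, _proto, _canonname, _sockaddr in socket.getaddrinfo(hostname, port):
        if family == socket.AF_INET6:
            families.add("IPv6")
        elif family == socket.AF_INET:
            families.add("IPv4")
    return families

# Placeholder hostnames; substitute the services your own users rely on.
for host in ["example.com", "ipv6.example.net"]:
    try:
        print(host, "->", sorted(address_families(host)))
    except socket.gaierror as exc:
        print(host, "-> lookup failed:", exc)
```

Checks like that speak to availability on the content side; the more immediate pressure comes from new large-scale deployments.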
Large-scale deployments that require more than 1000 addresses, even with NAT, are no longer possible with IPv4 in the APNIC and RIPE regions. ARIN has also implemented more stringent requirements for /16 and larger allocations (65,000+ addresses) for phase 2 of their IPv4 countdown. Examples of these types of deployments come from recent IPv6 adopters: cable operators are using IPv6 to manage the set-top boxes. Smart grid deployments are also using IPv6. Will Any Users Need IPv6? The prediction still stands that India and China will see large-scale growth in Internet usage in the next 2-3 years, and their current IPv4 allocations simply aren’t large enough to support increased Internet-accessible services. The question is whether providers there will start rolling out IPv6 with NAT64 or similar combinations of gateways and DNS to provide transparent Internet access to the IPv4 Internet. Granted, many popular applications, like Skype, still don’t support IPv6, and are unlikely to work in that environment until demand warrants it. What’s the Current Status of IPv6? There’s a lot of uncertainty about adding IPv6 that won’t get resolved until it is more widely deployed, but it won’t get more widely deployed until the uncertainty is resolved. There was even an article in August 2012 cautioning against internal deployment of IPv6 due to security concerns. However, it is important not to confuse end-user adoption with network adoption. There has been tremendous growth in IPv6 deployment when measured by advertised AS networks – potential sources and especially destinations for IPv6 traffic. RIPE measured a global increase of 5 percentage points in every region between 2009 and the first World IPv6 Day in June 2011. The perceived lack of end users in APNIC is in stark contrast to their lead in IPv6-advertised networks, at almost 20% of total networks. Even websites aren’t lagging: an ongoing study by Hurricane Electric shows over 45,000 of the Alexa top 1,000,000 websites are available via IPv6. When Will IPv6 Get Here? Market demand for IPv6 at this point shouldn’t be measured by user demand, since it’s likely that the only users that want IPv6 are the types that would read this blog post. However, I remain hopeful that we will soon reach a tipping point, driven by demand for other services that require IPv6. Author Profile - Jim MacLeod is a Product Manager at WildPackets. He has been in the networking industry since 1994, and started doing protocol analysis in 1996. His experience includes positions in firewall and VPN setup and policy analysis, log management, Internet filtering, anti-spam, intrusion detection, network monitoring and control, and of course packet sniffing.