The Internet needs saving because, eventually, there will need to be enough IP addresses to connect mobile Internet devices for every person on the planet, all computers, and all network devices, with enough space left over to connect TVs, video players, and even alarm clocks to the Internet. IPv6--the latest version of the Internet-layer protocol in the TCP/IP suite--also promises interruption-free connections, improved security, and easier management than its predecessor, IPv4. Internet service providers, mobile computing vendors, and governments are ramping up development and implementation of IPv6.

When will "eventually" happen? IPv6 is arriving in Europe and Asia, mainly in emerging technology hot spots that lack IPv4 and other legacy networks. It's also coming to mobile networks because they depend on rapid, reliable connections. But it might not be in U.S. offices for a while. Most industry watchers agree that organizations must support connections to and from IPv6 networks by 2011, at least at the gateway. This also is the year that IPv4 addresses are expected to run out. But adoption is likely to be slow going until then.

Obstacles include the continued widespread use of IPv4, because upgrading to IPv6 means replacing operating systems and software that isn't IPv6-aware. This could be anything from management software and monitoring software to middleware applications. Larger companies that haven't recently updated systems, devices, or software face the highest expenses. Major operating system and network device vendors such as Apple, Cisco, Hewlett-Packard, and Microsoft support IPv6, but application vendors are lagging behind, says Adam Powers, chief technical officer of Lancope, a network behavior analysis vendor. Monitoring applications, communication suites, and peer-to-peer applications may need to be upgraded or replaced. These issues will lead many companies to use IPv4-to-IPv6 gateways rather than replace IPv4 networks.

One appeal of IPv6 is that it's supposed to be more secure than IPv4, because IPSec is built in. But the security benefits of IPv6 only come if partners, clients, and other connecting parties also use IPv6. Many will recall IPv4's IPSec problems; IPSec was back-ported to work with IPv4 and imposed a high performance overhead on routing devices. Troubleshooting and monitoring encrypted packets also were nettlesome with IPSec on IPv4, leaving network administrators wary of IPv6's potential security-induced performance problems.

Before pushing an IPv6 deployment, business technology leaders must thoroughly consider who will have authorization for automatically assigned addresses and configurations, how IP packets will be protected, and what traffic will be exchanged with the Internet. IPv6 obviates Network Address Translation (NAT), allowing devices to talk directly, as was originally intended in IP design. This will simplify network troubleshooting, but also will require careful network configuration. Many organizations rely on NAT for security and privacy and will need to thoroughly consider access controls to ensure that moving away from NAT doesn't create new security problems.

For small or home office deployments, installing IPv6 and using stateless auto-configuration may make sense now, because IPv6 simplifies the process. The trade-off is less control of address assignment and possible problems with network-discovery tools that scan IP ranges to identify hosts.
If nodes are allowed to pull an address from a wide range, as is suggested for security purposes, it may be impossible to identify all nodes on the network because of the vast size of IPv6 subnets.

IPv6 support is coming to the U.S. market in operating systems and network devices. Early adopters have included large Internet properties. For example, Google launched an IPv6-supported site to prepare for IPv6 connectivity. Mobile providers, who stand to receive the biggest benefits, are doing the most to get ready for it, but not all mobile apps and handsets support IPv6. Mobile OS vendors such as Nokia are shipping IPv6-compatible devices, but Apple's iPhone doesn't support IPv6, even though its Mac OS X operating system does.

IPv6: Arriving Slowly -- milestones:
- IPv6 RFC 2460 published
- Cisco implements IPv6 in IOS
- Microsoft implements IPv6 in the Windows XP operating system
- IPv6 functionality added to Japan and South Korea top-level domains
- Apple's AirPort Extreme uses IPv6 by default
- Estimated date that all IPv4 addresses will be allocated
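The earlier point about the vast size of IPv6 subnets can be made concrete with a small sketch. The snippet below is illustrative only (it is not from the article, and the one-million-probes-per-second scan rate is an assumed figure); it compares a common IPv4 /24 with a standard IPv6 /64 and estimates how long an exhaustive address sweep would take.

```python
# Illustrative only: why scanning an IPv6 subnet to discover hosts is
# impractical compared with IPv4. The probe rate is a hypothetical figure.
import ipaddress

ipv4_subnet = ipaddress.ip_network("192.168.1.0/24")
ipv6_subnet = ipaddress.ip_network("2001:db8::/64")   # documentation prefix

probes_per_second = 1_000_000                          # assumed scan rate

for net in (ipv4_subnet, ipv6_subnet):
    hosts = net.num_addresses
    seconds = hosts / probes_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"{net}: {hosts:,} addresses, ~{years:.1e} years to sweep")

# The /24 sweeps in well under a second; the /64 would take on the order of
# hundreds of thousands of years, which is why address-range scanning breaks
# down as a host-discovery technique on IPv6 networks.
```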
High-resolution satellite imagery, previously restricted to intelligence organizations, is about to make a commercial debut. In the next two years, U.S. companies plan to launch satellites carrying sensors capable of recording and transmitting images of the earth's surface with one-meter resolution. Fully processed images will be precise enough to meet U.S. accuracy standards for 1:2400 scale mapping (1.5-meter positional accuracy) -- detailed enough to spot a motorcycle from 400 miles out in space. Before the end of the decade, providers will downlink, process, and distribute black and white, color, infrared and stereo pairs to customers via the Internet, all within hours or days of an order, depending on the product requested. Both special and off-the-shelf imagery will be compatible with GIS, CAD and desktop mapping systems, and will run on PCs having sufficient memory and disk space to handle map files. Customers will have the option of buying fully processed images ready for GIS/CAD integration or processing the raw data on their own. Knowledgeable buyers with established accounts will shop for satellite imagery at vendors' Web sites by simply entering the coordinates of an area and providing accompanying information. Before the end of the decade, affordable, high-resolution, space-based imagery will undoubtedly find application in nearly all levels of city and county government, as well as across the entire spectrum of U.S. commerce. The combination of domestic and foreign competition may put the cost of high-resolution imagery well below that of aerial photography. Until recently, the same circumstances that blocked commercial development of space-based imagery in the United States effectively limited the image resolution of satellites owned by foreign companies. As of this writing no image-recording satellites are owned by U.S. companies, nor can any commercially available satellite imagery match the spatial resolution and positional accuracy of aerial photography. All that, however, is soon to change. Spot Image, S.A., of Toulouse, France, began producing 10-meter resolution imagery from its own satellites as early as 1986, and today is a major supplier in this market. Other foreign companies quickly followed -- the Japanese JERS satellite, providing 20-meter resolution; Europe's remote sensing ERS-1 and ERS-2 satellites offering 10-meter resolution; and India's IRS-1C, with 5-meter resolution -- all with black & white imagery. With even Russia selling 5- and 8-meter resolution from their KFA-1000 and MK-4 satellites, respectively -- and on very rare occasions, letting go of 2-meter military images -- security concerns began falling through the cracks. In 1992, Congress passed the Land Remote Sensing Act, allowing private companies to launch satellites with sensors capable of producing 3-meter resolution. In 1994, the Clinton administration bumped that to 1-meter resolution. Before looking at currently available satellite imagery, it might help to review some concepts and typical applications associated with this technology. Specific wavelengths of light recorded by orbiting sensors not only provide different levels of detail but different information. Panchromatic, or pan, (black and white) has a much higher resolution than multispectral color imagery. Pan sensors capture a broad spectrum of light in a single measurement in which each pixel (smallest picture element that can be individually addressed) is assigned a specific gray-scale value. 
In 1-meter resolution imagery, each pixel represents an area of about three feet. Multispectral imagery is made up of three or more bands -- blue, green, red, infrared, and parts of the thermal spectrum. Each band is separately recorded, processed and integrated into a composite color image. Since pan delivers considerably higher resolution, it generally costs more than color. Ten-meter pan imagery, presently a commonly available product, has applications in regional, county and urban planning; updating road maps, looking at subdivisions, individual houses or buildings, mapping street centerlines and evaluating site proposals. To make map information more understandable as well as attractive, pan imagery is often overlaid with color. The process causes only a slight reduction of the higher resolution of the pan image. The applications for multispectral imagery depend to some extent on the bands the sensor is directed to measure: Spectral modes of green, red and infrared are effective for determining the health and vigor of vegetation, the chlorophyll content, and the early detection in trees and forests of such diseases as pitch canker and Dutch elm disease. Measurements in the near- and middle-infrared range reveal moisture, which in turn indicates the cell structure of the leaf or plant. "Almost anything affecting vegetation health and vigor," said EOSAT director of Strategic Marketing, Tina Cary, "can be monitored and analyzed using multispectral data." Some satellites have pan sensors that can be directed to record an image from two slightly different angles to provide stereo pairs. With one of the higher-quality processing software packages such as ERDAS Imagine, PowerScene, or TNTmips, users can view the pairs in 3D. Applications include land-cover analysis and terrain modeling, providing elevations, sighting and routing pipelines and roads, and determining slopes, angles and inclines. The same software packages enable users to do virtual "fly-overs," viewing features and structures of the terrain model from different angles. Satellite radar has specialized applications. Unlike optical sensors which passively measure light energy reflecting from objects, radar transmits microwave energy, a portion of which is reflected back from objects, conveying information about their size, depth, texture and reflectivity. Since radar is not limited to daylight or to clear skies, it is highly effective in applications such as tracking oil spills at night, monitoring ice in shipping lanes, detecting solid surfaces beneath ground cover such as sand or vegetation, or resource mapping typically cloud-covered tropical regions to determine the status of forests. Methods used for processing radar imagery are different from those used for optical data. According to Coleen Hanley of Spot Image, the cost of radar imagery varies with the provider, and from the European Space Agency(ESA), it is less than most optical imagery. Satellite imagery from the U.S. government and from France, India, Japan, Russia, and other European countries are currently available through U.S. subsidiaries, distributors, aerial photography companies and former aerospace firms. A cross-section includes Atterbury Consultants; EarthWatch; EOSAT; Hammon, Jensen, Wallen & Associates; SPOT Image; and UGC Consultants. EOSAT, the U.S. firm that acquired rights to operate NASA Landsat satellites in 1986, produces 30-meter multispectral products from the ThematicMapper sensor. 
Since Landsat sensors measure bands from blue through infrared and into thermal wavelengths, ThematicMapper data is used in monitoring a wide variety of conditions associated with vegetation. India's IRS-1C satellite produces 5.8-meter resolution pan imagery suitable for community level applications. The same satellite carries a 23-meter multispectral sensor. EOSAT is the exclusive provider of IRS data outside India, and the data is available through the firm's distributor, Atterbury Consultants. "IRS products are very reasonable in price," said Atterbury's Bob Wright. The company recently began taking orders for IRS products. Russian imagery is produced from scanned film. The resolution is 8-meters for the MK-4 satellite, and 5-meters for the KFA-1000. Some MK-4 imagery is in color. The processing for film-based images is more involved and slightly more expensive than for digital imagery. The problem with Russian satellite imagery, Wright said, is getting it. "It's available on a catch-as-catch-can basis." Atterbury has 1995 Russian imagery of California, Nevada, New Mexico, Texas and Oklahoma. "That is about the most recent for North America; it's mostly archived KFA-1000 imagery, comparable to India's IRS-1C, and slightly better than SPOT's 10-meter imagery." SPOT Image Corp., the U.S. subsidiary of SPOT, distributes panchromatic and multispectral data with varying degrees of image processing for a variety of applications. The highest-resolution imagery SPOT produces is 10-meter panchromatic. Products include SPOTView, a highly processed, geocoded, digital orthoimagery; and SPOT MetroView, pre-packaged 15x15-minute image sets of selected U.S. cities. MetroView is formatted for most GIS and desktop mapping systems, and is available in five map projections. Pan imagery of any entire U.S. state goes for 36
GCN LAB IMPRESSIONS

MIT team's radar array can see through walls, for real

By Greg Crowe - Oct 27, 2011

When radar was first developed, it was heralded as a wonder that could do anything. Need to file your back taxes? Radar. Need to save a roast? Radar. Need to find the position and velocity of an aircraft that's miles away? Well, that one actually is radar, but you get the idea. This miracle technology didn't even lose its super powers in films such as "Radar Secret Service," whose reference my boss, Lab Director John Breeden II, insisted I include in this article.

But in reality, a basic radar setup works pretty much the same way as our regular mortal human eyes do. Electromagnetic energy of some wavelength bounces off an object and back into the receiver, at which point the image is processed. The only difference is that radar devices work at a different frequency than visible light -- down in the microwave/radio end of the spectrum. And since the vast majority of these electromagnetic waves -- 99.4 percent -- are blocked by solid matter, radar generally has as much trouble seeing through walls as our eyes do.

Now, researchers at the Massachusetts Institute of Technology's Lincoln Laboratory have developed a phased array of radar transmitters and receivers that can emit waves with enough power to go through a concrete wall, bounce off of the target inside, and go back through the wall with enough strength left to be detected by the receivers. Their system also is able to differentiate between the waves that are doing that and the ones that are simply bouncing off of the wall in the first place, making the wall effectively invisible to it.

But that isn't the impressive part, they claim. In order to get the array to accomplish this, they had to get the system to be able to analyze signals much more quickly and much closer to real time. Larger-scale conventional radar doesn't have to update nearly as frequently because the objects tend to be farther away -- think of the periodic blips on those low-res flight-control screens in the movies. But at distances of 20 to 60 feet, rapid processing is vital, as the situation inside the wall you are looking through can literally change in the blink of an eye.

Research team members demonstrated the radar at 20 feet, which they said is a practical distance for an urban combat situation, MIT News reported. The system provides a real-time picture of movement behind the wall via a 10.8-frames/sec video in which people would be depicted as red blobs on the screen.

In a video with the MIT News report, researcher Gregory Charvat said their goal is "to aid the urban warfighter, to increase his situational awareness," but it's not hard to imagine other uses, such as surveillance or in some kind of standoff/hostage type situation.

Now if we can just get radar to help me file my back taxes, I'll be set. But either way, this is a huge step forward in a new application of a 70-year-old technology.
The amount of power consumed by data centers in the U.S. and around the world continues to grow, but not as fast as previously estimated, according to a new study by Analytics Press. The study, which was written by Jonathan Koomey, a consulting professor at Stanford University, and sponsored by the New York Times, found that a slowing in the installed base of servers due to virtualization and the 2008 economic downturn more than made up for the increased power consumption per server over the last few years. The study also estimated that one server user in particular, Google, alone accounted for an estimated 0.8 percent of all data center power consumption worldwide and 0.011 percent of the world's total power consumption. Koomey, who did a similar study in 2007, took advantage of new server installed base and server sales estimates from analyst firm IDC to revise earlier projections about data center power consumption downward. In his report on the study, Koomey said total data center power consumption from servers, storage, communications, cooling, and power distribution equipment accounts for between 1.7 percent and 2.2 percent of total electricity use in the U.S. in 2010. This is up from 0.8 percent of total U.S. power consumption in 2000 and 1.5 percent in 2005. However, it is down significantly from the 3.5 percent of total U.S. power consumption previously estimated based on continuing historical trends, and the previous estimate of 2.8 percent assuming that power saving technologies would be adopted. The 2007 predictions of U.S. data center power consumption were based on a report that year from the Environmental Protection Agency. The lower range of the EPA's projected power consumption assumed that increased virtualization and increased use of technology to cut server power consumption would account for the difference, Koomey wrote. Worldwide data center power consumption trends were similar to those of the U.S. Koomey wrote that the world's data centers consumed an estimated 1.1 percent to 1.5 percent of all electricity used in 2010, up from 0.5 percent in 2000 and 1.0 percent in 2005, but down from the 1.7 percent to 2.2 percent previously estimated. The key factor behind the less-than-expected data center power consumption lies in a slower growth in the server installed base than early projected, Koomey wrote. Using IDC estimates about the server installed base and server sales, Koomey estimated that the total U.S. installed base of servers in 2010 was 11.5 million volume servers, 326,000 midrange servers, and 36,500 high-end servers in the U.S. That is significantly lower than projections from four years ago of an estimated 15.4 million volume servers, 326,000 midrange servers, and 15,200 high-end servers, Koomey wrote. Koomey also said the 2007 report assumed a Power Usage Effectiveness (PUE) of 2.0, which means that for 1 kWh (kilowatt hour) of power used by a server, 1 kWh is needed to run the data center infrastructure for things like cooling. Koomey estimated that the average PUE in 2010 was between 1.83 and 1.92. "The main reason for the lower estimates in this study is the much lower IDC installed base estimates, not the significant operational improvements and installed base reductions from virtualization assumed in that scenario," Koomey wrote. "Of course, some operational improvements are captured in this study's new data. . . but they are not as important as the installed base estimates to the results." 
Koomey acknowledged that his study leaves a couple of areas that require more study.
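To make the PUE figures above concrete, here is a small, hypothetical calculation (the 500 kW IT load is an invented number, not from the report) showing how total facility power scales with the PUE values mentioned in the study.

```python
# Hypothetical illustration of what a change in PUE means for facility power.
# PUE = total facility energy / IT equipment energy, so facility load is
# simply the IT load multiplied by PUE.
it_load_kw = 500.0          # assumed IT (server/storage/network) load in kW

for pue in (2.0, 1.92, 1.83):
    facility_kw = it_load_kw * pue
    overhead_kw = facility_kw - it_load_kw   # cooling, power distribution, etc.
    print(f"PUE {pue}: facility load {facility_kw:.0f} kW "
          f"({overhead_kw:.0f} kW of overhead)")

# Moving from the PUE of 2.0 assumed in the 2007 EPA-based projection to the
# 1.83-1.92 estimated for 2010 trims the non-IT overhead per kW of server load
# noticeably, which is part of why the newer consumption estimates came in lower.
```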
16.7 Introducing NAT

Section 16: Enabling Internet Connectivity

In the early years of the Internet, IP addresses were directly allocated to any organization (and sometimes individuals) who requested them. With a 32-bit address space yielding nearly 4 billion IP addresses, it was not uncommon for organizations to obtain Class B blocks (up to about 65,000 hosts), even if they only had a few hundred actual IP hosts. Because IP addresses were essentially free and plentiful, all hosts used public (globally routable) addresses. Even companies that did not connect to the Internet were using public IP addresses in their internal networks.

With the release of the Mosaic web browser in 1993, the Internet began to rapidly expand. By the mid-1990s, the Internet community recognized that the exponential demand for public IP addresses would eventually deplete the supply. A two-phase plan was devised for overcoming the problem of IP address depletion:

- RFC 1918: This RFC defines (reserves) blocks of private (non-routable) IP addresses to be allocated to IP hosts that are on the inside of the network of an organization.
- IPv6: IPv6 defines a new version of the Internet Protocol that includes a 128-bit address space. 2^128 addresses equate to 340 trillion trillion trillion potential IP hosts. To put this number into perspective, there are enough IPv6 addresses to grant every person on earth the equivalent of the entire IPv4 address space (2^32) and still have many trillions of IP addresses remaining.

The private addressing plan that is defined in RFC 1918 provides enterprises with considerable flexibility in network design. This addressing enables operationally and administratively convenient address allocation, as well as easier growth. RFC 1918 defines three blocks of IP addresses that are dedicated to private use.

Private Addressing Plan
- Class A: 10.0.0.0 to 10.255.255.255
- Class B: 172.16.0.0 to 172.31.255.255
- Class C: 192.168.0.0 to 192.168.255.255

However, private IP addresses are not routable over the Internet. And since there are not enough public addresses to allow all organizations to provide public addresses to all of their hosts, a mechanism is needed to translate private addresses to public addresses (and back) at the edge of their networks. NAT provides this mechanism.

Without NAT, a host with a private address cannot access the Internet. Using NAT, companies can provide some or all of their hosts with private addresses and also use NAT to provide address translation to allow access to the Internet. The NAT process of mapping a private IP address to a public address is separate from the convention that is used to determine what is public and private, and devices must be configured to recognize which IP networks are to be translated.

Benefits of NAT

Benefits of NAT include the following:
- NAT eliminates, or significantly reduces, the need to purchase public IP addresses from your ISP.
- It protects network security. Because private networks do not advertise their addresses or internal topology, they remain reasonably secure when they gain controlled external access with NAT.
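A quick way to see the private addressing plan in action: the sketch below (illustrative, not part of the course material) checks whether a given address falls inside one of the three RFC 1918 blocks listed above, using Python's standard ipaddress module.

```python
# Minimal sketch: test whether an address belongs to an RFC 1918 private block.
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),        # Class A private range
    ipaddress.ip_network("172.16.0.0/12"),     # Class B private range
    ipaddress.ip_network("192.168.0.0/16"),    # Class C private range
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside one of the RFC 1918 private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("10.1.1.100"))    # True  -> needs NAT to reach the Internet
print(is_rfc1918("198.51.100.7"))  # False -> already globally routable
```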
Drawbacks of NAT

Potential drawbacks of NAT include the following:
- NAT may have a slight impact on router performance. The router needs to alter the IP header and possibly alter the TCP or UDP header. The performance impact of NAT is very minor in current Cisco IOS routers that use Cisco Express Forwarding switching.
- Some applications rely on the source and destination IP addresses remaining constant, with unmodified packets forwarded from the source to the destination. By changing end-to-end addresses, NAT blocks some applications that embed IP addressing in the application payload. For example, some security applications, such as digital signatures, fail because the source IP address changes at a router border. Applications that use IP addresses instead of an FQDN do not reach destinations that are translated across the NAT router.
- End-to-end IP traceability ends at a NAT boundary. When using commands such as ping or traceroute, the termination point of the diagnostic output usually ends at the NAT border device.
- Using NAT complicates tunneled protocols, such as IPsec. Because NAT modifies values in the IP headers, this process can interfere with the integrity checks that are performed by IPsec and other tunneling protocols. Special precautions are required to exempt traffic from being translated if the source and destination IP hosts connect over an IPsec tunnel.
- Services that require the initiation of TCP connections from the outside network, or stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destinations. Some protocols can accommodate one instance of NAT between participating hosts (passive mode FTP, for example) but they can fail when both systems are separated from the Internet by NAT. Applications that require special handling are supported by combining NAT with the Cisco IOS Zone-Based Policy Firewall feature.

Types of Addresses in NAT

In NAT terminology, the inside network is the set of networks that is subject to translation. The outside network refers to all other addresses. Usually, these are valid addresses that are located on the Internet. To speak confidently about NAT, there are some NAT-specific terms you should understand:

- Inside local address: This IPv4 address is assigned to an IP host on the inside network, or private network. The inside local IP address is typically an RFC 1918 private IP address and is not globally routable.
- Inside global address: An inside global address represents the mapping of the inside local address to a globally routable address on the public Internet. This IP address can be assigned from a pool of one or more available public IP addresses.
- Outside local address: An outside local address is the IP address of an IP host on the outside network (the Internet) as it appears to the inside network. Depending on how the translation is configured, this IP address can appear as a publicly routable IP address or as a private IP address. The most common scenario is that the outside local address is the same as the outside global IP address.
- Outside global address: An outside global address is an IP address that is assigned to a host on the outside network by the host owner. The outside global address is allocated from a globally routable address or network space.

In the figure, the inside IP host 10.1.1.100 (an inside local address) translates to 18.104.22.168 (an inside global address). The inside host wants to connect to http://www.cisco.com.
When 10.1.1.100 makes a DNS query, the IP address 22.214.171.124 is returned by the DNS server. In the context of NAT, 126.96.36.199 represents both the outside global and outside local IP addresses for http://www.cisco.com. From the perspective of the Cisco web server, the server would respond to the inside global IP address 188.8.131.52.

Types of NAT

On a Cisco IOS router, NAT can be divided into three distinct categories, each having a clear use case:
- Static NAT: This type of NAT is employed when an inside local address requires a permanent mapping to its inside global IP address. The common use case for static mappings is for IP hosts that need to remain at a constant IP address, like mail servers, DNS servers, and web servers (to name a few).
- Dynamic NAT: This type of NAT works well when the number of IP hosts is fewer than the number of public addresses available in the pool. When a dynamic NAT translation is made, an inactivity timer begins a countdown. At the end of the countdown, the translation is cleared and that IP address is added back to the pool for use by another inside host.
- NAT overloading: This type of dynamic NAT is implemented when there are not enough public IP addresses to satisfy the number of inside local IP hosts that need Internet access. The pool of available public IP addresses may be as small as one.

PAT Versus NAT

One of the most common implementations of NAT is PAT, which is also referred to as overload in the context of a Cisco IOS configuration. NAT can use PAT to translate many inside local addresses into just one or a few inside global addresses. Most home routers operate in this manner. Your ISP assigns one address to your router, yet several members of your family can simultaneously connect to the Internet.

With static or dynamic NAT, the router replaces the source IP address of the inside local address with the inside global IP address. NAT generally translates IP addresses only as a 1:1 correspondence between publicly exposed IP addresses and privately held IP addresses. When NAT overload is configured, multiple addresses can be mapped to one or a few addresses because the router maintains a table of TCP and UDP port numbers that are associated with the connections of each private address. When a client opens a TCP/IP session, the NAT router assigns a port number to its source address. NAT overload ensures that clients use a different TCP or UDP port number for each client session with a server on the Internet. NAT overload modifies the private IP address and potentially the port number of the sender. NAT overload chooses the port numbers that hosts see on the public network. When a response comes back from the server, the source port number (which becomes the destination port number on the return trip) determines the client to which the router routes the packets. It also validates that the incoming packets were requested, thus adding a degree of security to the session.

NAT routes incoming packets to their inside destination by referring to the destination IP address supplied by the host on the public network. With NAT overload, there is generally only one publicly exposed IP address (or a very few). Incoming packets from the public network are routed to their destinations on the private network by referring to a table in the NAT overload device that tracks public and private port pairs. This mechanism is called connection tracking.
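As an illustration of the connection-tracking idea just described, the following simplified sketch (hypothetical code, not Cisco IOS behavior) keeps a table keyed by the public-side port so that return traffic can be mapped back to the inside host that opened the connection; it tries to preserve the original source port and otherwise takes the next free one (the detailed port-group rules are described in the next section).

```python
# Simplified sketch of PAT (NAT overload) connection tracking. One public
# address is shared by many inside hosts; the table maps the public-side
# source port back to the (inside address, inside port) that opened the flow.
class PatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.by_public_port = {}     # public port -> (inside ip, inside port)

    def translate_outbound(self, inside_ip, inside_port):
        # Try to preserve the original source port; otherwise take the next
        # free port (real implementations search within defined port groups).
        port = inside_port
        while port in self.by_public_port:
            port = (port + 1) % 65536 or 1024
        self.by_public_port[port] = (inside_ip, inside_port)
        return self.public_ip, port

    def translate_inbound(self, public_port):
        # Return traffic is routed by looking up the tracked connection;
        # unsolicited packets with no table entry are simply dropped.
        return self.by_public_port.get(public_port)

pat = PatTable("203.0.113.2")                         # documentation address
print(pat.translate_outbound("10.1.1.100", 51000))    # ('203.0.113.2', 51000)
print(pat.translate_outbound("10.1.1.101", 51000))    # collision -> port 51001
print(pat.translate_inbound(51001))                   # ('10.1.1.101', 51000)
print(pat.translate_inbound(40000))                   # None -> dropped
```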
PAT port number determination is based on the following:
- PAT uses unique source port numbers on the inside global IPv4 address to distinguish between translations. Because the port number is encoded in 16 bits, the total number of internal addresses that NAT can translate into one external address is, theoretically, as many as 65,536.
- PAT attempts to preserve the original source port. If the source port is already allocated, PAT attempts to find the first available port number. It starts from the beginning of the appropriate port group: 0 to 511, 512 to 1023, or 1024 to 65535 (in the figure, port 2031 is used).
- If PAT does not find an available port from the appropriate port group and if more than one external IPv4 address is configured, PAT moves to the next IPv4 address and tries to allocate the original source port again. PAT continues trying to allocate the original source port until it runs out of available ports and external IPv4 addresses.

Up Next: Understanding Static NAT
2.3.8 What are the best discrete logarithm methods in use today?

Currently, the best algorithms to solve the discrete logarithm problem (see Question 2.3.7) are broken into two classes: index-calculus methods and collision search methods. The two classes of algorithms differ in the ways they are applied. Index calculus methods generally require certain arithmetic properties to be present in order to be successful, whereas collision search algorithms can be applied much more generally. The apparent absence of such properties in elliptic curve groups prevents the more powerful index-calculus techniques from being used to attack the elliptic curve analogues of the more traditional discrete logarithm based cryptosystems (see Section 3.5).

Index calculus methods are very similar to the fastest current methods for integer factoring, and they run in what is termed sub-exponential time. They are not as fast as polynomial time algorithms, yet they are considerably faster than exponential time methods. There are two basic index calculus methods, closely related to the quadratic sieve and number field sieve factoring algorithms (see Question 2.3.4). As of this time, the largest discrete logarithm problem that has been solved was over GF(2^503).

Collision search algorithms have purely exponential running time. The best general method is known as the Pollard rho method, so-called because the algorithm produces a trail of numbers that, when graphically represented with a line connecting successive elements of the trail, looks like the Greek letter rho. There is a tail and a loop; the objective is basically to find where the tail meets the loop. This method runs in time O(√n), that is, in a number of steps proportional to √n, where n is the size of the group. The largest such problem that has been publicly solved has n ≈ 2^97 (see Question 3.5.5). This is the best known method of attack for the general elliptic curve discrete logarithm problem.
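For readers who want to see the "tail meets the loop" idea in action, here is a toy Python sketch of Pollard's rho for discrete logarithms (the prime, generator, and target below are tiny hypothetical values chosen for illustration; real instances use groups far too large for this to finish). It uses Floyd's cycle-finding to detect where the trail closes on itself and then solves for the logarithm.

```python
# Toy Pollard rho for discrete logarithms: find x with g^x = h (mod p),
# where g generates a subgroup of prime order n. Requires Python 3.8+ for
# pow(r, -1, n). Illustration only, with tiny hypothetical parameters.
def pollard_rho_dlog(g, h, p, n):
    def step(x, a, b):
        # Partition elements into three sets by x mod 3 and keep track of
        # exponents so that x == g^a * h^b (mod p) holds at every step.
        if x % 3 == 0:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        if x % 3 == 1:
            return (x * g) % p, (a + 1) % n, b
        return (x * h) % p, a, (b + 1) % n

    # Floyd cycle detection: the "tortoise" advances one step per iteration,
    # the "hare" two, until both land on the same group element (the loop).
    x, a, b = 1, 0, 0
    X, A, B = x, a, b
    for _ in range(n):
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:
            r = (B - b) % n
            if r == 0:
                return None  # unlucky walk; retry with a different start
            # g^(a-A) == h^(B-b), so the unknown log is (a-A)*(B-b)^-1 mod n.
            return ((a - A) * pow(r, -1, n)) % n
    return None

# Hypothetical example: p = 1019, g = 4 generates a subgroup of prime order 509.
p, g, n = 1019, 4, 509
h = pow(g, 123, p)
print(pollard_rho_dlog(g, h, p, n))  # expected: 123
```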
Most password managers have an Achilles' heel: a single master password unlocks the entire vault. A group of researchers has developed a type of password manager that creates decoy password vaults if a wrong master password is supplied. The system, called NoCrack, was presented on May 19 at the IEEE Symposium on Security and Privacy in San Jose, California. NoCrack was designed to make it much more time-consuming and difficult for attackers to figure out whether they have hit pay dirt.

One main problem with password managers is that they store all of their passwords in an encrypted file, and if that file is stolen, it can be subjected to so-called brute-force attacks, in which thousands of passwords are tried in quick succession. Rahul Chatterjee, a master's student at the University of Wisconsin in Madison, said that if an incorrect password is entered, it's easy for an attacker to know it's wrong: the file that is generated is junk, and the attacker doesn't have to bother trying the credentials at an online web service.

Chatterjee said they're working on solutions, but there are no plans as of yet to commercialize NoCrack.
Network diagnostic tools, as the name suggests, are used in diagnosing and troubleshooting network problems. These tools can be used to check the availability, route, and health of a system in the network using ICMP and SNMP. Tools included under network diagnostics are:
- Ping Tool: Helps in discovering the status of a network device, that is, whether the device is alive or not (a minimal sketch of this idea appears after the list).
- SNMP Ping: Checks whether a node is SNMP-enabled or not.
- Proxy Ping: Enables you to ping a target device using a Cisco router.
- Trace Route: Records the route followed in the network between the sender's computer and a specific destination computer.
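As a rough sketch of what the ping idea boils down to (this is illustrative Python, not part of the product described above), one can ask the operating system's ping utility to send a couple of ICMP echo requests and report whether the target answered.

```python
# Hypothetical helper illustrating a basic availability check: invoke the
# system ping command and treat a zero exit code as "device is alive".
import platform
import subprocess

def ping(host, count=2):
    """Return True if `host` replies to ICMP echo requests."""
    flag = "-n" if platform.system().lower() == "windows" else "-c"
    result = subprocess.run(
        ["ping", flag, str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(ping("192.0.2.10"))  # False unless something answers at this test address
```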
Celebrates anniversary with first ever LTE connection. Google has successfully used one of its giant helium-filled balloons to deliver mobile internet to a school in Brazil. In the week of the project’s one year anniversary, a ‘Loon’ connected a school called Linoca Gayosa to the internet for the first time ever, and is the first case of Google successfully testing its LTE delivery. Project Loon, started by Google last June, uses fleets of helium balloons that fly at an altitude twice that of a commercial airliner to circumnavigate the globe beaming mobile internet to rural areas. By next June, Google hopes to have a fleet of 300 to 400 Loons which can stay aloft for over 100 days delivering internet connections worldwide in hard to reach areas. The Loons can reportedly dish out speeds of 22MB per second to ground receiving equipment and 5MB per second to mobiles. Google said in a blog post yesterday: "This test flight marked a few significant ‘firsts’ for Project Loon as well. Launching near the equator taught us to overcome more dramatic temperature profiles, dripping humidity and scorpions. And we tested LTE technology for the first time; this could enable us to provide an Internet signal directly to mobile phones, opening up more options for bringing Internet access to more places." Project Loon team members install a Loon Internet antenna while the schoolchildren look on. Image: Google "Project Loon began with a pilot test in June 2013, when thirty balloons were launched from New Zealand’s South Island and beamed Internet to a small group of pilot testers. The pilot test has since expanded to include a greater number of people over a wider area," said Google. "Looking ahead, Project Loon will continue to expand the pilot through 2014, with the goal of establishing a ring of uninterrupted connectivity around the 40th southern parallel, so that pilot testers at this latitude can receive continuous service via balloon-powered Internet." Each ‘Loon’ can communicate with other Loons, never straying out of the network and away from users that need their service. The balloons are actually controllable from the ground. "Is it possible to have a nicely spaced out flock of balloons? The answer is yes. Once people could see this was possible, it became a feasible project, not some crazy science project," said Dan Piponi from Project Loon.
Understanding Windows Active Directory A properly designed directory represents a model of the organization it serves, including not only information about computers, users and resources, but also establishing and enforcing security policies, access controls, data flows and much more. That’s why Active Directory sits at the heart of any modern Windows network, and it’s what makes understanding Active Directory techniques so important. This is a subject that anyone could spend years studying, and a decade to master completely. Here, we’ll explain what’s involved in working with Active Directory throughout information system lifecycles. Understanding Comes First Before you try to do anything with Active Directory, it’s essential to understand what it is and what it does. By analogy, Active Directory is to a collection of Windows servers (called domain controllers), along with the computers, users and other resources that fall under their control, what the Windows registry is to any individual Windows machine. By definition, Active Directory (AD) is Microsoft’s proprietary directory service and provides an information storage and control system that’s both centralized (there’s only one logical database behind any single given directory tree) and distributed (the system allows multiple copies to exist and keeps them synchronized and coordinated). The intent of AD is to capture information about and to automate management of user data, security and access controls, and distributed resources of all kinds. AD uses standard directory protocols and services (such as the IP-based LDAP protocol) so that it can work with other directory services, such as Novell Directory Services (NDS) or Sun Directory services (though this is always easier in theory than in actual practice). Active Directory is organized into individual containers called directory trees, which may be further aggregated into directory forests. It’s a complex environment with many tools and utilities involved in its design, maintenance, troubleshooting and so forth. Key AD features include the following: - Support for the ISO X.500 standard for global directories. - Support for secure, Web-based network operations. - Hierarchical organization with delegation of authority to enable local management of local resources and centralized management of global resources and controls (to a restricted class of domain/directory administrators). - Object-oriented data representation and storage, for easy searching of and access to directory data. - Designed to work with older Windows domain models (such as NT 4.0 domain controllers) and to interoperate with newer implementations (so that AD for Windows 2000 works well with AD for Windows Server 2003, albeit through a restricted logical view). Before anyone goes to work on AD, some learning and study is highly recommended. Microsoft offers lots of tutorials and educational material through TechNet and applicable product documentation. The company has also published numerous books on AD under the Microsoft Press imprint, and it offers numerous training courses on AD for both Windows 2000 and Windows Server 2003. A plethora of third-party books, courses and other information about AD is also available. Two Paths to Active Directory Implementation The best techniques and practices that apply to AD vary according to whether an organization has already implemented AD or whether it seeks to implement (or migrate to) Active Directory for the first time. 
For those on the migration or first-time-implementation path, some initial design and planning is absolutely essential. For those working in environments where AD is already up and running, assessment and analysis will indicate whether additional design and planning are needed or not. In the sections that follow, we’ll step through a complete collection of categories under which Active Directory techniques and best practices can be organized; these may not apply in all situations, so use your best judgment as you decide on their applicability to your circumstances. Planning for Active Directory For many organizations, moving to AD also means migrating to newer versions of Windows—namely, Windows 2000 Server (the first platform to support AD) or Windows Server 2003 (the most current AD implementation available). During this phase of activity, planning falls into multiple categories: - Examining processor, memory, storage and other system requirements for the chosen Windows version, and deciding if existing equipment is suitable or if new equipment must be acquired. - Identifying and piloting migration from earlier Windows environments (typically, Windows NT 4.0) to understand and learn the process before moving into full-scale production. Please note that Microsoft offers numerous migration tools to help administrators preserve and transport such information about systems, users, resources, access controls and so forth as makes sense during such a move. (Search Microsoft.com or TechNet for “Active Directory migration tools” to see what’s available.) - Establishing relationships with IT and other executives to educate them about AD and to explain how building directory services can have political ramifications. (This gets increasingly important as more sites or autonomous operating units fall under a single organizational umbrella.) Numerous consulting companies specialize in Active Directory-related services and are available to help with all phases of AD activity. Use them if you can’t grow sufficient expertise in your own organization to do things entirely on your own. Designing an Active Directory This phase requires that you inventory system and information assets, review (or formulate) security policy and understand the kinds of users, user communities, communications links and access controls your organization requires. This is roughly the same as the assessment phase mentioned later in this story, except it’s always more work to do this for the first time than it is to inspect an existing directory services environment and decide how well it continues to fit current needs and circumstances. Once the inventory and assessment phases are completed, you’ll need to create a model of your organization that includes information about users, how users fit into various organizational operating units or job roles, how desktops and servers fit into information processing and delivery needs and how other resources fit into the overall picture. This brief description can’t really tally the amount of work that needs to be done, nor the levels of approval and management buy-in that are necessary, but this phase often takes three months or longer to complete and usually involves a team of professionals. This is also the point at which security policy is mapped into AD Group Policy Objects and where controls for local and remote network access must be formulated. 
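Because AD exposes its data over standard directory protocols such as LDAP, directory content can be inspected programmatically during the inventory and design work described above. The sketch below is purely illustrative and uses the third-party Python ldap3 package; the server name, account, and organizational unit are hypothetical placeholders, not recommendations.

```python
# Illustrative only: querying a directory over LDAP with the ldap3 package.
# All names and credentials below are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_inventory",
                  password="changeme", auto_bind=True)

# List user objects in a hypothetical organizational unit.
conn.search(
    search_base="OU=Engineering,DC=example,DC=com",
    search_filter="(objectClass=user)",
    attributes=["sAMAccountName", "displayName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, entry.displayName)
```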
Implementing Active Directory If a pilot migration has succeeded, a real migration will get underway, followed by adding all the data that AD requires that Windows NT domains never dreamed existed. Many organizations choose to implement AD piecemeal and create organizational units, each with its own directory context, so that entire multi-site networks don’t have to make the switch all in one go. Experience teaches that the more complex and far-flung the organization, the more sense incremental directory implementation makes. This is particularly true when not all sites or organizations have trained, directory-savvy IT staff on site and must rely on headquarters staff or experts housed in other locations. This is also the final step in the first-time process, so that IT professionals working in existing AD environments may not need to tackle them any time soon (but
US-CERT Malware Naming Plan Faces Obstacles

By Paul F. Roberts | Posted 09-22-2005

US-CERT, the U.S. Computer Emergency Readiness Team, will begin issuing uniform names for computer viruses, worms and other malicious code next month, as part of a program called the Common Malware Enumeration initiative. The program is intended to clear up confusion that results from the current decentralized system for naming Internet threats, which often results in the same virus or worm receiving different names from different anti-virus vendors. However, anti-virus experts say the voluntary CME (Common Malware Enumeration) program will face a number of challenges, including that of responding quickly to virulent virus and worm outbreaks.

CME is being run by the Mitre Corp., based in Bedford, Mass., and McLean, Va., for the U.S. DHS (Department of Homeland Security) National Cyber Security Division. Work was begun on the program about one year ago. So far, CME numbers have been assigned to a handful of critical worms and viruses, said Julie Connolly, principal information security engineer at Mitre.

New malicious code samples are held for 2 hours and, if no other example of the new code is submitted, assigned a CME number. When multiple examples of new malicious code are submitted within the 2-hour window, Mitre will ask anti-virus company researchers to work out conflicts in definitions and submit one or more samples for numbering, Connolly said.

Contrast that with the present system for naming malicious code, in which each company that discovers a threat assigns it a name based on that company's database of threats. Most companies make cursory attempts to synchronize their virus and worm names with those of other vendors, but there are frequent divergences and differences. For example, on Sunday, Symantec Corp. issued an alert for a Category 2 mass-mailing worm it named "W32.Lanieca.H@mm." However, Kaspersky Lab, another anti-virus company, named the same worm "Email-Worm.Win32.Tanatos.p," McAfee Inc. called the threat "W32.Eyeveg.worm" and Trend Micro Inc. called it "WORM-WURMARK.P," according to Symantec's Web site.

"Naming is a problem for everybody," said Bruce Hughes, senior anti-virus researcher at Trend Micro. The CME program will help security administrators and end users of anti-virus software, as well as anti-virus companies, Hughes said.

The new system could make it easier for operations staff at large companies to coordinate response to virus outbreaks, said Erik Johnson, vice president and program manager at Bank of America Corp. in Boston. Bank of America has different teams that handle viruses both at the network perimeter and on the company's internal network. In addition, the company uses a number of different anti-virus products simultaneously, he said. "For operations folks, it might make a difference," Johnson said.

"I don't care what they name them as long as they kill those suckers," said Hap Cluff, director of IT for the City of Norfolk, Va. Cluff said the new naming system will make it easier to respond to questions from users about new viruses and worms.

How the System Will Play Out

Currently, Mitre is working with major anti-virus vendors including McAfee, Symantec, Trend Micro, Sophos Plc, F-Secure Corp., Computer Associates International Inc. and Microsoft Corp. to launch the program, but the program is open to smaller anti-virus and security software vendors as well, Connolly said.
Mitre has created a secure server to which participating anti-virus companies pass their discoveries, and will launch a CME Web site on Oct. 3 that will list about 21 viruses with CME numbers. Initially, only high-impact viruses and worms will receive CME numbers, though Mitre may extend CME numbers to lower-level threats once the program is up and running, she said. The CME number and links to a description of the threat will appear on a Mitre Web site akin to the CVE (Common Vulnerabilities and Exposures) Web site. Anti-virus companies will link to that definition from their own advisories, Trend Micro's Hughes said.

Vincent Weafer, senior director of security response at Symantec, said the CME number may not be available in the first hours or even days after a big outbreak, but will provide a reference point for a malicious code threat in the weeks, months and years that follow. Even more importantly, the common ID number will make it easier to program tools to automatically respond to threats, he said.

Still, anti-virus experts said they doubted that the new system would eliminate conflicts between vendors, or replace the habit of assigning catchy names like "Code Red" and "Slammer" to viruses. "Think about Code Red," Hughes said. "Anti-virus companies had a different name for that virus, but had to eventually refer to it as Code Red because the name took off; there was a sexiness to it."
Dynamic malware analysis - or sandboxing - has become a central piece of every major security solution... and so has the presence of evasive code in malicious software. Practically all variants of current threats include some sort of sandbox-detection logic.

One very simple form of evasive code is to delay execution of any suspicious functionality for a certain amount of time. The basic idea is to leverage the fact that dynamic analysis systems monitor execution for a limited amount of time and, in the absence of malicious behavior, classify a program as benign. On a victim machine, on the other hand, delaying behavior for a few minutes does not have a real impact, allowing the attacker to easily achieve different behavior in the analysis environment and on a real target machine.

The easiest, and definitely most prevalent, method of stalling behavior is to make a program "sleep" for a certain amount of time. Since this is such a common behavior, most analysis sandboxes are able to detect this kind of evasion and, in most cases, simply "skip" the sleep. While this sounds like a simple solution, it can have a wide range of unintended effects, as we will see in this blog post.

The Power of Procrastination

In our whitepaper Automated Detection and Mitigation of Execution-Stalling Malicious Code we describe the basic principle behind stalling code used against sandboxes: stalling code is typically executed before any malicious behavior. The attacker's aim is to delay the execution of the malicious activity long enough so that an automated dynamic analysis system fails to extract the interesting malicious behavior. Code stalling can be achieved in a number of ways: waiting for a specific action of the user, wasting CPU cycles computing useless data, or simply delaying execution using a call to the Sleep() function.

According to MSDN,

VOID WINAPI Sleep( _In_ DWORD dwMilliseconds );

"Suspends the execution of the current thread until the time-out interval elapses." In other words, a call to Sleep() will delay the execution of the current thread by the time passed as argument. Most sandboxes monitor the system- or API-calls of a program under analysis and will therefore see this evasion attempt. Therefore, the sandbox is able to detect, and in most cases even react to this, either by patching the delay argument passed to the operating system, by replacing the called function with a custom implementation, or simply by returning immediately to the calling code (skipping the sleep altogether).

Detecting Sleep Patching

Recently, we have come across an interesting malware family that uses this anti-evasion trick used by sandboxes to detect the presence of the analysis environment (one could call it an anti-evasion-evasion trick…). This malware detects sleep-patching using the rdtsc instruction in combination with Sleep() to check acceleration of execution, as one can see in the following code extract. In summary, this code:
- executes rdtsc, which reads the CPU's timestamp counter, and stores the timestamp in a temporary value,
- invokes Sleep() to delay execution,
- re-executes rdtsc, and
- compares the two timestamps.
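The extract referenced above appears only as an image in the original post. As a rough stand-in, the sketch below illustrates the same acceleration check (hypothetical Python, Windows-only, using a high-resolution timer where the real sample uses the rdtsc instruction): measure how much wall-clock time actually elapses across a Sleep() call, and treat a large shortfall as evidence that the sleep was patched.

```python
# Hypothetical illustration of a sleep-acceleration check. The sample described
# above reads the CPU timestamp counter with rdtsc; here a high-resolution
# timer stands in for it. Windows-only (uses kernel32.Sleep via ctypes).
import ctypes
import time

REQUESTED_MS = 10_000          # ask the OS to sleep for 10 seconds
TOLERANCE_MS = 5_000           # far less elapsed time than this suggests patching

start = time.perf_counter()
ctypes.windll.kernel32.Sleep(REQUESTED_MS)     # the call a sandbox may patch
elapsed_ms = (time.perf_counter() - start) * 1000.0

if elapsed_ms < TOLERANCE_MS:
    print("Sleep was accelerated or skipped - likely running in a sandbox")
else:
    print("Sleep ran at full length - behaving as on a normal machine")
```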
Because the engine can also manipulate the values returned by the rdtsc instruction, we can maintain a consistent execution state even when patching a sleep - for example, by fast-forwarding the timestamps returned by the CPU to the program each time a sleep is skipped or accelerated. As a result, the program can no longer distinguish whether a sleep was truly executed in full or whether the analysis system simply forwarded the time inside the sandbox.

Side Effects of Sleep Patching: User Emulation

We found other interesting side effects introduced by sleep patching that might not be directly related to deliberate sandbox detection. In one sample, the malware checks for user activity by repeatedly polling the cursor position at 30-second intervals. Most sandboxes have some mechanism to trigger (or simulate) user activity - typically by repeatedly changing the cursor position, opening new windows, clicking on dialog boxes, and so on. Here the malware uses Sleep() not to delay malicious activity, but merely as a simple way of checking that some user activity - mouse movement, in this case - was observed within a certain time period. Clearly, if a sandbox naively accelerates this code by patching the sleeps, the behavior that was expected to happen while the malware sample is dormant does not happen, and as a consequence the presence of the analysis environment is detected and analysis is evaded. So, once again, a naive approach to execution stalling allows an attacker to identify the presence of the sandbox or, as in this case, the absence of a real user.

Side Effects of Sleep Patching: Race Conditions

Another interesting problem related to sleep patching is race conditions. A race condition is a non-trivial programming error in which multi-threaded code has to execute in a specific order to work correctly. One (ugly, as many programmers would agree) way of avoiding race conditions is to delay code that depends on the completion of another task by the amount of time this task typically needs. In the presence of sleep patching, however, this approach is bound to fail, because the sandbox influences the amount of time that is actually slept.

In one example extracted from another malware family, the malware decrypts and executes code from a dropped file and cleans up after the program has run (by deleting the file). Between invoking and deleting the program, the malware sample uses - as one might already have guessed - a sleep to make sure the program has started before it is deleted. By patching the sleep incorrectly, the sandbox breaks this logic, causing the malware to delete the payload before it is ever executed.

In a more complex example, malware reads encrypted code from a file on disk and executes it in the context of the current process using a separate thread. Once the payload has been started, the main thread goes into an infinite sleep (but this could equally be a long sleep) before calling ExitProcess(), which terminates all threads in the process. If this sleep is patched to be shorter than the execution of the malicious payload, the process is terminated before completing its activity, unintentionally stopping it before it can fully reveal its malicious behavior. Timing attacks like these are common to most malware families today.
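To make the first race-condition example concrete, the drop-run-delete pattern might look roughly like the following sketch (the function name, the missing error handling and the ten-second delay are invented for illustration; the calls themselves are standard Win32 APIs):

#include <windows.h>

/* Simplified sketch of the drop-run-delete pattern: start the dropped
   payload, give it time to load, then remove the file from disk.
   If a sandbox shortens the Sleep(), DeleteFileA() can win the race
   and the payload is wiped before it ever runs. */
static void run_and_cleanup(const char *path)
{
    STARTUPINFOA si = { 0 };
    PROCESS_INFORMATION pi = { 0 };
    si.cb = sizeof(si);

    if (CreateProcessA(path, NULL, NULL, NULL, FALSE, 0,
                       NULL, NULL, &si, &pi)) {
        Sleep(10 * 1000);              /* crude "wait until it has started" */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    DeleteFileA(path);                 /* clean up the dropped file */
}

A sandbox that truncates that Sleep() to a few milliseconds reproduces exactly the broken ordering described above.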
While some of these timing attacks are easy to detect, naive approaches to overcoming them often cause more harm than good, opening the door to evasion attacks that target the anti-evasion logic itself. Using high-resolution dynamic analysis and leveraging its insight into every instruction executed by the malicious program, the Lastline sandbox is able to foil these attacks and reveal the malicious behavior.
Set Up a Project

These questions are based on 70-632: TS – Microsoft Office Project 2007, Configuring. Microsoft Self Test Software Practice Test.
Objective: Set up a project. Sub-objective: Import and export data.
Single answer, multiple-choice.

In a master project, you insert a new project for which you have no resources. You want to assign resources to the new project. What should you do?
- Use resources from the master project.
- Use generic resources.
- Set the Leveling calculations option to Automatic.
- Create temporary resources.

Answer: B. Use generic resources.

You should use generic resources in this scenario. You will first need to create generic resources and then assign them to the tasks in your project. If you do not have any specific resources available, the recommended option is to create generic resources and assign them to your project. Once specific resources become available to perform the tasks, you can replace the generic resources with them.

You cannot use resources from the master project. In this scenario, you have inserted the project into the master project, but simply inserting a project into a master project does not grant automatic access to the master project's resources.

You should not set the Leveling calculations option to Automatic. You can configure this option by selecting Level Resources from the Tools menu; in the Resource Leveling dialog box, you can set the Leveling calculations option to Automatic. This option is set to Automatic by default and is used to level resource allocation across tasks. When resources are over-allocated to tasks, you should use this option to level their work. Leveling is not possible until resources are actually assigned to the project.

You should not create temporary resources in this scenario. The most suitable option is to create generic resources; temporary resources are not a concept that exists in Project.
Someday, will all learning be as quick and convenient as the Kung Fu lessons downloaded into Keanu Reeves' brain in The Matrix? Researchers from Boston University and Japan's ATR Computational Neuroscience Laboratories have figured out how to use data from functional MRIs to create a method of neurofeedback that can project a pre-recorded pattern into some sections of the brain. The resulting pattern in the brain is very similar to the same material imprinted using more conventional learning techniques, according to a paper published in the journal Science.

The authors figured out a way to effectively imprint information onto the visual cortex - information that was absorbed well enough to allow human test subjects to perform the vision-oriented tasks the imprinted pattern described more efficiently than they could beforehand. Their conclusion is that it may be possible to use the approach to "teach" humans some things the same way we "teach" computers - by downloading the lesson into available storage, relying on the self-deterministic ability of the brain itself to adapt the imprinted material into a form it can use, in much the same way it would if it had learned the material the old-fashioned way.

So, is it possible? Is it realistically possible? How hard would it be to port an app to your brain?

Consider how difficult it is to transfer not just raw data, but instructions and data from one computer to another and get the new one to perform correctly. Data is relatively easy, which implies you might be able to transfer memories, or raw information like the names and dates in office of all the American presidents, into a human brain fairly easily. But programmatic commands? Go here. Do this. Kick Agent Smith(s). Change facial expression (it's Keanu, remember?). No.

Viruses, bad programming, misconfigured data-projection machines and all the other things that could possibly go wrong with silicon-based data and instructions can go far wronger with meat-based data and instructions. And that's just assuming there's no problem with the receiving platform itself.

Getting one computer to accurately run a set of instructions designed to run on one with different components, a different version of the operating system, different drivers, diagnostics and programmatic interfaces is almost impossible. Usually it requires throwing out the new machine and replacing it with one almost identical to the old one. Or recoding every bit of instruction by hand so it will run on the new machine. Or building an emulation layer so the program will think it's running on the old machine and the new machine will think the program is written for it. Emulation works, but it slows everything down and is almost always inaccurate enough to create exciting new bugs in the new system that may not be found for years.

Human brains are a lot less standardized than computer hardware. The OSes are all wildly dissimilar; the wetware comes in such a variety of configurations that most can't be considered the same "platform" from a programming perspective.

How hard would your brain resist the implant of knowledge? Even assuming instructions simplified enough that they won't be warped in transmission (or warp the mind trying to perceive them), there's a good chance any instructions would be rejected like a bad liver or the wrong side of a political hot-button.
Human brains obviously have an as-yet-unidentified physical characteristic that allows them to reject even obvious and well-proven ideas that conflict with more dearly held beliefs. How else can you explain all those fools who disagree with you on abortion, defense, taxes, immigration, drugs, education and whether Starbucks and Hipsters should be allowed to live peacefully in neighborhoods that are otherwise not terribly annoying?

Trying to squeeze anything into a human head is tremendously difficult, dangerous to both squeezee and squeezer, and frustrating due to its short half-life. Ask any teacher two days after the end of a semester, or even yourself half an hour after the end of a final exam. Human knowledge is fleeting and ephemeral; human error lasts forever. The only things an adult human brain can retain for the long term are those that are either false, trivial or diabolical. (How long has it been since you've heard "Tie a Yellow Ribbon 'Round the Old Oak Tree"? Still remember the tune? Can't get it out of your head? Sorry.)

Actually, the technique probably will lead to some form of effective lightweight training, though not about anything involving belief or even, most likely, deep decision-making. Remembering where you left your keys or where the light switch is in a room is easy compared to understanding calculus or, for example, Kung Fu.

Even if the ability to imprint functional information ever works, it's too much to expect it would ever work well enough to overcome the two characteristics of the human brain that have been the ultimate downfall of educators, dictators and saints throughout human history: determined ignorance among those who choose not to learn, and stubborn bloody-mindedness among those who do.
Chances are, if you were to ask 10 people at a cloud computing event the question, "What is cloud computing?", you would get 10 different answers. This can be really frustrating to those trying to learn more. The reason for the confusion is that there are so many ways the cloud can be utilized. It might be best to think of cloud computing in terms of SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). Here are some basic definitions of these terms and some examples of how they can be utilized. What they all have in common is that they are scalable, on-demand, cost-effective and secure. They are completely managed, so that end users can focus on their business rather than maintaining software, applications or hardware.

Software as a Service: Also called SaaS, web-based software, on-demand software or hosted software. Essentially, it is software made available to end users via the Internet. It is accessible from anywhere a user has an Internet connection, including mobile phones, and it is not downloaded to any computer. Software as a Service can be customized to the user. It is very low maintenance and cost-effective because SaaS providers manage the servers where the software runs to ensure availability and performance. They also apply any needed software upgrades and administer the security of the product. Examples of SaaS are all around us. In business applications, you may be familiar with CRM tools (such as Salesforce), banking services, project management tools, CAD/CAM, retail point of sale, meeting software (WebEx), and so on. In personal use, Gmail, Yahoo Mail, Facebook, TurboTax, Twitter and YouTube are all examples of Software as a Service. Some are free and paid for by ad space, while others are paid for with monthly or annual subscriptions.

Platform as a Service: Also dubbed PaaS, this is possibly the most nebulous of the terms related to cloud. Even companies selling the service seem to have different definitions. But PaaS refers to application development platforms where the development tool itself is hosted in the cloud and accessed and deployed through the Internet. And, just like SaaS, this service is maintained by the PaaS provider. Examples of PaaS are Force.com (which supports Salesforce, a SaaS), App Engine (from Google), Bungee Connect, LongJump, WaveMaker, and more. Developers are usually the most familiar with these platforms, as they are specialized for development work.

Infrastructure as a Service: IaaS is considered the most flexible cloud model. Infrastructure as a Service provides fully scalable computing resources such as RAM, CPU and storage infrastructure. This enables companies to design IT systems that can scale based on demand. It is the responsibility of the IaaS provider to maintain uptime on all systems, including power, broadband and associated hardware. Therefore, most IaaS providers incorporate a high-availability design model. High availability means that infrastructure resources are always available even if there is a hardware failure of any kind; it enables IaaS users not only to grow their infrastructure as needed, but also to maintain extremely reliable uptime. IaaS providers can supply the necessary infrastructure to the PaaS and SaaS companies, and can even manage it for them, too.

No matter whether it is SaaS, PaaS or IaaS, the goal is a happy end user who can access their software or application via the Internet.
These services take away the pains of maintenance, security and storage, providing an always-available, always-working end product. Hopefully, this has added a little clarity to the differences in the hazy world of cloud computing. Talk to an infrastructure consultant today.
Suppose a terrorist holding hostages at a secret location makes a video demanding ransom. Now imagine that law enforcement officials can take that video, process it and run it through a database that pinpoints the precise location where it was shot based on the images and sounds in the video. Or perhaps a video containing important clues to a crime at an unknown location is uploaded to the Internet. New software could analyze the video to determine the time and place of the crime.

These are just two potential uses of a nascent video-recognition technology in development at the International Computer Science Institute (ICSI) in Berkeley, California. I was lucky enough to attend an open house at ICSI last week, and I got to take a look at some exciting new research.

Researchers at ICSI are currently building a video database by analyzing videos downloaded from Flickr, says Gerald Friedland, who leads ICSI's multimedia efforts. Data from videos taken at known locations is used to develop profiles of the respective locations. The data may include text such as location tags, visual cues such as textures and colors, and sounds, such as bird song. The attributes of a test video are then compared against the profiles and its location is estimated. As more videos with embedded geographical information are downloaded, the researchers will use them to "train" the software to recognize more and more locations. Unfortunately, only three to five percent of the video uploaded to the Internet contains geographical information that can be used to reveal the locations where it was shot, which means it will take a long time to build a database with more than just selected test videos.

Even so, the system is remarkably accurate. By comparing the information in the database to some 5,000 "wild, unfiltered" videos, researcher Jaeyoung Choi, who is developing the system, was able to pinpoint the location where 14 percent of the videos were shot to within 10 meters, or about 33 feet. Even more startling is the system's ability to pinpoint a location by analyzing sounds in a video. It can, for example, "listen" to a train whistle and know that it came from a train passing through Tokyo, says Friedland. And no, that's not hypothetical. It's already been done, and the software has been trained to recognize sounds from 32 cities around the world. The same technology could be applied to photographs, which means that the huge trove of precise geographical data generated by Google Street View could be used to train a system much more extensive than the one currently in use at Berkeley.

I'm aware, of course, that video-recognition technology raises fears that Big Brother could find out many more details about us. Many of us have similar concerns about facial recognition technology. The ICSI researchers are also well aware of the potential dark side of their current research, and it's not coincidental that funding for the project came initially from the National Geospatial-Intelligence Agency, says Friedland. "The world is a very big place and this will never be 100 percent accurate," says Friedland. Some locations, particularly remote, barren areas, may never be charted. Still, the fact that software can now recognize a train whistle in Tokyo is amazing - and a little bit scary.

Image: Courtesy of ICSI.
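To picture the matching step described above, think of each known location as a profile: a vector of numbers summarizing the visual, audio and text cues extracted from videos shot there, plus the coordinates they share. Estimating a test video's location then amounts to finding the closest profile. The following is only a toy nearest-neighbour sketch with an invented feature length; the actual ICSI system combines the cues in far more sophisticated ways:

#include <math.h>
#include <stddef.h>

#define NFEAT 8   /* toy feature-vector length, chosen arbitrarily */

struct location_profile {
    double lat, lon;          /* where the training videos were shot */
    double feat[NFEAT];       /* aggregated audio/visual/text features */
};

/* Return the index of the profile whose features are closest (by
   Euclidean distance) to those extracted from the test video. */
size_t best_match(const double test[NFEAT],
                  const struct location_profile *profiles, size_t n)
{
    size_t best = 0;
    double best_d = INFINITY;

    for (size_t i = 0; i < n; i++) {
        double d = 0.0;
        for (size_t j = 0; j < NFEAT; j++) {
            double diff = test[j] - profiles[i].feat[j];
            d += diff * diff;
        }
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;   /* profiles[best].lat / .lon is the location estimate */
}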
NIST tests accuracy in iris recognition for identification - By Alice Lipowicz - Apr 23, 2012

Iris recognition technology used to identify an individual from a crowd is accurate 90 percent to 99 percent of the time, according to a new report from the National Institute of Standards and Technology (NIST). NIST's Iris Exchange III report, finalized April 16, also found trade-offs between accuracy and speed: faster searches tended to be less accurate. At the same time, iris recognition proved more accurate than facial recognition in some, but not all, circumstances.

NIST's report covers the first public and independent test of commercially available algorithms used to determine the accuracy of a one-to-many match, which is a check of an individual's data against a large database of potential identities to determine whether there is a match. Previous tests had looked only at one-to-one verification, in which testing is done to confirm whether an iris with a known identity can be matched against a specific record. The institute evaluated 92 different iris recognition algorithms submitted by nine private companies and two university labs. The goal was to identify individuals from an iris image, tested against a database of images taken from more than 2.2 million people.

"Accuracy varied substantially across the algorithms the NIST team tested," according to a NIST blog post on April 23. "Success rates ranged between 90 and 99 percent among the algorithms, meaning that no software was perfect, and some produced as many as 10 times more errors than others. Also, the tests found that while some algorithms would be fast enough to run through a dataset equivalent to the size of the entire U.S. population in less than 10 seconds using a typical computer, there could be limitations to their accuracy." If iris recognition is used in combination with other biometric testing, accuracy rates can approach 100 percent, NIST said in the blog post.

The false negative error rate for tests done with single irises, in which a correct match is "missed" by the algorithm, is 1.5 percent or higher. For two eyes, the rate is 0.7 percent. The failures are primarily caused by poor-quality images due to blur, glare, unusual features of the eye or eyelid, or defective image preparation or storage. False negative error rates ("miss rates") varied by as much as a factor of 10: the most accurate algorithms had false negative rates below 2.5 percent, while the least accurate had 20 percent or more false negatives. NIST said the variation in accuracy suggests a need for additional research.

When compared with similar types of testing for single-face recognition, single-iris identification produced significantly fewer errors; the false negative rate was about 10 times lower for iris recognition than for facial recognition. However, the gap narrowed in certain circumstances, such as when the comparison databases contained many false positives themselves.

Several federal agencies have explored iris recognition technologies in recent years. The Homeland Security Department has tried out iris scans at the Texas border and in trusted traveler programs. The FBI is incorporating iris recognition into its next-generation biometric identification system.

Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
The commercialization of all manner of space technologies has always been a forte of NASA, but the space agency faces a number of economic and internal challenges if that success is to continue. A report released this week by NASA Inspector General Paul Martin that assesses NASA's technology commercialization efforts is highly critical of the space agency's ability to identify important technologies and get them out of the lab and out the door to commercial applications.

The report noted that decreased funding and reductions in personnel have hindered NASA's technology transfer efforts. Specifically, funding for technology transfer decreased from $60 million in fiscal year (FY) 2004 to $19 million in FY 2012, while the number of patent attorneys at the Centers dropped from 29 to 19 over the same period. As a result, patent filings decreased by 37%.

Martin's report cites a number of "missed opportunities to transfer technologies from its research and development efforts and to maximize partnerships with other entities that could benefit from NASA-developed technologies." For example:

- Algorithms designed to enable an aircraft to fly precisely through the same airspace on multiple flights - a development that could have commercial application for improving the autopilot function of older aircraft - were not considered for technology transfer because project personnel were not aware of the various types of innovations that could be candidates for the program.
- NASA personnel failed to capitalize fully on the Flight Loads Laboratory at Dryden Flight Research Center - a unique facility used for aeronautic testing services - because they did not recognize the facility as a transferable technology and consequently had not developed a Commercialization Plan to manage customer demand.
- The NASA project team for a precision landing and hazard avoidance project was not aware of NASA's technology commercialization policy and had not conducted a commercial assessment or developed a Commercialization Plan for the project. However, team members provided several examples from their work that could be considered new technologies with potential commercial application, such as technology to improve communication between aircraft and air traffic control that could be useful to the aviation community, and technology to aid helicopter landings during dust storms, low cloud cover, fog, or other periods of low visibility that could be useful to the military.

Aside from reduced money for their efforts, NASA project managers and other personnel responsible for executing NASA's technology transfer processes could improve their effectiveness in identifying and planning for the transfer and commercialization of NASA technologies. Specifically, NASA personnel did not realize the transfer potential of some technological assets, and project managers did not develop Technology Commercialization Plans that provide a methodology for identifying potential commercial partners, Martin stated.
For its part, NASA did not disagree with the report's observations and promised to address the situation with training and other measures. The office of the NASA Chief Technologist said it will review "personnel and funding requirements needed to implement the updated technology transfer and commercialization requirements and will assess whether fiscal and personnel resources are aligned with and adequate to meet the updated requirements."

Creating new technologies is fundamental to NASA's mission, and facilitating the transfer of these technologies to other government agencies, industry, and international entities is one of the Agency's strategic goals. Technology transfer promotes commerce, encourages economic growth, stimulates innovation, and offers benefits to the public and industry, Martin notes.

NASA has had plenty of success in transferring technology, with some reports showing it has spun off some 1,500 different technologies. The Martin report points to aerodynamics research conducted at Dryden Flight Research Center that led to a method to decrease the "box-shaped" aerodynamic drag of trucks by 40%, thereby increasing fuel efficiency. Truck manufacturers that have incorporated these design improvements are realizing 15 to 25 percent better fuel efficiency at highway speeds. Other technology, such as NASA hyperbaric chamber designs, is being used in medical treatments. For example, NASA has a patent license with OxyHeal systems to use three technologies developed at NASA's Johnson Space Center in Houston that are associated with inflatable spacecraft modules and portable hyperbaric chambers.
Password-based authentication is widely used today, despite problems with security and usability. To control the negative effects of some of these problems, best practice mandates that servers do not store passwords in the clear; instead, password hashes are stored. Password hashes slow down password verification and thus the rate of password guessing in the event of a server compromise. A slower password hash is more secure, as the attacker needs more resources to test password guesses, but at the same time it slows down password verification for the legitimate server. This puts a practical limit on the hardness of the password hash and thus on the security of password storage. We propose a conceptually new method to construct password hashes, called "useful" password hashes (UPHs), that do not simply waste computing cycles as other constructions do (e.g., iterating MD5 several thousand times), but use those cycles to solve other computational problems at the same time, while still being a secure password hash. This way, we are convinced, server operators will be more willing to use slower password hashes, increasing the overall security of password-based authentication. We give three constructions, based on problems from the field of cryptography: brute-forcing block ciphers, solving discrete logarithms, and factoring integers. These constructions demonstrate that UPHs can be constructed from problems of practical interest, and we are convinced that they can be adapted to a variety of other problems as well.

Author: Markus Duermuth. Conference: PasswordsCon Bergen 2013.
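The abstract does not spell out the constructions, so the following is only a toy illustration of the general idea and not one of the paper's three schemes: the slow loop of the hash doubles as a brute-force sweep over a small puzzle key space, and every trial is folded into the digest so the work cannot be skipped without changing the result. The mixing function below is a stand-in, not a real cryptographic primitive, and all names and constants are invented for the sketch.

#include <stdint.h>
#include <stddef.h>

/* Toy mixing function standing in for a real hash or cipher round. */
static uint64_t mix(uint64_t x)
{
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* Toy "useful" password hash: the verification work is spent testing
   candidate keys against a known plaintext/ciphertext puzzle pair,
   and each trial is absorbed into the digest. Any key found is a
   useful by-product of verifying the password. */
uint64_t useful_hash(const char *password, uint64_t salt,
                     uint64_t puzzle_pt, uint64_t puzzle_ct,
                     uint32_t keyspace, uint64_t *found_key)
{
    uint64_t h = salt;
    for (size_t i = 0; password[i] != '\0'; i++)
        h = mix(h ^ (uint64_t)(unsigned char)password[i]);

    *found_key = UINT64_MAX;                   /* "no solution found" marker */
    for (uint32_t k = 0; k < keyspace; k++) {
        uint64_t trial = mix(puzzle_pt ^ k);   /* toy "encryption" under key k */
        if (trial == puzzle_ct)
            *found_key = k;                    /* the useful by-product */
        h = mix(h ^ trial);                    /* fold the trial into the hash */
    }
    return h;
}

The cost of the loop is what makes guessing expensive for an attacker; the difference from a plain iterated hash is that the same cycles also make progress on an unrelated search problem.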
What you can expect

The effect that deduplication has on replication and disaster recovery windows can be profound. To start, deduplication means far less data needs to be transmitted to keep the DR site up to date, so much less expensive wide area network (WAN) links may be used. Replication also goes a lot faster because there is less data to send.

The length of the deduplication process, from beginning to end, depends on many variables, including the deduplication approach, the speed of the architecture, and the DR process. For the shortest time-to-DR readiness, inline deduplication combined with replication of deduplicated data yields the most aggressive results. In an inline deduplication approach, replication occurs during the backup, significantly improving the time by which there is a complete restore point at the DR site. Typically, less than one percent of a full backup actually consists of new, unique deduplicated data sequences, which ensures that data can be sent over a WAN quickly and efficiently.

Aggressive cross-site deduplication, when multiple sites replicate to the same destination, can add further value by deduplicating across all backup replication streams and all local backups. Unique deduplicated segments previously transferred by any remote site, or held in local backup, are then used as references to further improve network efficiency by reducing the data to be replicated. In other words, if the destination system already has a data sequence that came from a remote site or a local backup, and that same sequence is created at another remote site, it will be identified as redundant by the EMC Data Domain system before it consumes bandwidth traveling across the network to the destination system. All data collected at the destination site can be safely replicated off-site to a single location or to multiple DR sites.
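As a highly simplified sketch of how such a decision can be made (fixed-size segments and a flat in-memory index, unlike the variable-length, disk-backed segment indexing real deduplicating systems use), the replicate-or-reference logic might look like this:

#include <stdint.h>
#include <stddef.h>

#define SEG_SIZE 4096                  /* toy fixed segment size */

/* FNV-1a hash as a stand-in for the strong fingerprints a real
   deduplicating system would use. */
static uint64_t fingerprint(const unsigned char *p, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Returns non-zero if the fingerprint is already known; otherwise
   records it and returns 0, meaning the segment's data must be sent. */
static int seen_before(uint64_t fp, uint64_t *index, size_t *count, size_t cap)
{
    for (size_t i = 0; i < *count; i++)
        if (index[i] == fp)
            return 1;                  /* duplicate: send only a reference */
    if (*count < cap)
        index[(*count)++] = fp;        /* new segment: remember and send it */
    return 0;
}

/* Walk a backup stream segment by segment and count how many segments
   would actually have to cross the WAN to the DR site. */
size_t segments_to_send(const unsigned char *data, size_t len,
                        uint64_t *index, size_t *count, size_t cap)
{
    size_t to_send = 0;
    for (size_t off = 0; off < len; off += SEG_SIZE) {
        size_t n = (len - off < SEG_SIZE) ? len - off : SEG_SIZE;
        if (!seen_before(fingerprint(data + off, n), index, count, cap))
            to_send++;
    }
    return to_send;
}

Whether the known-fingerprint index is fed by local backups, by other remote sites, or by both is exactly what determines how much cross-site deduplication helps.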
After demonstrating a successful GPS spoofing attack against a drone (UAV - unmanned aerial vehicle) last June, Cockrell School of Engineering Assistant Professor Todd Humphreys and his student research team have now proved that a GPS flaw and a few relatively cheap tools can be used to hijack both ships and planes. The results of their research were demonstrated aboard the "White Rose of Drachs," a 210-foot-long, $80 million yacht, while it cruised in the Mediterranean.

With a laptop, an antenna, and a custom GPS spoofer that cost only $3,000 to build, the team managed to create a false GPS signal that the crew unknowingly accepted as the correct one and used for navigation, and this resulted in the ship veering well off its original course. "Professor Humphreys and his team did a number of attacks and basically we on the bridge were absolutely unaware of any difference," the ship's captain Andrew Schofield told Fox News, adding that no alarm systems were triggered during the demonstration.

With 90 percent of the world's cargo going across the seas, the implications of this research are huge. Attackers could run ships aground, make two ships collide, or even shut down a port - all with disastrous consequences. Humphreys also noted that, given the extreme similarities between the navigation systems of ships and those of commercial aircraft, the same type of attack could be mounted against planes.

Compared to last year's research on similar attacks directed against drones, this latest work is more complex and sophisticated. "Before we couldn't control the UAV. We could only push it off course. This time my students have designed a closed loop controller such that they can dictate the heading of this vessel even when the vessel wants to go a different direction," Humphreys says.

Whether he will once again be called to testify before the US Congress about his research remains to be seen. His drone-hijacking attacks have so far garnered more attention from the US political establishment and the nation's armed forces than this latest demonstration, so for the time being he is trying to spread the information far and wide and to make the world aware that this type of attack is easy and cheap to execute. In the meantime, The Economist has published a timely and interesting piece about GPS jamming, which supports Humphreys' claims about how simple it is to disrupt the workings of satellite positioning systems.
And how does it compare to Myspace?

A new report has found that many conversations on Twitter are heavily gender-biased towards men. Researchers at the Swiss Federal Institute of Technology in Zurich used an algorithm to apply the Bechdel test, which looks for conversations between two members of the same gender about something other than the opposite gender, to real-life conversations on Twitter, and found that the service was heavily biased towards males.

The study analysed American Twitter users who shared the link to a movie trailer on YouTube over six days, as well as users who tweeted with them over a longer period. About 300 million tweets from around 170,000 users were analysed to create something like a long movie script, which formed the basis of an "interaction network".

David Garcia, a researcher at the Chair of Systems Design at ETH Zurich, said, "I expected that on Twitter men would mention women in their conversations as often as women mentioned men." "The analysis revealed a different picture: Twitter conversations among men featured fewer mentions of women," Garcia said. "In turn, there were more conversations between female Twitter users that contained references to men than conversations without a male reference."

However, the researchers said that tweets from students were much less biased against women. Tweets from fathers, on the other hand, were more gender-biased, as they interacted less with female users and mentioned women less often even than childless men. "Possibly this is because fathers tend to be married while men without children may be married or single," Garcia added. "It appears that Twitter is more male-biased. In comparison, conversations via Myspace, another social network, displayed less of a gender bias than those on Twitter, probably because conversations are more private on Myspace."

The Bechdel test is named after American cartoonist Alison Bechdel, and tests whether a work of fiction features at least two women who talk to each other about something other than a man.
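The published summary is high-level, but the kind of aggregate the researchers computed can be pictured on already-parsed conversation records. The sketch below is a toy version of that counting step, with invented field names; the actual study built an interaction network from roughly 300 million tweets and is far more involved:

#include <stddef.h>

struct conversation {
    char gender_a, gender_b;    /* 'M' or 'F' for the two participants */
    int  mentions_female;       /* references to women in the exchange */
    int  mentions_male;         /* references to men in the exchange */
};

/* Fraction of same-gender conversations (for the given gender) that
   mention at least one person of the other gender. */
double cross_mention_rate(const struct conversation *c, size_t n, char gender)
{
    size_t total = 0, with_mention = 0;

    for (size_t i = 0; i < n; i++) {
        if (c[i].gender_a != gender || c[i].gender_b != gender)
            continue;                          /* not a same-gender pair */
        total++;
        int cross = (gender == 'M') ? c[i].mentions_female
                                    : c[i].mentions_male;
        if (cross > 0)
            with_mention++;
    }
    return total ? (double)with_mention / (double)total : 0.0;
}

Comparing cross_mention_rate(records, n, 'M') with cross_mention_rate(records, n, 'F') yields the kind of asymmetry the study reports: the female-female rate came out higher than the male-male one.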
The Control of Broadcast

Broadcasts occur in every network where data is sent to other nodes. The frequency of broadcasts depends on three things:
- the kind of protocol,
- the type of functions and applications running on the internetwork, and
- the way these services are utilized.

To lower the bandwidth requirements of older functions and applications, many have been rewritten to simplify their code and reduce their bandwidth consumption. The problem lies with newer generations of sophisticated applications that demand more and more bandwidth; in practice, these applications consume all the bandwidth they can get. Multimedia applications are among these bandwidth abusers, making extensive use of both multicasts and broadcasts. Defective devices, inadequate segmentation, and poorly designed firewalls only add to the problems created by broadcast-intensive applications. On one hand, network design has improved, but on the other hand, administrators face new problems.

It is quite important to segment the network adequately so that issues can be isolated to a single segment and prevented from spreading throughout the internetwork. By "a problem" we usually mean network congestion of some kind on that network segment. With the help of routing and strategic switching, this can be carried out quite effectively. Nowadays switches have become the standard, and hubs are a thing of the past. The vast majority of companies that still have such devices are replacing their old flat, hub-based networks with switched networks and VLAN configurations.

Within a VLAN, all devices are members of the same broadcast domain and therefore receive all of its broadcasts. Broadcasts are filtered from every switch port that is not a member of the same VLAN. This is a positive point, as it offers the advantages of a switched design without the problems that would arise if all users were part of the same broadcast domain.
There is untapped potential in the wireless signals generated by routers around town. Emergency responders around the world may one day use a mesh network knitted together from privately owned network routers as a means of backup communication, according to a recent study performed in Germany at the Technical University in Darmstadt. Kamill Panitzek and colleagues used an Android application to sniff out wireless networks in a small rectangular area in the center of the city of Darmstadt. In the 0.19 square-mile area, researchers detected 1,971 wireless routers, 212 of which had public, unencrypted signals. In the event of an emergency making cellphone and data network service unavailable, emergency responders could use the wireless signals of homes and businesses to communicate. "With a communication range of 30 metres (yards), a mesh network could be easily constructed in urban areas like our hometown," the team said, reported Phys.org. Just as many routers have a “guest mode” that allows access without compromising network security, routers could be designed to have a channel designated for emergency responder use. Results of the study were published in International Journal of Mobile Network Design and Innovation. In a related study from 2011, Australian researchers at Flinders University developed an ad-hoc phone system called Serval that can relay VoIP calls between phones by using Wi-Fi networking. The system, which was developed with an eye on emergency responders, only works when the phones are within a few hundred yards of each other. Also, the call quality is poor, reported Ars Technica. Phones in a Serval system can also act as relay points, theoretically creating a bridge between distant callers in the event that cellular service is unavailable. The software is open source and freely available for download. A demonstration video of Serval featured by ABC News can be found on YouTube.
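A rough feasibility check can be done with the study's own numbers, under the strong simplifying assumption that routers are spread evenly over the surveyed area and that the quoted 30-metre range is realistic:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double area_m2   = 0.19 * 2589988.0; /* 0.19 sq mi in square metres */
    const double range_m   = 30.0;             /* assumed mesh radio range */
    const double all       = 1971.0;           /* routers detected in the area */
    const double open_only = 212.0;            /* of which unencrypted */

    /* Average spacing if routers were spread evenly over the area. */
    double spacing_all  = sqrt(area_m2 / all);
    double spacing_open = sqrt(area_m2 / open_only);

    printf("avg spacing, all routers : %.0f m (radio range %.0f m)\n",
           spacing_all, range_m);
    printf("avg spacing, open routers: %.0f m (radio range %.0f m)\n",
           spacing_open, range_m);
    return 0;
}

Under this even-spread assumption, a mesh built from all detected routers has an average spacing of roughly 16 metres, comfortably inside the 30-metre range, while one restricted to today's open routers sits near 48 metres and would leave gaps - one reason the researchers argue for a dedicated emergency channel on ordinary, otherwise-encrypted home routers.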
Welcome back to our Security 101 series. We talk time and time again about making 2016 the year of multi-factor authentication. It's really one of the best things you can do to help secure user logons. But since it will require change and budget, and we all know those are two things the company hates, let's talk about passwords and what makes the difference between good ones and bad ones.

Good passwords don't exist in dictionaries (of any language). Passwords should appear random, or at least not map easily to a common word, proper name, or anything else that might exist on a list used by dictionary attacks. "P@ssw0rd" may meet most organizations' password requirements, as it includes all four types of character (upper and lower case letters, numbers, and punctuation) and is at least eight characters long, but it appears on practically every brute force tool's dictionary list too. Simple substitutions like swapping in numbers or punctuation for letters can help make a password more complex, but you have to balance that against what is common. Purely random strings are more complex and difficult to brute force, but of course they are also more likely to be written down. You should also not include anything that indicates the system for which the password is set, so don't use the word money in your password for your bank. Without running passwords through a checker, the best way to prevent dictionary words is to require complexity in the password policy. See below for more details on that.

Longer is better. That's pretty straightforward: the longer the password minimum, the greater the number of possible combinations of characters an attacker must cycle through to find a match. If you use only letters in a non-case-sensitive password, an 8-character password has about 209 billion possible combinations and would take the most powerful supercomputers or distributed attackers less than four minutes to crack. A single modern machine might need 35 minutes to do the same. But if you made that same password, with only 26 possible characters to choose from, 15 characters long, you would have roughly 1.7 sextillion possible combinations. A supercomputer would need around 53,000 years to crack that, and a single computer would need almost half a million years to do the same. The password policy should set a minimum based on what meets the security needs of your organization and the sensitivity of the data, without being too onerous to users. Twelve characters is a good compromise for most needs. One tip from Edward Snowden himself? Think of passphrases rather than passwords.

But of course, passwords are case sensitive, and there are far more characters available on a standard QWERTY keyboard than just letters. If you include upper case and lower case letters, numbers, and punctuation, there are about 96 possible characters on a keyboard that can be entered using a standard key with or without SHIFT. Using repeated characters compromises complexity, so don't use the same character or even consecutive characters in a password. Passwords should use at least three of the four possible character sets, and the password policy should enforce that.

Passwords should be changed with some regularity and frequency. 30 to 60 days is a pretty good range for most needs, but if you are in a higher-security setting, you may want to force changes even more frequently. For customers, you need to find a good balance between security and usability.
A customer who shops with you once every couple of months and has to change their password every time may soon decide to shop elsewhere. You may want to enforce a change once a year for them, or at least suggest that they change their password but not require it.

Passwords need to be unique, both on the system they are set within and across systems. You should not use the same password on more than one system, application, or social network, and you should not reuse the same password on the same system when prompted to change it. The password policy should require a new password at each change interval and remember at least the previous ten to ensure users are not cycling through the same passwords again and again.

Passwords must never be shared, ever. Administrators and support personnel must understand that there is never a situation where they should ask a user for their password, and end users must be trained that they should never give out their password to anyone, ever. They should also ensure that they never write passwords down. Of course, a long and complex password that must be changed regularly and cannot be used on more than one system begs to be forgotten or, worse, written down, so helping users remember passwords will minimize that. Teach them to use passphrases that mean something to them, are easy to remember, and yet won't be readily guessable by someone who has access to their social networking information. For example, if you have an account at Amazon, think about something you only get there, or the first thing you ever got there, and use that as the basis for your password. I always buy my coffee there, so I create a password based on that: "IBuyc0ffeeHere". Not including the quotes, that is a password that is 14 characters long, uses three of the four character types, and is easy to remember. Of course, now that I have shared it with you, I have to change it!

While using multi-factor authentication is the better way to go, when you just don't have that option, creating, using, and enforcing good password practices can help with security. Use the guidelines above to help create a good password policy in your network and to teach your users good password practices.
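For anyone who wants to check the arithmetic behind the length guidance above, the keyspace figures can be reproduced with a few lines of C. The guess rates of 10^8 attempts per second for a single machine and 10^9 for a much faster attacker are assumptions used for illustration, not measurements of any particular cracking rig:

#include <math.h>
#include <stdio.h>

static void report(const char *label, double alphabet, double length)
{
    double combos = pow(alphabet, length);          /* size of the keyspace */
    double secs_per_year = 365.25 * 24.0 * 3600.0;

    printf("%s: %.3g combinations, %.3g years at 1e8/s, %.3g years at 1e9/s\n",
           label, combos,
           combos / 1e8 / secs_per_year,
           combos / 1e9 / secs_per_year);
}

int main(void)
{
    report("8 chars,  26-letter alphabet", 26, 8);   /* ~2.1e11 combinations */
    report("15 chars, 26-letter alphabet", 26, 15);  /* ~1.7e21 combinations */
    report("12 chars, 96-char alphabet  ", 96, 12);  /* ~6.1e23 combinations */
    return 0;
}

These worst-case exhaustive-search estimates line up with the minutes-versus-millennia comparison above and show why a 12-to-15-character minimum drawn from the full keyboard buys so much more than an 8-character, letters-only one.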
Beware of Clicky, and Where is Google?

Instant Messaging (IM) applications such as MSN Messenger, ICQ, AIM and iChat have grown in popularity in recent years, as they allow near real-time text communication between two or more people across the Internet (or local networks). Some applications even include voice chat, video chat, games, file transfer, and a range of other features. While computer users are slowly becoming more aware of the risks of clicking random links in unsolicited or strange-looking emails, the perceived greater personalisation of IM means that some users let down their guard slightly and will click links suggested by other IM users. Worms, viruses and trojan horses now take advantage of this habit by hijacking, or creating new, IM sessions and sending suggested links to other users listed in the victim's buddy lists. When these links are clicked, a range of malicious software is downloaded, such as spyware, adware and viruses. The malware installs itself at this point and then looks to propagate itself again using the new list of IM users on the victim's computer. Unless a computer user is expecting to be sent a link as part of the normal conversation flow, the same caution should be applied as to unsolicited email message links. That is: beware of the clicky.

In further news, Time Warner has had 600,000 of its employees' identities compromised when an external storage company lost the tapes they were stored on. The tapes contained identifying information for employees, dependents and beneficiaries dating back as far as 1986.

The latest in a long list of US universities suffering from network intrusion is Florida University, which effectively had its network compromised recently. Although only 5% of the computers were compromised, 3,000 systems are being inspected, upgraded and updated on the basis that the intrusion could easily have gained access to all systems. This intrusion was only discovered when a single file was found on one of the compromised systems. Given the number of files on an average computer, this was an extremely fortuitous discovery for Florida University.

The SANS Institute has released its list of the top 20 most critical vulnerabilities discovered or patched in the first quarter of 2005. In addition to the expected Microsoft vulnerabilities, the DNS cache poisoning issue (subject of previous columns) was mentioned, as well as buffer overflow vulnerabilities in various anti-virus products and media players. The anti-virus vendors affected included Symantec, F-Secure, Trend Micro and McAfee. The media players affected included RealPlayer, iTunes and Winamp. A buffer overflow occurs when specially crafted content is forced into the memory allocated to an application. This content overflows the amount of memory allocated (i.e. overflows the buffer) and allows the attacker to execute the commands now placed in the overflowed area of memory, effectively compromising the system.

DNS issues continue to be reported, with Google creating its own nightmare over the weekend. Although temporary, and with the details still being resolved, it appears that the records for Google were modified, with different results delivered to users depending on how their local DNS servers responded. As well as the site not appearing, some users were directed to sogosearch.com (which identified itself as google.com). This was not a hack, but a result of google.com being sent to google.com.net.
Sogosearch owns the domain records for *.com.net (i.e. any sitename.com.net), and this is actually correct behaviour. Google is denying that it was an attack, and it appears that it was the result of modifications by Google to their own DNS record.

9 May 2005
The fiber optic transceiver is an indispensable device in network data transmission. What is a fiber optic transceiver, what is it composed of, and what role does it play in the data transmission process?

A fiber optic transceiver includes three basic functional modules: a photoelectric media conversion chip, an optical signal interface (the optical transceiver module) and an electrical signal interface (RJ45). If it is equipped with network management functions, it also includes a network management information processing unit.

A fiber optic transceiver, also called a fiber optic converter in many places, is an Ethernet media conversion unit that exchanges the short-distance electrical signals carried on twisted pair with optical signals carried over long distances. It is generally used where Ethernet cable cannot cover the required distance and optical fiber must be used to extend the transmission range, typically at the access layer of broadband metropolitan area networks; it also plays a major role in connecting last-mile fiber lines to metropolitan area networks and their outer layers.

Some large enterprises build their backbone networks directly on optical fiber as the transmission medium, while the transmission medium of the internal LAN is commonly copper wire. How can the LAN be connected to the fiber optic backbone? This requires conversion between different ports, line types and optical fiber links while ensuring transmission quality. The fiber optic transceiver answers this need: it converts between the electrical signals on twisted pair and optical signals, ensures that data packets are transmitted smoothly between the two networks, and at the same time extends the network transmission distance limit from 100 meters over copper wire to more than 100 km (with single-mode fiber).

The basic characteristics of fiber optic transceivers:
1. Completely transparent to the network protocol.
2. Ultra-low-latency data transmission.
3. Support for an ultra-wide operating temperature range.
4. Dedicated ASIC chips for wire-speed data forwarding. Programmable ASICs concentrate multiple functions on a single chip, offering simple design, high reliability and low power consumption, enabling the device to deliver higher performance at lower cost.
5. Managed devices can provide network diagnostics, upgrades, status reporting, exception reporting and control functions, along with complete operating and alarm logs.
6. Rack-mounted devices provide hot-swap capability for easy maintenance and seamless upgrades.
7. Support for the full range of transmission distances (0 to 120 km).
8. A 1+1 power supply design, supporting a wide supply voltage range, power protection and automatic switchover.
In his State of the Union address in January, President Clinton pronounced that "the era of big government is over." Rhetoric and politics aside, the president's remarks further confirm -- if such is required -- a diminished faith in centralized government at the end of the 1900s. Top-down, industrial-strength governance -- the product of managing two global conflicts, decades of cold war and an industrial revolution -- is on the ebb. This raises the interesting question: "After big government, what?" DOWNSIZING WON'T DO IT It misses the mark to assume that what is required for governing in the next century is simply the opposite of big government. Yes, there is great value in restructuring what is redundant and obsolete in our public sector, but as many Fortune 500 corporations have discovered, salvation does not lie in simply downsizing. So while "smaller" is a component in the future profile of government, it will not be its defining characteristic. To get an idea of what comes after big government it is instructive to look at the dominant technology of our time. Institutions and the cultures they reflect are shaped by the technologies of their era. As Neil Postman points out in his book Technopoly, the development of the stirrup in the eighth century created a new form of military technology -- mounted combat -- which greatly expanded the power of the knightly class and upset the balance of power in feudal society. In this century the automobile gave rise to America's sprawling suburbs and laid the foundation for the decay of our inner cities. Powerful networked computing is the defining technology as we close in on the year 2000, and it is this which will significantly influence what follows big government of the 1900s. The current decentralization of public-sector programs from Washington out to state and local governments is a natural evolution as the United States moves from an industrial to a digital democracy. The future of governance, like information systems, is in scalable, intelligent, cross-jurisdictional, cooperative networks in which the right resources can be effectively deployed to the right location at the right time. The future lies in networked government services with decision-making and action resident at the community level -- wired to an intelligent infrastructure of regional, national and global resources pooled from both the public and private sectors. This evolution to a networked government can be accomplished because of advances in technology, but it is impelled by the pressing demands for better results from our public institutions. Criminal justice is a prime example of this. Involved local policing is the best insurance for safe communities. But local public safety officials -- be they judges, police or corrections officers -- need a robust, intelligent network connected regionally and nationally that accurately tells them in real-time who they are dealing with. Our cover story this month by Editor Wayne Hanson shows, in very personal terms, the terrible costs of disconnected, stand-alone justice agencies. And if our goal goes beyond law enforcement to safe and healthy communities, then these networks must connect beyond criminal justice to resources for economic development, education and human services. Public health is another area where networked services will play a critical role in future government service delivery. 
A recent Wall Street Journal article highlighted the growing promise of telemedicine to leverage medical expertise from one part of the country to another -- a heart specialist in New York City, for example, assisting a physician in a small community hospital in the Midwest. Such networks can bring scalability and significant savings to public health care. Consumer advocate Ralph Nader remarked that technology has no inherent democratic imperative. The choice of whether a technology is engaged to forward or limit man's freedom resides with the individuals who harness it. Fifty, 20 or even 10 years ago, we did not have the computing and communications muscle to move our large and moribund government organizations into a decentralized and networked future. Today we do. Our challenge now is to provide the democratic vision to guide and build it.
<urn:uuid:9b286b88-f27c-460d-abad-ae872e761f99>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/100555974.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00199-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932995
818
2.53125
3
While NASA and National Highway Traffic Safety Administration engineers did not find anything wrong with Toyota's auto engineering, the investigation may prompt changes and perhaps new design standards for auto electronics. NASA and the NHTSA this week eliminated electronic problems as a cause of the now infamous unintended vehicle acceleration problem that caused Toyota to recall nearly 8 million cars in the past year. "This week NASA engineers found no electronic flaws in Toyota vehicles capable of producing the large throttle openings required to create dangerous high-speed unintended acceleration incidents. The two mechanical safety defects identified by NHTSA more than a year ago - "sticking" accelerator pedals and a design flaw that enabled accelerator pedals to become trapped by floor mats - remain the only known causes for these kinds of unsafe unintended acceleration incidents," the Department of Transportation reported. But the report also noted that while electronics could not be blamed for the problems, the engineers said such systems in all cars need more scrutiny. For example, the NHTSA is now considering a number of new tests for electronic car systems, including: - Propose rules, by the end of 2011, to require brake override systems, to standardize operation of keyless ignition systems, and to require the installation of event data recorders in all passenger vehicles; - Begin broad research on the reliability and security of electronic control systems; - Research the placement and design of accelerator and brake pedals, as well as driver usage of pedals, to determine whether design and placement can be improved to reduce pedal misapplication. "Based on objective event data recorder (EDR) readings and crash investigations conducted as part of NHTSA's report, NHTSA is researching whether better placement and design of accelerator and brake pedals can reduce pedal misapplication, which occurs in vehicles across the industry. NHTSA's forthcoming rulemaking to require brake override systems in all passenger vehicles will further help ensure that braking can take precedence over the accelerator pedal in emergency situations," the NHTSA stated. Toyota electronics are not out of the woods just yet, though. The NHTSA and NASA will be briefing the National Academy of Sciences panel looking into the sudden acceleration issue. That group is currently conducting a broad review of unintended acceleration; their study is expected later this year.
<urn:uuid:cb224287-7594-4397-9881-f80d59118044>
CC-MAIN-2017-04
http://www.networkworld.com/article/2228481/security/nasa-s-investigation-of-toyota-problems-may-force-electronics-changes.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00107-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936241
488
2.59375
3
Rise of the Machine frauds involving product pricing, "free prizes," and auto repair; Robotics experts from Carnegie Mellon and several other U.S. universities are developing a tree-climbing robot. The aim of Robots in Scansorial Environments (RiSE) is to develop a machine capable of walking on land and crawling up vertical surfaces. Such robots could have plenty of useful applications, in search-and-rescue and space exploration, for example. But presumably it could also help you reach those really hard-to-prune branches. -- Newscientist.com Americans may not need their wallets to pay for movie tickets or gas at the pump. In the next six to 18 months, they may be able to use their cell phones instead of folding money. This concept is already popular in Europe and Asia. In South Korea, many cell phones sport software that's integrated with banking systems so people can buy groceries and soft drinks from vending machines. Japanese consumers also use "wallet phones" with contactless smart cards, which are not just used as credit cards -- they can contain entrance tickets, metro tickets, loyalty cards, air tickets and employee ID cards. -- Information Week European researchers have developed "neuro-chips" in which living brain cells, or neurons, and silicon circuits are coupled. The achievement could one day lead to the creation of sophisticated neural prostheses to treat neurological disorders, or the development of organic computers that crunch numbers using living neurons. Scientists created the neuro-chip by squeezing more than 16,000 electronic transistors and hundreds of capacitors onto a silicon chip just 1 square millimeter in size. They then used special proteins found in the brain to "glue" neurons onto the chip. The proteins are more than just a simple adhesive: They allow the neuro-chip's electronic components to communicate with the neurons. Researchers are now working on ways to avoid damaging the neurons during stimulation, and exploring the possibility of using a neuron's genetic instructions to control the neuro-chip. -- Foxnews.com Rumors or Tumors? People who regularly use cell phones don't face an increased risk of developing brain tumors, according to a study published in the British Medical Journal. Researchers studied 966 people with the most common type of brain tumor and 1,716 healthy volunteers over a period of four years, and found no relationship between cell phone use and the incidence of tumors. There was no connection between the risk of tumors and the duration of calls, their frequency or the make of the phone, the study said. Wireless phone makers in the United States are facing class-action suits that claim radiation from the devices puts users at increased risk of illnesses, including brain cancer. In October 2005, the U.S. Supreme Court turned away arguments from the manufacturers that federal regulations preclude the lawsuits by consumers. -- Bloomberg.com Slow to Grow The growth of Internet usage in the United States is slowing because late adopters are more difficult to convert than expected, according to a study by Parks Associates and CNET. Current U.S. Internet users access the Web in the following ways. Internet access outside the home only: 13 percent Dial-up at home: 22 percent No Internet access: 23 percent Broadband at home: 42 percent Mind Your Manners A 2006 survey on cell phone etiquette of 2,000 adults in the United States asked where it's OK to yak on your cell phone. Eighty-six percent of the survey respondents own cell phones. 
-- Conducted by Harris Interactive and sponsored by LetsTalk. 63 percent said it's generally acceptable to chat on a cell phone while driving. 66 percent said it's OK to talk on the phone in the grocery store. 2 percent said taking calls at the movies is acceptable. 21 percent said answering calls at a restaurant is fine. 38 percent said talking in the bathroom is OK. Computer memory chips surpassed cigarettes as Virginia's top manufactured export in 2005, according to economic figures. The state sent about $645 million worth of chips abroad in 2005, up from less than $12 million worth in 1997. Over the same period, cigarette exports fell 83 percent to $440 million. -- USA Today A report by Mediamark Research Inc. suggests that spending time online and engaging in other digital pursuits helps children ages 6 to 11 develop the skills, knowledge and self-confidence they need to fully participate in the world around them. The following indicates Internet usage among U.S. children in this age group. Have used the Internet in the past 30 days: Boys: 56.3 percent Girls: 61.8 percent Use the Internet every day: Boys: 7.6 percent Girls: 8.7 percent Note: The figures expressed are a percentage of survey respondents. In 2005, more than 46 percent of Americans experienced white-collar crime victimization, and more than 62 percent reported personally experiencing these crimes at some point during their lifetime, according to a survey by the National White Collar Crime Center. The types of victimization measured included: 800/900 number scams; unauthorized PIN use; unauthorized credit card use; Internet fraud; and financial planning fraud. One hundred and thirty million Americans have been victimized. -- National White Collar Crime Center
<urn:uuid:1e3c50e1-0b76-4cd6-8453-ba66705ea8fd>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/100494199.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931529
1,103
3.15625
3
Even if his moniker doesn’t ring a bell immediately, you might notice something familiar about the name of 19th century French engineer Jean-Maurice-Émile Baudot. It’s that surname, isn’t it? Baudot. Ever wonder where the term “baud” originated? Now you know. Baud expresses the number of pulses (or voltage changes) that can be transmitted over a network every second. In the early dial-up Internet era, modem speeds were typically represented as a multiple of the baud rate times the number of bits – the smallest unit of digital computer information – that could be crammed into each pulse. If you had a screeching 56k modem back in the day, for example, it meant that it could achieve a data transmission speed of 56,000 bits per second – the product of seven bits per pulse at a baud rate of 8,000. Baudot developed telegraphs and the transmission systems that supported them. His invention of a multiplexed printing telegraph – one that squeezed multiple transmissions over a single line – was a precursor to the modern telephony environment that has prevailed for decades. Time division multiplexing (or TDM) is a core enabling technology that has long powered the modern telephone network. It sprang from Baudot’s early work in the telegraph medium. TDM relies on an elaborate method of synchronizing switches at either end of a circuit so that transmission lines can accommodate more than one signal at a time. Each burst of digital information occupies the line only for an instant – thus the “time” reference in the term. One of the first commercial TDM implementations for audio happened in 1953, when RCA Communications relayed signals from a New York City facility to a Long Island receiving station. Eight years later, Bell Labs developed a TDM network that combined 24 voice calls over a four-wire copper trunk. The point is that TDM techniques, tracing back to Baudot’s telegraph multiplex system of 1874, have powered modern telecommunications for a long, long time. But there are signs now that time may be running short for TDM. A significant disruptive force, although it’s hardly a newcomer, is session initiation protocol (or SIP), a set of rules for establishing and tearing down voice and data transmissions over IP networks. SIP originally was conceived in 1996 by developers Henning Schulzrinne (now the FCC’s chief technology officer) and Mark Handley, an Internet Engineering Task Force member and professor at University College London. The first iteration of SIP was standardized by the IETF in 2002. Language from the IETF document describes SIP’s essential role for “creating, modifying, and terminating sessions with one or more participants. These sessions include Internet telephone calls, multimedia distribution, and multimedia conferences.” Essentially, SIP accommodated lengthy voice and data sessions that the Internet’s prevailing protocols, such as the Simple Mail Transfer Protocol (SMTP) and the Hypertext Transfer Protocol (HTTP), weren’t as good at. It has taken a while, but SIP is now exerting its force across the telecommunications landscape. As a foundational enabler for voice-over-IP services, SIP now enjoys broad acceptance from large business and institutional customers, and is a key agent in powering residential VoIP over both managed and “over-the-top” data networks.
The telecommunications research firm EasternManagement said in a March 2013 report that “SIP is so pervasive now, whenever someone acquires a new telephone system it is routine for it to be SIP phone, SIP protocol, IP based.” Cable companies are now prominent players in SIP Trunking, an increasingly popular successor to TDM telephone networks for large businesses. SIP’s rise to prominence carries some symbolic importance beyond its marketplace impact. Its origin in the world of the Internet, and not in the traditional telecommunications sector, signals to some a historic changing-of-the-guard in the way networks exchange information. The SIP Forum, a not-for-profit advocacy group, contends that “We are in the midst of a global transition from a TDM to an IP-based network infrastructure, with an accompanying explosion of new services.” SIP is all about enabling videoconferencing, intelligent mobility offerings like “follow-me” phone services, live chat over the Internet and more. None of these, of course, could have been envisioned by a French inventor whose name spawned a measurement unit for data transmission speeds that have long since been leapfrogged. But technologies rarely appear out of thin air. They’re the product of incremental adaptation and improvement. For that reason, the protocol known as SIP owes a debt of gratitude to the man responsible for the world’s earliest incarnation of data multiplexing. Even if we no longer speak in baud.
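For readers who want to check the modem arithmetic from the top of the piece, here is a one-line sketch in plain Python. The function name is ours, added purely for illustration; the figures are the ones the article cites.

```python
# Bit rate is simply the symbol (baud) rate multiplied by the bits carried per pulse.
def bit_rate(baud_rate: int, bits_per_pulse: int) -> int:
    """Return the data rate in bits per second."""
    return baud_rate * bits_per_pulse

# The article's dial-up example: seven bits per pulse at a baud rate of 8,000.
print(bit_rate(8000, 7))  # 56000 -> the familiar "56k" modem speed
```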
<urn:uuid:c647d199-9721-46f1-ae73-0e1943f60388>
CC-MAIN-2017-04
https://www.cedmagazine.com/print/articles/2013/10/memory-lane-homage-to-protocol-past?cmpid=related_content
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923102
1,107
3.578125
4
Server Gated Cryptography SGC SSL Background & Summary SGC Certificates were introduced back in the 1990s as a way to enable 128-bit encryption in web browsers that were only capable of lower levels of encryption due to U.S. Government export regulations. At the time of their introduction, Server Gated Cryptography SSL certificates were issued only to the web servers of financial organizations so that when they engaged in transactions with browsers shipped outside the U.S. they could "bump up" the cryptographic strength of the SSL connection from 40 or 56 bits to 128 bits. The full capabilities of encryption software in versions of Internet Explorer and Netscape Navigator destined for use outside the U.S. were disabled except when the browser connected to a financial institution's SGC-enabled server which could provide the key to unlock them, hence the term "Server-Gated Cryptography." However, by the late 1990s the U.S. Government had begun relaxing its encryption export policy, allowing the use of 128-bit encryption in browsers without the need for SGC certificates. Today very few people still use these old, intentionally disabled browsers, and there is no reason why they should. While one could argue that these users potentially receive some benefit from certificates using Server Gated Cryptography, the risks associated with encouraging the use of old browsers and SGC SSL certificates far outweigh the potential benefits. Not only does the use of SGC-enabled SSL certificates facilitate the use of legacy browsers with heightened vulnerability to malicious software, but there are numerous alternatives more cryptographically secure than SGC that are less expensive and easy to implement. DigiCert strongly recommends that server administrators currently running SGC SSL implementations replace them with DigiCert's Extended Validation Certificate and consider heightening server security settings. Dangers of Facilitating the Use of Legacy Web Browsers Legislation regulating the use of strong encryption was phased out beginning in 1999. Here are some facts that users still running 40-bit or 56-bit browsers should bear in mind: - Their web browsers and/or operating systems have not received necessary security updates since December 1999. - Hundreds of thousands of viruses, keyloggers, and other malicious software programs have been created and spread across the web, via websites and email, since their last browser or OS security update. - Easily exploitable vulnerabilities in their web browsers, many of which have since been remedied, could easily be used to facilitate the criminal exploitation of sensitive information entered into online forms, regardless of any action taken by site administrators. - Users of these legacy browsers are many times more susceptible to malicious attacks than users who upgrade to more secure, modern web browsers. - Many of the steps that legacy browser users could take to reasonably protect themselves are simple, free, and readily available. By allowing legacy web browsers to connect to their servers with SGC SSL certificates, server administrators enable a very small percentage of web users to access their sites, while putting the sensitive information that those users enter into their sites at heightened risk of misappropriation for malicious intent.
SGC No Longer the Answer In the late 1990s SGC SSL Certificates provided 128-bit encryption when it would not otherwise have been available, and Certificate Authorities were acting in the best interest of server administrators and the people who used their websites alike. However, today it is not only common, but normal, that browsers will provide 256-bit encryption without the use of SGC. "Older" web browsers encrypt at 128 bits, and lower encryption levels are all but unheard of. It is the belief of DigiCert that many Certificate Authorities that actively market SGC certificates as an SSL "upgrade" are knowingly engaging in deceptive business practices, sacrificing the integrity of their certificate services in exchange for corporate profit. The best protection a server administrator can offer to legacy browser users is to encourage them to upgrade to modern, secure browser versions. For most common server types, requiring more secure connections is as easy as clicking a checkbox. By requiring 256-bit secure connections, server administrators help to keep their users' private data secure by motivating those users with less secure browsers to upgrade to a more secure computing environment. We recommend that all server administrators currently managing an SGC certificate replace it with an Extended Validation Certificate, and encourage users to replace the older browsers that once relied on Server Gated Cryptography so that secure connections can be made with 2048-bit SSL encryption.
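As a rough, modern-day illustration of the advice above (not a DigiCert procedure; the hostname is a placeholder, and Python's ssl module shown here long postdates the SGC era), the following sketch refuses anything weaker than TLS 1.2 and reports what a server actually negotiates:

```python
import socket
import ssl

HOST = "example.com"  # placeholder hostname, not taken from the article

# Build a client context that rejects weak protocol versions outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # cipher() returns (cipher name, protocol version, key bits).
        name, version, bits = tls.cipher()
        print(f"Negotiated {version} with {name} ({bits}-bit keys)")
```

A legacy browser that cannot meet the minimum version simply fails the handshake, which is the "motivate users to upgrade" effect the article describes.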
<urn:uuid:7cb9eec0-7a1e-43d8-b8fd-8821603e839e>
CC-MAIN-2017-04
https://www.digicert.com/sgc-ssl-certificates.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00530-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928469
917
3.171875
3
The Role of Critical Thinking in Problem Analysis Critical thinking allows us to take control of our thinking rather than letting it become hijacked by convenience, mindset, assumptions, and bias. This white paper will walk you through understanding the implications of inputs (data) and influences (bias) to the reasoning process. You will learn how to develop a questioning outlook and quality standards that will lead you to making more effective decisions. Contrary to what the name implies, critical thinking is not thinking that is critical of others. It is “fundamental” or “vital” thinking. Critical thinking is thinking that drills down to the essence of a problem. It is introspective thinking that questions everything and everyone. Critical thinking should not be thought of as an effort to refute any particular choice or decision, but rather as a way to balance evidence, reason, and options. Critical thinkers make better decisions because they question their understanding of a subject before making a decision. They are aware of the tendency among decision makers toward lazy, superficial thinking and instead ask questions to illustrate their depth of understanding. Critical thinkers pursue reason and logic as the foundation for effective decision making. They “think hard” rather than thinking quickly. Asking questions about what we believe and why we believe it puts the extent of our real understanding (knowledge) into perspective. Introspective thinking reveals what we know and do not know for certain about a subject. It unveils the nature and significance of false assumptions and gaps in information. Questioning what you have been told by others may make it harder to make a decision, but the choice will ultimately be made with a fuller understanding of what is the best option in a given situation. What Is a Good Decision? The first paper in the Critical Thinking Series, What is a “Good” Decision? How is Quality Judged? , provides an explanation of how to judge the quality of decisions. In short, a decision is of high quality to the extent that the decision maker knows what risks they are taking by making that decision. They know how good or bad their information is and the biases inherent in their reasoning. A good decision does not necessarily turn out to be the best decision in hindsight, but is the choice with the best chance of being successful given what is known. The quality of a decision is determined by the quality and quantity of information being utilized and by the reasoning being employed to arrive at the decision. Incorrect and/or incomplete information and reasoning lead to erroneous predictions of future outcomes. A bad decision is one in which the decision maker was poorly informed, because of bad information, incomplete information, or faulty reasoning. The decision maker chooses between options without understanding everything they need to know about the pros and cons of each option, or even whether all options have been considered. They do not know how good or bad their information is. A high-quality (good) decision is based on a methodical analysis of the available information and on sound reasoning. Good decisions do not depend on luck. They are not just the result of “throwing the dice”; they are examples of well-informed risk-taking. The decision maker knows what they do not know and makes the best choice in light of this knowledge. 
Bias Gets in the Way The second paper in the critical thinking series, Managing Analytical bias – Why Good Decisions Don’t Come Easily, discusses the reason why much of our thinking is not particularly balanced. The natural tendency in decision making is: • to consider only those alternatives that are obvious • to analyze only the areas of uncertainty with which we are familiar • to quickly compare the known options through a haze of bias and assumptions. In general, intuitive or instinctual problem solving (which leads to decisions) is performed by trial and error. Even highly educated people typically muddle through problem analysis in a haphazard way. Most people are content with an occasional success and assume that no one else could do any better. Biased viewpoints are what prevent people from being objective in their analysis of a situation or problem that requires a decision. Bias is created by experience, education, and genetics. It is the expression of how one thinks and reasons about particular subjects. Bias, in its various forms, discourages us from being thorough in our problem analyses. It exaggerates our understanding of the factors that relate to a decision and encourages quick, poorly informed decisions. The influence of bias is always at play, undermining our ability to be truly objective. The Role of Critical Thinking So, good decisions are ones in which the decision maker understands what they do not know about what they must decide. However, people exaggerate what they think they know. Biased viewpoints encourage people to exaggerate their own knowledge and the validity of the information sources they are drawing on. The result is a lot of poorly informed, illogical decisions. The cure that is needed is a structured approach to thinking which will help to ensure balanced reasoning and informed choices. This cure is critical thinking.
<urn:uuid:6cdfdc9a-a1d1-4a0e-8848-0388f0e081c7>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/the-role-of-critical-thinking-in-problem-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00348-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955963
1,030
3.71875
4
One of the most important tools in a security professional's arsenal is the mighty 'sniffer'. Its power should never be underestimated, never undervalued. A sniffer is many things to many people. In the right hands it is invaluable, allowing for the analysis of complex traffic passing over the network; in the wrong hands it can be a destructive force, allowing for the capture of confidential or sensitive data as it flows on the wire. A sniffer, sometimes called a protocol or network analyser, is essentially hardware or software that can intercept or log traffic as it passes over the network. It will then decode that data traffic and present it in an easily understandable format, always in accordance with the particular protocol's specifications. A sniffer is the ultimate network 'wire-tap', offering an insight into the black art of computer conversations. The most common type of network is the Ethernet network. Ethernet was built on the principle that all computers on the same network will share the same 'wire'. As a result, it is potentially possible that any one computer on the network could see all of the traffic on that network, regardless of whether that traffic was destined for it or not. To overcome this possibility, all Ethernet hardware (your network card) is programmed with a 'filter' that instructs it to ignore packets that do not match its own MAC address. This has the effect of a single computer only receiving data that has been addressed directly to it, or to the whole network, like broadcast packets. With sniffing, we essentially turn off or disable the filter, forcing the card into what has been aptly named 'promiscuous mode'. When a network card is operating in promiscuous mode, as long as the traffic is on the same wire, it will see it. And therein lies its power. The sniffing software then translates the captured packets into something more easily understood and displays it in the usual array of fancy ways, depending on the particular software in use. Sniffers have a wide range of uses. Fault analysis and performance analysis are the two most obvious ways that the purchase of a commercial grade sniffer can be justified. Network intrusion detection is another benefit, in that devices running in promiscuous mode can monitor the network for unusual patterns of traffic, and create alerts or take action as appropriate. More sinister uses are the automatic sifting of clear text passwords from the network, or of clear text protocols such as SMTP (email) or HTTP (web). In fact, encrypted passwords can be captured, and cracked offline at a later stage. SMTP email is notoriously insecure, but despite repeated warnings many people persist in using email as a means to distribute confidential documents or information. A short, sharp wake-up call may be to demonstrate, through the use of a sniffer, exactly how easy it is for an unauthorised individual to capture SMTP email from a network. Many of the more technically adept among you will surely argue that with the advent of switched networks, sniffing, or at least unauthorised sniffing, has become a thing of the past. Not so. While a switch can provide a good defence against the casual sniffer, it is important to remember that a switch creates a 'broadcast domain', providing an attacker the ability to spoof ARP packets and thus gain access to all of the traffic on the wire. One of the best known exploits is to use "router redirection". ARP queries contain the correct IP-to-MAC mapping for the sender.
In order to reduce ARP traffic, and traffic in general on the network, computers cache the information that they read from the query broadcasts. A malicious attacker could redirect nearby machines to forward traffic through the attacker's own machine by sending out regular ARP packets containing the router's IP address mapped to its own MAC address. All the machines on the local wire will believe the hacker is the router, and therefore pass their traffic through him/her. Simple, but effective. A more aggressive, but equally effective, strategy would be to DoS a target victim and force it off the network, then begin using its IP address. If you picked your victim carefully, the rewards could be high! Defending against the rogue sniffer is never easy. As previously mentioned, a switched network will keep the casual sniffer at bay, but the more determined will overcome that obstacle. The most robust method of protection is to enforce the use of encrypted protocols. Replace Telnet with SSH, introduce SSL where possible, use only encrypted email like PGP or S/MIME. Use two-factor or biometric authentication. Unfortunately, due to the nature of Ethernet, sniffing and sniffers will be here for some time to come. There are a large number of sniffing tools available, many for free. The highly regarded and very free packet capture tool Ethereal is a great place to start, but there are many more. A recent and comprehensive list can be found here.
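To make the promiscuous-capture idea concrete, here is a minimal sniffer sketch in Python using a raw Linux packet socket. It is not one of the tools mentioned in the article, it requires root privileges, and fully enabling promiscuous mode generally also means setting the interface flag (for example with "ip link set eth0 promisc on"); treat it purely as an illustration of what such software does under the hood.

```python
import socket
import struct

# ETH_P_ALL (0x0003) asks the kernel for frames of every protocol, not just IP.
ETH_P_ALL = 0x0003

# A raw AF_PACKET socket receives whole Ethernet frames, which is essentially
# what the article means by bypassing the card's normal MAC-address filter.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

while True:
    frame, _ = sniffer.recvfrom(65535)
    # The first 14 bytes are the Ethernet header: destination MAC, source MAC, ethertype.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(f"{src.hex(':')} -> {dst.hex(':')}  type=0x{ethertype:04x}  {len(frame)} bytes")
```

Real analysers then go on to decode each protocol layer; this sketch stops at the Ethernet header to show how little stands between the wire and the software reading it.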
<urn:uuid:a95069a4-9f60-4e01-8a70-6c4cc5ddba21>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/01/09/the-mighty-sniffer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94676
1,029
2.921875
3
This is my unified theory of the computer market, technology and industry. I would like to call it the “Black whole theory”. So here goes, the universe, IT industry expands with new features, then as the weight of these features becomes commonplace or too large they collapse or get drawn into black holes (servers, operating systems etc). These features get pulled by gravity (standardisation) into the operating systems (OS), which is analogous to a planet. You have solar systems (hypervisors) or virtualized systems, planets etc grouped together. This continues in parallel with new universes (OS), planets (programs) and black wholes being continually created and collapsing. Therefore, similar to black holes, the CPU, server OS will ultimately swallow up all functions and features. Actually the operating system or hypervisor running on the CPU will turn into a black hole and do this. This is why we see rich storage features from VMware, Hyper-V and new filesystems like ZFS appearing and then becoming integrated and standard. We also see Virtual Storage Appliances (VSA) being created in different incarnations (parallel universes), even though strictly speaking these are not that closely integrated into the OS, just appear in different packages . Some problems with this. As Steven Hawking explains, some things can escape from black holes e.g. hawking radiation. Also, black holes are apparently grey. Therefore since everything is actually a wave when we get the bottom of things in places like CERN with the Higgs Boson, we have waves of centralization and decentralization. Today all these social programs such as youface and booktube (yes an applet is a program) are installed on our mobile computers, yes a mobile phone is a computer as is a tablet. These, planets are the ones that cannot sustain sufficient life and sometimes just move around a Sun or a solar sysem as a dead planet, but are often pretty when seen from a telescope. Marketing is also included within this unified theory, these are the asteroids (bad marketing), that is hard to see or understand or comets (good marketing as they bring water) that shine and move between the planets coming and going but really only carrying one story, which sometimes contains ice, precious metals or ugly useless jagged rocks. If you ever get hit by a marketing asteroid most of them just burn up pretty quickly, the big ones can be quite spectacular for a short time until they come around again, but usually it is the same message in a different part of the sky or just aimless drifting and trying to find something to do. Dark matter or programmers pull all of this together and using background radiation or creativity, but the universe continues to expand. This is all quite simple to understand if you look at everything at the instruction level (sub atomic), which after all is what all the programs (planets) are made from, be it servers, storage, networks or applications. Well anyway what this means for storage admins is that , if you are in a deeply virtualized environment you can now use the hypervisor to migrate data between old and new storage arrays. You can use these features as a wormhole avoid black holes and asteroids. For more details, read this: When Migrating Storage, Use the Tools in Server Virtualization Products . Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. 
<urn:uuid:b77f16c3-22dd-4cad-8fe1-fec9e291427a>
CC-MAIN-2017-04
http://blogs.gartner.com/valdis-filks/2014/02/17/my-black-hole-theory-of-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936951
788
2.5625
3
Big data is constantly growing and changing, and businesses need to adapt accordingly. To adapt successfully, businesses need agile data management processes. Data agility refers to the ability to adapt to the changing nature of big data successfully. The changing nature of big data reflects the dynamic nature of business. Companies constantly have to re-evaluate and modify strategies according to changing business realities. The situation represents a big shift from when companies could afford to stick with a particular strategy for some time. Data agility can be likened to physical agility. Physical agility enables you to become swifter in movements, avoid injuries, and respond to rapidly changing situations quickly. Similarly, data agility reflects how well a company is managing the big data it is collecting. This is much more than simply collecting and analyzing data. This is about using the data so that you are able to respond to change quickly and appropriately, which is a constant in today’s business world. Data agility enables companies to strategize based on insights and to evaluate and modify strategies in real time or as close to real time as possible. For example, instead of structuring data into static schemas, agile data analysis practices can help analysts explore different approaches to the structuring of data so that analysts find more ways of discovering the value hidden in data. This approach is known as schema-on-read. Another example of data agility is the retaining of unused data. Many businesses discard or neglect unused data, a practice that might deprive them of valuable, unearthed insights. The low price points of the Hadoop File System enables companies to retain the unused data. The following tips can help you adopt data agility and make it work: Assess the condition of data. First, your data must be ready to be used in agile mode. So begin by assessing the condition of your data. For example, if there is data duplication, too much junk, or outdated data, or if the quantity of data is not significant enough, you must address these issues before embarking on data agility adoption drive. Draw from all data sources. You need to document all types of customer data that your business produces from all sources, internal or external. You also need to analyze what types of data you are collecting or not collecting. After that, deploy and unify all the data in a customer data platform so that all data are available centrally, in real time. That platform should be the place where all data gets analyzed and you should be accessing the platform all the time. Real-time actions. For organizations adopting the data agility model, the old way of data collection and analysis is out. With the old model, data would be collected, stored, and analyzed, and then insights were collected. The entire process would take a lot of time. Now, organizations need to respond to frequent feedback and changes, and take actions almost instantly. The luxury of taking weeks or months to gather and normalize data into a data warehouse is a thing of the past. Foster a culture of agility. Fostering a culture of agility is as important as having updated tools and processes. You may have the best tools, processes, training programs, and technologies, but you also must have people who are excited about data agility and willing to put things into practice. Different departments need to collaborate and put the processes into practice. 
For example, the data warehouse team needs to be in sync with the data reporting team so that data additions are reported and analyzed instantly. Data Agility Adoption Challenges It goes without saying that organizations face a lot of challenges when adopting data agility. These challenges are multi-faceted: financial, process, human resources, and tools identification. Financial. Data agility adoption can involve hiring skilled personnel, a significant investment in software tools and technologies, and a lot of training. A lot of companies do not have the budget or financial resources to invest in data agility adoption. While they may make an honest attempt at processes and human resources, arranging finances is a challenge for them. Process. Existing processes are often incompatible with the standards that data agility requires. It can take a long time to fix the errors in existing processes and make them compatible with an agile data environment. An assessment of existing processes is required to identify necessary changes. The challenge is even bigger in organizations that have a lot of legacy processes. For example, analytics collection policies in the waterfall model that follow a sequential process are incompatible with a data agility model because by the time the data is taken for processing, it has become outdated. An agile data environment requires data to be collected in real time as much as possible. Human resources. This is perhaps the biggest challenge of all. Getting your employees to adopt the agile mindset is not easy, especially if something different had been practiced previously. The challenge is more visible in big organizations with a large employee base and legacy processes. Initiating change and maintaining the tempo are two of the biggest challenges. However, with a concerted effort at culture change, things can improve over time. Identifying the right tools. Organizations need to identify tools that help them adopt data agility in their own context. The choice of tools ideally needs to be based on certain principles. For example, the tools need to be able to process real-time or near real-time data and provide analytics; the framework needs to be flexible enough to accommodate changes and be platform independent; and the tools need to empower business users to create analytics based on their unique needs. You also need to have qualified people to handle the tools. For organizations that need to improve their ability to respond to changing business conditions and goals, an agile data approach is all but inevitable. A business environment that is more dynamic than ever demands the ability to use data to make decisions and modify strategies in real time. Kaushik Pal has more than 16 years of experience as a technical architect and software consultant in enterprise application and product development. He is interested in new technology and innovation areas, as well as technical writing. His main focus areas are web architecture, web technologies, Java/J2EE, open source, big data, cloud, and mobile technologies. You can find more of his work at techalpine.com.
<urn:uuid:fe87f9e3-0e7b-44a1-9d77-42d664a7b974>
CC-MAIN-2017-04
http://data-informed.com/adopting-an-agile-data-approach-tips-and-challenges/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945068
1,289
2.53125
3
Perhaps it was the concern that the nearly 14-ton Russian Mars probe would land smack-dab on the White House, or maybe they just came to their senses, but the US State Department today said it would indeed work with the European Union and other countries to develop a formal space code of conduct. Of particular concern is the growing amount of space trash and how the world can go about eliminating or controlling the problem. There is also the desire to keep space free of military weaponry. A statement from US Secretary of State Hillary Clinton this week read: The long-term sustainability of our space environment is at serious risk from space debris and irresponsible actors. Ensuring the stability, safety, and security of our space systems is of vital interest to the United States and the global community. These systems allow the free flow of information across platforms that open up our global markets, enhance weather forecasting and environmental monitoring, and enable global navigation and transportation. Unless the international community addresses these challenges, the environment around our planet will become increasingly hazardous to human spaceflight and satellite systems, which would create damaging consequences for all of us. In response to these challenges, the United States has decided to join with the European Union and other nations to develop an International Code of Conduct for Outer Space Activities. A Code of Conduct will help maintain the long-term sustainability, safety, stability, and security of space by establishing guidelines for the responsible use of space. As we begin this work, the United States has made clear to our partners that we will not enter into a code of conduct that in any way constrains our national security-related activities in space or our ability to protect the United States and our allies. We are, however, committed to working together to reverse the troubling trends that are damaging our space environment and to preserve the limitless benefits and promise of space for future generations. The European Union has in fact had a code since 2008, which sets standards for minimizing accidents, improving security and bolstering the ability of all countries to freely explore outer space. In 2010 the White House issued a National Space Policy, a document that emphasizes the Obama administration's desire to further commercialize space but also to ensure that the US and international partners have unfettered access to outer space. The policy reflects and expands upon what the White House has been espousing about space and its own space agency, NASA, since late 2009. According to the US National Space Policy: "The legacy of success in space and its transformation also presents new challenges. When the space age began, the opportunities to use space were limited to only a few nations, and there were limited consequences for irresponsible or unintentional behavior. Now, we find ourselves in a world where the benefits of space permeate almost every facet of our lives. The growth and evolution of the global economy has ushered in an ever-increasing number of nations and organizations using space. The now ubiquitous and interconnected nature of space capabilities and the world's growing dependence on them mean that irresponsible acts in space can have damaging consequences for all of us.
For example, decades of space activity have littered Earth's orbit with debris; and as the world's space-faring nations continue to increase activities in space, the chance for a collision increases correspondingly." It seems, though, that it took a spate of satellite crashes -- the Russian Phobos Grunt probe, which crashed back to Earth last week, and now another Russian satellite, Cosmos 2176, expected to reenter the atmosphere this week -- not to mention the nearly 6-ton NASA Upper Atmosphere Research Satellite that crashed last year -- to get the US onboard with other nations to clean up their space acts. Still, some question the US commitment to the process, since as recently as last week administration officials called the EU's policies too restrictive.
<urn:uuid:09ecb24f-f9ca-4694-b225-0aff41cfe86f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2221492/security/us-finally-backs-an-international-space--code-of-conduct-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941215
796
3.15625
3
If you think having a computer isolated from the Internet and other computers will keep you "safe," then think again. The same security researchers who came out with Air-Hopper have announced BitWhisper as another method to breach air-gapped systems. This time the Cyber Security Research Center at Ben-Gurion University in Israel jump the air-gap by using heat. The researchers explained the proof-of-concept attack as: BitWhisper is a demonstration for a covert bi-directional communication channel between two close by air-gapped computers communicating via heat. The method allows bridging the air-gap between the two physically adjacent and compromised computers using their heat emissions and built-in thermal sensors to communicate. Computers monitor temperature via "built-in thermal sensors to detect heat" and to trigger internal fans to cool the PC down. BitWhisper utilizes those sensors "to send commands to an air-gapped system or siphon data from it." In the video below, researchers demonstrate "BitWhisper: Covert Signaling Channel between Air-Gapped Computers using Thermal Manipulations." It shows the computer on the left emitting heat and sending a "rotate command" to a toy missile launcher connected to the adjacent air-gapped PC on the right. The Cyber Security Research Center said: The scenario of two adjacent computers is very prevalent in many organizations in which two computers are situated on a single desk, one being connected to the internal network and the other one connected to the Internet. The method demonstrated can serve both for data leakage for low data packages and for command and control. The researchers said they will publish the full research paper "soon." For now, regarding BitWhisper, they pointed to a Wired article that explains that in order for a BitWhisper attack to be successful, both computers must be compromised with malware and the air-gapped system must be within 40 centimeters from the computer controlled by an attacker. The researchers said only "eight bits of data can be reliably transmitted over an hour," but that's enough to steal a password or a secret key. They added that "future research" might involve "using the Internet of Things as an attack vector—an internet-connected heating and air conditioning system or a fax machine that's remotely accessible and can be compromised to emit controlled fluctuations in temperature." Wired's Kim Zetter explained that the BitWhisper attack works somewhat like Morse code, "with the transmitting PC using increased heat to communicate to the receiving PC, which uses its built-in thermal sensors to then detect the temperature changes and translate them into a binary '1' or '0'." She added: The malware on each system can be designed to search for nearby PCs by instructing an infected system to periodically emit a thermal ping—to determine, for example, when a government employee has placed his infected laptop next to a classified desktop system. The two systems would then engage in a handshake, involving a sequence of "thermal pings" of +1C degrees each, to establish a connection. But in situations where the internet-connected computer and the air-gapped one are in close proximity for an ongoing period, the malware could simply be designed to initiate a data transmission automatically at a specified time—perhaps at midnight when no one's working to avoid detection—without needing to conduct a handshake each time. 
Air-Hopper method to breach air-gapped systems Georgia Tech exploited side-channel signals to steal from 'air-gapped' PCs In January, Georgia Institute of Technology researchers said don't feel smugly safe if you are typing away but didn't connect to a coffee shop's Wi-Fi. "The bad guys may be able to see what you're doing just by analyzing the low-power electronic signals your laptop emits even when it's not connected to the Internet." They explained how keystrokes could be captured from a disconnected PC by exploiting side-channel signals (pdf). "People are focused on security for the Internet and on the wireless communication side, but we are concerned with what can be learned from your computer without it intentionally sending anything," said Georgia Tech assistant professor Alenka Zajic. "Even if you have the Internet connection disabled, you are still emanating information that somebody could use to attack your computer or smartphone." Zajic demonstrated by typing "a simulated password on one laptop that was not connected to the Internet. On the other side of a wall, a colleague using another disconnected laptop read the password as it was being typed by intercepting side-channel signals produced by the first laptop's keyboard software, which had been modified to make the characters easier to identify."
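As a purely conceptual illustration of the Morse-code-like signalling Wired describes, the receiver's job can be thought of as thresholding temperature deltas sampled from its own sensors once per agreed time slot. The sketch below is not the researchers' code: the sample values, baseline and +1 °C threshold are invented for the example, and it glosses over timing, noise and the handshake entirely.

```python
# Hypothetical per-slot temperature samples (degrees C) read from a thermal sensor.
samples = [41.0, 42.1, 41.0, 40.9, 42.2, 42.1, 41.0, 40.8, 42.0]

BASELINE = 41.0   # idle temperature, assumed to have been learned beforehand
THRESHOLD = 1.0   # the +1 C "thermal ping" described in the handshake above

def decode_bits(readings, baseline, threshold):
    """Translate per-slot temperature deltas into a string of bits."""
    return "".join("1" if r - baseline >= threshold else "0" for r in readings)

print(decode_bits(samples, BASELINE, THRESHOLD))  # prints "010011001" for the data above
```

At the reported rate of roughly eight reliable bits per hour, even this trivial decoding loop is enough to leak a short password or key over a few days of proximity.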
<urn:uuid:66e27576-167d-46df-8887-9a523d8b5f75>
CC-MAIN-2017-04
http://www.networkworld.com/article/2900219/microsoft-subnet/bitwhisper-attack-on-air-gapped-pcs-uses-heat-to-steal-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942968
961
3.296875
3
To comply with the Obama administration’s recent proposal to combat climate change, Texas must slash carbon emissions from its power plants by as much as 195 billion pounds in the next 18 years — a total more than any other state — according to a Texas Tribune analysis. According to the newly drafted regulations, if Texas were in compliance, its power sector would emit 329 billion pounds of carbon dioxide per year starting around 2030. Compared with the 524 billion pounds that the state’s generating plants spewed out last year, that would be about a 43 percent reduction. While that is among the larger cuts required, more than a dozen states — including New York, Louisiana and Washington — must reduce their emissions by larger percentages. Texas politicians have already blasted the regulations — which will not take effect for at least three years — and are likely to challenge them in court. Gov. Rick Perry called the proposal “the most direct assault yet on the energy providers that employ thousands of Americans, and fuel both our homes and our nation’s economic growth.” But environmental and health advocates say the rules would help fight climate change and improve public health. If the rules move forward as proposed after months of public hearings and negotiations, Texas will not have an easy time meeting its carbon target. But the goals are also based, in part, on current trends in the state’s power sector, and the Environmental Protection Agency says Texas and other states can basically achieve their goals however they want. Here is how the agency calculated those targets: 1. Make existing coal plants more efficient. Texas’ coal plants are some of the worst polluters in the country, emitting a whopping 2,239 pounds of carbon dioxide per megawatt hour of energy produced. (One megawatt hour can power 500 typical Texas homes for an hour during mild weather.) With better technology and newer equipment, the EPA thinks U.S. coal plants can become 6 percent more efficient on average by 2020, and stay that efficient through 2030. Given that Texas' coal plants are relatively inefficient, the EPA argues, some upgrades are reasonable. But the agency estimates that this step alone would save just 19 billion pounds of carbon a year starting in 2020 and continuing through the next decade. And questions remain about how many plants will simply retire rather than spend money on new equipment — particularly as coal competes with cheap, lower-carbon natural gas. 2. Switch from coal to natural gas. This is the most significant — and most controversial — aspect of the EPA’s proposed rules. In 2012, Texas coal plants generated 139 million megawatt hours of electricity. The EPA thinks almost half of that could be pushed out by natural gas, a much cleaner fuel source, in the coming decades. (Natural gas plants emit only 837 pounds of carbon dioxide per megawatt hour of electricity produced.) This step, along with more efficient coal plants, could save about 100 billion pounds of carbon dioxide emissions per year, the EPA says. It’s unclear how realistic that projection is. Across the U.S., cheap, abundant natural gas has steadily eaten into coal’s market share in recent years. According to the U.S. Energy Information Administration (EIA), natural gas’ share of the state’s energy portfolio increased slightly from 48 percent in 2008 to 50 percent in 2012, while coal dropped from 36 percent to 32 percent. 
However, more recent data from the Electric Reliability Council of Texas, manager of the grid covering most of the state, shows that coal's share of that grid actually increased in 2013, partly reversing two years of natural gas' gains. Given that Texas continually deals with concerns about the electric grid, which has seen threats of rolling blackouts in recent years, some experts question whether Texas is ready to replace half of its coal fleet with natural gas, especially with the EPA expecting this to happen by 2020. The state is among those least able to cope with capacity losses, said William Nelson, an analyst at Bloomberg New Energy Finance. 3. Add more renewable energy sources. A lot more. The EPA thinks Texas can generate 47 million megawatt hours of electricity from renewable sources starting in 2020. After that, the EPA says the state could increase that number each year so that eventually 20 percent of Texas' electricity would be generated from sources such as wind and solar. In 2012, according to the EIA, Texas generated about 32 million megawatt hours of electricity from wind, and very little from other renewable sources. That's almost double what was produced in 2008, and environmental advocates have suggested there's much more room to grow. The EPA projects that Texas will generate 53 million additional megawatt hours of electricity by 2029. That could save tens of billions more pounds of carbon dioxide per year, assuming that carbon-free renewables were replacing plants that run on coal or even natural gas. Still, wind and solar are intermittent fuels — in other words, it isn't always windy or sunny. As a result, it's unclear whether such a large shift to renewables would jeopardize Texas' ability to keep its lights on during its hottest days, when air conditioners run on full blast. 4. Increase energy efficiency measures to reduce electricity consumption overall. This is another area where the EPA and many experts believe Texas has a lot of room to grow. The EPA estimates Texas could cut 1.78 percent of its current energy demand through efficiency measures — such as using better light bulbs and appliances and upgrading buildings. Continuing that pace through 2029 would yield 10 percent savings, the agency says, reducing the need for coal and gas generation. Energy experts say consumers have played a major role in slowing the growth in energy demand. Household energy use has declined nationally for two decades, according to federal data. In a February report, ERCOT said Texas' peak power demand was growing more slowly than previously thought, partly because of changes in consumer behavior. Small changes can have a big impact. For instance, buildings account for almost 40 percent of the state's total energy use and 70 percent of its electricity consumption, according to the State Energy Conservation Office. Changes to building codes could therefore have a major impact on the state's energy needs. More than 20 Texas cities — including Houston, Austin, College Station and Denison — have recently tightened construction standards, and environmental groups have pressed state officials to overhaul the statewide code. Another promising conservation tool is "demand response," which relies on high-tech thermostats and meters that allow utilities to power down air conditioners, heaters or pool pumps when demand peaks.
A 2012 report by the Brattle Group, a consulting firm, calculated that demand response shaved about 4 percent of energy use during peak demand times in Texas. But if the state took steps to remove barriers to expansion, the report said, that number could reach as high as 15 percent. While some utilities are taking steps to bolster demand response, efficiency advocates say huge obstacles remain. The biggest is that demand response companies cannot participate in ERCOT's wholesale energy market. Outside of a few other small tweaks, those are the basic assumptions the EPA made to decide how much Texas should cut its carbon emissions from power plants in the next two decades. They clearly mean change for Texas and its power grid, although it's still an open question how difficult those changes would be — and to what extent economic factors were already driving some of them. These rules themselves could see revision before Texas has to officially tell the EPA how it plans to comply, so these numbers will probably change. And the assumptions could change, too: Natural gas prices could rise significantly, Texas could grow more or less than expected, and unforeseen weather or disasters could impact energy prices and consumption.
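The headline savings from the coal-to-gas step can be checked against the article's own numbers. The short Python sketch below is just that arithmetic; the 50 percent displacement share is the EPA assumption described above, not an independent estimate.

coal_rate = 2239              # pounds of CO2 per MWh from Texas coal plants
gas_rate = 837                # pounds of CO2 per MWh from natural gas plants
coal_generation_mwh = 139e6   # Texas coal-fired generation in 2012, in MWh
displaced_share = 0.5         # EPA assumption: roughly half pushed out by gas

savings_lbs = coal_generation_mwh * displaced_share * (coal_rate - gas_rate)
print(f"{savings_lbs / 1e9:.0f} billion pounds of CO2 avoided per year")
# ~97 billion pounds, consistent with the EPA's "about 100 billion" figure
# once the coal-efficiency step is folded in.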
<urn:uuid:dd644c1d-41d7-4b2c-b268-832b98fb6f25>
CC-MAIN-2017-04
http://www.govtech.com/state/What-Texas-Could-Do-to-Follow-Climate-Change-Rules.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00394-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959237
1,606
3.34375
3
Authorization is a process where a system or application makes a run-time decision about whether to allow a user to perform some function or access some data. Authorization decisions generally depend on the identity of the user wishing to perform the action, the action which he wishes to perform, the security entitlements which the user has been assigned and the data on which he wishes to perform the action. In some cases, the decision may also depend on contextual information such as the user's location, the time or date or the type of device the user used to connect to the application. Authorization decisions may be made by application logic, by access controls inside a database that supports an application or by a stand-alone access control engine. They are made by evaluating a security model, with the most popular models being:
- Security groups -- where users are attached to groups and groups are granted rights to perform actions. On some systems, groups may be nested, meaning that they can contain other groups as members.
- Role-based access control -- where users are assigned roles and roles are assigned collections of entitlements. On some systems, roles may be nested, meaning that parent roles may contain child roles. This implies that users who are granted a parent role also get the child role's entitlements.
- The difference between roles and groups is somewhat subjective where nesting is not a factor. Roles are generally considered to be more representative of "everything a user performing a given job function needs" while groups tend to be more representative of "a set of entitlements that are normally assigned together, but which are typically not a comprehensive list of what a user requires." Where nesting is at play, the difference is more concrete -- with groups, it is the set of users who are nested, while with roles, it is the set of entitlements which are nested.
- Attribute-based access control (ABAC) replaces the explicit assignment of entitlements to individual users or groups of users with an implicit model. Whether a user gets a given entitlement depends on some characteristics of the user -- his name, location, department code, job code, etc. The idea is that as identity attributes are adjusted, correct entitlements are automatically granted or revoked.
Authorization should not be confused with identity administration, which is the process used to define and manage identities and to assign entitlements to users. The former is a run-time enforcement while the latter refers to updating directories with business-appropriate identity and privilege data.
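As a rough illustration of the group and role models described above, here is a minimal sketch in Python. The user names, role names and entitlement strings are hypothetical examples, and a real access control engine would also evaluate contextual attributes such as location or time of day.

# Minimal sketch of role-based authorization with nested roles.
# All names (users, roles, entitlements) are hypothetical.
ROLE_ENTITLEMENTS = {
    "clerk":   {"invoice.read"},
    "manager": {"invoice.approve"},
}
ROLE_CHILDREN = {
    "manager": {"clerk"},   # a parent role implies its child roles' entitlements
}
USER_ROLES = {
    "alice": {"manager"},
    "bob":   {"clerk"},
}

def entitlements_for(role, seen=None):
    """Collect a role's entitlements, including those of nested child roles."""
    if seen is None:
        seen = set()
    if role in seen:                      # guard against cyclic role definitions
        return set()
    seen.add(role)
    result = set(ROLE_ENTITLEMENTS.get(role, set()))
    for child in ROLE_CHILDREN.get(role, set()):
        result |= entitlements_for(child, seen)
    return result

def is_authorized(user, entitlement):
    """Run-time decision: does any of the user's roles grant the entitlement?"""
    return any(entitlement in entitlements_for(r) for r in USER_ROLES.get(user, set()))

print(is_authorized("alice", "invoice.read"))    # True, inherited via the nested clerk role
print(is_authorized("bob", "invoice.approve"))   # False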
<urn:uuid:e9d98c3a-5ae1-4ea8-a628-d6e02134bf95>
CC-MAIN-2017-04
http://hitachi-id.com/resource/concepts/authorization.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920859
545
3.171875
3
Definition: A function which is not defined for some inputs of the right type, that is, for some elements of its domain. For instance, division is a partial function, since division by 0 is undefined (on the reals). See also total function, partial recursive function. Note: This definition treats a function as either partial or total, but not both. Some authors consider all functions to be partial, with total functions as a special case. Wikipedia entry for partial function. Entry modified 13 September 2007. Cite this as: Paul E. Black, "partial function", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 13 September 2007. Available from: http://www.nist.gov/dads/HTML/partialfunct.html
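To make the distinction concrete, here is a small illustrative sketch in Python (the function names are ours, not part of the dictionary entry): division is partial because it is undefined at zero, and it can be made total by enlarging the codomain with None.

from typing import Optional

def divide(x: float, y: float) -> float:
    """A partial function on the reals: undefined (raises) when y == 0."""
    return x / y                      # raises ZeroDivisionError for y == 0

def divide_total(x: float, y: float) -> Optional[float]:
    """A total variant: defined for every input, returning None exactly where
    the partial function above is undefined."""
    return None if y == 0 else x / y

print(divide_total(1.0, 0.0))         # None, the point where 'divide' is undefined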
<urn:uuid:46ac5c0d-4831-4b6f-886d-1d4fb258fa9f>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/partialfunct.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00210-ip-10-171-10-70.ec2.internal.warc.gz
en
0.882193
211
3.0625
3
It is January 2020.
1. You receive a credit card bill showing a billion dollars because you are 99 years overdue on your pending credit payment of $1000.00 from previous months, and the credit card has been cancelled.
2. Your attempt to renew the insurance on your 3-year-old car is denied on the grounds that you own a 97-year-old car.
3. The last trade you did on Dec 31, 2019 is suddenly shown as 99 years overdue on Jan 2, 2020, and the account has been disabled.
4. The check deposited in your bank account after the work shift on Dec 31, 2019 is returned as expired, because the check appears to be over 99 years old.
Consumers may face these scenarios because of the solution many companies implemented during their Year 2000 remediation, a problem that will rise again like the legendary phoenix. The underlying “error” was the way programs were written in the early eras of computing, when storage was extremely expensive. Programmers focused on writing tightly packed code to ensure that space was saved. One of the common methods was to save two bytes of storage by leaving out the century and using a 2-digit year format. The early programmers never expected that their programs would last 20 years, let alone 30, 40, or 50 years. To address the Y2K problem, IT organizations implemented several solution options, depending on when they initiated their Year 2000 program. The early birds either expanded the 2-digit year to a 4-digit year (six-digit date to eight-digit date format) or rewrote their applications completely. The later ones implemented the windowing solution: they created a 100-year window with logic that assumes two-digit years in the first part of the window belong to the “19” century and those in the second part to the “20” century. For example, if a company incorporated a modified “Pareto Rule” window of 80/20, the program interprets the year 21 as 1921 and the year 18 as 2018. Though other windows like 75/25, 60/40 and 50/50 were implemented, the predominant one was the 80/20 rule. Another reason many companies chose windowing was the additional effort involved in migrating the data stored in flat files. At the time, with nascent technologies like the web, distributed services, mobile and device computing, and new products looking to overwhelm the earlier monolithic applications, management assumed that all their applications would be replaced or retired well before the year 2020.
Break from the past
The windowing solution implemented in many companies has given them a second opportunity to replace their existing IT systems. These companies today have better options for their migration or rewrite than the early birds had during Y2K. With over four years ahead of them, companies can plan a gradual changeover and at the same time make their systems future-ready. This provides an opportunity for IT to lead, instead of playing catch-up with the business. Companies should leverage this opportunity to break away from the past and pursue long-term solutions aligned with strategic corporate goals by rewriting the applications using technologies like cloud, device computing and mobile. The biggest challenge would be to get management acceptance even to perform an impact analysis, especially after the post-Year 2000 backlash against the doomsayers. The second challenge is that the computing environment has become a lot more complicated than it was 20 years ago. 
Companies are running old code in new environments; the applications have been distributed across platforms; and the core applications have become a “black box” behind virtual environments (or wrappers). The third challenge would be finding resources conversant with the legacy applications, as most of these applications have only a skeleton staff doing routine maintenance. The final challenge would be the constant struggle to keep up with the extremely dynamic nature of business while finding the funding to revisit an old problem. To overcome these challenges, companies that commence the journey today should be able to adjust their regular development and maintenance budgets to rewrite these applications, meet their business objectives and, at the same time, make their applications future-ready. Nevertheless, to avoid Armageddon, start early.
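The fixed-window logic described earlier in this article can be sketched in a few lines. This is an illustrative example, not any vendor's actual remediation code; the pivot value of 20 is inferred from the article's own 1921/2018 example of the "80/20" rule.

def expand_year(yy: int, pivot: int = 20) -> int:
    """The '80/20' fixed-window rule: two-digit years below the pivot are read
    as 20xx (the 20 percent of the window), the rest as 19xx (the 80 percent)."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(21))    # 1921, as in the article's example
print(expand_year(18))    # 2018

# The looming failure: in January 2020 the current year "20" falls on the wrong
# side of the pivot, so a trade dated "19" (2019) suddenly looks 99 years newer
# than "today" (read as 1920), and anything due in "20" appears 99 years overdue.
print(expand_year(20))                        # 1920, not 2020
print(expand_year(19) - expand_year(20))      # 99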
<urn:uuid:7cc922df-a153-44b7-984f-5ede29efcec0>
CC-MAIN-2017-04
https://www.capgemini.com/blog/capping-it-off/2016/09/year-2000-rise-of-the-phoenix
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963649
863
2.609375
3
During the course of a recent security audit I was rather surprised to find a critical system still running with a default password. The default password has long been the bugbear of many a security admin. At the same time it has been the savior of many of us at one stage or another, desperately locked out of a system, urgently requiring access, with no clue as to what the password is, or might be. The default password is generally installed by the manufacturer, most often on hardware devices such as routers and wireless access points, but also by software application developers and even on some operating systems, although this is becoming less and less commonplace. The default password exists to allow an administrator initial access, for setup and configuration, and you are generally forced, or at least you should be, to change the password to something more complicated as the configuration advances. Unfortunately, this is not a step that everyone takes. Worse again, there have been numerous accounts of software and hardware products that have ‘undocumented’ administrative accounts installed. So, even if you took the conscientious step of removing or changing what you thought was the default, you may still be exposed. Take Oracle for example. Pete Finnegan, the self-confessed master of all things Oracle, maintains a web page devoted to the Oracle default password. At last count, there are more than 600 unique accounts in his list. Mr. Finnegan has some interesting views on how many of these accounts come to be created in the first instance. He says some “are created by Oracle itself when the database is created. For instance the accounts SYS and SYSTEM, DBSNMP and OUTLN are often created by default when a database is created. If the database is created by using the wizard the problem can be much bigger with 10s or 20s of accounts being created simply as part of the database creation”. It is also the case that further Oracle default users can be created when third-party software such as BAAN or SAP is installed. The same issues of default users being added to the database can occur when third-party development or maintenance tools are added, such as TOAD or PL/SQL Developer. An excellent tool that will scan your Oracle implementation for signs of default accounts can be downloaded here. If your organization uses Oracle, there is a strong chance that you will be susceptible. It is easy to lay some of the blame at the door of the manufacturer. They could be accused of shipping product with poorly configured security settings. Let’s face it: it is not hard for them to force the user to change the initial configuration password. But that alone is not enough. What about the ‘undocumented’ password, the one that you don’t even know about? There are resources available on the Internet that allow you to audit your network devices and software applications. This should be performed as part of your yearly audit schedule. A simple Google search for ‘default password list’ yields hundreds of sites that claim to have the most comprehensive database of default passwords. One of the oldest, and still reliable, can be found here. It makes for some interesting reading and is regularly updated. Whatever the organization, whatever the choice of software or hardware vendor, the default password is likely to raise its ugly head from time to time. Be proactive and get scanning. You will be amazed at what you may find.
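As a rough illustration of the kind of audit recommended above, here is a hedged Python sketch that compares account names pulled from a system against a list of known default accounts. The account list and the extracted names are hypothetical placeholders, not the actual Oracle default password list, and a real audit would also verify whether the default passwords still work.

# Illustrative sketch: flag accounts whose names appear on a known-defaults list.
KNOWN_DEFAULT_ACCOUNTS = {"SYS", "SYSTEM", "DBSNMP", "OUTLN", "SCOTT"}

# Hypothetical list of accounts extracted from a database or device; in practice
# this would come from a query or an inventory export.
extracted_accounts = ["SYSTEM", "APPUSER", "dbsnmp", "jsmith"]

flagged = sorted(a for a in extracted_accounts if a.upper() in KNOWN_DEFAULT_ACCOUNTS)
for account in flagged:
    print(f"Review required: default account '{account}' is still present")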
<urn:uuid:be8dd79d-4442-42f0-83ce-9171e3a87080>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/10/01/beware-the-default-password/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00238-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951667
696
2.671875
3
Data centers are the backbone of today’s digital economy, and yet far too many are vulnerable to advanced attacks. Despite $13 billion spent every year to secure them,* attackers are compromising data centers in a flood of cyber assaults. In many breaches, compromised data centers are used in attacks against new targets. These attacks are disrupting business, stealing priceless customer data and intellectual property — and damaging reputations in their wake. Read this paper to learn why so many attacks against data centers are successful and how organizations can better protect them. You will learn: In many cases, compromised targets unwittingly become attackers themselves. At the bidding of cybercriminals who can control compromised systems remotely, the data centers are commandeered as potent weapons in attacks against fresh targets. In one example of this trend, the U.S. Department of Labor became an involuntary attacker in May 2013 when attackers compromised one of its Web pages. Site visitors received malware that exploited a zero-day vulnerability in Internet Explorer 8 to install a variant of the Poison Ivy remote-access Trojan (RAT). To read more, complete the form to the right. *IDC. “Worldwide Datacenter Security 2012–2016 Forecast: Protecting the Heart of the Enterprise 3rd Platform.” November 2012. Download the Report
<urn:uuid:0af6dd1d-81f6-4aac-b607-e0aa276f1cda>
CC-MAIN-2017-04
https://www2.fireeye.com/wp-data-center-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944572
269
2.515625
3
University of Maryland researchers are challenging the validity of findings recently published in the Journal of the American Medical Association suggesting that radiation from cell phones can produce biological changes in the brain. The researchers in the A. James Clark School of Engineering also have raised concerns over a World Health Organization pronouncement that classifies radiation from cell phones as a possible carcinogen. In a letter written to the Editor of the Journal of the AMA, the researchers offered a critique of a paper by N.D. Volkow, D. Tomasi and G.J. Wang at a time when the U.S. Supreme Court is reportedly considering whether to hear a class-action lawsuit against cellphone manufacturers over safety risks. Christopher Davis, a professor of electrical and computer engineering, and Quirino Balzano, a senior research scientist, noted “that the highest temperature elevations that occur in the brain during cell phone use as a result of radio frequency fields from the cell phone are on the order of 0.1°C to 0.2°C, and that these temperature elevations are smaller than those resulting from physical activity," the A. James Clark School of Engineering stated in a news release. The researchers further “argued that the study did not evaluate the exposure of the brain to the fields from the cell phone correctly, so a causal relation between the radiofrequency signal and the effect detected by Volkow … has no valid experimental support." The controversy over a cell phone’s potential hazards has reached the nation’s highest court. The Supreme Court has asked the Department of Justice for its feedback on whether the judges should hear a class-action suit against 19 defendants, mostly cellphone manufacturers and telecom companies, Reuters reported. The suit, which was dismissed by a lower appeals court on the basis that the plaintiffs’ claims were preempted by federal law, alleges that the defendants misrepresented that their cell phones are safe when they were aware of the potential dangers.
<urn:uuid:081b562a-281c-4fae-974c-3b775588aed0>
CC-MAIN-2017-04
http://www.channelpartnersonline.com/news/2011/06/researchers-challenge-findings-over-cell-phone-s.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00568-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956876
399
2.875
3
Which of the following is NOT a method of Load Balancing with VPN-1/FireWall-1?
- Domain Load Balancing
- Quantum Load Balancing

Load Balancing Algorithms
Now that you’ve learned about the methodologies the logical server/firewall uses to route traffic, you need to consider the algorithms used to decide which server in the server farm will get the load-balanced connection. Check Point provides five algorithms for the logical server; the administrator decides which of these algorithms to use. The algorithms are called server load, round trip, round robin, random, and domain. We’ll describe these algorithms next. The server load algorithm, shown in the figure below, works in conjunction with a load agent that runs on each server in the server farm. The load agent is a small program that communicates to the firewall how busy the machine is. The machine with the lightest load is sent the next packet. You can download this load agent from Check Point’s website (only available for Solaris) or write one using the OPSEC APIs provided by Check Point on the OPSEC website (www.opsec.com). The load agent uses UDP port 18212 by default. The firewall checks the load on each server at the configured time and passes the connection to the server that has the lightest load. The round trip algorithm uses ping to decide which server gets the request, as depicted in the figure below. The round trip algorithm is much simpler than the server load algorithm, but not as intuitive: it cannot measure the load on the servers. Therefore, the round trip algorithm’s decision is based solely on network factors rather than the server load. When you use round trip, the server with the least traffic will answer first. The server with the most traffic will be too busy to answer, and the packet will be delivered to the machine that answers first. The drawback to using the round trip method is that the server closest to the firewall usually gets the connection. The round robin algorithm, shown in the figure below, is not very intelligent. This algorithm begins with the first server in the server farm and gives it the first connection. The second connection goes to the second server in the server farm, the third goes to the third, and so on. When the algorithm reaches the bottom of the list, it starts over. Next in the list of load balancing algorithms is random. Do you remember the method you used to choose teams when you were a kid? Eenie, Meanie, Minie, Mo! That is the same method the firewall uses. The random algorithm is illustrated in the figure below. There is an issue with the domain algorithm. Check Point doesn’t recommend using it, because it creates a noticeable delay for requests due to the required reverse DNS lookups. In today’s e-business environment, any delay experienced by users accessing your website could be disastrous. This algorithm was originally designed for clients in Europe and the rest of the world, where URLs end with country-code domains (such as .uk or .fr). For example, in the figure above, if a client in the U.K. is trying to connect to a website for a global company based in France, the initial connection goes to the logical server in France. At this point, the closest server is in France, and it would be “logical” to send the connection to the server in France. Unfortunately, the domain algorithm will send packets back to the client in the U.K. and redirect them to the server located in the U.K., wasting precious time in the connection setup. 
This is an effective method only if all your servers are located in Europe and the client is also located in Europe. To sum up, Check Point offers five algorithms, but in our opinion, only one is a true load balancing method. The server load algorithm is the only method that takes into account the actual load on each server. The rest of the algorithms don’t consider how busy each server is in the server farm. As the administrator, you should check out all methods of load balancing (both Check Point and non-Check Point) before deciding which one is best for your environment.
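To make the difference between the simpler algorithms concrete, here is an illustrative Python sketch, not Check Point's implementation, using a hypothetical three-server farm. The load numbers fed to the server-load-style selection are invented; in the real product a load agent on each server reports them over UDP port 18212.

import itertools
import random

SERVERS = ["web1", "web2", "web3"]           # hypothetical server farm

# Round robin: hand out servers in order, wrapping back to the start of the list.
_rotation = itertools.cycle(SERVERS)
def round_robin():
    return next(_rotation)

# Random: "eenie, meanie, minie, mo" -- pick any server with equal probability.
def random_choice():
    return random.choice(SERVERS)

# Server load: pick whichever server currently reports the lightest load.
def least_loaded(reported_loads):
    return min(reported_loads, key=reported_loads.get)

print([round_robin() for _ in range(4)])                          # ['web1', 'web2', 'web3', 'web1']
print(random_choice())                                            # any of the three servers
print(least_loaded({"web1": 0.82, "web2": 0.31, "web3": 0.55}))   # 'web2'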
<urn:uuid:10af71e4-69ac-4e4e-9ff5-e2c2c311efbd>
CC-MAIN-2017-04
http://www.aiotestking.com/checkpoint/which-of-the-following-is-not-a-method-of-load-balancing-with-vpn-1firewall-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00505-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895976
915
3.375
3
FTTH (fiber to the home) networks are installed in many areas, covering the indoor section, the outdoor section, and the transition in between. To fulfill the cabling requirements of these different areas, different types of fiber optic cables have been developed. Drop cable, an important part of the FTTH network, forms the final external link between the subscriber and the feeder cable. This blog post will focus on this special outdoor fiber optic cable. Drop cables, as previously mentioned, are located on the subscriber end to connect the terminal of a distribution cable to a subscriber’s premises. They are typically small-diameter, low-fiber-count cables with limited unsupported span lengths, which can be installed aerially, underground or buried. As it is used outdoors, drop cable should have a minimum pull strength of 1335 Newtons according to the industry standard. Drop cables are available in many different types. The following part introduces the three most commonly used drop cables, divided according to cable structure. Flat Type Drop Cable, also known as flat drop cable, has a flat profile and usually consists of a polyethylene jacket, several fibers and two dielectric strength members to give high crush resistance. Drop cable usually contains one or two fibers; however, drop cables with fiber counts of 12 or more are also available now. The following picture shows the cross section of a flat drop cable with 2 fibers. Figure-8 Aerial Drop Cable is a self-supporting cable fixed to a steel wire, designed for easy and economical aerial installation in outdoor applications, as shown in the following picture. Typical fiber counts of figure-8 drop cable are 2 to 48. Tensile load is typically 6000 Newtons. Round Drop Cable usually contains a single bend-insensitive fiber, buffered and surrounded by dielectric strength members and an outer jacket, which can provide durability and reliability in the drop segment of the network. The following shows the cross section of a round drop cable with one tight-buffered optical fiber. It’s necessary to choose the right overall architecture for an FTTH network. However, drop cable, as the final connection from the fiber optic network to the customer premises, also plays an important role. Thus, finding a flexible, efficient and economical drop cable connectivity method becomes a crucial part of broadband service. Should you use a fiber optic connector, which can be easily mated and un-mated by hand, or a splice, which is a permanent joint? The following will offer the answer and the solutions for your applications. It is known that a splice, which as a permanent joint eliminates the possibility of the connection point becoming damaged or dirty, has better optical performance than a fiber optic connector. However, splices lack operational flexibility compared with fiber optic connectors. A fiber optic connector can provide an access point for network testing, which cannot be provided by splicing. Both methods have their own pros and cons. Generally, splicing is recommended for drop cables in places where no future fiber rearrangement is necessary, like a greenfield, new-construction application where the service provider can easily install all of the drop cables. Fiber optic connectors are appropriate for applications where flexibility is required, like ONTs that have a connector interface. 
Fusion splicers have been proven to provide a high-quality splice with low insertion loss and reflection. However, the initial capital expenditure, maintenance costs and slow installation speed of fusion splicing hinder its status as the preferred solution in many cases. Mechanical splicing is widely used in FTTH drop cable installation in many countries, as a mechanical splice can be finished in the field by hand within 2 minutes, using simple hand tools and a cheap mechanical splicer (shown in the following picture). It’s a commonly used method in many places, like China, Japan and Korea. However, in the US mechanical splicing is not popular. For fiber optic connectors, there are two types of connector for drop cable connection: the field-terminated connector, which includes the fuse-on connector and the mechanical connector, and the pre-terminated drop cable, which is factory terminated with a connector on the end of the drop cable. The fuse-on connector uses the same technology as fusion splicing to provide high optical connection performance. However, like fusion splicing, it requires expensive equipment, a highly trained technician and more time. The mechanical connector could be a replacement for the fuse-on connector (shown in the following picture) if those conditions cannot be met. It could be a time-saving and cost-saving solution for drop cable termination. If cost is not a limit and you want high-performance termination quickly, pre-terminated drop cable could be your choice. Many factories can provide customized drop cables in various fiber types, fiber optic connectors and lengths. Customer demand for higher bandwidth will continue to drive the development of FTTH as well as its key components like drop cable. Choosing the right drop cable and drop cable termination method is as important as choosing the right network architecture in FTTH.
<urn:uuid:d0b3f4da-ebe0-4b2b-92f9-c26d2309294e>
CC-MAIN-2017-04
http://www.fs.com/blog/drop-cable-and-its-termination-in-ftth.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00045-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952532
1,055
2.78125
3
Back to Basics With Unix: System Visibility Unix systems have forever been opaque and mysterious to many people. They generally don't have nice graphical utilities for displaying system performance information; you have to know how to coax out the information you need. Furthermore, you need to know how to interpret the information you're given. Let's take a look at some common system tools that can provide tons of visibility into what the opaque OS is really doing. Unfortunately, the same tools don't exist universally across all Unix variants. A few commonly underused ones do, however, and that is what we'll focus on first. A common source of "slowness" is disk I/O, or rather the lack of available I/O. On Linux especially, it may be a difficult diagnosis. Often the load average will climb quickly, but without any corresponding processes in top eating much CPU. Linux counts "iowait" as CPU time when calculating load average. I've seen load numbers in the tens of thousands on more than one occasion. The easiest way to see what's happening to your disks is to run the "iostat" program. Via iostat, you can see how many read and write operations are happening per device, how much CPU is being utilized, and how long each transaction takes. Many arguments are available for iostat, so do spend some time with the man page on your specific system. By default, running 'iostat' with no arguments produces a report about disk IO since boot. To get a snapshot of "now," add a numerical argument at the end, which will prompt iostat to gather statistics over an interval of that many seconds. Linux will show the number of blocks read or written per second, along with some useful CPU statistics. This is one particularly busy server:

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           1.36   0.07     5.21    23.80    0.00  69.57
Device:    tps   Blk_read/s   Blk_wrtn/s     Blk_read    Blk_wrtn
sda      18.22     15723.35       643.25  65474958946  2678596632

Notice that iowait is at 23 percent. This means that 23 percent of the time this server is waiting on disk I/O. Some Solaris iostat output shows a similar thing, just represented differently (iostat -xnz):

   r/s   w/s    kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device
 295.3  79.7  5657.8  211.0   0.0  10.3     0.0    27.4   0  100  d101
 134.8  16.4  4069.8  116.0   0.0   3.5     0.0    23.3   0   90  d105

The %b (block) column shows that I/O to device d101 is 100 percent blocked waiting for the device to complete transactions. The average service time isn't good either: disk reads shouldn't take 27.4ms. Arguably, Solaris's output is more friendly to parse, since it gives the reads per second in kilobytes rather than blocks. We can quickly calculate that this server is reading about 19KB per read by dividing the number of KB read per second by the number of reads that happened. In short: this disk array is being taxed by large amounts of read requests. The "vmstat" program is also universally available, and extremely useful. It, too, provides vastly different information between operating systems. The vmstat utility will show you statistics about the virtual memory subsystem, or, to put it simply: swap space. It is much more complex than just swap, as nearly every IO operation involves the VM system when pages of memory are allocated. A disk write, network packet send, and the obvious "program allocates RAM" all impact what you see in vmstat. Running vmstat with the -p argument will print out statistics about disk IO. 
In Solaris you get some disk information anyway, as seen below:

 kthr      memory            page             disk        faults        cpu
 r b w   swap    free   re   mf  pi po fr de sr m0 m1 m2 m7    in     sy    cs us sy id
 0 0 0 7856104 526824  386 2401   0  0  0  0  0  3  0  0  0 16586  22969 12576  8  9 83
 1 0 0 7851344 522016   18  678  32  0  0  0  0  2  0  0  0 13048  11737 10197  7  6 86
 0 0 0 7843584 514128   76 3330 197  0  0  0  0  2  0  0  0  4762 131492  4441 16  8 76

A subtle but important difference between Solaris and Linux is that Solaris will start scanning for pages of memory that can be freed before it will actually start swapping RAM to disk. The 'sr' column, scan rate, will start increasing right before swapping takes place, and continue until some RAM is available. The normal things are available in all operating systems; these include: swap space, free memory, pages in and out (careful, this doesn't mean swapping is happening), page faults, context switches, and some CPU idle/system/user statistics. Once you know how to interpret these items you quickly learn to infer what they indicate about the usage of your system. The two main programs for finding "slowness" are therefore iostat and vmstat. Before the obligatory tangent into what Dtrace can do for you, here are a few other tools that no Unix junkie should leave home without:
- lsof - Lists open files (including network ports) for all processes
- netstat - Lists all sockets in use by the system
- mpstat - Shows CPU statistics (including IO), per-processor
We cannot talk about system visibility without mentioning Dtrace. Invented by Sun, Dtrace provides dynamic tracing of everything about a system. Dtrace gives you the ability to ask any arbitrary question about the state of a system, which works by calling "probes" within the kernel. That sounds intimidating, doesn't it? Let's say that we wanted to know what files were being read or written on our Linux server that has a high iowait percentage. There's simply no way to know. Let's ask the same question of Solaris, and instead of learning Dtrace, we'll find something useful in the Dtrace ToolKit. In the kit, you'll find a few neat programs like iosnoop and iotop, which will tell you which processes are doing all the disk IO operations. Neat, but we really want to know what files are being accessed so much. In the FS directory, the rfileio.d script will provide this information. Run it, and you'll see every file that's read or written, and cache hit statistics. There's no way to get this information in other Unixes, and this is just one simple example of how Dtrace is invaluable. The script itself is about 90 lines, inclusive of comments, but the bulk of it is dealing with cache statistics. An excellent way to start learning Dtrace is to simply read the Dtrace ToolKit scripts. Don't worry if you're not a Solaris admin: Dtrace is coming soon to a FreeBSD near you. SystemTap, a replica of Dtrace, will be available for Linux soon as well. Until then, and even afterward, the above mentioned tools will still be invaluable. If you can quickly get disk IO statistics and see if you're swapping, the majority of system performance problems are solved. Dtrace also provides amazing application tracing functionality, and if you're looking at the application itself, you already know the slowness isn't likely being caused by a system problem.
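As a small illustration of the arithmetic described above (average KB per read from the Solaris iostat output), here is a Python sketch. It simply hard-codes a subset of the columns shown earlier rather than parsing live iostat output.

# Average KB per read from iostat -xnz style rows, as computed by hand above
# (5657.8 kr/s divided by 295.3 r/s is roughly 19 KB per read).
ROWS = [
    # r/s    w/s   kr/s    kw/s  %b   device
    (295.3, 79.7, 5657.8, 211.0, 100, "d101"),
    (134.8, 16.4, 4069.8, 116.0,  90, "d105"),
]

for reads, writes, kb_read, kb_written, busy, device in ROWS:
    avg_kb_per_read = kb_read / reads
    state = "saturated" if busy >= 95 else "ok"
    print(f"{device}: {avg_kb_per_read:.1f} KB per read, {busy}% busy ({state})")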
<urn:uuid:8dbfcf62-dd6b-4db9-93ec-668838428634>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3739711/Back-to-Basics-With-Unix-System-Visibility.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00165-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91968
1,617
2.828125
3
DOE wish list: Exascale computing at a price it can afford - By Kathleen Hickey - Jul 19, 2012 The U.S. government wants to build exascale computers -- supercomputers on steroids -- for a wide range of activities, ranging from improving national security to studying climate change and finding cures for diseases. But the energy needed to power an exascale computer using today’s technology would be more than that required for a sizeable city. It would also come with a price tag greater than the GDP of a few small countries. The Department of Energy wants to change all that. In a joint effort with the National Nuclear Security Administration (NNSA), the department’s Office of Science recently issued several awards through its FastForward program to develop future hardware and software technologies to support these machines, specifically memory, processors, storage and input/output, which is the communication between an information processing system and the outside world. FastForward is contracted through Lawrence Livermore National Laboratory as part of a seven-lab consortium - Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories. So far there have been four announced awards:
- Intel: $19 million for both processor and memory technologies;
- AMD: $12.6 million for processor and memory technologies;
- NVIDIA: $12.4 million for processor technology; and
- Whamcloud (along with EMC, Cray and HDF Group): an undisclosed dollar amount for storage and I/O technologies.
Additionally, according to U.K.-based The Register, IBM also received an award but the company has not yet released details. In addition, not all the subcontracts have been made public so far. The goal of the program is to create a computer capable of performing one quintillion -- one billion billion -- calculations per second, roughly one thousand times faster than today’s speediest supercomputers, including LLNL’s Sequoia supercomputer. Sequoia, which has an operating speed of 16.32 petaflops (a petaflop is a quadrillion floating-point operations/sec), won the title of the world’s fastest supercomputer, according to the Top500 list released June 18 at the International Supercomputing Conference (ISC12) in Hamburg, Germany, reported GCN. The Defense Advanced Research Projects Agency announced a similar program in 2010, the Omnipresent High Performance Computing program. The problem is that the technology structure used to power today’s supercomputers hasn’t really changed since the early 1990s and isn’t all that different from the technology used for desktop computers. The only sizable difference is scale. Supercomputers require hundreds or thousands of chips. As a result, these computers are costly energy-guzzling beasts. Even Sequoia, one of the most energy-efficient supercomputers, has an energy efficiency of only around 2 gigaflops/watt. An exascale computer built using today’s technology could have an electric bill of over $500 million a year, said Richard Murphy, computer architect at Sandia National Laboratories, in Discover magazine last year. Of course, once these computers are developed, someone needs to make sense of the information. 
To address that issue, Argonne opened the Scalable Data Management, Analysis and Visualization Institute to develop ways to let scientists spend less time sifting through data and more time on science, according to Robert Ross, a computer scientist and deputy director at Argonne. And last month IBM and LLNL announced they had formed a collaboration called Deep Computing Solutions, to be housed within LLNL’s High Performance Computing Innovation Center, to help U.S. industry harness the power of supercomputing to better compete in the global marketplace. Kathleen Hickey is a freelance writer for GCN.
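The power and cost figures quoted earlier in the article can be reproduced with back-of-the-envelope arithmetic. The electricity price in the sketch below is an assumption chosen only to show how a bill on the order of $500 million a year follows from an efficiency of roughly 2 gigaflops per watt.

exaflop = 1e18                # operations per second
flops_per_watt = 2e9          # roughly Sequoia-class efficiency
price_per_kwh = 0.11          # assumed average electricity price, in dollars

power_watts = exaflop / flops_per_watt              # 5e8 W, i.e. 500 megawatts
energy_kwh_per_year = power_watts / 1000 * 24 * 365
annual_bill = energy_kwh_per_year * price_per_kwh

print(f"Power draw: {power_watts / 1e6:.0f} MW")
print(f"Annual electricity cost: ${annual_bill / 1e6:.0f} million")
# Roughly 500 MW and a little under $500 million per year at this assumed rate.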
<urn:uuid:7e71fcf6-7cea-4082-b843-5690ef847172>
CC-MAIN-2017-04
https://gcn.com/articles/2012/07/19/doe-labs-to-develop-excascale-computers.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00193-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919556
815
2.53125
3
The Recycle Bin (Recycler) folder provides a safety net when deleting files or folders in Windows. The file(s) remain there until you empty the Recycle Bin or restore the file. The actual location of the Recycle Bin varies depending on the operating system and file system used. On NTFS file systems, Recycler is the name of the Recycle Bin folder in each partition. On FAT file systems, the folder is named Recycled. The Recycler folder contains a Recycle Bin directory for each registered user on the computer, sorted by their security identifier (SID). Inside the Recycler folder you will find an image of the recycle bin with a name that includes a long number with dashes (S-1-5-21-1417001333-920026266-725345543-1003) used to identify the user that deleted the files.
- S - The string is a SID.
- 1 - The revision level.
- 5 - The identifier authority value.
- 21-1417001333-920026266-725345543 - Domain or local computer identifier.
- 1003 - A Relative ID (RID). This number, starting from 1000, increments by 1 for each user that's added by the Administrator. 1003 means the 3rd user profile that was created.
For more specific information about SIDs, please refer to:
Once the recycle bins are empty, the legitimate directories should be empty as well. The Recycler folder is hidden by default unless you reconfigured Windows to show hidden files and folders by unchecking "Hide protected operating system files" in Tools > Folder Options > View. However, even after emptying the Recycle Bin, the Recycler folder will still contain a "Recycle Bin" for each user that logs on to the computer, sorted by their security SID. If you delete the C:\Recycler folder, Windows will automatically recreate it on the next reboot. If there are numerous files listed taking up a lot of space, you can try manually deleting all but one of the user bins. You may find that although you have determined there are deleted files within one or more of the C:\recycler\S-1-5-21**** folders, these files may be hidden or inaccessible. There are various ways to delete these hidden files. Keep in mind that although the RECYCLER folder contains legitimate files, it is also a common hiding place for some types of malware. Removal of such malicious files sometimes can be difficult and may require security tools that scan such areas for these threats. If malware is present in this location, the computer usually shows other signs or symptoms of infection.
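The structure of such a SID string can be pulled apart mechanically. Here is an illustrative Python sketch using the example SID from this article; the field interpretations follow the breakdown above.

def parse_sid(sid: str) -> dict:
    """Split a Windows SID string into the fields described above."""
    parts = sid.split("-")
    if parts[0] != "S":
        raise ValueError("not a SID string")
    return {
        "revision": int(parts[1]),                   # 1
        "authority": int(parts[2]),                  # 5
        "machine_or_domain": "-".join(parts[3:-1]),  # 21-1417001333-920026266-725345543
        "rid": int(parts[-1]),                       # 1003, a locally created user
    }

print(parse_sid("S-1-5-21-1417001333-920026266-725345543-1003"))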
<urn:uuid:d94ef312-10c3-4e10-b090-34ada16f55c4>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/272858/recycler-s-1-5-21/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905427
578
2.765625
3
SMS is Broken and Hackers can Read Text Messages. Never use Regular Texting for ePHI. Security firm Positive Technologies has published a report (see their overview of attacks on one-time passwords and PDF of the SS7 security problems) that explains how attackers can easily attack the protocols underlying the mobile text messaging networks (i.e. the Signaling System 7 or “SS7” protocol). In their report, they indicate how this makes it easy to attack the two-factor login methods and password recovery schemes where a one-time security code is sent via an insecure text message. Devices and applications send SMS messages via the SS7 network to verify identity, and an attacker can easily intercept these and assume the identity of the legitimate user. This result also means that attackers can read all text messages sent over these networks. Beyond the serious implications with respect to attacking accounts and identity theft via access to second-factor authentication codes, this work means that all communications over text message must really and truly be considered insecure and open to the public. In the past, we have all acknowledged that text messages are insecure in that they pass through the cellular carriers in plain text, can be archived and backed up, could be surveilled by the government, etc. However, most people have been very complacent about this insecurity … still trusting their cellular carriers, trusting that attackers would really have a hard time actually accessing these text messages. As a result, people have been sending sensitive information via insecure text message for some time … giving security short shrift and going for convenience. This includes ePHI — medical appointment notices sent to patients, communications about patients between medical professionals, etc. The publications by Positive Technologies confirm what the security community has known since at least 2014 … the infrastructure underlying SMS is old and fragile and easily attacked. What can attackers do? They can transparently forward calls, giving them the ability to record or listen in on them. They can also read SMS messages sent between phones, and track the location of a phone using the same system that the phone networks use to help keep a constant service available and deliver phone calls, texts and data. The danger of breach, especially a targeted breach, is real. As a phone number is identifying information, any text message that refers to that person's past, present or future medical condition or scheduling or billing is ePHI and must be protected by the medical community. It is easy to imagine an automated attack on the SS7 protocol that could identify very large numbers of such messages laden with PHI. That attack would be an automatic breach and the discovered use of SMS for ePHI could be considered willful neglect under HIPAA. The takeaway? Sensitive communications, especially where HIPAA compliance is involved, must never take place over insecure text message (SMS) channels. It is time to move on to secure communications applications or secure texting solutions.
<urn:uuid:abee91d4-bde7-4a39-924f-c026966907f3>
CC-MAIN-2017-04
https://luxsci.com/blog/sms-is-broken-and-hackers-can-read-text-messages-never-text-ephi.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00339-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9328
595
2.53125
3
The analysis of recently discovered security vulnerabilities, common hacker attacks, virus and spyware infections that took place last year revealed a new trend – more and more parasites are being designed to exploit not the operating system’s flaws, but bugs found in popular desktop and security-related software including antiviruses, backup utilities, database products, media players and other widely used programs installed on almost every system. Since Microsoft began to take Windows protection seriously and improved its program of regular online updates, spyware makers and virus authors have started to target popular third-party applications, as today it is much more difficult to create a threat that would rapidly spread by penetrating up-to-date systems. This year everyone had a chance to notice this. Thousands of users were affected by parasites exploiting vulnerabilities in popular web browsers, instant messengers, Java and media add-ons. Now it is time for media players, antiviruses, firewalls and backup tools. According to the SANS Twenty Most Critical Internet Security Vulnerabilities list updated today, users should change their attitude towards system and privacy protection and pay more attention to flaws discovered in popular desktop software and ways to quickly fix them. Indeed, millions of people still use severely outdated programs and think that regular antiviruses and anti-spyware tools can protect them from anything. However, such a viewpoint is wrong. Some recently appeared parasites propagating through flaws in installed software first attempt to disable or bypass security-related programs by exploiting their vulnerabilities and then run a devastating payload. The affected user cannot notice anything unusual, as the antivirus or firewall simply does not alert him or her. Another important security aspect that some users and especially companies miss is the protection of backup and database software. Backups and databases often contain priceless information that malicious persons strive to steal. And this is quite a simple task for them if the software managing databases and backups has unpatched security vulnerabilities. Here are a few fragments of The Twenty Most Critical Internet Security Vulnerabilities explaining how insecure outdated desktop software can be: Multiple buffer overflow vulnerabilities have been discovered in the anti-virus software provided by various vendors including Symantec, F-secure, Trend Micro, Mcafee, Computer Associates, ClamAV and Sophos. These vulnerabilities can be used to take complete control of the user’s system with limited or no user interaction. Anti-virus software has also been found to be vulnerable to “evasion” attacks. By specially crafting a malicious file, for instance, an HTML file with an exe header, it may be possible to bypass anti-virus scanning. A number of vulnerabilities have been discovered in various media players during the last year. Many of these vulnerabilities allow a malicious webpage or a media file to completely compromise a user’s system without requiring much user interaction. The user’s system can be compromised simply upon visiting a malicious webpage. Hence, these vulnerabilities can be exploited to install malicious software like spyware, Trojans, adware or keyloggers on users’ systems. Exploit code is publicly available in many instances. During the last year, a number of critical backup software vulnerabilities have been discovered. 
These vulnerabilities can be exploited to completely compromise systems running backup servers and/or backup clients. An attacker can leverage these flaws for an enterprise-wide compromise and obtain access to the sensitive backed-up data. Exploits have been publicly posted and several malicious bots are using the published exploit code. There is only one truly effective way to avoid such attacks and infections: users should regularly update every application installed on the system. Yes, this may be quite a difficult and long procedure. But it is much better to spend a few extra hours per week than to end up with a compromised computer and stolen confidential information.
<urn:uuid:cca4228b-188e-4269-8410-1083a1f8f769>
CC-MAIN-2017-04
http://www.2-spyware.com/news/post13.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00155-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942913
781
2.53125
3
In 2012, coal was used for about 37 percent of the 4 trillion kilowatt-hours of electricity produced in the United States, making it the most common fuel for generating electricity in the nation. When coal, a fossil fuel, is burned, it emits pollution that causes smog and acid rain, as well as greenhouse gases. Coal plants are the number one source of carbon dioxide (CO2) emissions in the US, the primary cause of global warming. Despite these negatives, the economics and demand for energy are such that coal will likely be part of our nation’s energy roadmap for some time to come. Given this market reality, many industry and government stakeholders are focusing their efforts on reducing coal’s environmental impact using carbon capture and storage technologies. Last month, University of Utah officials announced plans for a Carbon Capture Multidisciplinary Simulation Center to simulate and test a low-cost, low-emissions prototype coal plant capable of powering a mid-sized city. Funding for the five-year project comes from a $16 million grant provided by the U.S. Department of Energy’s National Nuclear Security Administration. The center is being headed up by University of Utah researchers Philip J. Smith and Martin Berzins, along with university President David W. Pershing. All three are professors in the university’s College of Engineering. According to the University of Utah news release, “the goal of this ‘predictive science’ effort is to help power poor nations while reducing greenhouse emissions in developed ones.” The researchers will use massive DOE supercomputers to simulate and predict the performance of a proposed 350-megawatt boiler system, designed by international power giant Alstom. The boiler employs a technology called oxy-combustion, which ignites pulverized coal using pure oxygen instead of air. The process leaves behind water vapor and pure carbon dioxide, which is easier to capture and store. The computer modeling will enable researchers to optimize the design and address any uncertainties that arise. Project backers are preparing to run these large-scale simulations on multi-petascale and eventually exascale machines. As researcher Martin Berzins notes, “These simulations and others like them will make use of computers that are expected to be able to perform 1 million-trillion operations per second in the next decade or so, or as many operations per second as a billion personal computers today.” The University of Utah is no stranger to coal research. The university’s participation in combustion research extends back to the 1950s. That tradition grew into the Institute for Clean and Secure Energy (ICSE), which was officially recognized by the school as a permanent institute in 2004. ICSE is dedicated to interdisciplinary research on high-temperature fuel utilization processes for energy generation and related issues. The institute’s approach combines hands-on experimental work with analytical tools and simulation. A related program, The University of Utah’s Clean and Secure Energy (CASE) from Coal, is working to advance carbon capture and storage technologies, while at the same time addressing the associated legal, environmental, economic and policy concerns. 
To be fair, while some see carbon capture as a solution for mitigating climate change and providing energy security, many environmental groups question the feasibility of “clean coal.” In an interview with the Salt Lake Tribune, Matt Pacenza, policy director at HEAL Utah, stated, “We’ve been hearing about the myth of clean coal for quite a few years now. The truth is as those technologies have moved forward they’ve either contributed to a rise in the levels of pollution or they’ve turned out to be wildly expensive. Maybe this is the one that’s actually clean and affordable and transformative, but I think everyone should be skeptical.”
<urn:uuid:4e565409-adbf-4aca-9986-38863c1b3c16>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/19/supercomputing-cleaner-coal/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00155-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933997
791
3.96875
4
When a call for service comes into the Douglas County, Ga., Fire Department that may involve hazardous materials, Deputy Chief Kim Ransom or another senior member of the department’s HAZMAT team grabs his or her tools and hustles to the scene, along with the initial fire crew. Once the crew arrives with on-board chemical sensors activated, they plug the information into several tools, including the Wireless Information System for Emergency Responders, the Computer-Aided Management of Emergency Operations chemical database, the 2008 Emergency Response Guide and the Chemical Companion decision support tool. [Photo: The Chemical Companion software tool runs on personal digital assistants to help first responders obtain the information they need to make critical decisions. Courtesy of the Georgia Tech Research Institute.] The Chemical Companion is a Windows-based reference of 130 of the most common chemicals involved in hazardous materials incidents including their properties, signs of exposure and possible antidotes. When the HAZMAT team arrives on scene, the first thing they do is consult the Emergency Response Guide for basic response information, including initial isolation distance and the initial first aid measures that should be administered to those who have been exposed. Then the data picked up from the sensors aboard the fire apparatus is plugged into the various databases. This gives the initial responders information such as whether the unknown substance has a low explosive limit or if the chemical can be identified. In the summer of 2009, the Douglas County Fire Department responded to the scene of an overturned tanker truck involving an unknown mixture of chemicals for which a single material safety data sheet (MSDS) was not available and a chemist wasn’t immediately available on scene. Responders entered how the substance was reacting and its temperature into the Chemical Companion. “We narrowed [it] down to where we had an idea that it was more closely related to one of the MSDS sheets than the other,” Ransom said. “The Chemical Companion is great about being able to measure symptomology and help you kind of narrow down your potential hazard.” Ease of use and an interface that’s mostly driven by menu choices help minimize mistakes, Ransom said. “When you’re dealing with intricate chemical names — one misspelling can change the whole chemical compound — it’s really nice to be able to have your text in there,” she said. The Chemical Companion also can help first responders choose the appropriate suit for a spill and determine the estimated time a responder can work in contact with a chemical at a particular rate before it permeates the suit. “It is nice to know that you have time in a certain type of suit or with boots or with different types of gloves that you can work in the product before it has the potential to permeate the material and become contaminated,” she said. The tool was developed by researchers at the Georgia Tech Research Institute with input from first responders and HAZMAT teams. It operates on Windows CE-based personal digital assistants. The Chemical Companion is available as a free download for emergency responders. Private-sector personnel involved with hazardous materials response may download the software with an approved application. [Photo courtesy of Win Henderson/FEMA.]
<urn:uuid:15f80441-94a6-430a-8b4c-f7dd43b89ce8>
CC-MAIN-2017-04
http://www.govtech.com/em/disaster/Chemical-Companion-HAZMAT.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00551-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933472
650
2.796875
3
Thanks to engineers from the University of California, Berkeley and National Chiao Tung University in Taiwan, you may never again have to test your milk’s freshness by taking a whiff. Using 3D technology, engineers have created a “smart cap” that can detect when food has gone bad. According to Entrepreneur.com, the technology employs a 3D printed cap embedded with electrical components. When the liquid comes into contact with the cap, the resonant circuit is able to “detect changes in electrical signals caused by the proliferation of bacteria.” In the past 10 years, a variety of 3D printed products have been created, ranging from vehicle parts, building materials, prosthetics and medical implants to toys and food. According to Berkeley News, the smart cap adds another item to the list: sensitive electronic components. Researchers believe the technology can be expanded to test the freshness of all food. “You could imagine a scenario where you can use your cellphone to check the freshness of food while it’s still on the store shelves,” said senior author Liwei Lin, a professor of mechanical engineering and co-director of the Berkeley Sensor and Actuator Center, in a statement. This story, "3D technology creates 'smart cap' to detect spoiled food," was originally published by Fritterati.
<urn:uuid:af092c59-8a0e-454d-8f11-973c280f9ae8>
CC-MAIN-2017-04
http://www.itnews.com/article/2951969/3d-technology-creates-smart-cap-to-detect-spoiled-food.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924464
283
3.09375
3
Being green isn’t always easy, particularly for a city with limited resources to spend on environmental programs. But Santa Clarita, Calif., has found an affordable way to help reduce its carbon footprint — solar-powered, self-compacting trash bins. Thirty-four of the trash and recycling units are now located in parks and various public spaces throughout Santa Clarita, giving citizens a simple way to contribute to the city’s environmental health. In addition, instead of making daily trips to empty the containers, city employees now monitor their capacity remotely with a mobile app, improving workflow efficiency. If a bin appears green on the application, it means there’s plenty of space left for more bottles and cans. If it appears yellow, the bin is starting to get full and will require attention soon. If the container is marked red, there is less than 10 percent capacity left and the bin should be emptied. “Normally when park staff would go out to a site, they would have to check every single container in the park,” said Mark Patti, project development coordinator with Santa Clarita. “With the solar containers … they are being checked once or twice a week.” Designed by Big Belly Solar, the units were installed over a 10-day period earlier this year. Thirty of the bins are strictly for recyclable materials, while the other four compactors are dual-purpose, featuring bins for both regular trash and recyclables. The project cost $120,000, which was paid for by a state grant from the California Department of Resources Recycling and Recovery (CalRecycle). Santa Clarita isn't the first to adopt this solar compactor technology. In the last few years, a number of other cities have successfully deployed the same technology, including Philadelphia, Boston, and Pasadena, Calif., among others. And it was the money and time the containers saved Philadelphia that put the technology on Patti's radar, he said. Prior to the Big Belly Solar compactors being installed, Santa Clarita only had a few 55-gallon containers in its parks to collect recyclables. Those were getting scavenged, however, so the city wasn’t able to judge how much patrons used them. With the new technology in place, however, Santa Clarita has seen a significant amount of recyclable materials being collected. As one of the conditions of receiving the state grant, the city is required to monitor the amount of beverage containers being recycled. In the first five months, it has collected 2.5 tons of material it wasn't capturing before. The city has partnered with a nonprofit group called the LA Conservation Corps to obtain further recycling statistics. After city staff empties the Big Belly containers, the LA Conservation Corps takes and centralizes all the recyclables, divides them up and gives the city quarterly metrics on the amount of glass, aluminum and plastic that has been collected per pound. “We’ve actually had to double our own dumpster capacity at Central Park where we store all the recyclables that are collected from these Big Belly units because more and more people are taking note of the containers,” Patti said. “If that trend were to continue, we’d definitely consider putting more containers in, because it’s clearly saving our staff time.”
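The color coding described above amounts to a simple threshold check on each container's reported fill level. The Python sketch below is purely illustrative and is not Big Belly Solar's actual app or firmware; the 40 percent cutoff for a yellow status and the idea that each bin reports remaining capacity as a fraction are assumptions made for the example.

```python
# Illustrative sketch only -- not Big Belly Solar's app or firmware.
# Assumes each bin reports its remaining capacity as a fraction from 0.0 to 1.0.

def bin_status(remaining_capacity: float) -> str:
    """Map remaining capacity to the green/yellow/red codes described above."""
    if remaining_capacity < 0.10:   # less than 10 percent left: empty it now
        return "red"
    if remaining_capacity < 0.40:   # assumed cutoff for "getting full"
        return "yellow"
    return "green"                  # plenty of space left

print(bin_status(0.08))  # -> "red": this container needs attention
```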
<urn:uuid:92a26f70-9144-43f3-a864-19cfe7c76348>
CC-MAIN-2017-04
http://www.govtech.com/applications/Solar-Powered-Trash-Compactors-Spur-Rise-in-Recycling.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959161
690
2.671875
3
Surprise, Surprise. Federal Agencies Not Protecting The Information They Collect About You. There are many policies, mandates, and laws that govern personally identifiable and financial information for federal agencies. So just how many federal agencies are living up to their responsibilities? You guessed it: not many. When it comes to maintaining the privacy of information government agencies collect about U.S. citizenry, there are two overarching laws. These are the Privacy Act of 1974 as well as the E-Government Act of 2002. Each of these laws mandates that federal agencies protect personal information. Other laws and mandates that come into play, depending on the nature of the agency and the information stored, include the Federal Information Security Management Act of 2002, aka FISMA -- which sets forth a good baseline for security policies; the Health Insurance Portability and Accountability Act, aka HIPAA; as well as the California Database Breach Disclosure law, which is largely known as SB 1386, and now similar laws are in force in more than 40 other states. You'd think federal agencies would have clearly heard the message: citizens want their personal information maintained securely and responsibly. And so does the legislature. If they've heard the message, they certainly haven't listened. If there's one area where the federal government could set an example, you'd think it would be in implementing solid IT security. But it hasn't set such an example. That's why in 2006, and once again last year, the Office of Management and Budget recapped federal agency IT security and privacy responsibilities that should be followed. Unfortunately, here are the findings from the latest Government Accountability Office report on the status of federal agencies when it comes to protecting your personal information: Of 24 major agencies, 22 had developed policies requiring personally identifiable information to be encrypted on mobile computers and devices. Fifteen of the 24 agencies had policies to use a "time-out" function for remote access and mobile devices, requiring user re-authentication after 30 minutes of inactivity. Fewer agencies (11) had established policies to log computer-readable data extracts from databases holding sensitive information and erase the data within 90 days after extraction. Several agencies indicated that they were researching technical solutions to address these issues. At first blush, these results might not seem so bad. After all, 22 of 24 agencies have developed "policies requiring personally identifiable information to be encrypted on mobile computers and devices." That's a start. But the devil is in the implementation and enforcement of policies. Anyone can set a policy requiring data be encrypted. Just as anyone can set a policy to live within a budget, lose weight, quit smoking, or start exercising. Follow-through is the tough part. And that's the rub here, according to the GAO: "Gaps in their [federal agency] policies and procedures reduced agencies' ability to protect personally identifiable information from improper disclosure." Also, I'd like to pose a question: Why does citizen personally identifiable information need to be on a notebook or "other mobile device" at all?
Is it too much to ask, when working with sensitive information, that workers and consultants actually sit at a workstation, in an office, where the network and system can be kept highly secured? And if they need remote access, why not use a thin device so the data stays in the database, and isn't left at a worksite ... or on a table in Starbucks.
<urn:uuid:50e5eacd-5bc8-4f16-89de-edfa4f237f19>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/surprise-surprise-federal-agencies-not-protecting-the-information-they-collect-about-you/d/d-id/1065029
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00230-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941572
734
2.546875
3
Bad Sector Recovery Hard drives are built so that they never return unreliable data. This means that if a hard drive cannot guarantee 100 percent accuracy of the data requested, it will simply return an error and will never give away any data at all. This article explains how bad sector recovery actually works and why it needs to be done with great caution. Understanding Bad Sectors General causes of bad sector formation are physical or magnetic corruption. Physical corruption is easy to understand—it occurs when there is physical damage done to the media surface. Magnetic corruption occurs when a hard drive miswrites data to a wrong location. While the latter may seem to be less damaging, it is actually as dangerous as physical damage, as miswritten data may damage not only adjacent sectors but also servo sectors. Regardless of the cause of damage, there are several possible outcomes: - Address Mark field corruption - Data corruption - ECC field corruption - Servo sector corruption - Or any combination of these What is common in all these types of corruption is that your operating system or normal data recovery tools cannot read the data from those sectors anymore. Let’s find out exactly what happens when a tool tries to read a sector that has one of the above-mentioned problems. Address Mark corruption When the Address Mark is corrupted, the hard drive simply cannot find the requested sector. The data might still be intact, but there is no way for the hard drive to locate it without the proper ID. Some modern hard drives do not actually use sector ID or Address Mark in the sector itself; instead, this information is encoded in the preceding servo sector. Data corruption To verify data integrity, a hard drive will always validate it with the error checking and correction algorithm using the ECC code written after the data field. When data is corrupted, the hard drive will try to recover it with the same ECC algorithm. If correction succeeds, the drive will return the sector data and will not report any error. However, if correction fails, the drive will only return an error and no data, even if the data is partially intact. ECC field corruption Although this is rare, the ECC code can also get corrupted. In this case, the drive reads perfectly good data from the sector and checks its integrity against the ECC code. The check fails due to the bad ECC code, and the drive returns an error and no data at all, because there is no way to verify data integrity. Servo sector corruption There are up to a few hundred servo sectors on a single track. Servo sectors contain positioning information that allows the hard drive to fine-tune the exact position of the head so that it stays precisely on track. They also contain the ID of the track itself. Servo sectors are used for head positioning in the same way a GPS receiver uses satellites—to exactly determine the current location. When a servo sector is damaged, the hard drive can no longer ensure that the data sectors following the servo sector are the ones it is looking for and will abort any read attempt of the corresponding sectors. How Bad Sector Recovery Works Once again, hard drives are built to never return data that did not pass integrity checks. However, it is possible to send a special command to the hard drive that specifically instructs it to disable error checking and correction algorithms while reading data. The command is called Read Long and has been part of the ATA/ATAPI standard since its first release back in 1994.
It allowed reading the raw data + ECC field from a sector and returning it to the host PC as is, without any error checking or correction attempt. The command was dropped from the ATA/ATAPI-4 standard in 1998; however, most hard drive manufacturers kept supporting it. Later on, when hard drives became larger in capacity and LBA48 was introduced to accommodate drives larger than 128 GiB, the command was officially revived in a SMART extension called SMART Command Transport or SCT. Obviously, since the drive does not have to verify the integrity of data when the data is requested via the Read Long command, it would return the data even if it is inconsistent (or, in other words, the sector is “Bad”). Hence, this command quickly became standard in bad sector recovery. There is also another approach which is based on the fact that some hard drives leave some data in the buffer when a bad sector is encountered. However, our tests have shown that chances of getting any valid data this way are exactly zero. Debunking Bad Sector Recovery So to “recover” data from a bad sector, one would simply need to issue the Read Long command instead of the “normal” Read Sectors command. That is really it! It is so simple any software developer who is familiar with hard drives can do it. And sure enough, more and more data recovery tools now come with a Bad Sector Recovery option. In fact, it has come to the point where if a tool does not have a bad sector recovery feature, it automatically falls into a second-grade category. Error checking and correction algorithms were implemented for a reason, which is data integrity. When a hard drive reads a sector with the Read Long command, it disables these algorithms and hence there is no way to prove that you get valid data. Instead, you get something, which may or may not resemble your customer’s data. Tests in our lab had shown that, in reality, by using this approach, you will get much more random bytes than anything else. Yes, there are cases where this approach allows recovering original data from a sector, but these cases are extremely rare in real data recovery scenarios, and even then, only a part of the recovered sector will contain valid data. Even when we got some data off the damaged sector, what exactly should we do with its other (garbled) part? And how exactly do we tell which part of the sector has real data in it and which is just random bytes? Nobody is going to manually go through all the sectors in a HEX editor and judge which bit is valid and which is not. Even if someone did, there is no way to guarantee that what they see is valid data. And this is where the real problem starts. Dangers of Read Long approach Imagine a forensic investigator recovering data off a suspect’s drive while the drive has some bad sectors on it. To get more data off the drive, the investigator enabled Bad Sector Recovery option in his data acquisition tool. In the end, his tool happily reported that all the sectors were successfully copied, so he began extracting data from the obtained copy. While looking for clues, he found a file that had social security numbers in it. He then used these numbers in one way or another for his investigation. What he did not know is that one of the sectors that contained these numbers was recovered via the Read Long command, and some bits were flipped (which is very common for this approach). So instead of 777-677-766, he got 776-676-677, causing him and other people a whole lot of unnecessary trouble. 
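The underlying risk in the scenario above is that bytes returned without the drive's own ECC verification carry no integrity guarantee whatsoever. The short Python sketch below is a toy illustration of that point, not an implementation of the ATA Read Long command: it simulates a sector in which a few bits have silently flipped and shows that nothing in the returned bytes themselves signals the corruption; only an out-of-band comparison (here, a hash of the original data, which a real recovery job would not have) exposes it.

```python
# Toy illustration of silent corruption -- not the ATA Read Long command itself.
import hashlib
import random

SECTOR_SIZE = 512  # bytes in a classic sector

def flip_random_bits(data: bytes, n_flips: int) -> bytes:
    """Simulate the undetected bit flips a no-ECC read can hand back."""
    buf = bytearray(data)
    for i in random.sample(range(len(buf)), n_flips):  # distinct byte positions
        buf[i] ^= 1 << random.randrange(8)             # flip one bit in each
    return bytes(buf)

original = bytes(random.randrange(256) for _ in range(SECTOR_SIZE))
recovered = flip_random_bits(original, n_flips=3)

# The corrupted sector "looks" like perfectly ordinary data...
print(recovered[:16].hex())
# ...and only a reference hash, unavailable in a real recovery, reveals the damage.
print(hashlib.sha256(recovered).hexdigest() == hashlib.sha256(original).hexdigest())  # False
```

The same uncertainty applies to anything rebuilt from such sectors, which is why files assembled from them need to be flagged as unreliable rather than merged silently into the image.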
Another example: when recovering a damaged file system, even slightly altered data in an MFT record can mislead the file recovery algorithm and in the end do much more harm than if there was no data copied at all in that sector. Once again, an error checking and correction algorithm is in place for a great reason. There is absolutely no magic in bad sector recovery; it is impossible to recover something that just isn’t there. There are tools that claim better bad sector recovery because they utilize a statistical approach, an algorithm where the tool reads the bad sector a number of times and then reconstructs the “original” sector by locating the bits that occur most often in the sector. While these tools claim this approach could improve the outcome, there is no evidence to back up the validity of such claims. Furthermore, rereading the same spot many times while the hard drive is failing is a good way to cause permanent damage to the media or heads. To summarize, if you are after valid data, avoid using any bad sector recovery algorithms. These algorithms will never offer data integrity no matter how complex their implementation is. And when you absolutely must recover data from bad sectors, make sure you use a tool that properly accounts for these recovered sectors, marking the files containing such sectors. This way, the operator has the ability to disregard such “unreliable” files and manually verify file integrity if it is an important one. |Dmitry Postrigan is the founder and CEO of Atola Technology, a Canadian company that makes high-end data recovery and forensic equipment.|
<urn:uuid:0737b2d1-c2c8-4ce9-ad8e-40c3518bc803>
CC-MAIN-2017-04
https://articles.forensicfocus.com/2013/01/21/bad-sector-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00138-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943146
1,801
3.125
3
Photon Entanglement over the Fiber-Optic Network Quantum mechanics with its bizarre and wonderful properties—individual particles that exist in an arbitrary combination of states, entangled particles acting in concert even when separated over long distances—is usually thought of as a world separate from everyday classical physics. Even Einstein couldn’t quite resolve entanglement with his view of the physical world and in a 1935 paper (along with Boris Podolsky and Nathan Rosen) argued that entanglement violated the locality principle (which states one physical system should have no immediate effect on another spatially separated system). But subsequent theoretical ideas and experiments have verified the existence and nonlocal behavior of entangled particles. And in the past 20 years, the new field of quantum information science has been bridging quantum physics, computer science, optical technology, and communication engineering to harness the power of quantum properties. While the initial drive came from the desire to build a quantum computer capable of vastly outperforming today’s supercomputers, more recent efforts are venturing into more immediately practical applications. The power of quantum mechanics Rather than utilizing ordinary bits that exist in only one of two states (0 or 1), quantum information science utilizes qubits (see sidebar) that, first, exist in a superposition of their states 0 and 1 and, second, are capable of interacting with one another. In theory, a computer based on these interacting qubits is capable of doing certain calculations much faster than an ordinary computer that is limited to operating on a bit’s single state. As more qubits are combined, more simultaneous operations are theoretically possible; and for certain calculations (such as factoring), a relatively small number of qubits could conceivably outperform ordinary computers with a million processors, an extraordinary advance in computing. (Pioneering work in quantum computational theory was done by Peter Shor who, while at AT&T Bell Labs, devised a polynomial time algorithm for factoring large numbers on a quantum computer.) Qubits can be combined, or entangled, because the mathematical rules of quantum mechanics allow two or more particles—atoms, photons, and ions have all been successfully entangled—to belong to a certain single joint quantum state that is not just the combination of the individual qubit states. A typical example would be a situation in which physical conservation laws constrain two qubits to have the same value so that, when measured, either both qubits are 1 or both qubits are 0. This is true even though neither qubit has a certain value prior to any measurements, even though the result of the first reading is completely random, and even if the second qubit is removed to a remote location. Quantum computers are proving very hard to build due to the difficulty of controlling the interactions of multiple qubits and keeping the entanglement state alive long enough to perform calculations. But proposals to employ entanglement for other applications—quantum metrology, quantum lithography—may be closer to reality. Currently the most advanced and promising of these proposals is quantum cryptography.
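The perfectly correlated yet individually random outcomes described above can be reproduced numerically. The Python sketch below is a minimal textbook simulation of a two-qubit Bell state using NumPy; it is not how real quantum hardware or the AT&T experiments are programmed, just an illustration of the statistics.

```python
# Minimal simulation of the correlated two-qubit state described above.
import numpy as np

rng = np.random.default_rng(seed=1)

# Bell state (|00> + |11>)/sqrt(2): amplitudes over the basis states 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
probabilities = np.abs(bell) ** 2  # Born rule: |amplitude|^2 gives outcome probabilities

# Sample joint measurements: each draw yields "00" or "11" with equal probability,
# so the two qubits always agree even though neither value is fixed beforehand.
outcomes = rng.choice(4, size=8, p=probabilities)
print([format(o, "02b") for o in outcomes])
```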
Harnessing entanglement for secure cryptographic schemes Modern cryptography depends on the exchange of public and private cryptographic keys that enable two parties to encrypt information. The Achilles' heel of these systems is the safe distribution of a key without it being intercepted by third parties. The security of public-key distribution methods relies on the unproven hardness of math problems such as factoring (see article that describes how ALFP Fellow Adriana Lopez's research is addressing this issue) using conventional (not quantum) computers. And while private shared keys do offer in principle the possibility of unconditionally secure communication, the key distribution process still does not completely avoid the possibility of eavesdropping or, in the case of physical couriers, guarantee immunity to bribes. Quantum cryptography or, more correctly, quantum key distribution, offers a protocol for creating a pair of private keys secured by the laws of physics. Quantum key distribution does not involve the transport of a key because, by the nature of the entangled state, the key can be created at the sender and receiver simultaneously. By repeatedly sending and measuring photon polarization states in a clocked fashion, two users gradually build up identical strings of classical bits (or 0s and 1s) that they can use for encrypting another (parallel) data-stream between them. Since quantum states cannot be known before a measurement, the key is completely random. An important feature of quantum cryptographic systems is that they are impervious to eavesdropping since the state of entanglement is constantly monitored. Any potential eavesdroppers would unavoidably degrade the entanglement and reveal themselves. For the past year, AT&T Research has been studying photon entanglement distribution over optical fibers. The first commercial devices for entangled photon systems (see NuCrypt) are already being sold. Though more research instruments than real network equipment, they are a tangible step toward harnessing quantum mechanics. Similar equipment from noncommercial sources is now being used in actual testbeds (one example is the Tokyo QKD Network; for more information, see the News and Views section of the January 2011 issue of Nature Photonics). AT&T's experiments into photon entanglement The vast installed global fiber-optic network, consisting of over a billion meters of optical fiber cables, opens a particularly attractive opportunity for implementing quantum communications protocols that rely on the distribution of entanglement between distant parties. Currently two major entanglement schemes have been proposed for telecom photons: polarization and time-bin entanglement. Polarization is particularly attractive because of the ease with which the polarization can be handled with standard off-the-shelf components (as a result, equipment for creating and detecting polarization-entangled photons is now commercially available). However, there has been a long-standing concern in the community that polarization entanglement could be significantly decohered (degraded) during fiber transmission due to two polarization effects in optical fibers: polarization mode dispersion (PMD) and polarization-dependent loss (PDL). Answers are starting to come. For the past year, AT&T Research has been conducting experiments to find out what happens to polarization entanglement over fiber optic cables with PMD and other network conditions.
The equipment for these experiments has been custom-built for AT&T by NuCrypt. This project is a joint effort with two theorists: Dr. Cristian Antonelli (Università dell’Aquila), who collaborates under the auspices of AT&T Virtual University Research Initiative, and Prof. Mark Shtaif (Tel Aviv University), a former AT&T Labs researcher. From the outside, the lab setup doesn’t look quantum at all; like ordinary pieces of network equipment, it is housed simply in several black boxes interconnected by strips of fiber and electrical cabling. One box is an entangled photon source. It creates a pair of entangled photon qubits, separates the paired photons spectrally, and directs each one over a dedicated fiber to one of two single photon detector stations. This process is repeated in a clocked fashion and, as the resulting stream of photon pairs arrives at corresponding detectors, the joint two-photon quantum state is analyzed using quantum tomography, which completely quantifies a quantum state. By introducing PMD in a controlled way and performing tomography for various levels of PMD in each fiber, researchers thus probe the PMD-induced degradations. While the tomography measurement itself goes quickly (usually a few minutes), the most time is consumed by setting and verifying certain fiber conditions, which are very sensitive to minuscule changes in temperatures as well as other hard-to-control factors. The primary goal for these experiments is to fully investigate the engineering problems and corresponding solutions that need to be put in place should entanglement-based quantum protocols be someday implemented over AT&T networks. Recent experiments, together with developed theory, are yielding the first steps toward understanding those issues. On the way to solutions, AT&T researchers are learning more about the fundamental physics of entanglement decoherence. This work, for the first time, found that transmission of polarization entangled photons in optical fibers reveals interplay among several intriguing physical phenomena: entanglement sudden death, the existence of decoherence-free subspaces, and the loss of non-locality. To take the example of sudden death of entanglement: this concept, originally proposed to describe the entanglement dynamics in atomic physics, describes a situation in which the entangled state degrades abruptly and completely in contrast to gradual decay of a single particle state. Interestingly, when PMD is present in one fiber only, the degradation of the entanglement is always gradual. On the other hand, adding some PMD to another fiber could either reduce or increase decoherence depending on the relative orientation of two PMD elements. Sometimes in the latter case the entanglement disappears completely, which is a manifestation of the sudden death arising naturally during photon propagation in fibers. Remarkably, the use of polarization entanglement in fibers has been debated at numerous conferences in recent years, and the quantum communication community remains split on the subject. AT&T’s work takes a first step towards solving this critical problem, and may have implications across subfield boundaries. Experimental results have already been presented at various conferences throughout 2010, including the most selective post-deadline session at the Optical Fiber Communication Conference. Journal papers are to follow with more details.
The first papers to appear in print in 2011 are: Loss of polarization entanglement in a fiber-optic system with polarization mode dispersion in one optical path (preprint), Nonlocal PMD compensation in the transmission of non-stationary streams of polarization entangled photons (preprint), and Sudden Death of Entanglement induced by Polarization Mode Dispersion (preprint). Possible future directions Future research in this field should encompass several directions. First, PMD in realistic fibers is frequency-dependent. This strong frequency dependence could either kill the entanglement or alternatively revive the entanglement if it is lost. Second, studies of the effect of polarization-dependent loss (PDL) are needed. While the PDL of the fibers is relatively small, a notable amount of PDL is introduced by network elements such as wavelength selective switches, optical add/drop multiplexers, and dispersion compensation devices. It would be interesting to figure out the non-trivial interplay between PDL and PMD. Finally, other effects, such as nonlinearities from strong classical signals propagating through the same fiber, could also play a role in entanglement decoherence. Eventually, the potential effectiveness of, and fundamental impediments to, implementing quantum repeater technology in the fiber-optic link also will need to be explored. This technology, once available, holds the promise of truly exploiting the quantum potential of long-haul fiber optic transmission. Quantum cryptography and communications hold great promise, but numerous effects need to be understood and various related problems are in need of solutions in the rich research area of entanglement distribution via optical fibers. What is a qubit? A qubit (quantum bit) is a representation of a particle state, such as the spin direction of an electron or the polarization orientation of a photon. A qubit is the quantum equivalent of a bit in ordinary computing. But where a bit exists in one of two states (1 or 0), a qubit can exist in an arbitrary combination of both states. Physicists describe this as a coherent superposition of two states. This superposition is often represented by a point on a sphere with values 0 and 1 at the sphere poles. Measurements play an important role in quantum physics. Once one reads a qubit (or “performs a measurement” in quantum parlance), the qubit collapses into one of the two possible outcome states, with the probability of the particular state depending on the location of the superposition on the sphere. The result of any particular qubit measurement always remains uncertain until the measurement is performed: quantum mechanics just predicts the probabilities of the outcomes. What is entanglement? Entanglement is a fundamental concept in quantum mechanics. When only two particles are entangled, a measurement performed on one is reflected in the other, even when the two are separated by large distances. (This “spooky action at a distance” bothered Einstein who, along with Boris Podolsky and Nathan Rosen, in a 1935 paper argued that entanglement violated the locality principle, which states that changes performed on one physical system should have no immediate effect on another spatially separated system. Later experiments, however, have verified the nonlocal behavior of entangled photons.) Researchers have learned to entangle atoms, photons, atomic ensembles, superconducting quantum interference devices, and mechanical vibrations.
The majority of experiments are done with light because entangled photons are easier to create and because they preserve their entanglement better than other particles. One drawback of using photons for quantum computing is that photons fly too fast for convenient storage. However, photon speed is not a constraint but an advantage for quantum cryptography. About the author Dr. Misha Brodsky joined AT&T Labs in 2000. His contributions to fiber-optic communications focused on optical transmission systems and the physics of fiber propagation, most notably through his work on polarization effects in fiber-optic networks. More recently, Misha has been working on quantum communications and single-photon detection; his prime research interest is in photon entanglement and entanglement decoherence mechanisms in optical fibers. Dr. Brodsky has authored or co-authored over 70 journal and conference papers, a book chapter, and about two dozen patent applications. He is a topical editor for Optics Letters and has been active on numerous program committees for IEEE Photonics Society and OSA conferences. Dr. Brodsky holds a PhD in Physics from MIT.
<urn:uuid:412d0b9d-56e8-43e0-b8b2-592fb2612956>
CC-MAIN-2017-04
http://www.research.att.com/articles/featured_stories/2010_12/201101_Entangled_photons.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00376-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919217
2,940
3.390625
3
Zapata N.,U.S. Department of Soil and Water | Chalgaf I.,CITA DGA | Nerilli E.,IAMB CIHEAM | Latorre B.,U.S. Department of Soil and Water | And 4 more authors. Computers and Electronics in Agriculture | Year: 2012 This paper presents a real-time, on-farm irrigation scheduling software (RIDECO). The software was been designed for stone fruit orchards in the semiarid conditions of Spain. The characterization of stone fruit crop water requirements for the local conditions and under different irrigation strategies is presented. Meteorological data in the study area is collected daily from the SIAR public network of weather stations in an automated fashion. Subsequently, values of cumulative degree-days are computed to identify the stages of fruit growth and crop development. The software allows performing weekly irrigation schedules under standard, regulated deficit irrigation and water restriction conditions. The irrigation scheduling software stands as a valuable tool for on-farm water resources allocation planning. It can be used to forecast the irrigation water required to meet seasonal meteorological, agronomical and managerial scenarios in stone fruit orchards. RIDECO can also be used to plan deficit irrigation strategies in cases of severe water restrictions. The software can be parameterized to adjust to specific varieties and local farming conditions. A variety of graphs assist irrigation managers in their decisions. © 2012 Elsevier B.V. Source Zapata N.,EEAD CSIC | Nerilli E.,Instituto Agronomico Mediterraneo Of Bari | Martinez-Cob A.,EEAD CSIC | Chalghaf I.,CITA DGA | And 3 more authors. Spanish Journal of Agricultural Research | Year: 2013 Fruit production development is resulting in large commercial orchards with improved water management standards. While the agronomic and economic benefits of regulated deficit irrigation (RDI) have long been established, the local variability in soils and climate and the irrigation system design limits its practical applications. This paper uses a case study approach (a 225 ha stone fruit orchard) to unveil limitations derived from environmental spatial variability and irrigation performance. The spatial variability of soil physical parameters and meteorology in the orchard was characterized, and its implication on crop water requirements was established. Irrigation depths applied during 2004- 2009 were analysed and compared with crop water requirements under standard and RDI strategies. Plant water status was also measured during two irrigation seasons using stem water potential measurements. On-farm wind speed variability amounted to 55%, representing differences of 17% in reference evapotranspiration. During the study seasons, irrigation scheduling evolved towards deficit irrigation; however, the specific traits of RDI in stone fruits were not implemented. RDI implementation was limited by: 1) poor correspondence between environmental variability and irrigation system design; 2) insufficient information on RDI crop water requirements and its on-farm spatial variability within the farm; and 3) low control of the water distribution network. Source Martin-Closas L.,University of Lleida | Costa J.,University of Lleida | Cirujeda A.,CITA DGA | Aibar J.,University of Zaragoza | And 8 more authors. Soil Research | Year: 2016 Degradable materials have been suggested to overcome accumulation in the field of persistent plastic residues associated with the increasing use of polyethylene mulches. 
New degradable materials have been proven successful for increasing crop productivity; however, their degradation in the field has been hardly addressed. A qualitative scale was used in the present study to assess the above-soil and in-soil degradation of degradable mulches during the cropping season. Degradation was determined in three biodegradable plastic mulches (Biofilm, BF; Mater-Bi, MB; Bioflex, BFx), two paper sheet mulches (Saikraft, PSA; MimGreen, PMG) and one oxo-degradable plastic mulch (Enviroplast, EvP). Polyethylene (PE) mulch was used as control. Mulches were tested in five Spanish locations (Castilla-La Mancha, La Rioja, Navarra, Aragón and Catalunya), with three crop seasons of processing tomato. Biodegradable plastic mulches BF and MB degraded more and faster above-soil than paper mulches; among biodegradable mulches BF degraded more than MB, and MB more than BFx. The above-soil degradation of the oxo-degradable mulch EvP was highly dependent on location and crop season, and it degraded more than PE. Main environmental factors triggering above-soil degradation were radiation, rainfall and crop cover. In-soil, paper mulches and BF degraded more and faster than MB, whereas BFx and EvP barely degraded. Environmental factors triggering in-soil degradation during the crop season were rainfall and irrigation water. The effect of soil parameters (organic matter, nutrient availability) on degradation during the cropping season was not evidenced. The qualitative scale used proved convenient for determining mulch field degradation. A visual scale for supporting the qualitative evaluation is provided. In order to standardise parameters and criteria for future studies on field mulching degradation evaluation, a unified degradation qualitative scale is suggested. © CSIRO 2016. Source Finn J.A.,Teagasc | Kirwan L.,Waterford Institute of Technology | Connolly J.,University College Dublin | Sebastia M.T.,University of Lleida | And 30 more authors. Journal of Applied Ecology | Year: 2013 Summary: A coordinated continental-scale field experiment across 31 sites was used to compare the biomass yield of monocultures and four species mixtures associated with intensively managed agricultural grassland systems. To increase complementarity in resource use, each of the four species in the experimental design represented a distinct functional type derived from two levels of each of two functional traits, nitrogen acquisition (N2-fixing legume or nonfixing grass) crossed with temporal development (fast-establishing or temporally persistent). Relative abundances of the four functional types in mixtures were systematically varied at sowing to vary the evenness of the same four species in mixture communities at each site and sown at two levels of seed density. Across multiple years, the total yield (including weed biomass) of the mixtures exceeded that of the average monoculture in >97% of comparisons. It also exceeded that of the best monoculture (transgressive overyielding) in about 60% of sites, with a mean yield ratio of mixture to best-performing monoculture of 1·07 across all sites. Analyses based on yield of sown species only (excluding weed biomass) demonstrated considerably greater transgressive overyielding (significant at about 70% of sites, ratio of mixture to best-performing monoculture = 1·18). Mixtures maintained a resistance to weed invasion over at least 3 years. 
In mixtures, median values indicate <4% of weed biomass in total yield, whereas the median percentage of weeds in monocultures increased from 15% in year 1 to 32% in year 3. Within each year, there was a highly significant relationship (P < 0·0001) between sward evenness and the diversity effect (excess of mixture performance over that predicted from the monoculture performances of component species). At lower evenness values, increases in community evenness resulted in an increased diversity effect, but the diversity effect was not significantly different from the maximum diversity effect across a wide range of higher evenness values. The latter indicates the robustness of the diversity effect to changes in species' relative abundances. Across sites with three complete years of data (24 of the 31 sites), the effect of interactions between the fast-establishing and temporal persistent trait levels of temporal development was highly significant and comparable in magnitude to effects of interactions between N2-fixing and nonfixing trait levels of nitrogen acquisition. Synthesis and applications. The design of grassland mixtures is relevant to farm-level strategies to achieve sustainable intensification. Experimental evidence indicated significant yield benefits of four species agronomic mixtures which yielded more than the highest-yielding monoculture at most sites. The results are relevant for agricultural practice and show how grassland mixtures can be designed to improve resource complementarity, increase yields and reduce weed invasion. The yield benefits were robust to considerable changes in the relative proportions of the four species, which is extremely useful for practical management of grassland swards. © 2013 British Ecological Society. Source Playan E.,EEAD CSIC | Salvador R.,EEAD CSIC | Lopez C.,EEAD CSIC | Lecina S.,CITA DGA | And 2 more authors. Journal of Irrigation and Drainage Engineering | Year: 2014 Farmers continue to show great differences in irrigation water use, even for a given location and crop. Irrigation advisory services have narrowed the gap between scientific knowledge and on-farm scheduling, but their success has been limited. The performance of sprinkler irrigation is greatly affected by factors such as wind speed, whose short-time variability requires tactical adjustments of the irrigation schedule. Mounting energy costs often require the consideration of interday and intraday tariff evolution. Opportunities have arisen that allow these challenges to be addressed through irrigation controllers guided by irrigation and crop simulation models. Remote control systems are often installed in collective pressurized irrigation networks. Agrometeorological information networks are available in regions worldwide. Water users' associations use specialized databases for water management.
Different configurations of irrigation controllers based on simulation models can develop, continuously update, and execute irrigation schedules aiming at maximizing irrigation adequacy and water productivity. Bottlenecks requiring action in the fields of research, development, and innovation are analyzed, with the goal of establishing agendas leading to the implementation and commercial deployment of advanced controllers for solid-set irrigation. © 2013 American Society of Civil Engineers. Source
<urn:uuid:8b0bf4b2-9a4e-4a8a-8117-b9dda558340a>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/cita-dga-1043355/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00286-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91727
2,208
2.515625
3
Will Computers Rival People for Knowledge-Based Jobs? Some question whether computers like Watson will be the first choice for knowledge jobs. A recent "experiment" on the Conan show had a version of IBM's Watson computer try its hand as announcer, leaving some to ask whether computers (like Watson) will be the first choice for filling knowledge-based job positions. As noted in this report on USA Today, IBM announced a partnership with Nuance Communications to develop a "physician's assistant" technology based on Watson. "How did Watson's technology allow it to sift through some 15 trillion bytes of information (one byte is roughly equal to one piece of information, say a letter like "h") to conjure the answers to oddly-worded questions ranging from Beatles songs to Familiar Sayings? Watson actually is a departure from more frequently-trod paths to artificial intelligence, Kelly says, which more often seek to give computers an expert understanding of a very specific area of knowledge (a famous example is a natural language program called Lunar that could answer questions about moon rocks). Such expert systems are 'a dead end,' IBM research executive John Kelly says, not able to handle the broad range of questions and knowledge that Watson or any of its real-world successors would face. "Instead, the computer focuses on sifting through documents to find possible answers and then relates them to each other using methods borrowed from fields as diverse as linguistics and logic to develop confidence in possible correct ones. 'Watson is a triumphant celebration of thirty years of advances in efficiently indexing a large, broad library of texts, and fifty years of advances in statistical machine learning that can train on a huge set of past Jeopardy! example clues and responses,' says machine learning pioneer Doug Lenat of Cycorp Inc. in Austin, Tex."
<urn:uuid:92419391-8f28-4c9d-9a47-5920dc686380>
CC-MAIN-2017-04
http://www.enterpriseappstoday.com/business-intelligence/article.php/422436/Will-Computers-Rival-Poeple-for-KnowledgeBased-Jobs.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00212-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942572
376
2.578125
3
76service – A group that orchestrated attacks using the Gozi Trojan and pioneered a service used to provide clients with subscriptions to stolen data feeds provided by those attacks. Blind Drop – A drop that is well hidden and is designed to run while unattended, until an attacker comes to collect the data. In the case of remote access Trojans, can also refer to a file hidden locally. Bot – A computer infected with software that allows it to be controlled by a remote attacker. Also used to refer to the malware itself which allows that control. Carder – Someone who trades in stolen credit card and cardholder data. Downloader – A small piece of code, usually a single instruction, used in the payload of an exploit to silently fetch a malicious EXE file from the attacker's server. Drop – A clandestine computer or service [such as e-mail account] that collects data stolen by a Trojan. Dump – As a noun, used interchangeably with “drop.” As a verb it means to transfer data onto a machine for analysis, or to discard an exe after reverse engineering. exe – A Windows executable program. In a malware attack, the "exe" refers to the malicious program which infects the victim's PC. Exploit – Code used to take advantage of vulnerabilities in software code and configuration, usually to install malware. Form-grabber – A program that steals information submitted by a user to a web site. (Originally forms were the only way to submit user input to a web server, but now the meaning has changed to encompass any HTTP communication using a POST request.) Gozi – One of a family of Trojans written by Russian RATs known as the HangUp Team, used in a string of attacks orchestrated by a group known as 76service. iFramer – A person who places a malicious IFRAME (in-line frame) tag into web pages, usually on compromised web sites, and then charges malware developers for access to those iFrames as a distribution method for Trojans. Keylogger – A program that logs user input from the keyboard, usually without the user's knowledge or permission. Malware – Any executable code that uses a computer in a way not authorized by its owner. Includes Trojans that install backdoors, spyware, bot clients, keyloggers, worms, viruses, or other malicious code. Packer – A tool used to compress and scramble an EXE file. Used to hide the malicious nature of malware and thwart analysis by researchers. Padonki – A kind of Russian hacker slang in which words, often obscene ones, are purposefully misspelled or bastardized. Pesdato – English transliteration of a Padonki interjection. RAT - Remote Access Trojan, malware that allows an attacker to remotely control an infected PC or "bot". RATs – The nickname for people who write remote access trojans. RBN – The Russian Business Network. An infamous ISP used by primarily Russian malware groups to host malware and drops. The ISP is reportedly run out of Panama and owned by a company operating from the islands of Seychelles, off the eastern coast of Africa. Variously described as "opaque," "dubious," and "shady." Redirect – A feature of HTTP used to automatically forward someone from one web site to another. In the case of malware, redirects are done invisibly, sometimes inside iFrames. Rootkit – Code that plugs into and changes the low-level functions of an operating system. Used by malware to hide itself from users and even the operating system itself.
Torpig – A relatively new family of Trojans representing the latest in malware capabilities, including the ability to hide itself and provide backdoor access for installing other configurations, components, or even other Trojans. Trojan – A program that attempts to hide its malicious code by masquerading as an innocuous program most commonly through the use of a "packer." Variant – Malware that is produced from the same code base (or "family") as a previous version but is different enough to require new signatures for detection by anti-virus and anti-malware products. VXer – Originally, a virus writer. Now refers to anyone involved in the production or use of malware. --Source: SecureWorks, CSO Reporting
<urn:uuid:32926a02-2037-4cf0-b16c-82a7607c761f>
CC-MAIN-2017-04
http://www.csoonline.com/article/2123316/identity-theft-prevention/a-layman-s-glossary-of-malware-terms.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00542-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920012
912
2.9375
3
Protection from USB-based Attacks (USB Sentry) The Universal Serial Bus (USB) specification is a flexible standard that allows a wide range of peripheral devices to communicate with host computers. Unfortunately, host computers have to implicitly trust the USB devices that attach to them. This situation provides attackers with numerous attack vectors to steal data, spread malware, and control a host system. USB Sentry protects host computers from a wide array of USB-based attacks ranging from physical-level electrical attacks to device emulation attacks where a USB device performs additional, unexpected functionality. USB Sentry is a combination of a USB hardware device and host software that act in concert to protect a computer. The USB Sentry hardware device is a configurable USB firewall that only allows explicitly approved devices to communicate with a host computer. USB Sentry is 1-1/2" high, 4" wide and approximately 4" deep.
<urn:uuid:58b05730-e52f-4ff3-acb7-c1cecf8e9dba>
CC-MAIN-2017-04
https://www.atcorp.com/index.php/technologies/cyber-security/cyber-defense-technologies/usb-steward
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00266-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92025
182
2.625
3
In its continuing efforts to stop counterfeiting, the United States Federal Reserve released a newly-designed $100 note with an eye-popping color palette. The newest iteration sports a decade’s worth of security advances, anti-counterfeiting measures such as holograms, raised printing and artifacts that appear to move and change color depending on the viewing angle. Despite these sophisticated security features, currency forgers have a big financial incentive to succeed in manufacturing convincing fakes, as an article at Physics Central points out. In the computing world, experts have long looked to quantum mechanics to make unbreakable encryption. Encryption and decryption are seen as the number one killer apps for quantum computers. What if it were possible to extend quantum technology to make un-copyable currency? In quantum computing, units of information are known as “qubits.” Unlike a classical bit which can only exist in one of two states, quantum mechanics allows a qubit to exist in a superposition of both states at the same time. According to theory from quantum physics, this same principle can also be used to create ultra-secure currency, referred to as quantum money. “All dollar bills have serial numbers on them,” the article notes. “Having a unique identifier on each bill helps law enforcement track and identify counterfeit bills. Quantum dollars would have a second serial number, one made of qubits that can only be read with a quantum computer.” “Whenever someone buys something with quantum money, the clerk runs the bill’s serial number through a central database, checking to make sure that the bill’s qubit code matches the one on file. It’s really a bit more like a check than ordinary cash in this way.” The qubit codes are safe from counterfeiting because the act of measuring qubits would destroy them. This principle is referred to as the “no cloning theorem.” If a counterfeiter attempts to measure the polarizations of the qubits without knowing their value in advance, the code will be changed, making the bill worthless. The author details the process: “To check a bill’s authenticity, the polarized qubits pass through a filter. If the filters match what is on the bill, then the qubits pass through unchanged, but if they’re different, the qubits change to match the filters. The reason that the clerk can measure the qubits without changing them is because by running the serial number first, the computer can pull up the correct filters and scan the bill to make sure it’s legit.” The idea for “quantum money” can be traced to physicist Stephen Wiesner, a graduate student at Columbia University, who worked on quantum information theory in the 1960s and 70s. Although the principles are sound, the idea has been called impractical owing to the fact that the necessary technology does not yet exist. Quantum computing is considered the holy grail for the field of cryptography. Whether or not “true quantum computing” exists is a rather contentious topic, but it’s safe to say that much progress has been made in the last few years by the likes of D-Wave, Google, NASA, and others.
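Wiesner's verification step can be imitated with ordinary code to show why guessing the filters fails. The Python sketch below is a toy classical simulation under stated assumptions — 32 qubits per bill and a forger whose wrong-basis measurements come back random — not a faithful quantum model; its only purpose is to illustrate why the odds of a forgery passing shrink rapidly as more qubits are added.

```python
# Toy classical stand-in for Wiesner-style quantum money verification.
# The basis names, bill length, and disturbance model are illustrative assumptions.
import random

N = 32  # qubits in the bill's secret quantum serial number

def mint():
    bases = [random.choice("+x") for _ in range(N)]  # secret polarization filters
    bits = [random.randint(0, 1) for _ in range(N)]  # secret bit values
    return bases, bits

def counterfeit(bank_bases, bank_bits):
    """A forger must guess each filter; a wrong guess randomizes the measured bit."""
    copied = []
    for basis, bit in zip(bank_bases, bank_bits):
        guess = random.choice("+x")
        copied.append(bit if guess == basis else random.randint(0, 1))
    return copied

def verify(measured_bits, bank_bits):
    """The bank knows the right filters, so only the stored bit values must match."""
    return measured_bits == bank_bits

bases, bits = mint()
print(verify(bits, bits))                      # genuine bill: True
print(verify(counterfeit(bases, bits), bits))  # forgery: almost always False (~0.75**32 chance of passing)
```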
<urn:uuid:f22054ee-bff7-4def-a7df-4d44cc650086>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/01/07/promise-quantum-money/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00266-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939977
674
3.65625
4
What’s so special about the letter S? It’s one of the most frequently used letters in the English language, a regular sponsor of Sesame Street, and is so common that Vanna White automatically selects it for contestants during the Wheel of Fortune’s final round. Pretty standard stuff. But for website managers and enterprise security administrators, and everyone who visits their organizations’ web sites, it’s about to become the most important letter of the alphabet. Soon, you won’t be able to reach many popular websites without adding an “s” at the end of “http” in the address bar. This assures visitors using any web browser that a page leverages the security protocol known as Transport Layer Security (TLS) – formerly Secure Sockets Layer (SSL) – cryptographic protocols that provide communications security over a computer network. Put simply, it shows that encryption is in place between the server and the user’s browser. The SSL protocol is stronger now than ever because of the research and improvements made by member organizations of the Certificate Authority Security Council (CASC), an advocacy group committed to the advancement of web security. HTTPS is an example of the CASC’s on-going efforts to earn the trust of customers and all users and improve internet security. Website managers should be aware of the six key ways this will affect users’ experiences and interactions with their sites: 1. Clear, visible warnings: Web browsers will use visual cues to alert users of non-https connections. For example, Google Chrome will highlight insecure pages with a red slash in the address bar. They will also warn if an insecure page asks for a password or credit card by showing the words “Not Secure”. Firefox plans a similar warning for sites requesting passwords. In the future, both will transition from an information warning to a red triangle which is more noticeable. 2. Access to powerful features: Certain powerful features in Chrome will only be available over https. Services like Geolocation, Device Motion/Orientation, Full screen mode, DRM and more are strictly limited to https connections. Websites that need these features will have to implement SSL/TLS to utilize them. 3. Better, stronger, faster: http2 will replace the long-time standard http. It’s much faster, which enables a more enjoyable and efficient user experience, while also strengthening the user’s and company’s security postures. Chrome, Firefox, Internet Explorer, Safari and Opera will only support http2 over https. So as websites migrate to the speedier http2, they must use SSL/TLS. 4. Leveraging referrer data: Website managers strive to draw visitors from other sites via referrals. Moving forward, seeking referrer data from other sites will require the use of https. Without https, the destination sites won’t know who is coming to their site. 5. New-look Gmail: Users of the popular email client will now see an open lock icon in the Gmail user interface indicating that an insecure connection is in use. Email servers that use certificates to encrypt mail server to mail server data don’t show an open lock and detail the type of encryption used. 6. Everywhere you look: Many sites have already made the transition to https, including Google’s Blogspot and Analytics, Reddit, Flickr, Wikimedia, WordPress, Bitly and Shopify. The U.S. Government requires that all sites under the .gov domain be https by the end of the year.
The move to https requires web site managers to make decisions concerning buying, installing, and using certificates. The CASC recommends acquiring a certificate that is trusted by browsers rather than using a self-signed certificate. Extended Validation (EV) certificates are the gold standard because the organization information they contain is rigorously validated with multiple checks that give a high degree of confidence that the information contained in the certificate is accurate. When purchasing a certificate, look at the number of domains it covers. There are three categories: single domain, multiple domain and wildcard. Single domain certificates are by far the most common, covering a single website such as https://www.example.com. Multiple domain certificates are appropriate for use with multiple related sites that all run on one server. Wildcard certificates support cases where multiple subdomains run on the same server and are used by the same business. Finally, when choosing the key size for the certificate signing request (CSR), the CASC recommends a 2048-bit RSA key. As we head into the frantic holiday shopping and travel season, web site managers must prioritize their efforts to secure their sites with an up-to-date certificate. Nothing will drive a consumer away from your web site and to a competitor’s faster than a red “X” in the browser address bar.
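For readers who want to see what generating such a CSR looks like in practice, here is a minimal sketch using the pyca/cryptography package (a recent version is assumed); the subject names and file names are placeholders, not values from the article.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key, the size recommended above.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR for a single-domain certificate; subject values are placeholders.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Inc"),
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
    ]))
    .sign(key, hashes.SHA256())
)

# Keep the private key safe; the CSR is what gets submitted to the CA.
with open("example.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("example.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```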
<urn:uuid:89555069-0fdc-44d6-a82d-cf09942dc1a9>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/12/16/secure-websites/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00294-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912707
1,009
3.0625
3
Supercomputer techies just want to chill - By Joab Jackson - Jul 09, 2004 The builders of tomorrow's supercomputers will have to find new ways to cool their systems. 'This is a very serious problem,' said Srinidhi Varadarajan, architect of the Terascale Computing Facility at Virginia Polytechnic Institute and State University, in Blacksburg, Va. 'Today's processors produce a phenomenal amount of heat.' One rack of servers can easily be cooled by an air conditioner, but 'when you have a hundred racks, that is not viable,' Varadarajan said at the recent National High Performance Computing and Communications Conference in Newport, R.I. SGI chief technology officer Eng Lim Goh said supercomputers continue to make giant leaps in processing power, but cooling technologies are not advancing as rapidly. As a result, system designers see a growing disparity between the amount of heat generated by newer supercomputers and data centers' ability to keep cool. Newer supercomputer facilities 'feel like wind tunnels,' Goh said. 'People are starting to think about a different medium. Air can only carry so much heat.' Virginia Tech chose a hybrid liquid-air cooling system because the traditional cooling approach would be inadequate due to the data center's restricted space. Varadarajan estimated that the 1,100 Apple G5s in the system produce 3.2 million British thermal units of heat in a 3,000-square-foot space. The usual approach would have forced air up through the perforated floor tiles with multiple fans, Varadarajan said. The computers would suck in the cooler air and blow out the hot air, which ceiling exhaust fans would carry away. But Varadarajan's team calculated the air would have to blow through the facility at 60 mph to adequately cool the G5s. So Virginia Tech picked a hybrid liquid-air setup: Chilled liquid travels through pipes under the facility's raised floor. Fans blow cooled air off the pipes, which carry 750 gallons of liquid per minute. Besides reducing wind, the hybrid system is also less expensive, Varadarajan said. An all-air cooling system would have cost around $5 million to build, far too pricey for the university. The hybrid system cost less than $1 million. Joab Jackson is the senior technology editor for Government Computer News.
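The airflow problem Varadarajan describes can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below treats the article's 3.2 million BTU figure as a per-hour heat load and assumes a 10-degree (Celsius) supply/return temperature difference and standard air properties; those assumptions are mine, not the article's.

```python
# Rough cooling-load arithmetic for the facility described above.
BTU_PER_HOUR = 3.2e6
WATTS_PER_BTU_PER_HOUR = 0.293071      # 1 BTU/h is roughly 0.293 W

heat_load_w = BTU_PER_HOUR * WATTS_PER_BTU_PER_HOUR   # about 938 kW

AIR_DENSITY = 1.2     # kg/m^3, near room conditions (assumed)
AIR_CP = 1005.0       # J/(kg*K), specific heat of air (assumed)
DELTA_T = 10.0        # K between supply and return air (assumed)

mass_flow = heat_load_w / (AIR_CP * DELTA_T)   # kg of air per second
volume_flow = mass_flow / AIR_DENSITY          # m^3 per second
cfm = volume_flow * 2118.88                    # cubic feet per minute

print(f"Heat load: {heat_load_w / 1000:.0f} kW")
print(f"Airflow needed: {volume_flow:.0f} m^3/s (about {cfm:,.0f} CFM)")
```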
<urn:uuid:85290e77-b761-4cf1-b95f-a622ab90143e>
CC-MAIN-2017-04
https://gcn.com/Articles/2004/07/09/Supercomputer-techies-just-want-to-chill.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00110-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936321
501
2.609375
3
The devil is not in these details: Why encryption isn’t evil This feature first appeared in the Winter 2016 issue of Certification Magazine. Click here to get your own print or digital copy. Editor’s Note: This feature was written and published prior to the emergence of the current dispute between Apple and the FBI and therefore does not directly reference those proceedings. In the past few months, deadly terrorist attacks rocked San Bernardino, Calif., and shattered the French capital city of Paris. The technical investigation following both incidents largely focused on questions regarding digital communication and coordination among the attackers using standard encryption protocols to avoid eavesdropping by law enforcement and intelligence organizations. Encryption is already a hot-button topic in cybersecurity. These dramatic breaches of public safety have sparked a worldwide debate regarding the widespread use of encryption, and its role in barring government access to private communications. There’s one bottom-line question: Is encryption a sinister tool being used to serve nefarious ends? Politicians and presidential candidates were quick to condemn the attack, but also used their soapboxes to rail against encryption technology as a tool of terrorism. In a Democratic presidential debate, Hillary Clinton called for “a Manhattan-like project” focused on encryption. Republican presidential candidate John Kasich struck a similar tone in arguing that “we have to solve the encryption problem.” There’s certainly an undertone in the national conversation that encryption is an unwanted technology that facilitates terrorism — and that the government must take action to protect Americans from it. Kasich and Clinton are correct that there is an encryption “problem,” but the problem is not that the technology is available. The real problem is that the technology is not well understood. Many average citizens are surprised to learn that encryption is a part of their everyday life and that the security it provides routinely protects their credit card information, healthcare records and other sensitive data from prying eyes. Encryption is not a problem to be solved: It is a technology to be embraced as a cornerstone of every organization’s information security program. The government itself relies heavily upon encryption technology and spends millions of dollars annually developing new encryption methods. How can government officials decry encryption as a terrorist weapon while simultaneously using it to protect sensitive information? What is encryption? Encryption is, quite simply, a set of mathematical formulas. In their most basic form, encryption algorithms take plaintext messages and use a secret key to transform them into an encrypted form that is unintelligible to anyone who does not have access to the corresponding decryption key. Encryption algorithms are public knowledge. Any university-level computer science student has the skills required to write a small piece of software that implements military-grade encryption technology in a matter of weeks. The government would have as much luck banning encryption as they would banning algebra or physics. What would you think if you learned that your neighbor was using advanced military-grade encryption algorithms to protect files stored on his smartphone or laptop computer? How about if he was using encrypted messaging technology to apply the Advanced Encryption Standard to text messages that he exchanged with others around the world?
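To underline how little exotic machinery is involved, here is a minimal, illustrative sketch of AES-based encryption using Python’s pyca/cryptography package; the message text and key handling are assumptions made for the example, not anything described in this article.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # the secret key
nonce = os.urandom(12)                          # a per-message "number used once"
plaintext = b"Meet me at the library at noon."  # illustrative message

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)

assert recovered == plaintext
print(ciphertext.hex())   # unintelligible to anyone without the key
```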
Does this sound sinister? It’s not. This description could not only easily fit your actual neighbor, but it most likely applies to you as well. Where is encryption used? If you have a laptop computer issued by your employer, it’s more likely than not that the entire hard drive is encrypted to protect the contents from prying eyes. Companies do this as a matter of routine to protect themselves in the event that the device is later lost or stolen. If a hard drive is encrypted, nobody can gain access to the files stored on the drive without having access to the corresponding decryption key, which is usually protected by the laptop user’s password. Do you own an iPhone or Android smartphone? Both devices automatically encrypt all of the information stored on the device for similar reasons. Current versions of iOS and Android prevent anyone other than the phone’s owner from gaining access to the encrypted data. Even if Apple or Google wanted to cooperate with government investigators (or anyone else for that matter), they simply don’t have access to your sensitive information. They designed their operating systems this way on purpose. This level of security protects your data with strong encryption that prevents anyone from gaining unauthorized access. Isn’t that what you expect from your phone or tablet? Have you ever logged onto your bank account online, checked your email over the web or visited the White House website? If you’ve done any of these things, you’ve used the HTTPS protocol to communicate securely with the remote web server. HTTPS uses strong encryption to protect your data from prying eyes while in transit. Yes, that’s right — the White House website requires that citizens visiting its web site use strong encryption to browse the site. Go give it a try. If you type whitehouse.gov into your browser’s address bar, notice that it quickly changes to https://whitehouse.gov. The “s” in “https” indicates that strong encryption is in use. How can government officials claim that the use of encryption is a problem when they force citizens to use it every day? What do politicians want? As with many political conversations, it’s difficult to understand exactly what politicians are calling for when they speak out against encryption technology. Hillary Clinton, when asked how she would address encryption, admitted that, despite viewing encryption as a danger, she doesn’t really know what could be done to neutralize it: “It doesn’t do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after, there must be some way. I don’t know enough about the technology … to be able to say what it is.” FBI Director James Comey has been similarly confusing in his plea for action against encryption technology. In a 2014 speech, he warned listeners that, “Justice may be denied because of a locked phone or an encrypted hard drive.” He went on to say that “We aren’t seeking a backdoor approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law.” Unfortunately, Comey doesn’t provide any technical details on how his so-called “front door” would actually work. By the way, it’s not just the White House website that forces the use of encryption — citizens visiting Director Comey’s FBI.gov site are also forced to use encrypted communications. What’s wrong with these government requests?
The bottom line is that the requests by government officials and political candidates simply aren’t feasible. When pressed for technical details on their plans to subvert (when necessary) or replace encryption technology, they merely assert that technical people can figure it out. What no one says openly is that such an approach is simply not feasible, practical, or even advisable. There is no direct means of providing government officials with access to encrypted communications without fundamentally weakening the technology itself. The National Security Agency tried to develop this type of backdoor back in 1993 when they proposed the Clipper Chip: an encryption device with a government backdoor. That device failed miserably when the technology industry refused to adopt it. Two of the congressmen who attended a hearing where Director Comey made his pitch for a government backdoor later sent him a letter explaining their objections to his proposal. Rep. Will Hurd, R-Texas, and Rep. Ted Lieu, D-Calif., have an interesting shared background — they are both not only congressmen, but also trained computer scientists. In their letter to Comey they wrote: Any vulnerability to encryption or security technology that can be accessed by law enforcement is one that can be exploited by bad actors, such as criminals, spies, and those engaged in economic espionage. It is important to remember that computer code and encryption algorithms are neutral and have no idea if they are being accessed by an FBI Agent, a terrorist, or a hacker. During our oversight hearing, it was clear that none of the witnesses were willing to assert that a backdoor would be completely air-tight and secure. Moreover, demanding special access also opens the door for other governments with fewer civil liberties protections to demand similar backdoors. The congressmen are correct. Encryption is an essential technology for safeguarding sensitive information. The fact that terrorists use encryption technology is not a reason to deprive American citizens and others of the use of secure communications methods. The government must find other means to counter terrorist threats and provide security against terrorism without jeopardizing the security of our private information. Any technology in the wrong hands can be used to bring sinister designs to fruition. That doesn’t make the technology itself corrupt, or mean that no one should ever use it for anything. Fear of terror that prevents technological tools from serving the public good only accomplishes the aims of terrorists. Encryption is not evil.
<urn:uuid:eedf1435-8c89-471c-bdc6-75033c8e31c9>
CC-MAIN-2017-04
http://certmag.com/devil-not-in-these-details-why-encryption-isnt-evil/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00021-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949497
1,850
2.625
3
Job profile: Getting started in network design “Network design professional” is one of the more prestigious job titles in IT. The network design process is an obvious and essential prerequisite for running an infrastructure that can meet the requirements of its users. In this article, we will go through the definition of network design, the design process — including different design tasks — and how to get started and achieve career advancement. Network Designers (designers) fit different network infrastructure components and technologies together to form a network able to satisfy the needs of their client. The outcome is a network topology (physical arrangement of the various network elements, including links, nodes, and so forth) and detailed specifications of different components. This includes a network diagram, protocols, cabling infrastructure, IP addressing scheme, and many more details. The designer creates documents ranging from High Level Design (HLD) to Low Level Design (LLD) according to the level of detail included. HLD documents include descriptions of the infrastructure and its components, and serve as a summary of what the final design will look like. LLD documents generally include much more detail. They typically contain full infrastructure details, including descriptions of the technologies and devices used, down to the level of specific link speeds and cable types and lengths. The LLD is used during the implementation of the design and serves as a reference during different implementation phases. It is also a reference document consulted after the successful implementation of the network. The skill of a network designer lies in choosing the optimum and balanced mix of technologies and components that efficiently fulfill customer requirements. Network design challenges When designing a network, the designer must pay attention to a lot of details attached to the proper operation of the network. The process usually starts by collecting the requirements, or in other words, the expectations for the network. Network capacity in terms of bandwidth available for application traffic, as well as the number of connectivity ports on network devices, are some of the important details that a designer needs to know in order to size different network components. For example, the number of servers to be connected to the network reflects on the number of purchased switches. The available bandwidth is also referred to as network throughput, and is derived from the throughput of the different devices in the network. Availability is another key requirement for network infrastructure. While less critical applications can tolerate lower availability, mission-critical applications may require network availability of up to 99.999 percent. Network availability is reflected in the design cost, as it requires deploying redundant (duplicate) devices to act as Active/Standby or even Active/Active pairs. Network reliability is important because it ensures that network devices can perform as claimed. If not, it’s like buying a car with a top speed of 186 kph, but when you start driving the car, it breaks down before reaching that speed. A final important factor in network design is network scalability. Network infrastructure should be built with room for expansion to accommodate new services. It’s always more cost effective and faster to expand an existing network than to build a new one.
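To make the availability requirement concrete, here is a small illustrative calculation of why redundancy is needed to approach “five nines.” The per-device availability figure and the assumption that failures are independent are mine, not the article’s.

```python
def series(*components):
    """Availability of a chain where every component must be up."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(a, n=2):
    """Availability when at least one of n identical redundant devices must be up."""
    return 1.0 - (1.0 - a) ** n

single_device = 0.999                 # roughly 8.8 hours of downtime per year (assumed)
redundant_pair = parallel(single_device)

print(f"Single device:         {single_device:.5f}")
print(f"Active/Standby pair:   {redundant_pair:.6f}")   # ~99.9999% under these assumptions
print(f"Two devices in series: {series(single_device, single_device):.5f}")
```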
Designing a network is a comprehensive process, and each of the above-mentioned factors should be carefully and completely addressed to design and build an effective and efficient network infrastructure. Getting started in the field There are two schools of thought on this question: One mandates that network designers should have experience working in network operations. The argument is, “If you know how every component of a network operates, you can design a network.” For example, a professional who has experienced the problems of a specific network protocol can substitute another protocol for it during the design. The professional can then use practical knowledge rather than depending on theoretical concepts. This method can equip a professional with extensive hands-on experience and enrich his networking knowledge. The second school of thought holds that, like any IT field, networking can be learned and developed. If you have enough knowledge of the concept or at least can understand it, experience can be substituted by book learning and consulting with more experienced professionals. More likely, the path to becoming a successful designer is a combination of both schools of thought. Regardless of which method you choose to follow, I promise you that it requires a lot of knowledge, especially specific product knowledge, and lots of practice. Helpful materials and certifications IT networking forums and newsletter websites should be part of the daily routine of a network design professional too. Besides staying current on new articles and industry developments, professionals can pursue a number of comprehensive certification programs from well-known vendors. Examples include Cisco’s CCDA/CCDP/CCDE and Juniper’s JNCDA/JNCDS/JNCDP (to be announced soon) network design certification tracks. Another good option is a CNET certification like the CNIDP. It is worth mentioning that when it comes to design, professionals should go for vendor-neutral certifications. The knowledge gained and the exposure to different vendors and products give the designer flexibility in fitting different components into the design. Being a successful network designer It is crucial that network designers be up-to-date on the latest technologies and product portfolios of different equipment vendors. Successful designers rarely rely on one vendor. It is more common to use a mix of vendors when building a network infrastructure. This is especially true for security devices. The professional should know the capabilities of different devices so as to correctly position them within the solution. Using open standard protocols, network devices can be fit together in an orchestrated topology that provides smooth operation. While it can seem tough to follow the rapid pace of innovation in technology, learning new concepts and applying them in designs is a rewarding routine. Vendor workshops, webinars and conferences are good venues for picking up knowledge and tips about technologies and products. Social networking with other professionals also provides knowledge that can add to your professional skills. The more exposure to technologies you get, the more capabilities you have in this field.
<urn:uuid:d78888db-7244-4e20-99a1-476602ebe888>
CC-MAIN-2017-04
http://certmag.com/job-profile-getting-started-network-design/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00021-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941778
1,207
3.046875
3
When it comes to storage problems, no one is exempt. The exponential growth of scientific and technical big data is not only having a major impact on HPC storage infrastructures at the world’s largest organizations, but at small- to medium-sized companies as well. The stumbling blocks at large organizations are highly visible. Over the years the government labs, educational institutions and major enterprises have built up complex and often highly dispersed storage infrastructures. They are characterized by numerous, distributed file systems running on multiple HPC systems. As a result, storage silos have become the norm. Just some of the consequences are limited access to data, high latency, and increased storage, maintenance and retrieval costs. In particular, these IT infrastructures have a difficult time handling either planned or unexpected peak period loads brought on by activities like checkpointing or unanticipated user demand. Site-Wide Storage at NERSC The National Energy Research Scientific Computing Center (NERSC) is an excellent example of how a major institution can solve these thorny storage problems. More than 5,000 scientists use NERSC’s computational facilities every year. They are performing scientific research on as many as 700 topics spanning such fields as solar energy, bioinformatics, fusion science, astrophysics, climate science and more. The Center currently has six state-of-the-art computer systems and advanced storage systems. Included is “Edison,” a Cray XC30 with a peak performance of over two petaflops. Since 2006, NERSC has had to continuously address its storage problems, and recently the pace has quickened. Typically, up to 400 researchers a day from all over the world were using the Center, running hundreds of high-bandwidth applications to access, analyze and share research data. Because the facility relied on multiple different file systems, delivering an optimum balance of capacity and throughput was a major problem. “We were constantly moving data around within the center to ensure we had sufficient storage to handle new project growth while keeping our existing users happy,” says Jason Hick, group leader, storage systems for NERSC. “It took an inordinate amount of time and created an enormous amount of network traffic.” Hick’s team debated the merits of deploying additional storage or moving to a centralized solution designed to meet both present and anticipated future growth. They opted for the latter. Says Hick, “NERSC was a pioneer in moving away from local storage in favor of site-wide Global File systems and consolidated storage architecture.” He adds that performance and efficiency were the primary drivers for adopting a site-wide storage architecture. DDN Storage Solution At the heart of the solution is DataDirect Networks Storage Fusion Architecture® (SFA), which provides all the functionality needed to ingest, analyze and archive big data on a single platform. This approach allows NERSC to deploy a centralized storage capability that can accommodate the requirements of the largest computer system on the network – including peak period bursts – as well as meeting the storage needs of the Center’s other five systems. And when the ultra-powerful NERSC 8 supercomputer “Cori” is installed in mid-2016, it will make full use of the scalable site-wide storage infrastructure. Hick reports that the cost of the centralized infrastructure is 30 percent less than a local file system, with savings running into several hundreds of thousands of dollars.
“Scratch” storage costs have been cut by more than 50 percent. NERSC is just one of several large organizations that have moved to site-wide storage solutions based on DDN technology. Included are the Texas Advanced Computing Center (TACC), Oak Ridge National Laboratory (ORNL) and Los Alamos National Laboratory (LANL). But the benefits of a site-wide storage solution are not the exclusive domain of these big government labs and major institutions. Smaller sites may not have the resources to buy, deploy and manage infrastructure on the scale of a TACC or an ORNL, but they can still enjoy the benefits of site-wide storage. One very successful approach is to “converge” parallel file systems and other applications with storage to create centralized storage building blocks that provide higher performance and lower latency. At the same time, this solution also offers ease of purchase, deployment and management. Dealing with Big Data at the University of Florida The University of Florida is a good example. Its Interdisciplinary Center for Biotechnology Research (ICBR) has been in a rapid growth mode, generating increasing amounts of data as it adds new equipment such as next generation sequencers and cryo-electron microscopy instruments. To handle this growth, ICBR wanted a flexible, low-footprint, and simplified infrastructure that could scale as needed. The Center chose DDN’s converged infrastructure (DDN In-Storage Processing™), which allows users to embed parallel file systems and key applications inside the storage controller. This approach has allowed the ICBR to eliminate data access latency as well as the need for additional servers, cabling, network switches and adapters, while reducing administrative overhead. Balanced storage and faster application burst performance means that big data applications perform at optimal levels. The solution provides the performance and advanced capabilities needed to handle its rapidly growing next generation sequencing projects with their constantly changing application loads. ICBR’s experience shows how a mid-range organization with limited resources can enjoy a satisfactory site-wide storage solution. The Center has deployed an adaptive and customizable architecture for storing, managing and analyzing large collections of distributed data running to billions of files and petabytes of storage across tens of federated data grids. Optimized Storage for All As the University of Florida and the NERSC examples demonstrate, the benefits of site-wide, optimized storage are available to organizations both large and small. DDN, the HPC storage leader, is showing the way.
<urn:uuid:64269b75-d193-4bb7-94f6-9bfbbdb27a7f>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/05/19/tackling-big-data-storage-problems-site-wide-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94626
1,224
2.609375
3
Cloud computing means sharing a computer with other clients of the cloud-service provider. The way that this works is by using virtualization. What is virtualization? Virtualization means taking a physical computer and dividing it up into one or more virtual computers. That means the host operating system is hosting copies of other operating systems on the same machine. The operating system that controls other operating systems is called a “hypervisor.” Think about this: remember (if you are old enough) how IBM mainframe computers worked and still work today. These large and expensive computers were designed to support multiple businesses and applications at once. Access to the machines was sold in timesharing mode, where different companies paid for different amounts of time on the machine. To do that, the IBM mainframe operating system ran different operating systems in different areas of the machine they called “regions.” That means the memory of the machine was divided up so that one region could run, say CICS, and another run CMS, another run UNIX, or any of the many operating systems that IBM supported. They all shared the same memory, and the CPUs were divided up into time slices, which you and I would call “multitasking.” The hypervisor keeps track of what part of the memory belongs to which operating system, and the CPU swaps tasks in and out at breathtaking speed, thus creating the illusion that the computer is doing many things at the same time. In truth the CPU can only do one thing at a time. (Computers can have multiple CPUs, but that does not mean each CPU can do two things at once.) The disk drive storage is shared too, in the sense that the computer manages storage for each operating system instance, while the data for each operating system is kept separate. Computer virtualization means the same thing. The company VMware was first to market with this idea for PC hardware, meaning those that run Intel processors using the x86 architecture. Now they have a competitor, Hyper-V, which is a Microsoft product that used to be called “Windows Server Virtualization.” An open-source hypervisor called Xen is also a competitor. (Open-source means the instructions [source code] used to power these machines are on the internet for anyone to download, study, and contribute changes. Usually different companies provide resources to work on this source code together.) Before we discuss how cloud computing relates to virtualization, it is worth discussing briefly why there has been a trend toward virtualization. If you don’t have a virtual server, you have a physical one. It used to be the case that a company called Sun dominated the market for high-end machines running UNIX, the operating system that powered most large business applications. Their major competitor was Microsoft and Windows. Companies would buy separate servers for the database, the application server, the web server, proxy servers, and Active Directory. All of these cost a lot of money to buy and maintain. Because of virtualization, Sun started losing money and was taken over by Oracle; other specialized hardware vendors have gone out of business as companies moved to virtualization. The same thing is expected to happen to network hardware vendors, as networking too is moving to virtualization. The winners in all this are the PC manufacturers who have moved into building high-end PCs used for virtualization.
(Their PC sales to consumers have fallen off drastically because of tablets and smartphones, but virtualization has breathed new life into these companies, albeit with much less revenue than before.) The high-end PC manufacturers include Compaq, HP, Dell and Lenovo. These machines are said to be “commodity servers,” meaning if you don’t like one company, switch to another, as their products are for the most part the same. Virtualization came first, followed by cloud computing. Companies like Amazon.com were the first to let multiple companies rent space on the thousands of computers that Amazon has in data centers around the world. Subsequent companies started calling this the “cloud,” and the name stuck. Each client pays a subscription fee to tap into the cloud. The idea is that this results in lower costs for the client, because they don’t have to buy their own hardware, and the cloud service provider benefits from economies of scale. The provider can buy computers cheaper than anyone else, because they buy so many; their system administrators support more machines per person than would a smaller company; and they have lower electricity costs for air conditioning per computer, because their data centers are so large. Since the cloud service provider uses virtualized servers, their clients share the same compute resources as other customers. Two different companies operating in the cloud would be using the same physical machine. Virtualization has made all of this possible, and at a lower cost, because it has replaced proprietary hardware, like Sun’s, with commodity hardware.
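The time-slicing idea described above can be illustrated with a toy scheduler. This is a deliberately simplified sketch (guest names, work amounts and the fixed time quantum are all made up); real hypervisors use far more sophisticated CPU scheduling.

```python
from collections import deque

def round_robin(guests, quantum=2):
    """Give each 'guest' a short turn on one CPU until all work is done."""
    queue = deque(guests.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        timeline.append((name, used))
        remaining -= used
        if remaining > 0:
            queue.append((name, remaining))   # back of the line for another slice
    return timeline

# Three virtual machines sharing one physical CPU.
for name, used in round_robin({"guest_db": 5, "guest_web": 3, "guest_mail": 4}):
    print(f"{name} ran for {used} time unit(s)")
```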
<urn:uuid:06873753-207e-4fa6-8954-5942b9103cae>
CC-MAIN-2017-04
http://www.cloudwedge.com/understanding-cloud-computing-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00167-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964688
1,001
3.84375
4
A team of researchers from Binghamton University has been working on a new intrusion detection approach based on monitoring the behavior of systems and spotting when it differs from what is considered normal. The project, titled “Intrusion Detection Systems: Object Access Graphs” and funded by the Air Force Office of Scientific Research, is conducted by doctoral students Patricia Moat and Zachary Birnbaum and research scientist Andrey Dolgikh, who are mentored by Victor Skormin, professor of electrical and computer engineering. They have chosen not to concentrate on detecting malware, as it can change faster than new signatures for it can be created, but on the systems’ behavior. “What we do is take a picture of what your computer is doing, and then we compare a picture of your computer behaving normally to one of an infected computer. Then, we just look at the differences,” Birnbaum said. “From that, we can see if your computer has an infection, what type of infection, and from there you know you’re under attack and you can take action.” These pictures are taken by monitoring the system calls that go hand in hand with every computer operation performed. “System calls accumulated under normal network operation are converted to graph components, and used as part of the IDS normalcy profile,” they explained. They have developed algorithms to find a system normalcy profile, to find anomalous deviations, to recognize previously detected attacks, and a real-time visualization system to present the results. “Our IDS has the ability to instantly adopt changes in the normalcy definition,” they pointed out. “Our results demonstrate that achieving efficient anomaly detection is possible through the intelligent application of graph processing algorithms to system behavioral profiling.” More details about the project can be found here.
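As a rough flavor of what comparing behavioral “pictures” can look like, here is a drastically simplified sketch that compares system-call transitions against a normal profile. It illustrates the general idea only, not the researchers’ graph-based algorithms, and the traces are invented.

```python
from collections import Counter

def transitions(trace):
    """Turn an ordered system-call trace into a bag of call-pair transitions."""
    return Counter(zip(trace, trace[1:]))

def anomaly_score(baseline, observed):
    """Fraction of observed transitions never seen in the normalcy profile."""
    unseen = sum(count for pair, count in observed.items() if pair not in baseline)
    total = sum(observed.values())
    return unseen / total if total else 0.0

# Invented traces; a real profile would be built from live system-call monitoring.
normal_trace  = ["open", "read", "write", "close", "open", "read", "close"]
suspect_trace = ["open", "read", "write", "connect", "send", "close"]

baseline = transitions(normal_trace)
score = anomaly_score(baseline, transitions(suspect_trace))
print(f"Anomaly score: {score:.2f}")   # higher means further from normal behavior
```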
<urn:uuid:94a00d9d-2d4a-42fc-af94-cb2f244a347b>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/04/10/new-ids-project-spots-anomalous-system-behavior/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00157-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947234
384
2.8125
3
In preparation for our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss ARP, IARP, RARP & Proxy ARP. ARP, IARP, RARP, and Proxy ARP? When I first started studying for my CCNA years ago, one of the things that confused me was ARP. Or rather, what ARP did as opposed to Reverse ARP, Inverse ARP, and Proxy ARP! One book would mention ARP without mentioning the other variations, one would mention RARP but not Proxy ARP, and so on. I never forgot how confusing this was to me when I started. To help current CCNA candidates with this confusing topic, let's take a look at each one of these technologies. ARP – Address Resolution Protocol You may well know what ARP does from your networking studies or work on a LAN, but to effectively troubleshoot ARP issues on a WAN, you need to take into account the network devices that may be separating the workstations in question. The basic ARP operation is simple enough. We concentrate on IP addressing a great deal in our studies and our jobs, but it's not enough to have a destination IP address in order to send data; the transmitting device must have a destination MAC address as well. To obtain the unknown Layer Two address when the Layer Three address is known, the sender transmits an ARP Request. This is a Layer Two broadcast, which has a destination address of ff-ff-ff-ff-ff-ff. Since Ethernet is a broadcast medium, every other device on the segment will see it. However, the only device that will answer it is the device with the matching Layer Three address. That device will send an ARP Reply, unicast back to the device that sent the original ARP Request. The sender will then have a MAC address to go with the IP address and can then transmit. There are several network devices that may be between our two hosts, and for the most part, there is no impact on ARP. Since this is Cisco, though, there's gotta be an exception! Let's take a look at how these devices impact ARP. Repeaters and Hubs are Layer One (Physical Layer) devices, and they have no impact on ARP. A repeater's job is simply to regenerate a signal to make it stronger, and a hub is simply a multiport repeater. Therefore, neither a repeater nor a hub has any impact on ARP. Switches are Layer Two devices, so you might think they impact ARP's operation; after all, ARP deals with getting an unknown MAC address to correspond with a known IP address. While that's certainly true, switches don't impact ARP for one simple reason: Switches forward broadcasts out every port except the one the broadcast was originally received on. The ARP Reply will be unicast to the device requesting it, as with the previous example. Now here's the exception — a router. Routers accept broadcasts, but routers will not forward them. For example, consider a PC with the address 10.1.1.10 /16. That host assumes it's on the same physical segment as the device 10.1.2.10 /16, since their IP addresses are both on the same subnet (10.1.0.0 /16). The problem here is that a router separates the two devices, and the router will not forward the ARP broadcast. The Cisco router will answer the ARP Request, however, with the MAC address of the router interface the ARP Request was received on. In this case, the router will respond to the ARP Request with its own E1 interface's MAC address.
When the device at 10.1.1.10 receives this ARP Response, it thinks the MAC address of 10.1.2.10 is 11-11-11-11-11-11. Therefore, the destination IP for traffic destined for the remote host will be 10.1.2.10, but the MAC destination will actually be that of the router's E1 interface. Proxy ARP runs by default on a Cisco 2500 router, but it can be turned off at the interface level with the no ip proxy-arp command. RARP and Inverse ARP Reverse ARP is a lot simpler! RARP obtains a device's IP address when it already knows its own MAC address. (If the device doesn't know its own MAC address, you have bigger problems than RARP!) A separate device, a RARP Server, tells the device what its IP address is in response to the RARP Request. As you can see, RARP and DHCP have a lot in common. Inverse ARP doesn't deal with MAC addresses at all. Inverse ARP dynamically maps local DLCIs to remote IP addresses when you configure Frame Relay. Many organizations prefer to statically create these mappings; you can turn this default behavior off with the interface-level command no frame-relay inverse-arp. We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career.
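To tie the ARP and proxy ARP behavior together, here is a toy simulation of the example above. It is purely illustrative: the host table, prefixes and MAC strings are the example values, and real ARP is implemented by the operating system and router, not by scripts like this.

```python
# Hosts actually present on the local segment.
local_hosts = {"10.1.1.10": "aa-aa-aa-aa-aa-aa"}

# The router's E1 interface faces this segment and proxy-ARPs for remote targets.
ROUTER_E1_MAC = "11-11-11-11-11-11"
SAME_SUBNET_PREFIX = "10.1."        # crude stand-in for the 10.1.0.0 /16 check
LOCAL_SEGMENT_PREFIX = "10.1.1."

def arp_resolve(target_ip):
    """Simulate broadcasting an ARP Request on the local segment."""
    if target_ip in local_hosts:
        return local_hosts[target_ip]             # the real host answers for itself
    if target_ip.startswith(SAME_SUBNET_PREFIX) and not target_ip.startswith(LOCAL_SEGMENT_PREFIX):
        return ROUTER_E1_MAC                       # proxy ARP: the router answers instead
    return None                                    # nobody answers

print(arp_resolve("10.1.1.10"))   # aa-aa-aa-aa-aa-aa (the local host's own MAC)
print(arp_resolve("10.1.2.10"))   # 11-11-11-11-11-11 (the router's E1 interface)
```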
<urn:uuid:f0fa0f89-9567-43b6-9947-bf78eef0d2a8>
CC-MAIN-2017-04
https://www.certificationkits.com/arp-iarp-rarp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00305-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947749
1,136
3.734375
4
Twitter. Netflix. Spotify. If you are one of the hundreds of millions of users on these sites, you may have noticed a disruption in their services last Friday, October 21st. They, among many other high-traffic sites, were the targets of a massive DDoS cyber attack. How were these sites attacked? Through a company called Dyn, an Internet performance company that helps to route internet traffic. With Dyn as their conduit, the hackers caused a significant Internet outage all throughout the United States. It is becoming increasingly clear that these types of DDoS attacks are on the rise and that unsecured Internet of Things (IoT) infrastructure is the hacker’s weapon of choice, a problem that deserves immediate attention from anyone who can take action. A DDoS attack involves overwhelming a web server with so much illegitimate traffic that the server under attack is crippled and unable to respond to legitimate requests. The malware used in the Dyn attack was able to exploit unsecured IoT devices such as DVRs, printers, surveillance cameras, and routers. Once a device is infected, the malware is able to coordinate multiple other devices and use their processing power to form a “botnet” – a network of infected computers that execute the DDoS attack. According to the Dyn security team, they observed tens of millions of individual devices associated with the botnet that carried out the attack. There is no easy way to determine if a particular device on your network is infected with malware or if it has been recruited for malicious botnet activity. There are, however, a number of things you can do right away to help secure your devices and help to prevent future attacks designed to cripple the Internet.
<urn:uuid:ef7ba4e3-ac51-4968-8405-3ac251feadd1>
CC-MAIN-2017-04
https://www.convergint.com/how-to-protect-your-iot-devices-from-malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00543-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960068
342
2.515625
3
A punch down block, also called a cross-connect block, terminating block, or connecting block, is a type of terminal strip that connects one group of wires to another group of wires through a system of metal pegs that the wires are attached to, often used in telecommunications closets that support a LAN (local area network). Punch down blocks are the predecessors to patch panels and were commonly used to support low-bandwidth Ethernet and token-ring networks. The most common punch blocks are the 66 and 110 blocks. They are used to connect station cabling to the trunk cabling that goes from an IDF to the MDF. The 66 block has been widely used for splicing 25 pairs of telephone wires together. 110 blocks connect a punched down wire on one side to pre-connected patch cables with connectors such as RJ-45 or Telco 50-pin on the other side. A 66 block is a type of punch down block used to connect sets of wires in a telephone system. They have been manufactured in three sizes: A, B and M. A and B have six clips in each row while M has only 4. Each row of a 66 block is set up for one pair of wires to be spliced to another pair. However, any pair of clips can be used to connect any two wires. The A block spaced the rows further apart and has been obsolete for many years. The B style is used mainly in distribution panels where several destinations (often 1A2 key telephones) need to connect to the same source. The M blocks are often used to connect a single instrument to such a distribution block. 66 blocks are designed to terminate 22 through 26 AWG solid copper wire. 66 blocks are available pre-assembled with an RJ-21 female connector that accepts a quick connection to a 25-pair cable with a male end. These connections are typically made between the block and the customer premises equipment (CPE). A 110 block is an updated version of the punch down block and is the core part of a connection management system, used to connect wiring for telephone systems, data network wiring, and other low-voltage wiring applications. The 110-type wiring block is made of flame-retardant, injection-molded plastic and serves as the base device on which the cabling system is terminated. The 110 block is designed for 22 through 26 gauge solid wire. This is the termination used on cat5e patch panels, cat6 patch panels and RJ-45 jacks. They are also formed into block-type terminations the size of small 66 blocks. The 110 block is designed for 500 MHz (1 Gb/s) or greater bandwidth. 110 blocks are acceptable for use with AES/EBU digital audio at sample rates greater than 268 kHz as well as gigabit networks and analog audio. The specifications of 110 wiring blocks are as follows: 25-pair 110-type wiring block, 50-pair 110-type wiring block, 110-pair 110-type wiring block, 300-pair 110-type wiring block. The distribution frame package of 110-type wiring blocks should also include 4- or 5-pair connecting blocks, blank labels, a label holder and the base. The 110-type wiring block system uses easy quick-fit plug-in jumper loops which can be simply rearranged, so it provides a cross-connect system that is convenient even for non-technical personnel to manage. A punch down tool is used to force solid wire into the metal slots on the block. Present-day residences typically have phone lines entering the house at a single 66 or 110 block, and then the wiring is spread by on-premises cabling to outlet boxes all over the house in a star topology. Both styles of punch block use a punch down tool to terminate the wires to the block.
To terminate a wire, you place it into the terminal slot and then push it down into contact using the punch down tool. The punch down tool fits around a 66 block terminal or into a 110 block terminal. One side of the blade is sharp to cut the wire off flush; this side is normally marked on the tool with the word "cut". Be sure to have this side oriented to cut off the loose end of the wire and not the end going to the other block. Hide extra cable behind the block in case you ever have to re-terminate a pair, so that you don't have to re-terminate the entire cable. Whatever the dimensions of the punch down tools are, usage is the same. Many tools have a dual blade that can be flipped depending on which style of block is in use.
<urn:uuid:a7fe43cb-9d5a-47c4-b1a4-b7691f472487>
CC-MAIN-2017-04
http://www.fs.com/blog/punch-down-block.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00048-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926644
919
2.546875
3
Reposted with permission from the Global Knowledge UK blog. Tablets are apparently the future of computing – versatile, lightweight, truly portable, and crucially, cool. It seems though that Microsoft is not falling for the marketing hype. They have instead allowed one of their Applied Science teams to invest some serious time and effort into developing an entirely new way of interacting with computers. If you are reading this on a desktop or laptop computer, you will have your keyboard at the front, mouse to the side and 2D screen at the back. Not for much longer though. If Jinha Lee, an MIT Media Lab Ph.D. student and research intern at Microsoft, and Cati Boulanger, a researcher at the company, get their way, your computer will be changing forever. You will have a 3D screen at the front mounted at roughly 45 degrees, to allow you to look down “through” the screen. The keyboard sits behind the screen and the mouse is gone forever! But, simply having a 3D screen is no longer particularly newsworthy. What makes Jinha and Cati’s development so impressive is the way you interact with the system. Using your hands as the controls, you reach under the screen – effectively into the display – and begin manipulating images, using tools, and dragging and dropping content. Imagine the way you interact with your tablet/smartphone, and then you start getting a vague idea of the interaction. But rather than making do with me explaining this great development, you can watch this video and let Jinha Lee demonstrate it. So how does this impact the training world? Well, network installation courses, for example, can become genuine practical exercises, completed virtually. Imagine the benefit to trainer and pupil alike of being able to actually get their hands dirty whilst on a course like Advanced Routing & Switching for Field Engineers. We all know that individuals learn best by “doing”, so to be able to actually learn hardware characteristics from the safety of a screen would surely be of great benefit. After all, should you make a mistake, you can simply turn your hand into a fist and bang the table (as this will surely replace CTRL-Z as the most important shortcut in our lives!) This can allow people to experiment confidently with networking structures, knowing that the worst that can happen is that they have to re-start. I certainly learn better when I can interact with something and get involved; I'm sure many of you do too. So, well done Microsoft, we look forward to Interactive 3D software being made available on Windows 7 later this year!
<urn:uuid:5b8d7000-bde3-4445-a5d7-fbf200576b5c>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/05/02/microsoft-brings-us-truly-interactive-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94804
542
2.84375
3
Unfortunately, it's not always this obvious when your privileged user accounts or data have been hacked. New reports from security investigators are estimating that the recent Adobe breach may in fact be the biggest known breach of all time -- with more than 152 million user accounts stolen. Adobe has stated that many of these were fake user accounts or users with invalid passwords, but from a data security perspective, the scope is still concerning. With today's infrastructure, an unprecedented amount of data can exist in one place, making it a virtual treasure trove. When you look at the cloud, or really, any virtualized infrastructure, you are right to be concerned. Research from Forrester, along with many other surveys, indicates that insiders continue to be the root cause of breaches. Forrester's 2013 data indicates 36% of breaches are from inadvertent misuse of data by insiders, while 25% of breaches are caused by malicious insiders. That's a lot of damage from people whose salaries you pay! Why are virtualized environments different? I've written a fair amount about why you need to be careful of data in public clouds, but what about data inside your firewall? Before server virtualization, networks were physically separated -- with designated servers, software and administrators to oversee them. Security meant keeping the bad guys out with a strong perimeter, segmentation internally to separate sensitive or regulated data, and protecting servers with physical methods like locked doors and video cameras. Over 50% of server workloads are now run in virtual machines, according to IDC estimates cited in a VMware-sponsored whitepaper. Many of the same security measures still apply: you must maintain firewalls, run and update antivirus, patch applications, and so on. But because virtual machines run on a hypervisor, having access to the virtual infrastructure really gives you the 'master key' to everything in the datacenter. You get a significant concentration of risk, and there is no video camera watching what you're doing. In the virtual world, administrators don't even need access to the VM. It's easy for them to take a snapshot, copy the snapshot elsewhere and spin up a copy of the VM and/or modify the disk image to inject new users and passwords. Thus, getting access to data is much easier than it was in the physical world. What should you do? If you are running a virtualized infrastructure and you value your data (or want to avoid a PR nightmare like Adobe has faced in the last few weeks), here are some tips and best practices that you can follow: - Know who is doing what: Make sure administrators are authenticated in a way that allows you to track what they are doing. If you are concerned about one person having too much power, use the 'two-man rule' -- where sensitive operations require the approval of a second person. - Control the environment: In virtualized environments, administrators can do a huge amount of damage intentionally or accidentally. Take the example of Shionogi, a pharmaceutical company that fired an administrator. Months later, in retribution for the firing of a colleague, the admin logged in and deleted all production virtual machines. This simple action killed a multitude of production applications, costing the company $800,000 to remediate and a week of business downtime to recover. Out-of-the-box management tools are made to give you access, but not necessarily securely. Make sure you have the right tools to prevent these catastrophic impacts.
- Have a system of record: If you face audits from internal or regulatory sources, you need to have an audit-ready log that provides definitive information about both what has happened and unsuccessful attempts to access or change resources. - Harden your hypervisors: There are best practices and tools available to harden the hypervisor. Make sure they're locked down to prevent tampering. - Encrypt your virtual machines and their data: You want to be able to prevent someone from copying a VM and spinning it up outside the trusted network. Further, encryption will secure the stored data or copies of the VMs that have been made for backup or disaster recovery. Armed with this knowledge, you can take greater advantage of virtualization, with less risk that your data may end up in the wrong hands.
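As an illustration of the "system of record" point, here is a minimal sketch of a tamper-evident audit log in which each entry chains a hash of the previous one, so silent edits to history become detectable. The field names and events are invented, and a production system would also need secure storage and trusted time sources.

```python
import hashlib
import json
import time

def append(log, actor, action):
    """Add an entry whose digest covers the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute the chain; any altered or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "digest"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

audit_log = []
append(audit_log, "admin1", "snapshot VM finance-db")
append(audit_log, "admin2", "approve snapshot copy")   # second approver, two-man rule
print(verify(audit_log))   # True until someone alters an earlier entry
```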
<urn:uuid:a0fffe38-6f84-48bc-9f64-d41afa3acabc>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475441/cybercrime-hacking/virtualized-environments-real-risk.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941237
866
2.609375
3
There are a number of known -- and certainly unknown -- risks involved in extended space travel, but perhaps the biggest hazard of all is radiation. With no atmosphere in deep space to filter out radiation from the sun and other sources, astronauts face potentially deadly levels of solar particles and cosmic rays. It's because of this and other health risks that I argue for extreme caution in mankind personally exploring space. Why send people on a suicide mission? I've been waiting for scientists to invent some kind of miracle radiation shield from rare, space-agey materials. Now it turns out there's a potential solution to radiation right in front of our noses. From the University of New Hampshire: Space scientists from the University of New Hampshire (UNH) and the Southwest Research Institute (SwRI) report that data gathered by NASA's Lunar Reconnaissance Orbiter (LRO) show lighter materials like plastics provide effective shielding against the radiation hazards faced by astronauts during extended space travel. The finding could help reduce health risks to humans on future missions into deep space. The researchers published their findings in the American Geophysical Union journal Space Weather. While aluminum has been the default material for building spacecraft, it's relatively useless in providing a shield for high-energy cosmic rays. Aluminum also adds a lot of mass. Cary Zeitlin of the SwRI Earth, Oceans, and Space Department at UNH, who is lead author of the report, called it "the first study using observations from space to confirm what has been thought for some time—that plastics and other lightweight materials are pound-for-pound more effective for shielding against cosmic radiation than aluminum." "Shielding can't entirely solve the radiation exposure problem in deep space, but there are clear differences in effectiveness of different materials," Zeitlin said in a statement. And it's not just plastic that would work as a radiation shield. "Anything with high hydrogen content, including water, would work well," Zeitlin said. Water, of course, would be heavy and impractical. But plastic is lightweight and cheap. It would, however, have to prove able to handle the structural rigors of prolonged spaceflight before we start launching it into space with people inside, no matter how many people want to be on a reality TV show.
<urn:uuid:3b71fbfc-e7cb-4ba4-ae06-824bcfa829a3>
CC-MAIN-2017-04
http://www.itworld.com/article/2705969/hardware/plastic-shielding-could-reduce-astronaut-radiation-risk.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00168-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954487
456
3.625
4
4 technologies that transformed government Being in charge of a government agency’s IT programs has never been an easy job. People have watched IT transform the economy, the culture and their personal lives in the past two decades, so they naturally expect a similar swift pace of technology-fueled reinvention from government — all without any missteps or wasting of taxpayer dollars, of course. Unfortunately, despite having invented some world-changing technologies, such as the Internet and the Global Positioning System, the government is often viewed as a technology laggard that is encumbered by outdated attitudes and procurement processes. Comparisons to the private sector are inevitable but mostly unfair. Government IT leaders deal with unique challenges and responsibilities when it comes to buying and deploying IT, including organizations that drive their own parochial IT agendas, project funding that is dependent on annual re-approval, and myriad regulations that dictate how agencies plan, develop and manage their IT systems. In the stories that follow, we take a closer look at some world-changing technology developments of the past 25 years, including the Internet, a game changer if there ever was one, and GPS, which has revolutionized the way we interact with our world and underscored the power of place. Now the demand for mobile technology and apps, with all the inherent security challenges, is driving a new revolution in employee productivity and public interaction, whether agencies are ready or not. Fortunately, the government has kept pace with many of those changes by gradually moving away from custom-built systems to commercial off-the-shelf technology. That shift has been accompanied by changes in policy that further streamlined the procurement process and gave agencies access to commodity products wrapped in solutions tailored to their specific needs. All those achievements face challenges, but that is only natural as technologies continually evolve to meet the government’s and the public’s ever-changing needs and expectations. NEXT: Government at your fingertips How the birth of the Internet enabled e-government The Internet has changed the way the whole world does business, so it is no wonder that it has transformed — and is still transforming — the way the government delivers services to the public, buys products and shares information. The Defense Department developed the Internet’s predecessor, the Advanced Research Projects Agency Network, in the 1960s and 1970s as a way for its university partners and research labs to communicate. By 1996, many civilian agencies were flocking to the Internet, notably the General Services Administration, which became one of the first to give Internet access to all its employees. 1996 was the year that the Clinger-Cohen Act effectively ended GSA’s reign as a mandated supplier to the government, so the agency was looking for ways to improve its operations and offer better services to its agency customers. Acting GSA Commissioner David Barram, a 24-year veteran of Silicon Valley technology companies, said at the time that the Internet would be a key to GSA’s future competitiveness. “Some people did still wonder what anyone in GSA would need it for,” said Bob Woods, president of Topside Consulting Group and former commissioner of GSA’s Federal Technology Service. “But the Internet’s communications potential quickly became apparent.” GSA’s leaders were hardly alone in their assessment. 
The National Institutes of Health set up a virtual store in 1996 that allowed its employees to shop for computer products over the Internet. Many agencies had been using electronic data interchange for years to conduct business, but those systems relied on esoteric back-office software and proprietary networks controlled by procurement specialists. By comparison, the emerging World Wide Web and the online storefronts it enabled were democratizing e-commerce. The Internet’s increasing popularity also gave agencies a new way to interact with and serve the public. As soon as they began going online, agencies established websites to provide basic information and government data to the public and later added a variety of electronic services. The e-government investments have paid off, especially in recent times when overall trust in government has taken a hit. For example, taxpayers who file their returns electronically give the Internal Revenue Service a fairly high score on the American Customer Satisfaction Index: 78 out of 100 versus 57 for those who file on paper. “ACSI results confirm that the promotion of e-government initiatives is not only a worthwhile pursuit but is one that will likely continue to alter the landscape of government,” said Claes Fornell, ACSI’s founder. However, security and privacy concerns continue to be major hurdles for the government’s expanded use of the Internet, particularly in the era of cloud computing. “I think the government has done a fairly good job in enhancing things to do with [agency use of the] Internet from a bureaucratic perspective,” said Rick Nelson, director of the Homeland Security and Counterterrorism Program at the Center for Strategic and International Studies. “But there will be this constant tension going forward about adopting technologies because of security concerns.” NEXT: The power of place How a technology developed for the Cold War permeated government operations The Global Positioning System is pervasive in today’s government operations, whether it’s supporting surveillance of the country’s borders, disaster response or critical functions on the battlefield. And it plays a role in a variety of products and services the public has enthusiastically adopted. That widespread use of the government-developed, satellite-based navigation system is a far cry from its origins as a highly secret, specialized and expensive asset, conceived during the Cold War as a means to improve the accuracy of the country’s nuclear defenses and other military capabilities. Over time, the government opened the system to civilian and public use, and GPS — and the parallel developments of publicly available, high-resolution satellite imagery and geographic information systems to manage all that data — has fundamentally changed people’s ability to understand and interact with the world around them. “We’ve had this explosion of 'the power of place,’ provided by the ubiquity of GPS and the availability of precise geospatial information,” said Keith Masback, president of the U.S. Geospatial Intelligence Foundation. “All members of the federal government are interacting with their IT systems and their data differently because they’re geo-enabled and enabled with precise location information.” That capacity is being applied to a multitude of government functions, many of them vital to both routine and emergency operations. 
For example, GPS and satellite imagery were central to the success of the decennial census in 2010 and are used every day in law enforcement, air traffic control, agriculture and emergency response. The technology has also proved invaluable in disaster response. “The Haiti earthquake and hurricanes Katrina and Rita…really brought everything to bear,” Masback said. “We had crowdsourcing of critical information that was enabled by GPS. Those were major turning points.” GPS is also playing a key role in the comprehensive overhaul of records at Arlington National Cemetery. Two years after allegations of gross mismanagement surfaced, officials are using GPS-based tools to digitize and organize operations — and enhance the visitor’s experience. “Arlington is now able to visualize operations across 624 acres, in real time, to understand what’s occurring at the cemetery,” said Maj. Nicholas Miller, the cemetery’s CIO. “We’ve transformed into a GIS-managed operation.” But there are challenges for the future of GPS and satellite imagery. The heavy dependence on the systems heightens existing and emerging vulnerabilities and complexities. Current satellite constellations are aging, and a number of policy-related issues threaten progress. Furthermore, geospatial systems in development in other countries could introduce interoperability challenges and fragment the supporting civilian industry. “We have to understand we’ve become reliant on GPS, and we’ve set the global standard: We built it, we launched it, we maintain it,” Masback said. He noted that Europe and China are developing their own satellite navigation systems and added, “but other countries are cognizant of the vulnerability that comes with having the keys to the kingdom. These are going to be different approaches to GPS.… How is that going to impact us?” NEXT: Better acquisition with commodity IT How the shift from custom-built to ready-made commercial products has streamlined federal acquisition The shift away from proprietary and custom-built systems to off-the-shelf hardware and software has had a significant impact not only on what the government buys but also how it buys. Although government has not achieved the level of gains that private industry has, commodity computing in government has had similar benefits: lower hardware costs, greater efficiency and rapid innovation in applications. The road to the broad commoditization of computing in government began with the development of the Unix operating system, said Tim Hoechst, chief technology officer at Agilex. He believes that Unix’s ability to run on a number of platforms led to companies competing to build cheaper systems that could host Unix. The next phase came on the desktop as first DOS and then Microsoft Windows became the standard for most computing purposes, and machines based on the Wintel architecture became pervasive. Procurement reform followed closely on the heels of commodity PCs, and agencywide contracts made acquisition even simpler. With the passage of the Federal Acquisition Streamlining Act of 1994 and then the Clinger-Cohen Act of 1996, the government truly became a buyer of commercial IT, said Larry Allen, president of Allen Federal Business Partners. Those laws were an acknowledgment that the government was no longer the major market driver in the development of IT systems, he added. The innovations of the commercial market had outstripped the government's ability to keep up, especially given its arcane laws. 
The reforms enabled agencies to buy the same technologies as their commercial counterparts. Costs dropped and competition accelerated as companies sought to establish themselves in a newly defined market. The government was freer to buy and companies were freer to offer commodity-like solutions. Once the rules came down and a firm preference for commercial products was established, buyers and sellers alike flocked to the GSA Schedules program to take advantage of its commercial offerings, reduced procurement lead times and streamlined competitions, Allen said. Commoditization further entrenched itself when government invested heavily in client/server computing, Hoechst said. That's when people realized they could virtualize their back-end systems with lots of cheap, commodity processors. "In the late 1990s and early 2000s, they found they could use racks full of low-cost blades they could buy from a range of suppliers as long as they could run the operating systems and applications they wanted," he said. "Linux was a big driver for this." When GSA added IT services to its Schedules program, the growth accelerated, Allen said. It allowed federal buyers to obtain commodity products wrapped in tailored solutions. The service offerings helped differentiate many suppliers from one another, and the ease of use offered by the commercial nature of the solutions made the Schedules very popular with buyers. Where once the market was defined by government specifications and obsolete rules, it was now driven by commercial market trends and fewer rules, which allowed for more competitors and faster acquisitions. NEXT: The perils, and promise, of mobility How mobile technology is upending the workplace and remapping the security landscape Mobile technology isn't exactly new. Portable computers and basic cell phones became popular in the 1990s, and though the bulky early devices were not always the most convenient to cart around or put in a pocket, they marked the breakout of computing and communications from the confines of the office. Now the popular BlackBerry and Palm devices of the early days have been supplanted by Apple iPhones and Android smart phones — and tablet PCs threaten to overtake laptops — thanks to a wave of innovation in the mobile technology market and broadband networking in the past several years. However, almost from the moment the first federal manager accessed his or her e-mail on a BlackBerry or took a laptop PC home or on the road, agencies have struggled to keep up with security challenges and users' growing expectations for the technology. Meanwhile, citizens increasingly expect to access government information and services via portable devices. The huge surge in popularity is pressuring agencies to find secure ways to incorporate mobile devices into the enterprise — from "bring your own device" policies to Federal CIO Steven VanRoekel's new digital strategy, touted as a much-needed blueprint for securing and managing mobile devices governmentwide. "Mobility is not growing from our needs as an enterprise," said Simon Szykman, CIO at the Commerce Department, at a recent mobility event. "It is being thrust upon us." Those security concerns came to a head in 2006, when a laptop computer and an external hard drive containing personal information on 26 million veterans and active-duty military personnel were stolen from the home of a Veterans Affairs Department data analyst.
It was the largest information security breach in the government’s history, and VA later agreed to pay $20 million to settle a class-action lawsuit brought on behalf of the people whose personal information had potentially been compromised. That theft has had “a lasting impact that has been substantial and sustained within the VA,” CIO Roger Baker said. It was also a wakeup call for the rest of the government. Because the portability that makes the new devices so popular is also what makes them so vulnerable, new approaches focus on securing data, with one promising solution being to use mobile devices as secured thin clients that access applications in the cloud or on agency servers rather than on the local hard drive. “With the emergence of the federal digital strategy, we have identified the problems, and the next step is working together to resolve the problems,” said Tom Suder, president of Mobilegov and co-chairman of the Advanced Mobility Working Group at the American Council for Technology/Industry Advisory Council. “I’m very optimistic. There is a good group of government people leading the effort, and they are being very proactive.”
<urn:uuid:c17f2558-9be3-450b-b316-93e7572ceeef>
CC-MAIN-2017-04
https://fcw.com/articles/2012/06/15/feat-tech-intro.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00076-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960269
2,962
2.71875
3
The population of North Atlantic right whales has slowly crept up from about 300 in 1992 to about 500 in 2010. But a study that appeared this month in the journal Frontiers in Marine Science said the number of baby right whales born every year has declined by nearly 40 percent since 2010. Study author Scott Kraus, a scientist with the New England Aquarium in Boston who worked on the study, said the whales' population suffers even when they survive entanglements in fishing gear. He said data suggest those entanglements have long-term negative physical and reproductive effects on them. "They are carrying heavy gear around, and they can't move as fast or they can't feed as effectively," Kraus told The Associated Press in an interview. "And it looks like it affects their ability to reproduce because it means they can't put on enough fat to have a baby." Entanglements have surpassed ship strikes as a leading danger to right whales in recent years. Forty-four percent of diagnosed right whale deaths were due to ship strikes and 35 percent were due to entanglements from 1970 to 2009, the study said. From 2010 to 2015, 15 percent of diagnosed deaths were due to ship strikes and 85 percent were due to entanglements, it said. There is reason to believe the entanglements could harm conservation efforts despite recent positive signs on the whales' recovery, Kraus said. Researchers said earlier this year that they were beginning to see more of the whales in Cape Cod Bay, and that was a good sign. Stormy Mayo, a senior scientist at the Center for Coastal Studies in Provincetown, said the drive to make fishing gear safer for the whales could be key to saving them. "There's a great deal of work being done to try to change the configurations of various kinds of fishing gear or the methods of fishing to reduce entanglement," he told the AP. North Atlantic right whales are among the most endangered species of whales in the world. They spend the warm months feeding in areas off the Northeastern states and Canada and spend the winter off Southern states, where they give birth. They are called right whales because they were hunted relentlessly during the whaling era, when they were considered the "right" whale to hunt because they were slow and floated when killed. Explore further: Fishermen want humpback whales off endangered list More information: Scott D. Kraus et al, Recent Scientific Publications Cast Doubt on North Atlantic Right Whale Future, Frontiers in Marine Science (2016). DOI: 10.3389/fmars.2016.00137 Holliday E.B.,University of Houston | Yang G.,University of South Florida | Jagsi R.,University of Michigan | Hoffman K.E.,University of Houston | And 3 more authors. International Journal of Radiation Oncology Biology Physics | Year: 2015 Purpose: To evaluate characteristics associated with higher rates of acceptance for original manuscripts submitted for publication to the International Journal of Radiation Oncology • Biology • Physics (IJROBP) and describe the fate of rejected manuscripts. Methods and Materials: Manuscripts submitted to the IJROBP from May 1, 2010, to August 31, 2010, and May 1, 2012, to August 31, 2012, were evaluated for author demographics and acceptance status. A PubMed search was performed for each IJROBP-rejected manuscript to ascertain whether the manuscript was ultimately published elsewhere. The Impact Factor of the accepting journal and the number of citations of the published manuscript were also collected. 
Results: Of the 500 included manuscripts, 172 (34.4%) were accepted and 328 (65.6%) were rejected. There was no significant difference in acceptance rates according to gender or degree of the submitting author, but there were significant differences seen based on the submitting author's country, rank, and h-index. On multivariate analysis, earlier year submitted (P<.0001) and higher author h-index (P=.006) remained significantly associated with acceptance into the IJROBP. Two hundred thirty-five IJROBP-rejected manuscripts (71.7%) were ultimately published in a PubMed-listed journal as of July 2014. There were no significant differences in any submitting author characteristics. Journals accepting IJROBP-rejected manuscripts had a lower median [interquartile range] 2013 impact factor compared with the IJROBP (2.45 [1.53-3.71] vs 4.176). The IJROBP-rejected manuscripts ultimately published elsewhere had a lower median [interquartile range] number of citations (1 [0-4] vs 6 [2-11]; P<.001), which persisted on multivariate analysis. Conclusions: The acceptance rate for manuscripts submitted to the IJROBP is approximately one-third, and approximately 70% of rejected manuscripts are ultimately published in other PubMed-listed journals, but these ultimate-destination journals usually have a lower impact factor, leading to fewer citations and overall visibility. © 2015 Elsevier Inc. Source Jagsi R.,University of Michigan | Bennett K.E.,Scientific Publications | Griffith K.A.,University of Michigan | Decastro R.,University of Michigan | And 3 more authors. International Journal of Radiation Oncology Biology Physics | Year: 2014 Purpose Peer reviewers' knowledge of author identity may influence review content, quality, and recommendations. Therefore, the International Journal of Radiation Oncology, Biology, Physics ("Red Journal") implemented double-blinded peer review in 2011. Given the relatively small size of the specialty and the high frequency of preliminary abstract presentations, we sought to evaluate attitudes, the efficacy of blinding, and the potential impact on the disposition of submissions. Methods and Materials In May through August 2012, all Red Journal reviewers and 1 author per manuscript completed questionnaires regarding demographics, attitudes, and perceptions of success of blinding. We also evaluated correlates of the outcomes of peer review. Results Questionnaires were received from 408 authors and 519 reviewers (100%). The majority of respondents favored double blinding; 6% of authors and 13% of reviewers disagreed that double blinding should continue in the Red Journal. In all, 50% of the reviewers did not suspect the identity of the author of the paper that they reviewed; 19% of reviewers believed that they could identify the author(s), and 31% suspected that they could. Similarly, 23% believed that they knew the institution(s) from which the paper originated, and 34% suspected that they did. Among those who at least suspected author identity, 42% indicated that prior presentations served as a clue, and 57% indicated that literature referenced did so. Of those who at least suspected origin and provided details (n=133), 13% were entirely incorrect. Rejection was more common in 2012 than 2011, and submissions from last authors with higher H-indices (>21) were more likely to survive initial review, without evidence of interactions between submission year and author gender or H-index. 
Conclusions In a relatively small specialty in which preliminary research presentations are common and occur in a limited number of venues, reviewers are often familiar with research findings and suspect author identity even when manuscript review is blinded. Nevertheless, blinding appears to be effective in many cases, and support for continuing blinding was strong. © 2014 Elsevier Inc. All rights reserved. Source
<urn:uuid:fc7cf1d9-f1b3-4ae6-9be1-2f7ccf42abab>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/scientific-publications-1623291/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00196-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960418
1,509
3.5
4
Bringing Laptops to the Third World In First World nations such as the United States and the United Kingdom, computers are ubiquitous in schools and have been for some time. In Third World nations, however, this does not yet hold true — students in developing countries might complete an academic career without much or any interaction with a computer, putting these nations at a tremendous disadvantage in competing globally in the digital age. Nicholas Negroponte, Massachusetts Institute of Technology Media Laboratory chairman emeritus, is acting to change this with his One Laptop per Child program, which launched more than two years ago. It has since been established as an independent nonprofit. One Laptop per Child has developed a machine called the XO, billed as a “$100 laptop.” The organization is acting to put the XO in the hands of children in developing countries such as Argentina, Brazil, Libya, Nigeria, Pakistan, Thailand and Uruguay. Each country gets versions programmed specifically to its native languages. Manufactured by Quanta Computer Inc., the XO is a small, white unit with a green keyboard and framework, and it comes with a manually operated battery charger. When turned on, users are greeted by a screen with a stick figure icon in its center, which represents themselves. The figure is surrounded by a ring populated with icons for programs running on the machine. In this way, the XO’s operating system escapes the enforcement of a computer organized by files and folders. In fact, the machine has no hard drive. It does, however, feature three USB ports and headphone and microphone jacks, as well as an internal microphone and dual internal speakers. The keyboard is a sealed rubber membrane, accompanied by a touchpad and cursor-control keys. Because it’s intended for use by students who might be having their first experience with a computer when they pick it up, it’s designed to function as an organized presentation of programs as tools for learning, creating and communicating rather than merely working. Toward that end, the XO is Wi-Fi interactive. During classroom operation, end-users see other stick figures in different colors appearing on their screen, which represent other students in the vicinity. Moving the computer’s cursor to these figures produces students’ profiles, and from there, they can chat or work together on projects. On One Laptop per Child’s Web site (www.laptop.org), Negroponte discusses why it is important for students in developing countries to have their own computers. “One does not think of community pencils — kids have their own,” Negroponte stated. “They are tools to think with, sufficiently inexpensive to be used for work and play, drawing, writing and mathematics.” A computer, Negroponte points out, is the same thing but far more powerful. “Laptops are both a window and a tool, a window into the world and a tool with which to think,” he stated. “They are a wonderful way for all children to learn through independent interaction and exploration.” Ninety percent of the XO’s programming was taken from code available in the open-source community, which is one of the reasons the machine’s cost is so low. Chris Blizzard of Red Hat Inc. served as lead software integrator on the project, and he said he doesn’t think One Laptop per Child would have been possible without the availability of open-source software. “Open-source software has enabled more and more people to participate in IT all over the globe,” Blizzard said. 
“These technologies are used from San Francisco to Singapore to Mumbai because anyone can acquire and use it without permission. It’s possible that One Laptop per Child could start to move that from the back office into people’s homes.” – Daniel Margolis, firstname.lastname@example.org
<urn:uuid:f779bb54-43d3-428f-9d64-421d36914435>
CC-MAIN-2017-04
http://certmag.com/bringing-laptops-to-the-third-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960365
823
2.96875
3
Tornado Outbreaks Strain Emergency Managers / August 9, 2011 From April 25 to 28, 321 people were killed during a devastating tornado outbreak in the South. Storms first hit Arkansas and Louisiana, pounded Mississippi and Alabama, and continued through Georgia and Tennessee, all the way up to Virginia. On April 27 alone, 314 people were killed — more tornado deaths in a single day than on any day since 1932. About a month later, a tornado hit Joplin, Mo., and is estimated to have killed more than 130 people. The photo above shows volunteers and debris filling the streets of Joplin after the EF-5 tornado touched down on May 22. To read more about this spring's deadly tornado outbreak, visit Emergency Management magazine's website. Photo courtesy of FEMA/Steve Zumwalt
<urn:uuid:0678aec5-d7f7-45c7-a874-47d7d7437e79>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Tornado-Outbreaks-Strain-Emergency-Managers-08092011.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950921
171
2.90625
3
Social engineering attacks and the associated effectiveness of each attack

Social engineering includes attacks that are carried out in person and may involve physical effort. Many attacks fall under this umbrella, and each has factors that make it effective. Some work best when the victim is in a hurry and acts without thinking; others succeed simply because the victim does not know the attack exists and is careless about it, which lets the attacker take advantage of the situation. Attacks that are used less often than the common ones are also harder to recognize, so the victim may never realize a trap has been set. Here are some attack techniques and the factors that make each of them effective.

Shoulder surfing

Shoulder surfing relies on direct observation: literally looking over someone's shoulder to gather information such as passwords, codes, or PINs. The victim usually does not realize they are being watched, and only people with sharp situational awareness will sense it. For example, someone working on a laptop in a cafe may have their Facebook password observed as they type it, along with the email address visible on screen.

Dumpster diving

When people are done with something, they throw it in the trash, and that habit can be dangerous. An attacker can go through a person's or an organization's discarded material and recover information that was never supposed to get out, for instance a receipt left behind at an ATM or tossed in a bin outside a bank. In the cyber world, the same idea applies to what people post online: an attacker can quietly watch someone's activity and postings and learn their likes and dislikes, activities, job position, and schedules. It also applies to the system's own trash, the Recycle Bin, since people rarely empty it immediately after deleting something. A related technique is gaining a user's confidence over the internet and then luring them into revealing important information, so personal details should never be shared with someone met only online.

Tailgating

As the name suggests, tailgating means physically following someone into places the follower is not allowed to be, such as restricted sites. In security terms it compromises physical security by slipping through a door that exists precisely to keep intruders at arm's length: an unauthorized person gets in by following someone who is authorized. Staying aware of your surroundings makes it much harder for anyone to abuse the access privileges you hold.

Nothing in this world is perfect, and that includes security systems.
No matter how strongly a security system is claimed to be flawless, there is always a weak point where it can be hit hard. Even a well-built system has a soft part: the people. In any system it is beyond doubt that people are the soft targets, and attackers increasingly turn them into the weapon itself.

Impersonation

Impersonation is one of the more damaging social engineering methods. It is used to gain access to a network or a system so that fraud can be committed or an identity stolen, and it differs from other forms of attack in that it works through people directly rather than through email or the phone. The social engineer plays the role of someone the user knows or expects to deal with; the user trusts that person, and fooling them into granting access to data becomes easy. The whole setup rests on the assumption that people believe others are who they claim to be, so they do not question power and authority, and the impostor can play the role with little resistance. The victim is manipulated consciously into giving up information without realizing that a security breach is under way, although a convincing performance takes considerable preparation. Some social engineers now prefer email or phone calls to appearing in person, and either way the victim may never learn that an impersonator was involved.

Hoaxes

Among the many threats circulating online are virus hoaxes. A hoax can be quite destructive because it conditions users to ignore virus warnings, which then makes them an easy target when a genuinely damaging virus arrives. If you receive a warning message about a virus, check whether it is a known hoax before acting on it. A related tip: do not open or download email attachments in a hurry, since malicious content can hide in them. Most services now scan email and attachments, but you can never be certain, so even messages from known senders should be opened with care.

Whaling

Whaling is a kind of phishing attack designed specifically for executives and people in senior positions in a company. These attacks are well planned and carefully crafted, which makes them hard to catch.

Phishing

Phishing is an attempt to obtain sensitive information such as usernames, passwords, and credit card details. The communication typically pretends to come from popular social websites, auction sites, online payment services, banks, or even IT administrators, and is used to lure unsuspecting people. Phishing emails often contain links that lead to malware, and the attacks are commonly carried out through email spoofing and instant messaging.
A victim might open such an email, click the link, and land on a site that looks and feels almost exactly like the original website. The resemblance can be close enough to fool anyone, so the victim types in their credentials to log in, and the username and password are stored in the attacker's database. Phishing is a social engineering method built on deception, and it does serious damage to the overall state of web security. The many reported phishing incidents have prompted countermeasures including user training, technical security controls, legislation, and public awareness, yet phishing remains a continuous threat that grows by the day. Popular social media sites such as Twitter and Facebook have pushed the risk even higher, because attackers use those platforms to launch attacks and can reach a target wherever they are. The worst aspect of the technique is that it trades on trust: the victim has no easy way to tell whether the site in front of them is real, and once they are fooled the attacker gains access to personal information such as usernames, credit card numbers, security codes, and passwords. The name, as you might guess, was derived from the word "fishing."

Principles (reasons for effectiveness)

Authority

These attacks work by taking authority away from the user and putting it in the attacker's hands; in some cases the attacker simply borrows that authority until the data or the money has been transferred. Several levels of intimidation can be involved. In the impersonation case, the attack succeeds by making the victim believe the attacker is a genuine, trustworthy person; phishing does the same thing by presenting a fake website that mirrors the original. The victim may also be unable to find any social proof of who carried out the attack or where the data went, so data can be stolen without leaving a social trace. Some attacks, such as virus hoaxes, are uncommon enough that people do not know about them and never prepare for them, and many people have never heard of phishing at all; that lack of knowledge is what lets attackers into their files, and the result is data loss. The attacks themselves are quick, and many lure people with offers that are about to expire, so victims act out of urgency and overlook the small discrepancies that could have saved them from being deceived. Familiarity matters as well: someone who does not recognize the terms and methods of an attack can be trapped easily and lose their data, while someone who does can usually avoid it. Finally, there is something especially insidious about the way these attacks project trust.
In phishing, for example, the fake site presents itself as the real one, and the victim may not notice any difference; with hoaxes, the warnings come and go often enough that they start to look routine. Trust builds up, and that trust is exactly what makes the attack succeed. In short, attackers have many social engineering tools at their disposal, so users should learn to recognize all of them and prepare accordingly. It can be hard to tell that an attack is under way, but someone who is familiar with these techniques can usually avoid them and keep their data secure.
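To make the fake-website problem a bit more concrete, here is a minimal sketch of one heuristic a defender might apply: flag links whose domain closely resembles, but does not exactly match, a domain the organization trusts. The trusted list and the similarity threshold are invented for the example, and a heuristic like this is only one small layer, not a complete anti-phishing defense.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical list of domains the organization legitimately uses.
TRUSTED_DOMAINS = ["example.com", "mail.example.com", "bank-example.com"]

def domain_of(url):
    """Extract the hostname portion of a URL, lowercased."""
    return (urlparse(url).hostname or "").lower()

def looks_like_phish(url, threshold=0.8):
    """Flag URLs whose domain closely resembles, but does not match, a trusted one."""
    host = domain_of(url)
    if host in TRUSTED_DOMAINS:
        return False  # exact match: treat as legitimate
    for good in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, host, good).ratio()
        if similarity >= threshold:
            return True  # near-miss: classic lookalike domain
    return False  # unrelated domain; other checks would be needed

if __name__ == "__main__":
    print(looks_like_phish("https://examp1e.com/login"))      # True: lookalike
    print(looks_like_phish("https://example.com/login"))      # False: exact match
    print(looks_like_phish("https://totally-unrelated.org"))  # False: not similar
```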
<urn:uuid:be15193e-0b1c-478a-9b63-53e60e2a0a64>
CC-MAIN-2017-04
https://www.examcollection.com/certification-training/security-plus-social-engineering-attacks-associated-effectiveness-with-each-attack.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00498-ip-10-171-10-70.ec2.internal.warc.gz
en
0.981561
2,259
2.8125
3
A network of video cameras, melded with a unique algorithm, let scientists at Carnegie Mellon University track the locations of multiple individuals in a complex indoor setting - just like Harry Potter's Marauder's Map. If you haven't seen it, the Harry Potter Wiki describes the Marauder's Map as "a magical document that reveals all of Hogwarts School of Witchcraft and Wizardry. Not only does it show every classroom, every hallway, and every corner of the castle, but it also shows every inch of the grounds, as well as all the secret passages that are hidden within its walls and the location of every person in the grounds, portrayed by a dot. It is also capable of accurately identifying each person...even the Hogwarts ghosts are not exempt from this." The Carnegie Mellon system was able to automatically follow the movements of 13 people within a nursing home, even though individuals sometimes slipped out of view of the cameras. The researchers said they made use of multiple cues from the video feed: apparel color, person detection, trajectory and, perhaps most significantly, facial recognition. Specifically, the Carnegie Mellon algorithm significantly improved on two of the leading algorithms in multi-camera, multi-object tracking. It located individuals within one meter of their actual position 88% of the time, compared with 35% and 56% for the other algorithms, researchers said. Multi-camera, multi-object tracking has been an active field of research for a decade, but automated techniques have so far focused only on well-controlled lab environments. The Carnegie Mellon team, by contrast, proved their technique with actual residents and employees in a nursing facility, with camera views compromised by long hallways, doorways, people mingling in the hallways, variations in lighting and too few cameras to provide comprehensive, overlapping views, the researchers stated. The Carnegie Mellon researchers said they developed their tracking technology as part of an effort to monitor the health of nursing home residents, but automated tracking techniques also would be useful in airports, public facilities and other areas where security is a concern. Despite the importance of cameras in identifying perpetrators following this spring's Boston Marathon bombing and the 2005 London bombings, much of the video analysis necessary for tracking people continues to be done manually, said Alexander Hauptmann, principal systems scientist in Carnegie Mellon's Computer Science Department. The researchers said such motion tracking systems face a number of challenges. For example, something as simple as tracking based on the color of clothing proved difficult because the same color apparel can appear different to cameras in different locations, depending on variations in lighting. Likewise, a camera's view of an individual can often be blocked by other people passing in hallways or by furniture, or lost when an individual enters a room or other area not covered by cameras, so individuals must be regularly re-identified by the system. "Face detection helps immensely in re-identifying individuals on different cameras. But faces can be recognized in less than 10% of the video frames. So the researchers developed mathematical models that let them combine information, such as appearance, facial recognition and motion trajectories," the researchers stated. "Using all of the information is key to the tracking process, but facial recognition proved to be the greatest help.
When the researchers removed facial recognition information from the mix, their on-track performance in the nursing home data dropped from 88% to 58%, not much better than one of the existing tracking algorithms."
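The cue-combination idea can be illustrated with a toy score-fusion function. Everything below is invented for illustration: the weights, the cue values, and the weighted-average rule are placeholders, and the actual Carnegie Mellon system relies on more sophisticated mathematical models than a simple weighted sum.

```python
# Toy illustration of fusing several weak cues into one association score.
# Each cue is a similarity in [0, 1] between a new detection and a known track.

# Hypothetical weights; facial recognition gets the most influence, echoing
# the finding that it helped tracking accuracy the most.
CUE_WEIGHTS = {"face": 0.5, "appearance": 0.3, "trajectory": 0.2}

def fused_score(cues):
    """Weighted average over whichever cues are available for this comparison."""
    available = {k: v for k, v in cues.items() if v is not None}
    if not available:
        return 0.0
    total_weight = sum(CUE_WEIGHTS[k] for k in available)
    return sum(CUE_WEIGHTS[k] * v for k, v in available.items()) / total_weight

def best_match(candidate_scores):
    """Pick the existing track that best explains a new detection."""
    fused = {tid: fused_score(cues) for tid, cues in candidate_scores.items()}
    best = max(fused, key=fused.get)
    return best, fused[best]

if __name__ == "__main__":
    # Similarities between one new detection and two known tracks.
    # Face similarity is often None: faces show up in under 10% of frames.
    candidates = {
        "resident_A": {"face": None, "appearance": 0.7, "trajectory": 0.9},
        "resident_B": {"face": 0.95, "appearance": 0.4, "trajectory": 0.3},
    }
    print(best_match(candidates))
```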
<urn:uuid:975d3a13-fb1a-446a-ad0d-0bca5f8a7c92>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224791/mobile-apps/carnegie-mellon-video-net-brings-harry-potter-marauder-s-map-to-life.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948143
692
2.71875
3
NASA today released images of a comet that will make a pass within 84,000 miles of Mars -- less than half the distance between Earth and the moon. NASA said the Hubble Space Telescope captured the image on the left on March 11 of comet C/2013 A1, also called Siding Spring, at a distance of 353 million miles from Earth. Hubble can't see Siding Spring's icy nucleus because of its minuscule size. The nucleus is surrounded by a glowing dust cloud that measures roughly 12,000 miles across, NASA said. The right image shows the comet after image processing techniques were applied to remove the hazy glow of the coma, revealing what appear to be two jets of dust coming off the location of the nucleus in opposite directions. This observation should let astronomers measure the direction of the nucleus's pole and axis of rotation. According to NASA, the comet was discovered in January 2013 by Robert McNaught at Siding Spring Observatory. It is falling toward the sun along a roughly 1-million-year orbit and is now within the radius of Jupiter's orbit. The comet will make its closest approach to our sun on Oct. 25, at a distance of 130 million miles - well outside of Earth's orbit. The comet is not expected to become bright enough to be seen by the naked eye.
<urn:uuid:ee07a29a-9364-4bfb-a1d5-9d28c32557a0>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226628/security/nasa-snaps-shot-of-flashy-mars-bound-comet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00398-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950179
288
3.59375
4
IT security is usually focused on how to prevent outsiders with malicious intent from causing harm to your IT systems and data. While this is a valid concern, people within organizations who simply do not understand the consequences of their everyday habits and behavior on company computers pose an equivalent if not greater risk. Every person within a company who has access to information is a gateway for data exfiltration. This is why education that encourages ALL employees to follow IT security best practices is extremely important to implement within organizations. So where should you start? Take 3 easy steps.

1. Awareness about the ways hackers get into your organization

The average computer user has most likely heard all the keywords – virus, firewalls, malware, phishing, ransomware, insider threats – but what it all means has to be explained at the basic level and the consequences need to be emphasized. Of course, the biggest emphasis should be on how hackers can use these techniques to get access to company data. From experience, it's always best to use real-life examples. Case in point: I recently worked with a university whose administration staff received an email at their university addresses asking them to update their account information and passwords. It was a phishing scam that provided the hackers with multiple administrators' passwords. When I investigated the issue further alongside the IT security team, I realized people didn't understand that the fix isn't as easy as just changing your password again, and that it isn't someone manually digging through their information. The department put forward an initiative to explain how phishing scams work and that the consequence is that someone now has all the data you had access to – including other people's personal data. Most likely because of the hackers' high success rate the first time, this university's administration team was targeted multiple times afterwards. The hackers, however, failed to extract any additional information thanks to the administration team's new awareness: staff reported each phishing e-mail and started a university-wide alert every time they received a suspicious message.

2. Constant reminders to change people's bad habits

When employees first start, it's important to give them a list of the top 10 rules they should follow regarding IT practices. If you know which rules are violated the most, those should make the top of your list. If you don't, a good way to find out is to use monitoring techniques that will help you collect this data. There's a high chance you'll be surprised by the type of rules people violate. Examples of no-no's include attaching company files to personal e-mails, putting data on non-encrypted USBs, uploading files to cloud drives, and so on. Yearly training and refresher sessions should also be implemented as part of company strategy. One of the most effective tactics is to inform users that they are violating policies while they're attempting to take the action. This approach is extremely important for organizations that do not block particular actions, because blocking can interfere with everyday tasks. For example, if someone in the customer chat department were to send a file via instant messenger, your IT team could set up a technology interface or leverage solutions that automatically alert the violating staff member – with a message saying that the action is not recommended.
Based on my own research with practitioners, in 72% of cases this was found to be enough to deter the user from completing the action. Furthermore, my research showed that 57% of the actions that would otherwise have been taken could have led to data exfiltration.

3. Lead by example

Management can scare employees into following company policies, but sometimes they don't scare themselves enough. I've come across hundreds of companies where statistics show that management violates more IT policies than the average employee. The issue here is that if a manager violates a company policy while interacting with an employee, there's a higher chance that the employee will engage in the same activity at some point during their time with the company. It's also important for management to dig into why they're violating the policies. If it's because they're lazy, then their behavior simply has to change. If it's because the rule makes it hard for them to do their job, the rule has to be re-evaluated: is it making it hard for those under management to do their jobs as well? Employees will find ways to circumvent policies if they're inconvenient. Showing employees that their concerns are part of the IT security strategy is important because it diminishes the feeling that the policies are implemented to restrict them. In my experience, companies that reduced violations by their management by 10% were able to reduce their overall company violations by 27% in just three months.

Ultimately your organization is only as strong as your weakest link – and your weakest link may be someone who simply didn't know not to click, send or download.
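As a sketch of the warn-at-the-moment-of-action idea from step 2, the following hypothetical hook checks an attempted action against a small rule table, shows a warning, and logs the attempt for later review. The rule names, message text, and logging setup are assumptions; a real deployment would hook into existing DLP or endpoint tooling rather than a standalone script.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical top-of-the-list policy rules, keyed by the action a user attempts.
POLICY_WARNINGS = {
    "attach_file_personal_email": "Attaching company files to personal e-mail is against policy.",
    "copy_to_unencrypted_usb": "Copying data to a non-encrypted USB drive is against policy.",
    "upload_to_cloud_drive": "Uploading company files to personal cloud drives is against policy.",
}

def warn_on_violation(user, action):
    """Warn the user before a risky action completes and log the attempt for review."""
    message = POLICY_WARNINGS.get(action)
    if message is None:
        return True  # no rule matched; let the action proceed
    logging.info("policy warning shown to %s for %s", user, action)
    print(f"[{user}] {message} Do you want to continue anyway?")
    # In a real system the user's choice would be captured; here we only deter.
    return False

if __name__ == "__main__":
    warn_on_violation("jdoe", "copy_to_unencrypted_usb")  # warned and logged
    warn_on_violation("jdoe", "open_spreadsheet")         # unlisted action passes through
```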
<urn:uuid:e7df229a-a2fc-484c-aa07-56cbee4409e7>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/12/20/mitigating-internal-risk/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00398-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961914
1,034
2.515625
3
“Built around a fictional average company network, we will tell the story of an attack making use of subtle bugs across the layers all of which are as of yet undisclosed. This will include a bug in an Ethernet-driver, which allows an attacker to bypass MAC- and IP-based filters, bugs in TCP-implementations that are assumed to be fixed but aren’t, a web-cache which confuses itself and an instant-messenger, which was fooled by the protocol specification. All of these bugs share a common property: They are a consequence of insecure coding-practices.”

Alone, these bugs don’t achieve much, but by chaining them together into a jigsaw puzzle of attacks it’s possible to construct an effective attack.

Stage 1: Attacking the clients

The client offers a huge variety of attack possibilities, but you need to tailor the attack to the specific system: which application do you want to exploit, how, on what host system, with what shellcode... The first phase is information disclosure. The application of choice for this demo is Pidgin and its handling of emoticons. Using the MSN-SLP protocol, a client receiving an emoticon is able to request the graphic from the sending client by specifying the file to download. By replacing the name of the emoticon to fetch with a more interesting file (/etc/passwd springs to mind), an attacker can remotely retrieve that file. Adium suffers from the same issue as Pidgin, because the flaw lies in the underlying protocol rather than in the application itself. It is less an implementation issue and more a protocol issue, caused by the protocol’s needless complexity.

Stage 2: Bypassing the internal packet filter

When trying to attack layer n, it’s always worth looking at the lower layers to see if you can control or exploit them. Looking at the link layer, for example, you see the typical MTU value of 1500 bytes. Now that gigabit Ethernet is coming to the enterprise, jumbo frames are supported alongside the older 1500-byte MTU. So what happens if somebody sends 2000 bytes to a system that only supports 1500? Typically, the controller will take the data and make it span more than one receive buffer; in some instances, however, this is not handled correctly. One example is the e1000 Linux driver bug from earlier this year: it was fixed, but not really fixed. When such an oversized frame reaches a system that would normally inspect it (i.e. a firewall), only the first part is checked and the second part isn’t. This means you can bypass the firewall rules and send packets (and attacks) to systems behind it. A 0-day!

Stage 3: Poisoning the cache

The Squid web cache is also a DNS cache, which means it is vulnerable to the typical issues that standard DNS suffers from. Squid implements its DNS features independently of other DNS servers. Even though it randomizes the source port, that port is then statically assigned for the lifetime of the cache, and many layers of security are handed off to other systems (i.e. layer 2 protections). By using NAT’s built-in source-port protections, it’s possible to fingerprint the port used by Squid. While waiting for a DNS response, Squid examines each packet it receives; when it isn’t expecting a response, it caches incoming responses until it makes a request. An attacker can therefore place responses into the cache before ever asking Squid to resolve anything: the first response wins, and the answer has been supplied before the real DNS server was even asked the question. Squid is automatically set up to wait two minutes for a response, so by performing a DoS on the firewall the attacker has enough time to poison the cache.
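One way to see why Squid's static source port matters: with the port fixed, an off-path attacker only has to guess the 16-bit DNS transaction ID rather than the ID and the port together. The back-of-the-envelope comparison below assumes a single outstanding query and no additional defenses such as 0x20 case randomization.

```python
# Rough comparison of the spoofing search space with and without source-port
# randomization. Idealized numbers: one outstanding query, no extra entropy.

TXID_SPACE = 2 ** 16          # 16-bit DNS transaction ID
PORT_SPACE = 2 ** 16 - 1024   # usable ephemeral source ports (approximate)

fixed_port_guesses = TXID_SPACE                 # port known/static: guess TXID only
random_port_guesses = TXID_SPACE * PORT_SPACE   # must guess TXID and port together

print(f"static source port : ~{fixed_port_guesses:,} possible responses to forge")
print(f"randomized port    : ~{random_port_guesses:,} possible responses to forge")
print(f"difference factor  : ~{random_port_guesses // fixed_port_guesses:,}x harder")
```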
Stage 4: Denial of Service on the firewall

In 2009, a fix was applied to the RTL Linux driver to re-enable the previously disabled hardware filtering on the NIC. By scanning possible MTU lengths, you can find the exact value that triggers the NIC to throw an error claiming that multiple fragmented packets of 8000 bytes each have been received (which isn't actually possible). This also causes garbage to be sent up the stack instead of the packet contents. Luckily for the attacker, that garbage can be specified and the location of the crash controlled, which turns this into more than just a denial of service: it gives control over the remote machine.

- The security of a network component relies on the environment
- Security issues do not live in isolation – you never know the impact of a vulnerability until you see how it can be put to work

More information can be found on the CCC wiki
<urn:uuid:20f66d7a-17ad-423c-8509-58034c7a47fb>
CC-MAIN-2017-04
https://blog.c22.cc/tag/dns/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923029
972
2.59375
3
In modern-day America, the backbone of the country is the interconnected electric power grid and communications network. Yet when there is an outage in one, it often leads to extensive failures throughout the other. To keep the nation operating, it is vital for engineers to establish preventive measures that avoid cascading failures within these interconnected systems. Arizona State University recognized this challenge and determined that a deeper understanding of the interdependency of these multilayered networks was necessary to allow engineers, service providers and urban planners to avoid unnecessary failures and downtime. As they began their study, ASU researchers found that most earlier models of the relationship between these two systems were oversimplified and failed to capture the truly complex relationship between the power grid and the communications network. To overcome this, ASU researchers set out to create a new model that identified the most vulnerable nodes on an interdependent network and captured the nature of this relationship. Within a network, nodes can depend on one or several other nodes from within their own and other interdependent networks, so the failure of one node can trigger the failure of nodes throughout the entire system. The goal of the ASU researchers was to determine which nodes trigger failure in the largest number of connected networks. By highlighting the most vulnerable nodes within a system, engineers can construct methods to avoid cascading failures throughout the entire network. Researchers at ASU's Computer Science and Engineering Program used data from GeoTel Communications and Platts Electric Supply to create a realistic model based on Maricopa County that showed which nodes relied most heavily on this interconnected system. GeoTel Communications provided data on Maricopa County's communications network. With approximately 60 percent of Arizona's population living within Maricopa County, researchers wanted to determine which areas were more vulnerable to failure. The data used in the study included the power network, which consisted of 70 power plants and 470 transmission lines, and the communications network, which consisted of 2,690 cell towers, 7,100 fiber-lit buildings and 42,723 fiber links. The research showed that these two systems are fundamentally interconnected: the power network depends on the communications network, and vice versa, to function properly. Researchers determined that generators rely heavily on the communications network and require cell towers or fiber-lit buildings, plus fiber links, to remain functional. Cell towers depend on generators and transmission lines to operate. Fiber-lit buildings depend on generators and transmission lines to connect to cell towers. Only fiber links and transmission lines were independent structures. From the study, researchers pinpointed the most vulnerable properties of these interconnected networks. If you would like a copy of the study, please make your request at http://www.geo-tel.com/contact/. If your community is looking for ways to back up and strengthen its power grid and communications systems, GeoTel Communications can provide fiber network maps and other telecom infrastructure data. This helps engineers design systems that avoid cascading failures within interconnected systems. If you are interested in obtaining telecom maps, contact GeoTel Communications at 800-277-2172.
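The cascade idea in the study can be prototyped on toy data. The sketch below is not the ASU model: the nodes, the dependency edges, and the failure rule (a node fails if any of its dependencies fails) are simplifying assumptions chosen only to show how a single failure can propagate across both layers.

```python
# Toy walk-through of a cascade on a made-up interdependent network.
# Edges point from a node to things it depends on. The "fail if ANY
# dependency fails" rule is a simplification; the real model is richer.

DEPENDS_ON = {
    "cell_tower_1": {"generator_A", "transmission_1"},
    "fiber_bldg_1": {"generator_A", "transmission_1"},
    "generator_A": {"cell_tower_1"},   # generators also need the comms network
    "transmission_1": set(),           # transmission lines stand alone
    "fiber_link_1": set(),             # fiber links stand alone
}

def cascade(initial_failures):
    """Return every node that ends up failed once the cascade settles."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in DEPENDS_ON.items():
            if node not in failed and deps & failed:
                failed.add(node)
                changed = True
    return failed

if __name__ == "__main__":
    # Knock out one generator and watch the failure spread across both layers.
    print(sorted(cascade({"generator_A"})))
```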
<urn:uuid:5a8a83ff-752a-4a22-81f9-bc7557a333dd>
CC-MAIN-2017-04
http://www.geo-tel.com/2014/arizona-state-utilizes-geotel-data-telecom-power-grid-study/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00527-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929887
625
2.875
3
IPv6 offers many benefits over IPv4. IPv6 offers a near-limitless number of addresses. With IPv6, technically we will no longer need to use NAT (network address translation). The reason is that NAT was originally used to share a single IP address among multiple devices because of the limited number of IPv4 addresses. Using NAT actually increases network latency because the addresses have to be translated. With IPv6, each device has a unique address, making point-to-point communications faster. Video conferencing and VoIP will greatly benefit from these direct connections.
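As a rough illustration of what a direct connection looks like in practice, the Python sketch below opens an end-to-end TCP connection over IPv6. The host name and port are placeholders, and it assumes the client has IPv6 connectivity and that the host publishes an AAAA record.

    # Hedged sketch: a direct TCP connection over IPv6, no NAT traversal step.
    import socket

    host, port = "example.com", 80  # hypothetical endpoint

    # getaddrinfo resolves AAAA records when we ask for AF_INET6
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM):
        with socket.socket(family, socktype, proto) as s:
            s.connect(sockaddr)
            print("connected to", sockaddr[0])
            break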
<urn:uuid:d12f1a48-d2df-401e-bc1a-407c1783ccb5>
CC-MAIN-2017-04
http://www.bvainc.com/network-faster-with-ipv6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948552
114
2.9375
3
In 2020, both Harlingen and San Benito will have all the water they need, according to a state water agency website. However, Primera will fall short by 45 percent, says the new interactive State Water Plan website launched by the Texas Water Development Board. “This website is an example of the changes we are making to provide transparency to Texans about the important work TWDB does,” Carlos Rubinstein, TWDB chairman, said. The data offers communities important information as they plan projects for which they will apply for funding from the State Water Implementation Fund for Texas (SWIFT), board member Bech Bruun said. “It’s another way to get relevant information to those who want to get involved in the water planning process — something we strongly encourage all citizens to do,” Bruun said. To find the website, go to www.twdb.texas.gov, and then click on “Interactive 2012 State Water Plan Website.” A map of Texas will appear. The map has the state divided into sections A – P. Beneath the map is a list of years a decade apart beginning with 2010 and ending with 2060. The Valley is located in a dark blue area labeled M that stretches all the way to Webb County. Upon clicking on that section, it will open up and multiple dots will appear beginning with a deep green and ending with red. As the user moves the cursor from one dot to the next, the name of the city it represents will appear. It will also show how much of its demand for water will be met in a given year. The legend in the lower left-hand corner shows what the dots represent. A light green dot means that the area represented will lack between 0.5 percent and 10 percent of its total demand, said Dan Hardin, senior planning advisor for TWDB. “What we’re trying to represent is a measure of how serious an entity’s water problems might be in the future,” Hardin said. “Another way to think of need in planning terminology is, that’s the shortage. It’s essentially how much the available existing supply is going to fall short of meeting demands.” He said orange and red dots show that the entity is looking at a shortage of at least 25 percent of its water demands. The website also includes a section called “Regional Water Needs.” It shows how much water each region will need for six uses, including municipal, manufacturing and irrigation. Municipal, Hardin said, refers to water cities provide to homes, schools, stores and offices. In 2020 the M region will need 64,277 acre feet of water for municipal use. It will need 2,355 acre feet for manufacturing and 333,246 acre feet of water for irrigation. While the outlook is pretty good for 2020, more red and orange dots appear in 2040. However, Harlingen and San Benito are still projected to meet all of their water demands. Twenty years later in 2060, even more red dots pop up and Harlingen shows a shortage of 15 percent and San Benito lacks 11 percent of its total water demand. ©2014 Valley Morning Star (Harlingen, Texas)
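For readers who want to reproduce the "need" figure themselves, the small Python sketch below applies the definition Hardin gives: the shortage is the share of demand that existing supply will not cover. The demand and supply numbers in it are hypothetical, not TWDB data.

    # Illustrative only: deriving a shortage percentage like the ones on the map.
    def shortage_percent(demand_af, supply_af):
        """Share of demand (in acre-feet) not covered by existing supply."""
        return max(0.0, (demand_af - supply_af) / demand_af * 100)

    # hypothetical city: 20,000 acre-feet demanded, 17,000 available
    print(round(shortage_percent(demand_af=20_000, supply_af=17_000), 1), "%")  # 15.0 %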
<urn:uuid:9ea188c1-bbd4-4d80-a639-39391fdd918a>
CC-MAIN-2017-04
http://www.govtech.com/internet/Website-Predicts-Water-Needs-in-Texas-Communities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938858
683
2.640625
3
Given a number of variables, a human mission to Mars is both feasible and affordable. That was the uniting idea behind a statement by a group of 60 government, industry and academic organizations who together reached what they called a "consensus on what is necessary to make human Mars exploration feasible, sustainable, and affordable within two decades." The group, brought together by the non-profit Explore Mars organization, includes Boeing, Lockheed Martin, the United Launch Alliance, the American Astronautical Society and others. "This is no small achievement," said Jim Kirkpatrick, Executive Director of AAS, in a statement. "This is the first time such a diverse group has come together to agree that sending humans to Mars is both a priority and possible." The group said it evaluated a number of scenarios for what it called compelling human and robotic exploration of Mars, as well as the role the International Space Station (ISS) could play in moving a Mars mission along. The group agreed on six core principles: 1) Sending humans to Mars is affordable with the right partnerships, commitment to efficiency, constancy of purpose, and policy/budget consistency. 2) Human exploration of Mars is technologically feasible by the 2030s. 3) Mars should be the priority for human space flight over the next two to three decades. 4) Between now and 2030, investments and activities in the human exploration of space must be prioritized in a manner that advances the objective of initial human missions to Mars beginning in the 2030s. 5) Utilizing the International Space Station (ISS) is essential for human missions to deep space. 6) Continuation of robotic precursor missions to Mars throughout the 2020s is essential for the success of human missions to Mars. As with other Mars mission ideas, the group recognizes that a number of companies and organizations will have to come together in a relatively short period of time to make any Mars mission a reality. "Careful coordination among stakeholders, NASA, industry/commercial, and potential international partners will be required. The human space flight stakeholders must initiate new and sustainable programs that will clearly advance the goal of landing crews on Mars by the mid 2030s. A logical, affordable architecture with a campaign of mission "stepping stones" and elements must be developed. "From the start, such an architecture must incorporate management efficiencies and flexibility based on lessons learned from ISS, commercial programs and other past NASA programs, as well as from DOD and industry. Above all, political and budgetary stability is essential over a two-decade time span. Accomplishing the goal will require a policy and appropriate budget commitment over multiple US Congressional and Presidential elections as has been done for other major undertakings in history. Sending humans to Mars is far less an issue of cost than it is of commitment," the group said in a report on the mission. Such grand plans have been espoused a lot recently. The Inspiration Mars group, led by billionaire entrepreneur Dennis Tito, detailed how his philanthropic group can get around the money details and get straight to the Red Planet by 2018. "No longer is a Mars flyby mission just one more theoretical big idea. It can be done - not in a matter of decades, but in a few years. Moreover, the mission might just show the way for a new model for joint effort and financing. 
It would attract significant private funding, while enabling NASA to do what it does best, and confirm the United States as the unquestioned leader in space," Tito, a former NASA Jet Propulsion Laboratory scientist, told the House committee. "If I may offer a frank word of caution to this subcommittee: The United States will carry out a Mars flyby mission, or we will watch as others do it - leaving us to applaud their skill and their daring. If America is ever going to do a flyby of Mars - a manned mission to another world - then 2018 is our last chance to be first." Basically what Tito and Inspiration Mars are proposing is an unprecedented partnership with NASA and commercial space operators such as Orbital Sciences or SpaceX to launch a two-person spacecraft into space and make a 501-day trip to Mars where it would traverse 800 million miles, fly within 100 miles of the Red Planet and return home. Another challenge is the launch window - between late 2017 and early 2018 - which would let the mission take advantage of a planetary alignment that occurs once every 15 years. As part of the testimony, Inspiration Mars released a feasibility study of the Mars trip which included specific details of the proposed mission. For example, the operation would begin with two launches - one using NASA's still-in-development heavy lift rocket that would put into Low Earth Orbit the big mission components including "an [heavy lift Space Launch System] upper-stage rocket that will propel the spacecraft from Earth's orbit to Mars; a service module containing electrical power, propulsion, and communication systems; a [Orbital] Cygnus-derived habitat module where the astronauts will live for 501 days; and, for the last hours of the mission, an Earth Reentry Pod. This pod is derived from the work to date on [NASA's] Orion, but will greatly increase the entry speed for this new vehicle to be known as Orion Pathfinder." "In the second launch, a commercial transportation vehicle (to be selected from among competing designs) and crew will carry the astronauts into orbit for rendezvous... The two craft will meet using docking procedures and systems that have been perfected in 136 spaceflights, by 209 astronauts, to the International Space Station. After the crew transfer and detachment of the commercial vehicle, the SLS upper-stage will ignite a Trans-Mars Injection burn to escape Earth's orbit and begin the journey." In the end, Inspiration Mars concludes: "There are definitely challenges in developing the flight hardware and accomplishing the Inspiration Mars mission within the time constraint. However, there is an overwhelming belief that this mission is not only technically feasible, but programmatically achievable in the short time frame remaining. We believe it is well worth the commitment, resources and hard work to take advantage of this truly unique opportunity." And in December, yet another group, Mars One, said Lockheed Martin was on board to build the spacecraft that would land a technology demonstration robot on the Red Planet by 2018. The Mars One group ultimately wants to establish a human outpost on Mars. The lander robot would use technology Lockheed previously built for NASA's Phoenix lander which touched down on Mars in 2008. The Mars One lander will evaluate the use of the Phoenix design for the Mars One mission and identify any modifications that are necessary to meet future requirements. In addition, the mission would go a long way toward determining the cost and schedule of future missions.
<urn:uuid:395403ca-354b-417e-a9e7-319c86c91485>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226141/security/a-lot-of-groups-want-to-go-mars--will-any-succeed-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934894
1,402
2.90625
3
One of the real success stories of the digital age is the marriage of photography with personal computers. Digital technology is used for taking photos, correcting mistakes, adding special effects, and printing, sharing and displaying. All of this makes it easier and more enjoyable to take pictures for business and pleasure. But what hasn't changed in the transition from analog to digital is the photographic skill needed to produce a compelling image. There are rules of the road to follow when transforming raw images into eye-popping photographs. Lighting is one common stumbling block, and many casual shooters - even business photographers - are in the dark about it. When shooting outside, the best light is in the early morning or late afternoon. If you have to shoot at midday, put yourself and your subject in the shade, if possible, to avoid harsh highlights, dark shadows and squinting eyes. If you must be in the sun, try to shoot with it beside you rather than at your back. If you can't avoid shooting with the sun behind your subject, turn on your camera's flash and use it to avoid a background that is overly dark or bright. Photos snapped indoors can present tricky lighting challenges as well. Subjects illuminated with conventional incandescent bulbs may have a slightly orange cast because cameras are preconfigured for the sun's "color temperature." You can correct for this in any of three ways: Change the camera's "white balance" setting, use special "daylight-balanced" light bulbs, or place your subject by a window to take advantage of natural illumination. Using a flash can also prevent this, but flash photography has problems all its own. The inexpensive built-in flash in ordinary digital cameras can make your subject unnaturally bright and the background artificially dark. Instead, if possible, turn off the flash and use additional lighting by moving a lamp or two close to your subject. If you must use a flash, experiment with diffusing its light by bouncing it off a light-colored ceiling or nearby wall. One way to do this is to hold a small mirror in front of the flash at a 45-degree angle. Flashes can also cause the devilish "red eye" problem in living subjects. To try to prevent this, you can use your camera's red-eye setting, if it has one. Another option is to tape a small piece of tracing paper over the flash to diffuse its light. Composition - how you position your subjects and yourself, and what you choose to include in the photo - is another crucial aspect of good photography that's often overlooked. A frequent mistake is to shoot too far away from the subject. It's generally best to fill the camera's LCD screen or viewfinder with your subject and minimize the foreground and background. You'll get sharper results by moving in closer, if possible, rather than using your camera's zoom mode or a telephoto lens. You can crop a photo later using an image-editing program, but you risk losing sharpness here as well. A high-megapixel camera can preserve the clarity. Pay attention to the background. Avoid positioning your subjects directly in front of vertical objects, such as telephone poles - it will look like something is growing from atop their heads. Also avoid backgrounds that are overly cluttered, which distract attention away from your subjects. You can correct many mistakes and add in amazing special effects using image-editing programs, such as Adobe Photoshop Elements or Paint Shop Pro. But avoid the temptation of doing too much. 
An over-edited photo can look as amateurish as an over-designed Word document or Web site. What size you make the final photos depends on whether you intend to print them out on your inkjet printer, send them via e-mail, post them to your Web site, or make them available to whomever you choose through a photo-sharing site such as Shutterfly, www.shutterfly.com, or Snapfish, www.snapfish.com. Photos meant for viewing on a computer screen should be smaller than those that will be printed out: One rule of thumb for Web photos is that the width should be no more than 800 pixels. The durability of the ink used by inkjet printers is improving all the time. But to minimize the chance of an image fading, mount prints behind plastic or glass, or for optimal protection use special ultraviolet glass available from picture frame shops.
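For readers comfortable with a little scripting, the 800-pixel rule of thumb mentioned above can be automated. The sketch below assumes the third-party Pillow imaging library and a hypothetical file name; it is only one of many ways to prepare a photo for the Web.

    # Shrink a photo so its longest side is no more than 800 pixels (Web rule of thumb).
    from PIL import Image

    img = Image.open("vacation.jpg")          # hypothetical source photo
    img.thumbnail((800, 800))                 # shrinks in place, keeps aspect ratio
    img.save("vacation_web.jpg", quality=85)  # smaller file for e-mail or the Web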
<urn:uuid:f0857209-a2ed-4fa8-9abf-79d11441773d>
CC-MAIN-2017-04
http://www.govtech.com/wireless/Photography-Skills-Still-Matter-in-the.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00453-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935693
902
2.515625
3
A concept in use in the Netherlands has arrived in New Orleans' Lower Ninth Ward, site of severe flooding during 2005's Hurricane Katrina. A 1,000-square-foot house that floats has been constructed for the Make It Right Foundation. The 46,000-pound "chassis" of the house is constructed of polystyrene foam covered with a shell of reinforced concrete. In the event of catastrophic flooding, the house will rise on the water as high as 12 feet. Pilings tether the house while allowing it to rise vertically. The house was designed by Morphosis Architects and Morphosis founder Thom Mayne; construction was assisted by UCLA architecture graduate students. "The immense possibilities of the Make It Right initiative became immediately apparent to us," said Mayne in a release, "how to re-occupy the Lower 9th Ward given its precarious ecological condition? The reality of rising water levels presents a serious threat for coastal cities around the world. These environmental implications require radical solutions. In response, we developed a highly performative, 1,000-square-foot house that is technically innovative in terms of its safety factor -- its ability to float -- as well as its sustainability, mass production and method of assembly."
<urn:uuid:ec268ef7-dd91-4954-b579-3fb23dd0d2c0>
CC-MAIN-2017-04
http://www.govtech.com/public-safety/New-Orleans-Unveils-Floating-House.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00418-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967438
246
2.625
3
As the Internet becomes an increasingly important means of conducting transactions and the volume of e-business grows exponentially, a secure infrastructure is needed to provide authentication, confidentiality and access control. Security has evolved from a basic password scheme to a complex key infrastructure. Initially, shared secret keys were exchanged and maintained between pairs of correspondents. However, as the Internet expanded, this method became impractical. Today, a powerful public-private key technology (PKI or Public Key Infrastructure) has evolved to solve this problem. Download the paper in PDF format here.
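As a minimal sketch of the public/private-key idea the paper builds on, the Python example below generates an RSA key pair, then signs and verifies a message. It assumes the third-party "cryptography" package and deliberately omits the certificate and certificate-authority machinery that a full PKI layers on top of raw keys.

    # Sketch of public/private keys: sign with the private key, verify with the public key.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"order #42: ship 100 units"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())

    # Anyone holding the public key can check authenticity; verify() raises if tampered.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature verified")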
<urn:uuid:b55438ae-3c43-4a97-aad1-2dff6dde742d>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2002/08/27/public-key-infrastructure-pki-a-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00050-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951221
112
2.625
3
Korean engineers have broken a record by transmitting enough power wirelessly over a distance of about 16 feet to charge up to 40 smartphones at the same time. The researchers, from the Korea Advanced Institute of Science and Technology (KAIST), created a "Dipole Coil Resonant System" (DCRS) made specifically for an extended range of inductive power transfer between transmitter and receiver coils. The development of long-distance wireless power transfer has attracted a lot of attention from researchers in recent years. The Massachusetts Institute of Technology (MIT) first introduced a Coupled Magnetic Resonance System (CMRS) in 2007. It used a magnetic field to transfer energy for a distance of 2.1 meters (about seven feet). According to the Korean researchers, CMRS has unsolved technical limitations that make commercialization difficult. For one, CMRS has a rather complicated coil structure (it's composed of four coils for input, transmission, reception, and load); bulky resonant coils; and a high frequency (in the range of 10 MHz). The KAIST team uses a lower, 20 kHz frequency. While that may seem like a unique move, the KAIST engineers are using the same technology as WiTricity, a company in Watertown, Mass. WiTricity has been developing magnetic resonance charging over distance for sale to manufacturers since 2009. What the KAIST researchers did was build a bigger system. WiTricity's wireless charging technology is designed for "mid-range" distances, which it considers to be anywhere from a centimeter to several meters, according to Kaynam Hedayat, WiTricity's product manager. Magnetic resonance wireless charging works by creating a magnetic field between two copper coils. The larger the copper coils and the greater the power being pushed through them, the bigger the magnetic field. What KAIST researchers did was build a 10-foot-long, pole-like transmitter and receiver that was able to create a magnetic field large enough to transmit 209 watts of power over a distance of five meters (or about 16 feet). Over that distance, the wireless transmitter still emitted enough power to charge up to 40 smartphones, if plugged into an outlet powered by the wireless transmitter. But, as the distance increased, the power dropped off significantly. The Korean engineering team conducted several experiments and achieved "promising results." For example, at 20 kHz, the maximum output power was 1,403 watts at a three-meter distance; 471 watts at four meters; and 209 watts at five meters. "For 100 [watts] of electric power transfer, the overall system power efficiency was 36.9% at three meters, 18.7% at four meters, and 9.2% at five meters," Chun Rim, a professor of Nuclear & Quantum Engineering at KAIST, said in a statement. "A large LED TV as well as three 40 [watt]-fans can be powered from a five-meter distance." The Korean researchers believe that wireless charging will eventually be as common as Wi-Fi in homes and public places.
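A quick back-of-the-envelope check, using only the efficiency figures quoted above, shows how much power the transmitter must draw to deliver 100 watts at each distance. The arithmetic and the Python wrapper below are ours, not KAIST's.

    # Input power needed to deliver 100 W, given the quoted system efficiencies.
    efficiencies = {3: 0.369, 4: 0.187, 5: 0.092}   # distance (m) -> efficiency
    delivered_w = 100                                # watts reaching the load

    for meters, eff in efficiencies.items():
        input_w = delivered_w / eff
        print(f"{meters} m: ~{input_w:.0f} W drawn to deliver {delivered_w} W "
              f"({input_w - delivered_w:.0f} W lost)")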
<urn:uuid:6881c456-9e1a-4ee0-a4a8-346b6faca88d>
CC-MAIN-2017-04
http://www.computerworld.com/article/2487553/emerging-technology/it-s-now-possible-to-wirelessly-charge-40-smartphones-from-16-feet-away.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00536-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955257
640
3.171875
3
In California’s Sierra Nevada, gold-rush era shacks still stand, with shingles made from flattened bean cans and gaps between boards filled with newspapers or mud — not what one would call energy-efficient. Though that was 150 years ago, today’s houses are much warmer, tighter and more efficient than they were just a few decades ago. But just how far can energy efficiency go? Today, a house that produces as much energy as it consumes — a net zero home — is an attainable goal. Some approaches to net zero rely on minuscule floor space and wearing heavy body insulation, but the National Institute of Standards and Technology (NIST) has a better idea. Last week, NIST cut the ribbon on a two-story, four-bedroom, three-bath house with 2,700 square feet of living space, plus a 1,500-square-foot basement and a garage equipped to charge electric vehicles — it is a net zero energy test facility with all the modern conveniences that an upscale family might expect. Nobody lives in this net zero home, but a family of four is simulated by heating, cooling, running showers, cycling washers and dryers, lights going on and off, and running the dishwasher, etc. Even the normal heat and moisture emitted by human bodies is simulated. And all the equipment, appliances, etc., used in the test facility are commercially available. “You can take a normal home and add solar to it, and it will reduce your energy bill, but it won’t get you to net zero,” said A. Hunter Fanney, chief of NIST’s Building Environment Division. “The most economical steps to doing so are you build it so the shell of the structure – the thermal envelope – is insulated extremely well and is airtight. The second thing you do is pick energy-efficient appliances and heating and cooling equipment. And then the third thing is adding solar.” Fanney said the test facility integrates those things. In addition to its curb appeal and creature comforts, the test facility — which Fanney said would cost approximately $800,000 to build as a net zero residence — is an engineer’s delight. The roof is covered with solar panels, and three different styles of geothermal systems are buried in the ground. The first geothermal system, Fanney said, “is a long horizontal loop in the backyard [that’s] six feet down in trenches six feet apart, laid out in a serpentine manner — about 1,200 linear feet of tubing.” At the six-foot depth, Fanney said, the temperature is about 55 degrees Fahrenheit. Three different 400-foot vertical geothermal wells are to the side of the house, and in front is a buried “slinky” coil system of tubing. Geothermal is part of a long-term testing phase that can go on for decades, said Fanney, while the first year will be devoted to achieving net zero. While the facility is smart-grid compatible, Fanney said that currently, there are no commercially available appliances or heating and cooling systems that are smart-grid ready. “Once the standards are put in place, and NIST has a major role in doing that, the utility can talk to the appliances and heating and cooling equipment,” he said. “Then it’s going to be a substantial change. ... Right now as a residential customer, you pay the same amount for your energy, it doesn’t matter if it’s three o’clock in the afternoon, or three o’clock in the morning.” Utilities have a lot of excess energy at 3 a.m., but may experience shortages at 3 p.m. on a hot day, Fanney said, adding that the change he sees coming is what he calls “time-of-use pricing.” “The cost of a unit of energy at 3 a.m. 
will be substantially less than at three in the afternoon,” he said, so dishwashers, clothes dryers and water heating can be scheduled for early morning hours to cost less and even out the energy load. Even without the full smart-grid hookups, smart electric meters currently have four advantages. “Right now, utilities are using smart meters for automatic billing; they don’t have to send a meter reader out,” said Fanney. “And it allows them to connect and disconnect service, and they can do all that remotely.” And when there’s a power outage, he said, they know immediately where the outage is, in which houses. Additionally, some utilities have a program called Green Button; once these smart meters are in place, the Green Button program allows consumers to use a smartphone or computer to see the energy used by their home on a daily basis. “You don’t know where it’s used in the home, but at least you can look at your patterns,” Fanney said. “Some studies have shown that if people know how much energy they are using, they will actually reduce their energy use about 6 percent.” The facility has only been operating for a week, but results will be available soon, as actual operation is compared to computer simulation benchmarks. “The whole mission of NIST is to develop measurement science and metrics to pull new technologies into the market,” said Fanney, adding that energy is an invisible attribute, and the only way the consumer can judge is kind of like the appliance labels — the yellow energy labels that give consumers the anticipated cost of operating those systems over a year. “We hope to develop methods and metrics to better allow the consumer to make wiser energy selections as far as equipment and housing construction in the future.”
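To see why time-of-use pricing changes behavior, the toy Python calculation below compares running one appliance cycle at a hypothetical peak rate versus a hypothetical off-peak rate. The rates and the load are invented for illustration, not actual utility tariffs.

    # Hypothetical illustration of time-of-use pricing for one appliance cycle.
    peak_rate = 0.30       # $/kWh at 3 p.m. (made-up rate)
    offpeak_rate = 0.08    # $/kWh at 3 a.m. (made-up rate)
    load_kwh = 1.5         # one dishwasher cycle, say

    run_at_peak = load_kwh * peak_rate
    run_offpeak = load_kwh * offpeak_rate
    print(f"peak: ${run_at_peak:.2f}  off-peak: ${run_offpeak:.2f}  "
          f"saved by scheduling for 3 a.m.: ${run_at_peak - run_offpeak:.2f}")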
<urn:uuid:121bbddd-591c-40d9-9f2b-685af66c56b4>
CC-MAIN-2017-04
http://www.govtech.com/technology/NIST-Unveils-Zero-Net-Energy-House.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953842
1,218
3.40625
3
For years, technologists have predicted the Internet would provide the masses with free speech that would challenge totalitarian regimes. But so far the Web has been a weak tool in the fight against censorship, technology experts said on Monday. One of the most commonly cited examples of how citizens have used the Internet to challenge a government occurred after the Iranian presidential elections in June 2009. Despite news accounts of protests being organized through the text-messaging service Twitter, Evgeny Morozov, a fellow at Georgetown University's Institute for the Study of Diplomacy, said he has grown skeptical of the Web's power to foster democracy. Morozov, a native of Belarus who studies the Internet's effect on authoritarian states and a blogger for Foreign Policy magazine, noted the evolution of social media actually has aided oppressive regimes. The Web has given dictators the ability to mine contents of social networking sites to identify dissidents and to pay bloggers for spreading propaganda. "I wasn't really sure that the good guys were winning," he said. Morozov spoke at an invitation-only meeting on Monday hosted by Foreign Policy at the magazine's Washington office. More than a dozen government officials, academics, industry executives and journalists attended the get-together to discuss an article by Morozov in the May/June issue of Foreign Policy that refutes the theory the Internet can encourage democracy in repressive regimes. Attendees at the meeting included officials from the Defense and State departments; Google; Microsoft; the Center for Democracy and Technology, a civil liberties group in Washington; the human rights organization Freedom House; and several journalists. Retribution against bloggers and other open-minded Internet users in foreign countries has become a growing area of concern for the United States. The State Department this year named one of its strategic priorities, Internet Freedom, the human right to access networked technologies without restraints. But activists are beginning to understand the shortcomings of networking tools for bringing about changes in regimes. Robert Guerra, Internet freedom project director at Freedom House, said democratic activism spurred by the ease of blogging and other Web 2.0 communications has limits. "Governments have reacted with repression 2.0," he said. Authoritarian regimes have learned how to block certain computers from accessing the Web, Guerra noted. One of the more effective ways to overcome these obstacles is to educate citizens on how to safely use Web circumvention tools such as proxies that hide citizens' locations, said Guerra and Daniel Baer, State's deputy assistant secretary for the bureau of democracy, human rights and labor. The Internet is not the first free speech facilitator to fall short of expectations. "I've had the sense that I've been here before," said Price Floyd, principal deputy assistant secretary of public affairs at Defense. Since the Cold War, Radio Free Europe/Radio Liberty, the U.S.-funded broadcaster, has used shortwave frequencies to report news to countries that ban free press. But throughout history, oppressive regimes have countered with espionage and misinformation campaigns to discredit the broadcasts, according to Radio Free Europe. "The Soviet KGB and Warsaw Pact intelligence services penetrated the stations, jailed sources and even resorted to violence in attempts to intimidate RFE and RL staff," the broadcaster's website noted. 
Part of Price's job is to promote a policy issued in February to open Defense networks to social media sites such as Facebook and YouTube, which previously had been off-limits. "In the Defense Department, there was a move to block access to all these things," he said. "But it's now not blocked. Is there anything different?" Government officials and tech companies can only do so much. Although the Obama administration is helping developing countries such as Cuba and Haiti build telecommunications infrastructures, neither U.S. policies nor IT firms can make anti-censorship a precondition of doing business with the United States, most experts acknowledged. "If you stamp it in any way as American, you're asking for it to be rejected," said Bob Boorstin, Google's director of public policy. Dictators always will be able to find a countermeasure, such as the Internet-filtering tools that China uses to exert control over its people, some participants said. Google recently decided to divert users of its search engine in China to Hong Kong's Google search service, when the company discovered that hackers allegedly from China had attempted to penetrate the Web mail accounts of human rights activists.
<urn:uuid:86f9fbd6-72e8-4939-8746-5b6d07d94a2a>
CC-MAIN-2017-04
http://www.nextgov.com/defense/2010/05/internet-isnt-the-agent-of-regime-change-some-hoped-for/46782/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00198-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960879
902
2.640625
3
This chapter gives a brief overview of the facilities provided by Fileshare and also describes how Fileshare works. Fileshare is best suited to applications that share data files concurrently between many users across a network. It supports all of the functionality provided by the base COBOL file handling system and provides several additional features: The ability of any one application to take advantage of these features depends on: You do not need to make any program source code changes to use the basic Fileshare system. Source code changes are only needed to take advantage of some of the advanced features provided by Fileshare. Using the base COBOL file handling system, a typical COBOL I/O request to a shared data file causes the file handler to make several accesses to that data file across the network. See Figure 1-1. Figure 1-1: Conventional Network With Fileshare, a program that needs to access a data file has its request processed by the File Handling Redirector (FHRedir) module. The FHRedir module sends the request over the network to a Fileshare Server that performs the low-level I/O operations and then passes the result of the I/O operation, including the file status, back to FHRedir. FHRedir returns the result to the program. See Figure 1-2. Figure 1-2: Fileshare Network A Fileshare System is made up of two parts. Fileshare Clients: a Fileshare Client comprises a user program making data file I/O requests via the FHRedir module; FHRedir redirects the I/O requests to a Fileshare Server. Fileshare Servers: a Fileshare Server runs on the same machine as the data files that you want to access; the Fileshare Server accepts requests across the network from the Fileshare Client, processes them by calling a local copy of the Micro Focus File Handler and then returns the result to the Fileshare Client. Since a single Fileshare Server processes all the requests from several Fileshare Clients, it can use a single copy of the Micro Focus File Handler, regardless of how many users are accessing the Fileshare Server. This has several advantages: Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
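The request/response flow described above can be sketched in a few lines of Python. Fileshare itself is a COBOL product, so this is only an illustration of the redirection pattern: the class names and the request format are invented, and the two file-status strings loosely mirror common COBOL codes rather than documenting Fileshare's actual protocol.

    # Illustration of the FHRedir-style pattern: client forwards I/O, server does it.
    class FileshareServer:
        """Performs the low-level I/O locally and returns a file status."""
        def handle(self, request):
            op, path, payload = request["op"], request["path"], request.get("data")
            try:
                if op == "WRITE":
                    with open(path, "a") as f:
                        f.write(payload + "\n")
                elif op == "READ":
                    with open(path) as f:
                        payload = f.readline()
                return {"status": "00", "data": payload}   # "00": success
            except OSError:
                return {"status": "35", "data": None}      # e.g. file not found

    class FHRedirSketch:
        """Client-side redirector: forwards each I/O request to the server."""
        def __init__(self, server):
            self.server = server                  # stands in for the network link
        def io(self, op, path, data=None):
            return self.server.handle({"op": op, "path": path, "data": data})

    redir = FHRedirSketch(FileshareServer())
    print(redir.io("WRITE", "orders.dat", "record 1"))
    print(redir.io("READ", "orders.dat"))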
<urn:uuid:83741e48-5867-4be0-93b7-9362d97c3a88>
CC-MAIN-2017-04
https://supportline.microfocus.com/documentation/books/sx20books/fsintr.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00042-ip-10-171-10-70.ec2.internal.warc.gz
en
0.872177
494
3.046875
3
Tackling the problem of getting women into STEM careers This feature first appeared in the Summer 2015 issue of Certification Magazine. Click here to get your own print or digital copy. Tech skills will be required in 80 percent of all jobs in the next decade, yet women in technology have been declining since 1991. At the current rate of decline, fewer than 1 percent of the global tech workforce will be female by 2043. STEM (science, technology, engineering, and math) jobs are twice as likely to be held by men, even in a randomized, double-blind study. Monica Eaton-Cardone, founder and CIO of Global Risk Technologies, says that in order for today’s women to have a chance tomorrow, gender must become a non-factor. “The opportunity to add a valuable contribution to society through technology is a benefit that should be promoted more — especially to women,” says Eaton-Cardone. She believes that women are interested in STEM opportunities, but don’t get many chances to develop or pursue that interest. Eaton-Cardone points to the 66 percent of fourth-grade girls who are interested in math and science — yet only 18 percent of college engineering majors are female. Currently, only 1 in 4 STEM jobs is held by a woman. Eaton-Cardone says that in order to change this, women need to be encouraged and women need to be educated on the growing potential of STEM careers. Eaton-Cardone is a unique case for women in STEM, excelling in a field of men despite having no formal IT background. Her creative solutions for payment processing in Chargebacks911, eConsumerServices, and Global Risk Technologies have enabled merchants, consumers and banks to find solutions for their online businesses. Even taking the rare success stories into account, however, there are many questions when it comes to women in STEM: ● Why do young girls lose their enthusiasm for math and science? ● How can society offer encouragement to women in STEM? ● Why are STEM industries averse to women? ● What programs/opportunities exist to encourage girls in STEM? ● How can companies learn about potential bias in hiring practices? What can they do to change this? ● What are your recommendations to women who would like to get into technology or IT? Monica Eaton-Cardone made a career out of discovering where there is a problem and then solving it herself. She then develops solutions for others who are experiencing the same problem. Eaton-Cardone has a 20-year background in developing retention campaigns, which entails developing technologies around monitoring key performance indicators to help track advertising performance, customer acquisition trends, and other factors that help to secure customers. Her career expanded to the online arena, which she claims is a totally different ballgame because it is constantly evolving. As with brick-and-mortar businesses, principles such as “The customer is always right” are important. But the online arena uses a spaceless customer where there is less accountability for both consumer and merchant, and a reduced barrier to entry for merchants on a global scale. That introduces some unique problems in business — problems that are hard to quantify because online business is a moving target. Take chargeback and credit card processing, for instance: “Technology solutions follow the technology criminals,” Eaton-Cardone says. 
“It is the ‘genius criminals’ who are actually responsible for creating a revolution in this industry because we’re all trying to stay ahead of them — and keep up with the loopholes that they expose — to make our systems even more secure. However, the minute you think that you have the best, most secure system, you’re dead in the water because just a few months go by and someone else has figured out some other way to expose a weakness in your online presence. “You have to continually re-invent and recognize that the most relevant data to analyze when it comes to the economy today is the present, not necessarily historical. I developed a software program strictly for my own use because I found there was such confusion in handling chargebacks and being able to analyze risk in the online environment. Lo and behold, there were a number of other online merchants who needed that solution as well, which gave birth to Global Risk Technologies to serve these online merchants.” Eaton-Cardone’s first project was developing a VOIP (Voice Over IP) technology that connected call centers in four countries so that she could analyze the results. Her education lies in architecture, so she was not exactly interested in IT. What she was interested in, though, was building things and solving problems. She enjoys math, and she likes organizing ways to solve a problem. Technology was something she fell into as a result of solving a problem with well-defined requirements. Can any of us get away from technology? Eaton-Cardone says that every woman on the planet has a natural interest in technology and would expose that talent if she tried it. She believes that women, in general, have an aptitude for design and creativity, tapping into their talents in structure and organization. Most mothers are proficient multi-taskers, she says. This is what technology is! Women just use a different set of tools. Eaton-Cardone has a daughter who is 8. Her daughter is as good with Legos as any boy of the same age, and Eaton-Cardone is certain that her daughter would enjoy a robotics class. At the same time, she’s certain that her daughter would not consider taking such a class without ample encouragement from her mother. By their teenage years, most girls have not been afforded much opportunity to be exposed to what technology is. Girls would enjoy developing a computer program on their iPhones because it’s creative. They would be designing something — it’s not just math; they’d be applying their talents. “Boys are probably more likely to learn math at a faster pace because they take courses like wood shop, and guess what wood shop is?” Eaton-Cardone says. “A bunch of angles that allow them to apply math to their creativity with the wood; they’re learning a skill (wood shop) in tandem with math.” How do we provide the same opportunities to girls? The top-down approach — putting pressure on corporations to hire more women — is not only unworkable but is actually damaging. What ends up happening is that people are interviewed to become computer programmers or coders, and there aren’t any women to interview. Eaton-Cardone says she may have one woman out of 100 applicants, and she must fight the impulse to hire that one woman, because to do so would make that woman a charity case. Suppose, for example, that the lone female applicant is not as qualified as the male applicants. So now there is a woman who is setting an example for every other woman, and she’s not very good. 
To hire her simply to make a statement would not be fair to her, to women in general, to the corporation, or to the 99 men who applied. Trying to incentivize female applicants with money or scholarships doesn’t work because men and women go to college to pursue their passions. Oftentimes money is not enough to get them to change their passions. One needs to start at a younger age. Eaton-Cardone says she overheard a man at a seminar, who said, “We’re giving everyone the same opportunity.” But Eaton-Cardone claims it’s not an opportunity if we’re telling a 13-year-old to choose between a sewing class and learning robotics. It would be an opportunity to actually require students to take a robotics class to allow that exposure to technology to happen. “If the only piano players we had on the planet were very young children who expressed a desire to learn piano, we’d have no pianists,” Eaton-Cardone says. “You expose them to something with parental stewardship, and the children learn whether or not they have a talent for the skill and become engaged. Boys are drawn to STEM because they are more naturally interested in computer games, and girls are drawn to creativity and design work. Women aren’t given an opportunity early in their lives to put their creativity and design talents together with technology.” Technology is a genderless field. One cannot expect a girl at the age of 13 to decide everything in which she’s going to be interested. In Asia, girls are required to take trigonometry, which allows them to consider a career in STEM. Many IT pros are given the flexibility to work from home or telecommute. If one has the talent, one has terrific flexibility and opportunity if a STEM field is chosen. Women do not recognize the freedom that is afforded by a STEM field. One is hired because of one’s talent and interest in STEM, just as men are. It would be interesting to see how many girls would pursue a STEM career if they took wood shop. Expose them to areas outside a traditional girl’s comfort zone, and more girls will go into STEM. The best learning comes when there is an application method for the theory being taught. Girls can’t be expected to naturally excel in math when they are only given theory without any application for it. Boys are naturally choosing things that apply the math theory. Eaton-Cardone: “We like to say that a person either has the STEM gene or he/she doesn’t. At Global Risk Technologies, we hire a woman as an executive assistant, and we will reveal her hidden abilities in numbers. We hear women saying, ‘I like to help people,’ and women don’t realize that majoring in STEM will allow them to help many more people than studying non-STEM subjects. But they’ve never seen how they can apply their natural abilities. You can learn how to do anything online. Opportunities are boundless. It’s back to having confidence in trying new things; thinking outside the box. It is damaging to women to tell them they are the underdog. The men out there are producing. Don’t be afraid to start at the very bottom, and if you perform, the company will recognize this. If you feel sorry for yourself, you will become a liability to that company.” Some final advice to women in technology from Monica Eaton-Cardone: Find a subject about which you can be passionate. Find a mentor in that subject area and become excellent at it. This takes work and many hours. Turn your interest into a passion. Invest in yourself. Pick it and stick with it. 
Actor Marc Anthony told reporter Meredith Vieira what his father said to him many years ago. “My dad told me early on, he said, ‘Son, we’re both ugly.’ I swear, he says it to this day. And he goes, ‘You work on your personality. It builds character.’ ” We would have a better planet if both men and women put 100 percent of their efforts into their passion for something. Why can’t that something be STEM?
<urn:uuid:324b988c-34d3-467f-9ea2-b250a3ed10f7>
CC-MAIN-2017-04
http://certmag.com/tackling-problem-getting-women-stem-careers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00436-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96955
2,328
2.765625
3
Li Z., Southwest University | Tian L., Chongqing Landscape and Gardening Research Institute | Tian L., Chongqing Urban Landscape Engineering Technology Research Center | Cuccodoro G., Muséum d'Histoire Naturelle | And 2 more authors. Zootaxa | Year: 2016 The Oberea fuscipennis (Chevrolat, 1852) species group is revised based on morphology and DNA barcode data. Oberea diversipes Pic, 1919 and O. infratestacea Pic, 1936 are restored from synonymy. The following two new synonymies are proposed: Oberea fuscipennis ssp. fairmairei Breuning, 1962 = Oberea diversipes Pic, 1919; and Oberea hanoiensis Pic, 1923 = O. fuscipennis (Chevrolat, 1852). Xian X.-D., Chongqing Landscape and Gardening Research Institute | Feng Y.-L., Chongqing Landscape and Gardening Research Institute | Willison J.H.M., Dalhousie University | Ai L.-J., Chongqing Landscape and Gardening Research Institute | And 2 more authors. Environmental Science and Pollution Research | Year: 2015 Two examples of the creation of naturalized areas in the littoral zone of the Three Gorges Reservoir in the urban core of Chongqing City, China, are described. The areas were created for the purpose of restoring ecological functions and services. Plants were selected based on surveys of natural wetland vegetation in the region, and experiments were conducted to discover the capacity of species of interest to survive the sometimes extreme hydrological regimes at the sites. Novel methods were developed to stabilize the plants against the rigors of extreme summer floods and constant swash, notably zigzag berms of rocks wrapped in iron mesh. The areas include native reeds, grasses, shrubs, and trees. Plant communities in the areas are zoned according to flooding stress, and their structure is less stable at lower elevations that are subjected to greater stress. The tall grass Saccharum spontaneum (widespread in Southern Asia) and the tree Pterocarya stenoptera (native to Southwest China) are notable for their utility at these sites in the center of a large city. Communities of tall reeds and grasses have become so dense and stable that they now provide the ecosystem services of capturing river sediments and resisting erosion of the river banks. It is recommended that extensive greening of the riparian zones in urban areas of the Three Gorges Reservoir be conducted for the purpose of providing ecosystem services, based in part on the experiences described here. © 2015, Springer-Verlag Berlin Heidelberg.
<urn:uuid:6afd494e-e440-45c6-abfe-3887370dcb5a>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/chongqing-landscape-and-gardening-research-institute-2142235/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908225
595
2.65625
3
DOE launches smart grid Web portal Smart Grid Information Clearinghouse offers use cases, lessons learned - By Alice Lipowicz - Jul 08, 2010 The Energy Department has started a new Smart Grid Information Clearinghouse Web site to provide a forum for information sharing on smart grid technologies. The beta version of the Web portal, launched July 7, provides information on technologies, standards, rules, use cases, training and other best practices for smart grid technologies that use sensors, infrastructure and communications devices to better monitor and control energy use. The fully operational Web site is anticipated to start this fall. The clearinghouse Web site provides an overview of the technologies, projects and deployments underway. Another goal of the Web site is to collect comments and suggestions from the smart grid community and the public about application, research and development of smart grid technologies. A related federal Web site, www.smartgrid.gov, also tracks smart grid projects in detail. It is sponsored by the Federal Smart Grid Task Force, an interagency group of nine agencies, including the Energy, Commerce, Defense and Homeland Security departments. Smart grid technologies include smart meter implementations, as well as upgrades to transmission and energy distribution infrastructure. In a report released July 7, ABI Research estimated that the global smart grid industry would grow to $46 billion by 2015, from $12 billion in 2008. Energy last year selected the Virginia Tech Advanced Research Institute in Arlington, Va., with assistance from the Institute of Electrical and Electronics Engineers and the EnerNex Corp., to create the new Web site. The five-year contract is valued at $1.3 million. Funding was provided through the economic stimulus law of 2009. Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
<urn:uuid:4d199b30-12fa-4b9b-bdcf-5e9b6a61b901>
CC-MAIN-2017-04
https://fcw.com/articles/2010/07/08/energy-department-debuts-smart-grid-web-site-for-best-practices.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00556-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899991
387
2.625
3