text
stringlengths
234
589k
id
stringlengths
47
47
dump
stringclasses
62 values
url
stringlengths
16
734
date
stringlengths
20
20
file_path
stringlengths
109
155
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
57
124k
score
float64
2.52
4.91
int_score
int64
3
5
With the increase of identity theft resulting from illegal use of Social Security numbers and other personal information, the federal government's use of radio frequency identification (RFID) is met by a typical American suspicion colored by Orwellian fears. Countless organizations already track our buying habits and daily activities with information we voluntarily provide to credit card companies and online vendors. We purchase fruit snacks at the store, and a week later, fruit snack coupons appear in our mailbox. Some don't seem to mind being regularly tracked by marketing companies, but how would we feel if government agencies used similar methods to keep tabs on us? This question has been raised regarding government use of RFID, and few people appear willing to suffer this loss of privacy. But what do we know about RFID, and what benefits and challenges does this technology present for those employing it and those being tracked? Another Sweeping Use of Radio Waves RFID tags store and remotely retrieve data with silicon chips and antennas that respond to radio frequency queries from a transceiver. Any entity implementing RFID will need tags, tag readers, edge servers, middleware and application software. Passive tags, the most popular variety, do not require an internal power source and are cheaper to manufacture. Active tags broadcast their own signal and are used in container management, and require an internal power source. Although relatively new, RFID is already used in numerous ways, including highway toll collection and item-level tagging by Wal-Mart, which mandates supplier compliance with the technology. Depending on the application, tags can provide data on a product's whereabouts or next destination in the supply chain; and specifics, such as date of use or manufacture. The technology is not yet widely deployed, but federal RFID spending is projected to increase 120 percent by 2009, according to research firm INPUT. Defense agencies lead with that spending growth, with significant civilian agency adoption expected in 2007. INPUT expects substantial growth as private-sector use demonstrates similar cost benefits in areas outside the supply chain process. The Department of Justice began using RFID file tracking systems in 2005, and other federal branches are acquiring such systems now. The Department of Defense is deploying RFID technology to improve its supply chain management for the war in Iraq. In 2004, the U.S. Food and Drug Administration began promoting its use to reduce counterfeit drugs in the supply chain, and some pharmaceutical companies have followed suit. The Department of Homeland Security (DHS) and the Coast Guard use RFID for shipment tracking in the ports of Los Angeles and Long Beach. Some more complex RFID devices can be encrypted and used to authenticate personal identification. Biometric passports -- being introduced in Europe and the United States -- identify holders with an embedded RFID chip, which is basically a digital signature. In the United States, E-Passports -- whose RFID chips contain a photo, biographic information from the data page, and other identifying data such as fingerprints -- have only been issued to diplomats thus far. They will likely be issued nationwide by October 2006, at which time, 27 countries will also be required to issue readable passports. Passport chips are remotely readable and in use in the European Union and Japan. RFID devices are mostly used to track product locations on shelves or in the supply chain -- book tracking in libraries and bookstores, animal identification, ID badges and building access control. Colorado even uses RFID to protect elk herds from contagious diseases. Like any new technology, these small, wireless devices bring their own set of challenges and concerns. RFID will continue generating research into uniform standards, security and privacy, while observers watch how far Americans want the government to go in guaranteeing safety and how much privacy they're willing to sacrifice in the process. The possibilities for RFID use are too numerous to imagine, especially with its potential in all government levels for information management and national security tracking. Buying and Using RFID RFID is gradually being considered as a tool for streamlining government services and processes. "Process improvements, and more importantly cost savings, obtained through the employment of RFID in a limited number of existing programs, such as DHS's Free and Secure Trade program, will encourage greater acceptance within civilian agencies in the future," said Chris Campbell, senior analyst for federal market analysis at INPUT. He said RFID adoption would continue to appear at the program level versus agencywide until the technology is more widely accepted. This new technology's cost can be prohibitive for some, said Kevin Kalinich, co-national managing director of technology and professional risks for Aon, a financial services group. Kalinich, who works daily with major retailers to assess and mitigate the risk of implementing RFID, admits the technology is costly and complex. "Commercial adoption is not widespread because of this. RFID tags cost anywhere from 25 cents to $1 for implementation. Bar codes, on the other hand, cost 1 cent per product." When used for student and employee IDs, each tag can cost as much as $5 to $7. Regardless of cost, Kalinich said RFID is likely to have a profound and positive impact on its users' IT infrastructure. "The upside is tremendous real-time collection of information. Governments don't care as much about paying the millions of dollars if they have a different goal in mind than the commercial entities, where it's all about profitability." Kalinich thinks the RFID issue is unprecedented in some ways, which is why his company also offers insurance to entities wishing to protect themselves against potential misuse. RFID's short history, though, does present some challenges for his business, he said. "The challenge from our standpoint is this: In the insurance business, we like to work using actuarial analysis and predictability. We don't have benchmarks or years of actuarial data in the case of RFID." According to Paul Mathans, manager of emerging technologies and public services at BearingPoint, cost-conscious procurers can take heart that prices on RFID tags in broad ranges have dramatically decreased since demand has increased, and sticker shock depends on the overall cost scenario. "If you are losing books in the libraries, paying in the 25- to 50-cent price range per tag makes sense." Mathans said in general, price is not a big part of today's discussion. "Looking for the innovative application designs is the critical issue." He sees abundant potential and practicality in RFID, and said the tags, in addition to tracking important military materiel shipments to Iraq, can also streamline health-care assets and patient management, track who's in the penal system, prevent counterfeit sales, and facilitate passage of legitimate travelers across U.S. borders while freeing more resources to track illegal crossings. Jim Harper, director of information policy studies at the Cato Institute in Washington, D.C., and member of the DHS's Data Privacy and Integrity Advisory Committee, also thinks RFID adds significant value to the supply chain. "RFID has the potential to wring out inefficiency so taxpayers and consumers can keep more of their dollars. Literally billions of dollars are wasted when logistics managers lose track of materiel and it sits idle, when it has to be reshipped, or when products spoil or expire. Billions more are lost to theft," Harper said, adding that in identification tracking, the benefits are much smaller when managers try to use RFID in human environments, and the costs in terms of privacy and security soar. What About Big Brother? A potential problem RFID systems pose for both the private and public sectors is that the data contained in the tags, and their adjunct personal and financial data located elsewhere, are very attractive to criminals, especially in the case of digital passports. If the middleware and databases connected with RFID are infected by viruses, the actual tags can be affected as well. Depending on the use of an affected tag, any unblocked security breach could threaten the associated information and who uses it. Privacy advocates are concerned about technologies like RFID because they fear chips will track individual habits and transmit personal information. But are such fears overblown? Biometrics certainly possess the creep factor when patterns from our own retinas, fingerprints, voices and DNA can potentially be used to track our location and behavior. On the flip side, cyber-security becomes that much more important in preventing an RFID data file from being combined with home addresses, and Social Security and home phone numbers. But what if RFID data in one tag is only a number, similar to the Social Security number, and isn't used in conjunction with other personal information? "My father used to paint a yellow line on every tool he purchased as a way to identify it," said Bradford Brown, managing director for Protiviti Inc., a technology risk consulting practice for the federal sector in Washington, D.C. "You send your child off to camp and stick a label on clothes. At football games, you label your cooler. The difference with RFID is someone else is doing it for us, and we don't like that." When speaking about RFID at a recent conference at the Massachusetts Institute of Technology, Brown fielded many questions and concerns about privacy. "People can see a lot of value in tracking and identifying assets, but when it comes to identifying people and baggage, the issue becomes dicey," he said. "RFID goes to the core of how closely Americans feel about the right to privacy." The concern is that biometric information databases, if accessed or used illegally, can be manipulated by criminals, terrorists or spies for foreign governments. "The government says it will employ best practices for RFID by following up to make sure information is encrypted, by limiting access, and by only using information for its designated purpose," said Kalinich. "But there is a tremendous disparity in data protection. You're only as good as your worst database." Privacy is not the only worrisome aspect of RFID. Some believe the possibilities for attacks and misuse of RFIDs are as numerous as its uses. Researchers at the Johns Hopkins University Information Security Institute broke into a cryptographically enabled RFID transponder -- the Digital Signature Transponder manufactured by Texas Instruments -- used in several wide-scale systems, including vehicle immobilizers and the Exxon Mobil Speedpass system. The study indicated that the potential for such security breaks increased the chances of someone's car being stolen and of Speedpass being criminally used to purchase gasoline. For these reasons, RFID is constantly scrutinized and improved for use in private and public spaces. For example, companies are looking for ways to deactivate the tags after a set period of time, and alleviate the possibility of tags following people and continuing to record information. There are four legal issues to consider with RFID tags, according to privacy/security attorney Kraig Baker, a partner with the law firm Davis Wright Tremaine. He cited consumer concern about not knowing when their information is being used; a lack of established ground rules for data collection and use; what the data will link to; and the concern that personal information is linked in one place, such as a digital passport. "The tag is fundamentally about location," said Baker, "and there is concern that the information will be linked to a cross-sectional database, combined with sensitive data about you, your DNA and location, to a point where there is no privacy at all." How does RFID differ from previous information available in public records, such as Social Security numbers listed in real-estate transactions? "It used to be comforting for Americans that some things were too difficult to link and figure out," Baker said. "You had to traipse down to local government agency to find and link the information. You knew that you didn't have total privacy, but it was lots of work to find anything out." Now, he said, the notion of getting information with the click of a button creates this new discomfort, especially when combined with the idea that the sequence of events in one's life should be private, despite being in the public space. It looks and sounds a lot like stalking. That is where the tags struggle, said Baker. "Agencies and companies that plan to use it have not done a great job educating the public about what the tags can and can't do, how they will monitor its use and what security systems are in place." Regardless, such a powerful and flexible technology is here to stay. There is definitely work ahead for those in government who hope to use RFID to accomplish a variety of goals, including cost-cutting, people and item tracking, and greater overall efficiency. Baker said if entities using the technology sell it to their customers, they will have more success. "Right or wrong, Americans tend to be willing to give up privacy for efficiency, so if you sell people on the fact that this passport tag will allow them to go through security faster, people are more likely to agree with manageable risks that bring an efficiency benefit." The risk-to-efficiency ratio is part of the equation for CIOs as well. "The biggest challenge facing agencies adopting RFID is how to construct a system architecture that will handle substantially increased amounts of data," said Campbell. RFID technology has brought the issues of privacy and security to the forefront as government agencies struggle to find secure ways to store personal data, especially in light of the growing concern over identity theft. According to Brown, the Federal Information Security Management Act was passed in January 2003 to address security concerns, protect the nation's critical information infrastructure and encourage government to look at managing the risk in a regulatory environment. "As with any other technology, if you have uniformity and standards, a framework in place to assess risk, and the right policies and procedures in place, I have no doubt we will work around this." Do we have a choice with our privacy where RFID is deployed? "Where commercial use of RFID is concerned, you know what risks you are taking," Kalinich said. "Public use could be mandated by our government where you don't have a choice about what information they collect." Or, as Robert Atkinson, president of the Information Technology and Innovation Foundation, told an audience at the Federal Office Systems Exposition 2006 in Washington, D.C.: "We need to distinguish when we react to privacy concerns. Rather than ban the technology, we need to make sure government IDs have encryption devices," he said, comparing RFID to the information contained on a driver's license. "Only the technology is different. The privacy issue is the same."
<urn:uuid:f8f63a9e-2862-4dc7-973a-e217d8ec5a55>
CC-MAIN-2017-04
http://www.govtech.com/magazines/pcio/Tracker-Locator-Identifier-Spy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951917
2,961
2.5625
3
A way has been found to provide power for deep space mission. A University of California scientist working at Los Alamos has developed a way to generate electrical power for deep-space travel using sound waves. The traveling-wave thermoacoustic electric generator could power space probes to the furthest reaches of the universe. Scott Backhaus, working with colleagues from Northrop Grumman, designed a thermoacoustic system that is twice as efficient as similar thermoelectric generators on spacecraft and which uses heat from the decay of a radioactive fuel to generate electricity. Current devices convert only 7 per cent of the heat source energy into electricity. The traveling-wave engine converts 18 per cent of the heat source energy into electricity. The attraction of the device, now with the efficiency needed for space travel, is the fact that the only moving component in the device besides the helium gas itself is an ambient temperature piston. This gives it the reliability needed for long-distance space probes. Traveling wave thermoacoustic heat engines convert high-temperature heat to acoustic power with high efficiency, without moving parts. Electrodynamic linear alternators and compressors have shown high acoustic-to-electricity transduction efficiency along with long maintenance-free lifetimes. Optimising a small traveling wave thermoacoustic engine for use with an electrodynamic linear alternator gave the Backhus team a traveling-wave thermoacoustic electric generator. It is a power conversion system, good for demanding applications such as electricity generation aboard spacecraft. Thermoacoustics is the thermodynamic interaction of acoustics with solid surfaces that posses a temperature gradient. The oscillations of acoustic pressure generate heat transfers to and from solid surfaces while acoustic displacement oscillations cause the heat transfers to happen at spatially separate locations. Time phasing of the pressure and displacement oscillations, the sign and magnitude of the temperature gradient in the solid and the location of that temperature gradient in the acoustic wave can all be used to create a variety of devices. These include standing-wave and traveling-wave heat engines and refrigerators. The gas undergoing the acoustic motion is the only moving component. The absence of moving parts lets you tailor the device geometry to a particular application and gives you the reliability. The new device integrates a traveling wave thermoacoustic heat engine with a linear alternator to generate electricity from high-temperature heat. They used a flexure-bearing-supported linear alternator. It is composed of a stack of several spiral-cut circular metallic plates with a piston attached to its center. The stack forms an extremely stiff bearing in the radial direction, soft in the axial direction. This lets the piston move in its cylinder with a radial clearance as small as 10 micrometers. The stiff flexure bearing keeps the piston from touching the cylinder and the small clearance effectively forms a nonwearing seal that requires no lubrication. A coil of copper wire attached to the piston oscillates with it. As the coil moves through a magnetic field generated by permanent magnets, the linear motion of the piston is transformed into electricity. Converting high temperature heat to acoustic power is good where the acoustic power can be used directly, such as powering a traveling-wave thermoacoustic refrigerator. When it is converted to electricity through a linear alternator, interface requirements and size restrictions put extra demands on the engine that can significantly change its design and optimisation. This made optimisation harder but still doable; you minimise mass and volume, because it is on a spacecraft, while maximising electric power output. They modified the thermoelectric engine to minimise the peak-to-peak stroke while increasing acoustic power output. The traveling-wave engine is a modern adaptation of the 18th century thermodynamic invention of Robert Stirling--the Stirling engine--which is similar to a steam engine but uses heated air instead of steam to drive a piston. It works by sending helium gas through a stack of 322 stainless-steel wire mesh discs called a regenerator. The regenerator is connected to a heat source and heat sink that causes the helium to expand and contract. This creates powerful oscillating sound waves that drive the piston of a linear alternator to generate electricity. To comment on this article, write to us at firstname.lastname@example.org out more about Technical Insights and our Alerts, subscriptions and research
<urn:uuid:7b4e7881-4b84-4bf9-a596-73ffa430164b>
CC-MAIN-2017-04
http://www.growthconsulting.frost.com/web/images.nsf/0/AE8F0D22224B298265256F4400264582/$File/TI%20Alert%20-%20Aero%20NA%20.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884836
895
3.90625
4
Do you want to make the Internet a safer place? Maybe this is something for you. Internet activists, Electronic Frontier Foundation (EFF) and Access have teamed to launch HTTPS Now, an international campaign aimed at soliciting consumers to help make web surfing safer. HTTPS Now comprises three initiatives: - Individuals are encouraged to use HTTPS Everywhere, a security tool for Firefox that automatically encrypts the user’s browsing session by changing from HTTP to HTTPS whenever possible, - Individuals are asked to complete the HTTPS Now survey advising whether a site uses HTTPS and how it has been implemented, and - Website operators are encouraged to use selected resources to learn how and why to deploy HTTPS correctly. See “More information & Other Resources” at HTTPS Now. Proper deployment of HTTPS can limit the impact of malicious tools such as Firesheep which can be used to compromise email or social network accounts.
<urn:uuid:fd92f68d-db40-4618-bc11-60239c512abe>
CC-MAIN-2017-04
https://www.entrust.com/https-now/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.883889
182
2.578125
3
Insider threats may be the biggest – and least addressed – cyber risk facing organisations today. A malicious or simply careless insider can quickly expose a company’s confidential data and valuable trade secrets, undermining its competitive advantage, damaging its brand and even endangering other employees. The phenomenon is likely hugely under-reported, because most firms want the problem to quietly go away and avoid bad publicity. Leadership worries: “What does it say about our company that our own people are a threat?” On top of this, companies have had limited means of protecting themselves. For most of business history, risk mitigation has involved one-time background checks and company policy education for new hires, and then crisis response, whether that’s responding to employee concerns or a data leak caused by an insider. >See also: Why insider threats are still succeeding Even still, reported numbers are high. At least 43% of data loss is due to insiders, Intel Security recently reported. Some data loss is caused by actors with malicious intentions, such as in the case of GlaxoSmithKline, in which insiders are accused of stealing cancer research to sell to China. Some of it is more opportunistic, in the case of employees using a past employer’s intelligence to get ahead at a new job. A survey by Symantec found that half of employees admit to taking corporate data when they transfer jobs, with 40% suggesting they plan to use the information at their new organisation. Additionally, some insider-caused data loss is inadvertent and careless – for example, in the case of employees falling prey to phishing scams. And some even happens at the behest of outsiders. Criminal gangs now actively seek out and exploit vulnerable insiders, such as those with addiction or financial problems. Attackers are also gathering publicly available information – on social media sites for example – and targeting employees with advanced access privileges in an organisation’s network, like those in legal, payroll and HR. They then attempt to gain access to their companies’ systems and commit larger frauds. Effectively reducing all types of insider cyber risk chiefly involves setting up a proactive program that can prevent destructive events in the first place. This depends on identifying and defusing at-risk insiders before they reach crisis point. Unlike external cyber threats, where an attack is nearly inevitable and resilience stems from preventing escalation, insiders who pose threats can be exposed before they act on their impulses. Huge progress in the field of big data analytics has made this possible. Here’s how it works: insiders who engage in malicious or non-malicious behaviour often signal it in advance through their choice of language. By analysing employee communications algorithmically in bulk, with the right psychologically-proven detectors in place, a company can flag individuals who are disgruntled or under extreme mental stress, and can measure changes in emotion, attitude and personality over time. And all of this can be done within the guidelines of EU data privacy and ethical practices. But tools are just one part of a larger strategy. If an at-risk individual is identified by machine intelligence, there still needs to be a human team trained in-house to respond to the finding by reviewing other aspects of the individual’s behaviour and determining a response that will defuse, and not enflame, the situation. It is also important for company leadership to be aware of common 'stressors' that trigger malicious or irrational behaviour by insiders. For example, a redundancy or job-loss can propel an individual who is already on edge to act out, spurring them to steal data for use at a new job or to sell it to competing organisations or even nation-states. By identifying potential 'bad leavers' well in advance, organisations can prepare and take precautionary steps. A comprehensive insider-threat programme consists of three elements: foundational policies and best practices for protecting valuable company information; a trained multi-disciplinary team including HR executives, security staff, and legal professionals who can spot concerning behaviours associated with insider risk and can devise appropriate responses; and technology that leverages big data behaviour analytics to detect high-risk individuals by identifying technical and behavioural anomalies. For example, access to highly sensitive information should be tightly limited, and data-loss protection tools that monitor for data exposure, should be in place. Senior executives have woken up to the dangers of hacking and data theft in recent years, but most are still focused on the threat from unknown actors behind computers far away. While such attacks – including the recent increase in ones using ransomware – have generated headlines, for most companies bigger dangers are lurking inside the company and within the companies of third party partners and service providers. These risks also need to be approached with direct attention and constant vigilance, with effective strategies and tools in place to mitigate them. When employers walk into their office buildings every morning, they expect to be safe, and that safety can come in many different forms, such as the security of knowing that they work at a financially sound organisation, and the safety of being free from harassment and workplace violence. The recommendations presented here apply not only to insider cyber risk, but also nearly all types of insider risk as well. Business leaders must develop proactive plans to monitor, detect and prevent bad actors within their organisation before they strike, and can do so by acting within the bounds of privacy laws and without creating a culture of paranoia. It is self-evident that ignoring these risks can result in catastrophic consequences. Arguably, organisations and their leadership have a heightened responsibility and capacity to tackle insider threats than external ones over which they have no control. Sourced from Scott Weber, managing director, Stroz Friedberg
<urn:uuid:b61ac7af-1455-4862-9386-b2fbadc9b94a>
CC-MAIN-2017-04
http://www.information-age.com/rogue-employees-may-be-riskier-outside-hackers-so-how-do-you-stop-them-123461253/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00535-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946699
1,156
2.53125
3
Schmid B.,Senckenberg Biodiversity And Climate Research Center Bik d Senckenberg Gesellschaft For Naturforschung | Schmid B.,Stellenbosch University | Nottebrock H.,Stellenbosch University | Nottebrock H.,University of Hohenheim | And 10 more authors. Ecography | Year: 2016 The responses of animal pollinators to the spatially heterogeneous distribution of floral resources are important for plant reproduction, especially in species-rich plant communities. We explore how responses of pollinators to floral resources varied across multiple spatial scales and studied the responses of two nectarivorous bird species (Cape sugarbird Promerops cafer, orange-breasted sunbird Anthobaphes violacea) to resource distributions provided by communities of co-flowering Protea species (Proteaceae) in South African fynbos. We used highly resolved maps of about 125 000 Protea plants at 27 sites and estimated the seasonal dynamics of standing crop of nectar sugar for each plant to describe the spatiotemporal distribution of floral resources. We recorded avian population sizes and the rates of bird visits to > 1300 focal plants to assess the responses of nectarivorous birds to floral resources at different spatial scales. The population sizes of the two bird species responded positively to the amount of sugar resources at the site scale. Within sites, the effects of floral resources on pollinator visits to plants varied across scales and depended on the resources provided by individual plants. At large scales (radii > 25 m around focal plants), high sugar density decreased per-plant visitation rates, i.e. plants competed for animal pollinators. At small scales (radii < 5 m around focal plants), we observed either competition or facilitation for pollinators between plants, depending on the sugar amount offered by individual focal plants. In plants with copious sugar, per-plant visitation rates increased with increasing local sugar density, but visitation rates decreased in plants with little sugar. Our study underlines the importance of scale-dependent responses of pollinators to floral resources and reveals that pollinators’ responses depend on the interplay between individual floral resources and local resource neighbourhood. © 2015 The Authors Source
<urn:uuid:ea3b9e00-f257-442c-a8a5-529abd9f4930>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/biodiversity-and-climate-research-center-bik-d-senckenberg-gesellschaft-for-naturforschung-199815/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.851484
463
2.5625
3
Once a BGP session is established, routers will exchange two types of messages: KEEPALIVE and UPDATE. Keepalive messages are sent to let a neighboring router know we are still alive, but just didn’t have any updates to send. The update message carries three types of information: a list of withdrawn routes, a set of path attributes and “network layer reachability information” (NLRI). If previously advertised prefixes are no longer reachable, they’re sent as withdrawn routes in an update. The NLRI field is simply a list of prefixes that are advertised as being reachable, and the path attributes field holds information about those advertised prefixes. There are four types of path attributes: - Well-known mandatory: all BGP routers must understand these and they must be present for all prefixes - Well-known discretionary: all BGP routers must understand these, but they don’t have to be present - Optional transitive: BGP routers are not required to understand these, but they must be passed along in updates to neighbors - Optional non-transitive: BGP routers are not required to understand these, and if they don’t, the attribute must not be passed along in updates to neighbors Today, we’ll be looking at the BGP NEXT_HOP attribute, which is a well-known mandatory attribute. When looking at a single router that has a BGP session towards another router in a different autonomous system — i.e., an external BGP or eBGP session—the next hop attribute is usually very boring: it simply contains the IP address of the neighboring router. IP packets towards an address covered in a prefix learned over BGP are forwarded to the IP address in the next hop attribute. When a router sends an update over eBGP, it updates the next hop attribute, normally with its own address on the interface that the update is transmitted over. However, there is a bit more to the next hop attribute. With internal BGP (iBGP) between routers within the same autonomous system, the NEXT_HOP is not updated. So in the figure, router B in AS 20 gets two prefixes from router A in AS 10 with 10.10.10.10 as the next hop. Router B simply sends packets to destinations such as 198.51.100.1 to 10.10.10.10, which is an address on a directly connected interface for router B. Router C, on the other hand, is not connected to the 10.10.10.x subnet, so it has no idea where those packets need to go. The usual solution to this problem is to run an interior gateway protocol (IGP) such as OSPF and redistribute connected subnets into that IGP. Router C now knows that packets towards 198.51.100.1 go towards address 10.10.10.10 through BGP and that packets towards 10.10.10.10 go to router B through OSPF. So by doing a recursive routing table lookup router C knows to send packets with destination 19220.127.116.11 to router B. This also works if there are additional hops between routers B and C. An alternative solution is to configure eBGP routers with next-hop-self. In that case, those routers will put their own address in the next hop attribute and there is no need for iBGP routers to know the subnets used for eBGP. This way, it’s possible to not have an IGP, as long as all the routers are directly connected to each other. By using a route map, it’s possible to manually rewrite the next hop attribute. This is often useful in the face of (distributed) denial-of-service (DoS and DDoS) attacks. For instance, on a Cisco router, this configuration will update the next hop address to an address that is routed to the Null0 interface: ip community-list 13 permit 65000:13 route-map customer-in permit 10 match community 13 set ip next-hop 18.104.22.168 ip route 22.214.171.124 255.255.255.255 Null0 The result is that if the 65000:13 community attribute is present on a prefix, all traffic towards that prefix is routed to the Null0 interface and dropped. An ISP can use this configuration to allow customers to have traffic for certain (sub-) prefixes or individual addresses filtered out so other prefixes (which don’t get the 65000:13 community) remain unaffected by the attack. Alternatively, traffic can be redirected to the address of a filter box that can look deeper inside the packets and remove the unwanted traffic. Please note that it’s also possible to use a route map to update the next hop address when forwarding IP packets. This is a way to bypass normal routing table lookups and is known as “policy routing”, and should not be confused with modifying the next hop attribute in BGP or other routing protocols. On an internet exchange, a lot of routers from different autonomous systems are connected to a big, shared network. Organizations connected to the exchange can then set up BGP sessions between them as desired. However, on big exchanges with many members, this can be a lot of work. So most Internet Exchanges run one or two route servers. If you then connect to (peer with) the route server, you’re automatically connected to everyone else who is also connected to the route server. However, if the route server would include its own address in the next hop attribute for BGP updates it sends out, that would mean all the traffic between route server users would go through the route server. Fortunately, BGP is smarter than this. For instance, suppose 192.0.2.1 is the route server, with 192.0.2.2 and 192.0.2.3 being route server users. When the route server gets an update from 192.0.2.2, it knows that router 192.0.2.3 can reach 192.0.2.2 directly, because all three routers are connected to the same shared network. So the route server doesn’t update the next hop address in this case, and even though BGP updates flow through the route server, the actual traffic is exchanged directly between the routers of the internet exchange members. In our blog post on IPv4 BGP vs IPv6 BGP we talked about how it’s best practice to use separate eBGP sessions for IPv4 and IPv6. The reason for this is that the router can’t simply put the IP address for the local end of the BGP session in the next hop if the session runs over IPv4 and the prefixes are IPv6, or the other way around. Because the next hop isn’t updated for iBGP, sending both IPv4 and IPv6 prefixes over a single (IPv4 or IPv6) iBGP session is not an issue. What we didn’t mention is that for IPv6, BGP actually exchanges two next hop addresses. Other routing protocols work over link local addresses—the addresses starting with fe80::/64 that IPv6 automatically configures on all interfaces. Link local addresses are necessary to generate proper ICMPv6 redirect messages, hence the need for BGP to know about them. However, using next hop addresses exclusively wouldn’t work in BGP because, as we saw earlier, iBGP routinely exchanges next hop addresses that are multiple hops away and are decidedly not “link local”. So, IPv6 BGP simply exchanges both. When using BGP commands such as show bgp ipv6 unicast <prefix> the regular global next hop address will show up, but with commands like show ipv6 route <prefix> a link local address may appear.
<urn:uuid:4e25f9e6-1ec2-4aa6-b116-0f8cbea0e78b>
CC-MAIN-2017-04
https://www.noction.com/blog/bgp-next-hop
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926234
1,649
3.046875
3
Why DNS Propagation Takes So Long Many of our KB (knowledge base) articles mention DNS propagation delay. You know you need to be patient as you wait for your site to become live to the rest of the world, but you also want to explore the possibility that a problem may exist, delaying the process even more. This article describes DNS and propagation delay and points you to some tools that can help you determine if a problem exists and you need to contact our support department or if patience really is a virtue. What is DNS? DNS, short for Domain Name System, is the service which translates that domain name you type into your browser into an IP address, and tells your browser which server it needs to connect to to load the site you want to visit. It's handy because it you had to remember the IP address of every website you visit, it would make surfing the internet much harder. When a site is set up, as the hosting provider we create a Master DNS record on our DNS servers, which updates any changes made to your DNS records on the server every 15 minutes. You can request that the registrar of the domain point to our DNS server as being the master authority of your domain. Why Does DNS Take So Long to Propagate? You have registered your domain name, uploaded your website to one of our web servers, and asked your registrar to either use our name servers or to point your "A" record to your web server's IP address. Once that this is done, what's the hold up? When your website's address is entered into a browser, the computer requests the IP address of the server housing your site from your Internet Server Providers (ISP) DNS records. If the site is not listed in the records it queries registrars to find out who the DNS start of authority (SOA) is for your website. If you're using your registrar's name server as your SOA, it looks up the "A" record for your domain and returns the IP address of the server listed. If you are using our name servers, the registrar points the browser to our DNS servers to determine the IP Address for your domain name. From there the request is sent to the server the domain is hosted on which then provides the browser with the website. To speed the loading of websites, each ISP caches a copy of DNS records for a period of time, sometimes up to 48 hours. This means that they make their own copy of the registrars' master DNS records, and reads from them locally instead of making a direct request to the domain registrar every time a request for your site is made. This speeds up web surfing quite a bit by: - decreasing the return time it takes for a web browser to request a domain lookup and get an answer and - reducing the amount of traffic on the web. The downside to caching the master DNS records is because each company or ISP only updates their records every few days, any changes you make to your DNS records are not reflected between those updates. Although our DNS servers update every 15 minutes, the time between updates system wide is not standardized so the delay can range from a few hours to several days. This slow updating of the cached records is called propagation delay because your website's DNS information is being propagated across all DNS servers on the web. Once completed, everyone can visit your new website. There are some useful websites which will help you see this propagation process, and show you when your website should be visible: What's my DNS? (https://www.whatsmydns.net/): WhatsMyDNS can show you a variety of different records (selectable from the dropdown), and show you in 'real time' where those records have propagated to. Most commonly you would use this to check if the A Record for your site has propagated out to the rest of the world. If any locations show a red 'X', it means that location does not have any DNS information for the domain name being queried (yet). http://www.intodns.com/): intoDNS will show a breakdown of your currently reported DNS (nameservers, MX records, PTR, and A Record). It picks up DNS changes fairly quickly, and may show the changes before they have fully propagated to the rest of the world (See What's My DNS?, above) http://viewdns.info/): Similar to intoDNS this resource will give you a breakdown of your current DNS, however they have many other resources available as well which you may find useful in general such as WHOIS, rDNS Lookup, IP History, and more. If it has been longer than 48 hours, your site is not loading, and the two sites above do not show available DNS records, there may be further issues with the configuration of your site. Please contact our support department for assistance with troubleshooting the issue.
<urn:uuid:0731bf61-5e8e-4e0a-9a0a-87afa0525349>
CC-MAIN-2017-04
https://support.managed.com/kb/a604/dns-propagation-and-why-it-takes-so-long-explained.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00169-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936819
1,017
2.84375
3
Last night I watched a couple frightening episodes of Showtime’s new series, Dark Net. In both episodes, the growing sophistication of encryption was cited as the primary reason why criminals are getting away with everything from rasomware to child pornography. If one were to rely solely on these dramatized programs as the representation of truth in the cybersecurity world, good prevailing seems nearly impossible. But there is good, a lot of good out there, and for those who have always been inclined to take things apart and put them back together again, or for those who are fascinated by code, there are paths for you to explore in the world of cyber security that will help everyone defend themselves against the bad guys. Understanding encryption is an important, even essential, skill to have regardless of the position you hold. Whether you have or wish to pursue a formal education, or you are self-taught and have impressive skills, there is a place for you. The trick is in finding that place so that you can get your skills noticed. That’s what happened for Lysa Myers, researcher, ESET who took the initiative to train herself into her security position. Myers said, “When I started in security, it was all about malware. Encryption seemed super complicated and something that only hardcore people understood. The more I looked into it, thought, the more I realized that we all use encryption.” Making a shift in the alphabet with just a few letters is one commonly used code writing that many researchers read, according to Myers, “Like it’s unencoded.” Creators of ransomware have become more sophisticated with their encryption because they use more complicated algorithms. “There’s a balance between how complicated it is and how slow it is. As computers become more powerful, we are able to handle more complex encryption. 256-bit encryption is the most common, but there are various levels with 40 this or 128 this,” said Myers. For Myers, realizing how many different times and how many different ways encryption is already used, made it seem a lot less complicated. “If you have a shorter key, it’s more simple. The longer the key, the more complicated. The more bits there are the more complicated it is to unlock,” Myers said. Today there are certain encryption powers that make it unfeasible to unlock on our own, and Myers said, “It’s better to treat ransomware like a fire, and if you have a backup, ransomware is not a big deal.” But understanding encryption is useful in just about every area of security. “If you’re the person in charge of implementing security, knowledge of encryption is important. If you’re someone analyzing malware, it’s a useful thing to know. Pen testers need to know it,” Myers said. Because of the way products and technology have evolved, encryption has gotten more user friendly, but Myers said, “It’s not a simple thing to make good, secure encryption. You have to be extra cautious of security practices so that there isn’t a way of reversing the encryption that you might not be thinking of at the time.” Once she started seeing the use of encryption around her, Myers realized that’s it’s not so scary. “Pretty much any OS has encryption for you to use on your files and folders. Email and IM have encrypted versions to prevent snooping,” she said. And learning about encryption was for Myers as easy as, “Poking around at different things. How do I work encryption on my email? Investigating ways of incorporating it in your network traffic. You just check the help files and say how do I encrypt my files.” This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:1eb2c2b8-c979-48b7-9b15-a75a5e61c694>
CC-MAIN-2017-04
http://www.csoonline.com/article/3046138/it-jobs/getting-past-the-fear-of-encryption.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00563-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96175
811
2.734375
3
It’s easy to see EHR (electronic health records) as a hero of tech efficiency in the industry, but the concept of patient-centered healthcare is a reminder that patients, including those who are incarcerated, are the drivers behind the design, and implementation of any patient record system. The U.S. correctional system sees about 10 million people per year filter through its jails and prisons, with 90 percent of those incarcerations in jails, according to the Health And Human Rights Journal. The majority of that population tends to be poor and racial and ethnic minorities, and have higher rates of medical, mental health, and substance abuse challenges. In addition to their pre-existing conditions, incarceration brings with it new health risks of injury from violence and mental health stressors. As in other community health settings, EHRs are being used in correctional health systems to bolster safe and productive care. As is true in non-incarcerated populations, EHRs carry advantages: Specific to incarcerated populations, clinical staff also have access to records from prior incarcerations. Examples In NYC In the New York City jail system, treatment of patients falls under the Bureau Of Correctional Health Services (CHS) of the NYC Department Of Health And Mental Hygiene. Jails in NYC tend to be chaotic settings for healthcare, where the average length of stay is 45 days, but the median is eight days — making a comprehensive look at large quantities of information difficult without the aid of a solution like an EHR. CHS has recently adopted a focus on human rights as a part of its ongoing healthcare mission and the EHR is a key component in implementing that mission. Adapting The EHR The traditional EHR has been designed for a general population that does not face the same risks as incarcerated ones. Even within these populations, risks are not evenly distributed, and while most correction facilities have mechanisms in place for protecting vulnerable inmates, patients have reported that they are still repeatedly victimized, even in protective custody. Many health providers have knowledge of individual cases, but tracking trends has been difficult, largely because of a lack of training and technical capacity to aggregate data. The EHR can be adapted to gather information related to abuse, neglect, and other violence, as well as facilitating report generation based on patient demographic profile, time, location, or clinical outcome. Connection To A Larger System The connection of jail-based EHR to a state-wide health information exchange is one of the key components in using the EHR in addressing actual human rights issues within the correctional population. While NYC represents a jurisdiction that is currently using EHR actively to address human rights issues, other states, like Oregon, are implementing EHR systems purely for the sake of efficiency. In the future, these systems could be adapted to address needs that go beyond efficiency and cost savings. The Oregon Department of Corrections issued a budget request of $2.6 million in 2013 to convert their health records to an electronic system. Files are currently spread across 14 prisons, making it impossible for officials to search for trends that might improve inmate health. To make the transition, the state faces the challenge of converting millions of pieces of paper to electronic records, according to The Sun Herald. You can find more information on EHRs and their use in addressing population health issues in these articles:
<urn:uuid:73f45fa9-6a20-46a2-bd3a-dae9ba0162a9>
CC-MAIN-2017-04
http://www.bsminfo.com/doc/the-benefit-of-ehrs-to-the-prison-system-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00471-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949117
679
2.578125
3
Today, we'll briefly cover Unlicensed Mobile Access technology as we prepare to compare UMA with an IP-Multimedia Subsystem architecture. Like IMS, UMA technology has its origins with the wireless (cellular/mobile) community and the technology's specifications have been developed with collaboration from the Third Generation Partnership Project (3GPP) and other standards bodies. UMA's main objective is to allow a common way for dual-mode wireless handsets to move (roam) between "unlicensed" wireless network connections like those provided in a Wi-Fi (IEEE 802.11) or a Bluetooth transmission, and a GSM or General Packet Radio Service (GPRS) network connection. Unlike the multi-part IMS architecture, the UMA architecture is less complicated, with the principle component called the UMA Network Controller (UNC). The architecture also includes security components like AAA and security gateways. The UNC typically sits inside the “unlicensed” network zone and sits between the Unlicensed Mobile Access Network (UMAN) and a carrier’s “licensed” mobile network. The UMS knows where to route calls as they come and go through a wireless access point; it connects to wireless access points (like an 802.11 router) via the IP network core on the “unlicensed” side and to the core mobile network on the other side. It is responsible for monitoring and (in cooperation with the AAA server) for users' authentication as the move in and out of the UMAN range, and it is designed to complete a transparent hand-off between the networks. The UNC is also responsible for storing the user’s location so the mobile network “knows” where to route calls for a dual mode handset that is in range of the UMAN. Specifically, the UNC "tunnels" GSM/GPRS connections over a Bluetooth or Wi-Fi connection. Next time, we'll discuss how IMS and UMA can and should work together, and provide comments about "fixed-mobile" convergence.
<urn:uuid:6f1d729f-4787-4f7a-b07e-f7a03e4192c2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2311045/lan-wan/what-is-unlicensed-mobile-access-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00471-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927063
429
2.875
3
Yves Le Roux is the Technology Strategist at CA Technologies and Chair of ISACA’s Data Privacy Task Force. In this interview he discusses the evolution of the digital identity, the influence of politics on privacy, Google Glass, and much more. What are the critical issues in understanding the very nature of identity in a society actively building bridges between the real and digital world? If you speak to a psychologist, he/she will explain you that each individual integrate various aspects of identity, memory and consciousness in a single multidimensional self. As said in a study done by Cabiria (2008), “The structure and design of virtual worlds allows its users to freely explore many facets of their personalities in ways that are not easily available to them in real life”. But this may have some consequences. For example, if an individual creates a virtual identity that is different from their real life identity, it can take a lot of psychological effort to maintain the false identity. In addition, one of the two options will occur, the identities may converge into one, making the virtual and real identities truer, or the individual may simply toss out the virtual identity, and start over with a new one. The main issue with identities in this virtual world is trust. Law enforcement officials view this possibility of multiple untrusted identities as an open invitation to criminals who wish to disguise their identities. Therefore, they call for an identity management infrastructure that would irrevocably tie online identity to a person’s legal identity. A popular opinion among politicians is: “If you have nothing to hide, you have nothing to worry about”. Why is privacy still important, even if you have nothing to hide? The line “if you’ve got nothing to hide, you have nothing to worry about” is used all too often in defending surveillance overreach. It’s been debunked countless times in the past. For example, in 2007,in a short essay, written for a symposium in the San Diego Law Review, Professor Daniel Solove (George Washington University Law School) examines the nothing to hide argument. His conclusion was: “The nothing to hide argument speaks to some problems, but not to others. It represents a singular and narrow way of conceiving of privacy, and it wins by excluding consideration of the other problems often raised in government surveillance and data mining programs. When engaged with directly, the nothing to hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing to hide argument, in the end, has nothing to say.” In our privacy study following a European paper issued by Michael Friedewald, we distinguish seven types of privacy: 1. Privacy of the person encompasses the right to keep body functions and body characteristics (such as genetic codes and biometrics) private. 2. Privacy of behaviour and action includes sensitive issues such as sexual preferences and habits, political activities and religious practices. 3. Privacy of communication aims to avoid the interception of communications, including mail interception, the use of bugs, directional microphones, telephone or wireless communication interception or recording and access to e-mail messages. 4. Privacy of data and image includes concerns about making sure that individuals’ data is not automatically available to other individuals and organisations and that people can “exercise a substantial degree of control over that data and its use”. 5. Privacy of thoughts and feelings. People have a right not to share their thoughts or feelings or to have those thoughts or feeling revealed. Individuals should have the right to think whatever they like. 6. Privacy of location and space, individuals have the right to move about in public or semi-public space without being identified, tracked or monitored. 7. Privacy of association (including group privacy), is concerned with people’s right to associate with whomever they wish, without being monitored. Considering the full spectrum of privacy, are you sure you have nothing to hide? For example, do you want that people knows where you spend your time — and, when aggregated with others, who you like to spend it with? If you called a substance abuse counselor, a suicide hotline, a divorce lawyer or an abortion provider? What websites do you read daily? What porn turns you on? What religious and political groups are you a member of? How has privacy evolved in the digital world? What are users still doing wrong? Internet is a worldwide network and everything must be developed for a global environment (without national borders). Cloud computing delivery models require the cross-jurisdictional exchange of personal data to function at optimal levels. In January 2011, the World Economic Forum (WEF) issued a publication entitled: “Personal Data: The Emergence of a New Asset Class”. In this document, the WEF highlighted the differences of Privacy-related laws and police enforcement across jurisdictions, often based on cultural, political and historical contexts and that attempts to align such policies have largely failed. For the WEF, the key to unlocking the full potential of data lies in creating equilibrium among the various stakeholders influencing the personal data ecosystem. A lack of balance between stakeholder interests – business, government and individuals – can destabilize the personal data ecosystem in a way that erodes rather than creates value. Furthermore, the service provider may change this policy. Everybody remembers the Instagram case. In December 2012. Instagram said that it has the perpetual right to sell users’ photographs including for advertising purposes without payment or notification. Due to the strong reaction, Instagram has backed down. Many consumers are poorly educated about how their personal data is collected by companies and are unsure about what it is actually used for. Investigation into the recent implementation of the EU Cookie Law has highlighted how misinformed consumers in Europe currently are. For example, 81 percent of people who delete cookies do not distinguish between the “first-party’ cookies that give a website its basic functionality (e.g., remembering what items the consumer has placed in their shopping basket) and the “third-party’ cookies that advertisers place on websites to track user viewing. At the same time, 14 percent said they thought the data used to show them relevant ads included information that could identify them personally, while 43 percent were not sure if this meant their identity was known. With wearable recording devices such as Google Glass getting traction, we are opening ourselves for a new type of privacy invasion. Do you see people embracing such technologies en masse or can we expect them to question those that do? Google Glass is essentially a phone in front of your eyes with a front-facing camera. A heads-up display with facial recognition and eye-tracking technology can show icons or stats hovering above people you recognize, give directions as you walk, and take video from your point of view. In July 2013, Google has published a new, more extensive FAQ on Google Glass. There are nine questions and answers listed under a section named Glass Security & Privacy, with several concentrating on the spec’s camera and video functionality. But this doesn’t solve others privacy concerns: - Google Glass tracks your eye movements and makes data requests based on where you’re looking. This means the device collects information without active permission. Eye movements are largely unconscious and have significant psychological meanings. For example, eye movements show who you’re attracted to and how you weigh your purchase options when shopping. - How many of you will turn off your Glass while punching in your PIN? How about when a person’s credit card is visible from the edge of your vision? How about when opening your bills, filing out tax information, or filing out a health form? Remember that computers can recognize numbers and letters blazingly fast – even a passing glance as you walk past a stranger’s wallet can mean that the device on your face learns her credit card number. All of this information can be compromised with a security breach, revealing both the information of the one using Glass and the people they surrounds themselves with. - On July 4th 2013, Chris Barrett, a documentary filmmaker, was wearing Glass for a fireworks show in Wildwood, N.J., when he happened upon a boardwalk brawl and subsequent arrest. The fact the glasses were relatively unnoticeable made a big difference: “I think if I had a bigger camera there, the kid would probably have punched me,” Barrett told. The hands-free aspect of using Glass to record a scene made a big difference. In your opinion, which will be the major threats to privacy in the near future? Privacy is entering a time of flux, and social norms and legal systems are trying to catch up with the changes that digital technology has brought about. Privacy is a complex construct, influenced by many factors, and it can be difficult to future-proof business plans so they keep up with evolving technological developments and consumer expectations about the topic. One way to ensure there are no surprises around privacy is by seeing it not as a right, but rather as an exchange between people and organizations that is bound by the same principles of trust that facilitate effective social and business relationships. This is an alternative to the approach of “privacy as right,’ that instead positions privacy as a social construct to be explicitly negotiated so that it is appropriate to the social context within which the exchange takes place. The lengthy privacy policies, thick with legalese that most services use now will never go away, but better controls will probably emerge. Whatever the tools are used to protect and collect personal data in the future, it will be important for companies like Facebook and Google to educate their consumers and to provide them with options for all levels of privacy. Yves will be addressing these issues and others at the 2013 European Computer Audit, Control and Security (EuroCACS) / Information Security and Risk Management (ISRM) conference that will take place at Hilton London Metropole on the 16th – 18th September 2013.
<urn:uuid:11e6ca8a-5c20-42b0-9507-a9b6c3991f43>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/08/20/the-erosion-of-privacy-in-the-digital-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945233
2,074
2.65625
3
Embedded Passwords: Dangerous by Default The security community was horrified when it learned about Stuxnet, the worm designed to eat into industrial control systems, or SCADA systems, that was purportedly targeted at Iran's Bushehr nuclear reactor. Not only was the worm highly sophisticated, but it also targeted a SCADA system from Siemens whose embedded password was well known. Hard-coded passwords and embedded credentials are "extremely pervasive," being found in "everything from embedded systems such as printers, mobile and wireless devices, to databases to major applications like SAP or Oracle's PeopleSoft," Stuart McClure, senior vice president of risk and compliance at McAfee, told TechNewsWorld. Default passwords, which are included in everything from routers to software, can and should be changed, although many users don't do so. The most common example of this is wireless routers, most of which offer no security at all unless users actively change their default passwords. "Administrators sometimes neglect to change default passwords due to fear of breaking things and creating more work for themselves," McClure said. So why would anyone include default passwords in their products? Having a default password makes it easy to install large numbers of devices, Russell Smoak, director of security research and operations at Cisco Systems, told TechNewsWorld. Further, default passwords allow untrusted suppliers to install large numbers of devices or do pre-staging because the passwords can later be changed, Smoak said. Hard-coded passwords, however, cannot be changed by system administrators. They can be a "significant security risk," Smoak pointed out. "They reduce the security posture and expose devices to illicit access," he explained. Outlining the Threat "Using embedded credentials secures apps from regular system users to some degree, but it's like closing your door but leaving it unlocked," Al Hilwa, a program director at IDC, told TechNewsWorld. "Unless there are ways to obfuscate code in a secure way, embedded credentials will be readable by anyone who has access to the code." Cisco's Unified Videoconferencing products constitute a case in point. Several of the products in the Cisco Unified Videoconferencing 5100, 5200 and 3500 series run on a Linux shell which contains three hard-coded usernames and passwords that cannot be changed. The accounts can't be deleted, either. Cisco has warned that these products have multiple vulnerabilities that let attackers obtain remote access to them illicitly to compromise them. It's not as if the danger of embedded credentials and hard-coded passwords is new. "This is a well-known industry issue and you'd have thought that most major software players would have expurgated these passwords and credentials from their code by now, but it's clearly not the case," Hilwa said. The problem is most clearly seen in databases, attacks on which often yield thousands of names for cybercriminals to harvest and exploit with identity theft schemes. However, just getting rid of default passwords in databases isn't a solution, because they are included in the databases for a reason. "Having default accounts is very helpful in a testing environment," Noa Bar Yosef, a senior security strategist at Imperva, told TechNewsWorld. "They let you do everything from testing the connection to creating tables and testing scripts." For example, every default Oracle installation contains a default test account which is accessed using the credentials "scott" or "tiger," Bar Yosef said. The "scott" default username has a limited set of privileges, but these are sometimes elevated for testing. The trouble occurs when testers forget to restore the original privilege level for the username or delete the username altogether, and the database is moved into production. That might open the database to hacking, Bar Yosef pointed out. Vendors generally view hard-coded passwords as unacceptable even though they use them, Barbara Fraser, CTO Consulting Engineering at Cisco, told TechNewsWorld. "Over the years, I've seen an increased awareness of the risk represented by embedded credentials, but also an increased focus within the industry to eliminate or avoid them altogether," Fraser added. The trouble is that there's lots of old code that's difficult to secure, IDC's Hilwa pointed out. It wouldn't be wise for DBAs to change or delete hard-coded password accounts at once. For example, the owner of a database may not be able to change a default hard-coded password because it might break the application or he's restricted from doing so, Bar Yosef suggested. He will have to find a bypass solution instead. That's not necessarily the case, Adam Bosnian, an executive vice president at Cyber-Ark, told TechNewsWorld. "If you don't change the password correctly and restart the app correctly it would be a problem," Bosnian said. "If you can set the credential there must be a way to reset it and manage it in a more secure manner." Corporations should set up a database assessment process that tests their databases against accepted industry benchmarks, Bar Yousef recommended. They should also hold one individual accountable for ensuring that default logins and accounts are taken off production servers, she added. Some security vendors offer products that search for default passwords and notify system administrators in real-time. One is McAfee Vulnerability Manager. Further, third-party applications like John the Ripper and L0phtcrack that audit weak passwords have been available for a while, McAfee's McClure said. Corporations don't necessarily need a vendor to do the job for them, Bosnian said. "These things can be fixed without product, but you need to monitor what you do," he explained. Meanwhile, the International Organization for Standardization (ISO) is working on secure coding guidelines, Fraser said. However, these may not be accepted wholeheartedly by vendors. "Once completed, it remains to be seen how the secure coding practices will be applied within the industry," Fraser said. "We will need a few more Stuxnets to give the situation the urgency it deserves," McClure opined.
<urn:uuid:b9f3fc55-d85f-4ba4-a603-30d31dcbe6bb>
CC-MAIN-2017-04
http://www.linuxinsider.com/story/network-management/71369.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00134-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955588
1,269
2.625
3
Infrared Networking Basics Its time to whip out your infrared-enabled devices and communicate without wires. In Part 1, learn the basics of infrared networking in Windows 2000. One of the coolest but least understood native Windows 2000 features is infrared networking. Infrared networking allows you to perform such tasks as transferring files between machines and printing to infrared-enabled printers without the need for wires. In spite of this capability, very few of the Windows 2000 books I've read even mention infrared support. In this article series, I'll explain how you can take advantage of Windows 2000's infrared networking capability. I'll begin by discussing how infrared networking works. I'll then explain how to install, configure, and work with Windows 2000's infrared support. How It Works A traditional network requires a minimum of two PCs that are equipped with network cards and are attached to a communications medium. Each of these PCs must also have a unique computer name for identification purposes and share a common protocol with the other PCs on the network. In infrared networking, this definition is revised a bit. Whereas traditional networking requires a minimum of two computers, infrared networking is usually limited to only two computers. Actually, they don't both have to be computers--one of the devices could be a pocket PC or a printer. In spite of the fact that there are usually only two devices involved in infrared communications, a computer name and common protocol are still required. The computer name is required in case multiple infrared devices are present in a given area. The computer name allows the devices to determine which devices should be communicating. In the case of infrared networking, the infrared port takes the place of the network card. (I'll discuss the infrared port in more detail in a future article.) As far as a communications medium goes, whereas traditional networks use copper wire or fiber, infrared networks don't require a physical connection between the two devices. The only requirement from a connection standpoint is that a direct line of sight exists between the two devices. The Need for a Protocol A shared protocol is required even in infrared networking because of the nature of infrared communications. To see why this is the case, it's necessary to understand how infrared communications work on a more basic level. At its simplest, infrared communication involves using an infrared emitter to send pulses of infrared light to an infrared receiver. Infrared light is used instead of other types of light that fall into the spectrum of visible light, because it's less susceptible to interference than visible light. An example of very simple infrared communication is the remote control for your television or stereo. Such a remote contains an infrared emitter. When you press a button on the remote, it emits pulses of infrared light, which the infrared receiver on the television or stereo receives. In the case of a remote control, a chip inside the remote causes the infrared receiver to flash a different pattern of invisible light for each button pressed. If you hold down a button, the flash pattern repeats. It's possible to watch an infrared remote function; certain types of digital cameras can record infrared light. In Figure 1, you can see a stereo remote. The image on the left shows a remote with the infrared emitter turned off, as would occur during idle times or between pulses of light. However, the image on the right shows what it looks like when the infrared emitter emits a pulse of light.
<urn:uuid:3403c24b-fdd4-4b24-a310-67b3888c259a>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsysm/article.php/624891/Infrared-Networking-Basics.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00042-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934221
682
2.53125
3
Scientists constantly monitor the environment for invasive species that may disrupt the natural order in waterways or other habitats. In the United States alone, approximately 7,000 invasive species in the animal and plant kingdoms inflict an estimated $138 billion per year in damage and control costs. Zebra mussels invaded the Great Lakes in 1988, causing government agencies to spend as much as $1 billion between 1989 and 2000 to fight the non-native mollusks' spread. Vital Signs, a program developed by the Gulf of Maine Research Institute (GMRI) in Portland, Maine, provides handheld computers to middle- and high-school students to electronically collect scientific data on aquatic environments, helping arrest the spread of non-native species. Vital Signs -- in Maine schools for four years -- went international in 2004 to seven primary schools in Ireland, where zebra mussels are also a problem. A Bright Idea Sarah Kirn, Vital Signs program manager, said that Alan Lishness, GMRI chief innovation officer -- and car buff -- got the concept for Vital Signs after attending a car race. He watched a pit crew using handheld computers to download information from the racecar's sensors, then realized students could use handheld computers to gather scientific data for schools and the GMRI -- then known as the Gulf of Maine Aquarium -- as environmental research. In 1998, the Gulf of Maine Aquarium turned to Pulse Data Systems, which created the Vital Signs software for Palm handheld computers. The software allows for data collection using peripheral technologies, such as a GPS receiver, then forwards the data to an integrated database. Vital Signs piques student curiosity about their environment by using methods that interest them -- namely computers, said Kirn. "If you take a computer and get the students' attention, then lure them outside where there is something important to learn, you've engaged them in a different way," she said, by incorporating technology with the outdoors. The program also gets students thinking about their environment and how they can help maintain it. Describing the experience as a "hands-on science lab," Gretta McCarron, Vital Signs project officer in Ireland, explained that students using the program gain a sense of ownership and responsibility for streams in their area -- especially when they learn about the negative effects of pollution on natural habitats. Using traditional methods of scientific observation such as tape measures, depth meters, thermometers and pH probes, students record the resulting data in the Vital Signs software using their handheld computers. The data is then time and date stamped, and is easily transferred to a database. Vital Signs Ireland uses Palm Zire 72 PDAs with GPS Navigation Pak, Bluetooth technology, built-in cameras for taking photos and video, and microphones for audio recording. In the field, students first record their location using the GPS receiver. Then they observe weather conditions; stream characteristics such as width, depth and flow rate; water temperature; air temperature; water pH; the type of stream bed; and information regarding the surrounding habitat, such as how the land is being used and the vegetation and animals in the area. Students record all observations on a monthly basis, and they can call up the informational text and photos stored in the handheld as a guide. Once finished, students return to the classroom and hand over their computers so teachers can upload the information to the Vital Signs Web site, where it is available to anyone interested. In Ireland, this includes fisheries and industries such as farming that monitor how their actions affect nearby waterways. The GIS-enabled Web site, currently maintained by Northern Geomantics in Maine, arranges data in a geographic context on a map based on the GPS receiver's position when an observation was recorded. Eventually the Web site, based loosely on the GMRI's Vital Signs site, will be completely maintained in Ireland as part of the Vital Signs program there. After the data has been transferred to the Web site, the students use their findings for follow-up activities in class. "They can draw graphs from their data or create bulletins to inform the rest of the school or parents about water quality in their area," said McCarron. In the future, video conferencing will allow students from different schools to see each other's Vital Signs work, further promoting a common interest in shared waterways. Crossing the Atlantic Seven primary schools in Ireland test Vital Signs in pilots, and next year the program is expected to grow, said McCarron, adding that transferring technology across the Atlantic has worked very well so far. "We would like to spend more time testing, evaluating and monitoring the system before we embark on mass expansion," she said. Vital Signs first drew the attention of the Irish Central Border Area Network (ICBAN) in 2003, after an economic-revitalization delegation came to Maine and listened to GMRI President Don Perkins talk about Vital Signs. Kate Burns, ICBAN's chief executive, and 10 local authorities approached Perkins with interest. Kirn explained that originally Burns expressed the ICBAN's desire to unite Northern Ireland and the Republic of Ireland through a common resource. In the late 1960s, Northern Ireland suffered a period of violent conflict -- beginning with civil rights marches -- that killed more than 3,000 people, most of them civilians. Known as the Troubles, the period officially ended in 1998, however its effects can still be seen and felt throughout Ireland. "The [ICBAN's] purpose was to foster cross-border cooperation projects and economic revitalization in those areas that have been really harmed during the Troubles," said Kirn. In November 2003, the GMRI signed an agreement with the ICBAN stating that the GMRI would update their existing Vital Signs program to meet Ireland's curriculum needs for use in primary schools. "The aim of the program is not only to teach children about their local environments, but also to make them aware of this shared water resource and their joint responsibility for water quality," said McCarron. Kirn believes that the changes made for Ireland's program are a stepladder to improving the technology in Maine. The GMRI and the ICBAN agreed to share improvements made to Vital Signs, which will benefit future partners. Zebra mussels were first found in 1988 in Lake St. Clair, the small body of water connecting Lake Erie and Lake Huron. Within a year, they spread to all of the Great Lakes, and it was too late to stop them. Because scientists can't be everywhere at once, they need help, explained Kirn. "If we get [students] to be the eyes and ears of the scientists, and gather high-quality data, then they have participated in science," she said. Without predators to curb their expansion, invasive species can multiply rapidly, and zebra mussels are no exception. A female can produce 30,000 to 100,000 eggs each year. They disrupt the food chain by filtering out phytoplankton -- microscopic plants that live in the water -- which deprives the animals that depend on phytoplankton of food. This continues up the food chain, gradually depleting populations of native species. "If you catch a new introduction of one of those plants in a lake before it's really gotten hold, you can remove it and keep that lake open, but some grow so aggressively that they literally fill up a pond," she said. "And they can do that from one small fragment of a plant that was on somebody's [boat] propeller." The GIS-enabled software will allow scientists who visit the Vital Signs Web site to know exactly where a photograph was taken if a student thinks he or she has found an invasive species. Both students and scientists would benefit, said Kirn. "The students have become involved in a real project which is extremely gratifying and engaging for them."
<urn:uuid:422a947a-da31-4dcc-a832-d81f067f0131>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/Down-by-the-River.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00042-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955319
1,618
3.515625
4
Textbooks on software engineering prescribe to check preconditions at the beginning of a function. This is a really good idea: the sooner we detect that the input data or environment does not match our expectations, the easier it is to trace and debug the application. A nice function with precondition checking refuses to “work” if the preconditions are not satisfied. The next question is: how exactly should our function refuse to work it detects that an unsatisfied precondition? I see the following possible answers to this question (sorted from the least invasive to the most destructive): - silently repair and continue - return an error code - throw an exception - stop or abort the application The least invasive approach, to repair and silently continue, is a bad idea. An application consisting of many ‘intelligent’ functions doing something despite of erroneous input would be extremely difficult to debug and use. Such an application would always return an answer but we would never know if this answer is correct at all. The second approach, to return an error code, requires a lot of manual work. Not only we have to establish different error codes for different situations, we have also to generate them and the caller must not forget to check them. As common experience shows, we do forget to check error conditions… Exceptions are much better since once an exception is thrown it will be propagated to the callers until someone catches it. So the programmer’s burden of checking the error codes disappears. Or does it?… In fact it gets replaced by the burden of specifying exception handlers at the right places and by the burden of remembering that almost any line of the program can be interrupted by an exception. If we want to make our program not only ‘exception generating’ but also ‘exception safe’, then we have to consider many possible execution paths – with and without exception. This turns out to be quite a feat in itself. If you want more gory details, consider Exceptional C++ and its follow-ups. This book contains vital information about programming with exceptions. The last choice is the easiest one. If the preconditions are not satisfied, simply abort the application. This is no brainer – no error codes, no exceptions, just pay the price of killing the application (if the application is a quick & dirty perl script, then the tradition is to tell it literally to die…) Alas, this is acceptable only in a limited number of situations. If we encounter a fatal condition and the application can not meaningfully continue, then ok, there is nothing to lose, dump it. For example: a compiler which can not find the input file, or a mail client which can not find the account settings. The best thing they can do is to stop immediately. But in all other cases, you should not use this kill the application. For example, you can not use this approach in IDA plugins. Imagine a plugin which works with MS Windows PE files. It is natural for such a plugin to check the input file type at the initialization time. This is the wrong way of doing it: if ( inf.filetype != f_PE ) error("Sorry, only MS Windows PE are supported"); This is bad because as soon as we try to disassemble a file different from PE, our plugin will interfere and abort the whole application, i.e. IDA. This is quite embarrassing, especially for unsuspecting users of the plugin who never saw the source code of the plugin. The right way of refusing to work is: if ( inf.filetype != f_PE ) If the input file is not what we except, we return an error code. IDA will stop and unload the current plugin. The rest of the application will survive. Do not let your software be capricious without a reason 🙂
<urn:uuid:316a9544-10c9-4457-a902-7084531793da>
CC-MAIN-2017-04
http://www.hexblog.com/?p=30
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00068-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911703
800
2.640625
3
IBM is spending $3 billion to figure out what will happen when computer power stops doubling every few years. The strange phenomenon -- called Moore’s Law -- was described in 1965 by Intel cofounder Gordon Moore, who predicted the number of circuits that can fit on a processor would double every year as the technology evolved. Ten years later, he had reason to believe the rate would slow to doubling every one to two years. Somehow, the pattern has held ever since, helping computers go from room size to desk size to lap size to phone size to nano size. Although some disagree, another former Intel executive last year predicted Moore’s Law will cease to hold after about 2020, partly on the assumption that things can’t keep getting smaller forever, and they’re already pretty small. “The current generation of chips, due later this year, have reached 14 nanometers,” according to Arik Hesseldahl of Re/code. “For a sense of scale, that’s only a tad thicker than the wall of an individual cell.” Chips can get a little smaller, but what happens after that is the $3 billion question. The physical limits of silicon are what are holding up the shrinking process, so it might be time for a new material. Or maybe computing itself will drive the changes; IBM is looking to quantum and neurosynaptic computing, for instance. If eternal business growth depends on chips eternally shrinking, the future will be interesting to watch. Hopefully, it’s no big thing.
<urn:uuid:106307d0-6881-4667-91b1-5b41e9c219fd>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/emerging-tech-blog/2014/07/what-will-happen-when-computer-chips-stop-getting-smaller/88381/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00556-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952923
324
3.40625
3
ccrypt is a utility for encrypting and decrypting files and streams. It was designed as a replacement for the standard unix crypt utility, which is notorious for using a very weak encryption algorithm. It is based on the Rijndael cipher, which is the U.S. government's chosen candidate for the Advanced Encryption Standard. This cipher is believed to provide very strong security. Encryption and decryption depends on a keyword (or key phrase) supplied by the user. By default, the user is prompted to enter a keyword from the terminal. Keywords can consist of any number of characters, and all characters are significant (although ccrypt internally hashes the key to 256 bits). Longer keywords provide better security than short ones, since they are less likely to be discovered by exhaustive search. No reviews yet No comments yet
<urn:uuid:43786b7e-f69c-4873-a931-fb4c36d4242c>
CC-MAIN-2017-04
http://fileforum.betanews.com/detail/ccrypt-for-Windows/1113518779/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00372-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953089
171
2.8125
3
Kelemen A.,Debreceni Egyetem TEK | Kelemen A.,Debrecen University | Torok P.,Debreceni Egyetem TEK | Torok P.,Debrecen University | And 8 more authors. Journal of Landscape Ecology | Year: 2010 Spontaneous succession in lack of restoration focused case studies is often underappreciated in restoration. We studied the regeneration of alkali and loess grasslands in extensively managed (mown twice a year) alfalfa fields using space for time substitutions. In our study we addressed the following questions: (i) How fast is the disappearance of the perennial alfalfa following abandonment of intensive management from vegetation? (ii) Is the course of vegetation development in extensively managed alfalfa fields different than in abandoned crop fields formerly cultivated with short lived crops? (iii) How fast is the regeneration of native grasslands in extensively managed alfalfa fields? We found that alfalfa gradually disappeared from vegetation, and its cover was low in 10-years-old alfalfa fields. We also detected a continuous replacement of alfalfa by perennial native grasses and forbs. No weed dominated stages were detected during the spontaneous grassland recovery in alfalfa fields. Our results suggest that the recovery of species poor grasslands is possible within 10 years. The partial recovery of loess and alkali grasslands not require technical restoration methods in alfalfa fields where nearby native grasslands are present. Source
<urn:uuid:cc5de86b-1903-4e6c-9abf-380d4f60fba5>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/debreceni-egyetem-tek-645188/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00188-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922767
321
2.578125
3
It's not just a file system full of odd looking files that only the kernel understands. Instead, it's really something of a peep hole into your system. And there a quite a number of useful things that you can learn from the files that it contains. So, what do you see when you cd over to /proc? Well, run ls and the first thing you're likely to notice is the very large group of directories with just numbers for names. These numbers correspond to the process IDs (PID) of processes that are running on your system -- everything from the init process that started the boot time ball rolling to the shell you're using right now. And you're likely to see quite a lot of them -- probably several hundred or more. $ cd /proc $ ls 1 15878 38 433 5266 579 67 7521 devices 10 1589 39 434 5267 5792 6788 7523 diskstats 10052 16 393 435 5268 58 6793 7525 dma 1021 1623 3956 436 5269 580 6794 7529 driver 10522 16571 3957 437 5270 581 6795 7531 execdomains 10552 16585 3958 438 5271 5810 6796 7533 fb 11 1695 3959 439 5272 582 6797 7535 filesystems 11984 17 3960 44 5273 583 6798 7537 fs ... If you were to count the numeric (process) directories, your total should be the same as the response you'd get if you ran the command ps -ef --no-headers | wc -l (ps output without the header line). The bulk of these directories will likely be owned by root but, depending on how your system is being used, you'll also see application service accounts (such as oracle in the example below) and usernames among the process owners listed. # ls -l | more total 0 dr-xr-xr-x 5 root root 0 Oct 3 2013 1 dr-xr-xr-x 5 root root 0 Oct 3 2013 10 dr-xr-xr-x 5 root root 0 Oct 3 2013 1021 dr-xr-xr-x 5 root root 0 Oct 3 2013 11 dr-xr-xr-x 5 oracle oinstall 0 Feb 4 07:11 1167 dr-xr-xr-x 5 root root 0 Jan 26 11:00 11920 dr-xr-xr-x 5 root root 0 Mar 7 2014 11923 dr-xr-xr-x 5 gdm gdm 0 Jan 26 11:01 11950 Notice that none of these are files in the same sense as files we see in our file systems. They don't take up space on the disk and they don't have content even if the cat command displays their data for you. Unlike the directories that we see in "real" file systems, these show up as using 0 bytes of data. Many will have dates and times that correspond to the last time the system was booted (i.e., when the related processes started) while other files in /proc may appear to be updated almost constantly. Only the /proc/kcore file will have a signficant size and it might appear to be huge, though even it isn't really using disk space) as it relates to the RAM on your system. # ls -l kcore -r-------- 1 root root 39460016128 Feb 8 09:10 kcore You'll also see a collection of other files in /proc with names like cpuinfo, key-users and schedstat -- names that provide clues to what these files contain. In fact, you can think of the files in /proc as falling into two categories -- those that are represent processes running on your system and those represent some aspect of the system itself. So, what are some useful things these interesting pseudo files can tell you? For one thing, they can tell you how long the system has been up. Check out the /proc/uptime file. This file reports the system uptime, even though it might not be immediately obvious. The number 74216960.58 in the output below probably likely doesn't look like an uptime report to you. But type "cat uptime" a couple times in a row and you'll notice that the numbers are constantly changing. It's obviously keeping up. $ cat /proc/uptime;sleep 10;cat /proc/uptime 74216960.58 73912315.63 74216970.58 73912325.61 As you'll note, this file actually contains two numbers. The first is the uptime of the system (as you'd expect from the name) while the second is the amount of time the system has spent idle. The numbers are constantly changing because we're always getting further from the time the system was last booted. After sleeping for ten seconds, the number on the left just happens to be 10 units larger, so it's clear that these numbers are reporting time in seconds. No problem. A little command line math can turn those seconds into days. If we then compare the result of our calculation with the uptime command output, we'll the some connection between the numbers. $ expr 74216970 / 60 / 60 / 24 858 $ uptime 14:30:17 up 858 days, 23:50, 1 user, load average: 0.08, 0.04, 0.00 Of course, almost no one would want to go through all the trouble of calculating uptime with an expr command when the uptime command can tell us what we want to know directly, especially if we have to think through the sixty seconds per minute, sixty minutes per hour, and 24 hours per day conversions. And think your system is busy? Do a little more math with these numbers and you might see something like this. Notice how I added two zeroes to the end of the idle time figure to get an answer that would represent the percentage of the time this system has been idle. Yes, that's 99%. This system is clearly not straining -- at least not most of the time. $ expr 7391232500 / 74216970 99 This uptime exercise is useful because it reinforces the idea that these "files" are plucking information from the system to update the virtual file content many times a second. Note, though, that the dates and times associated with this file keep up with the current time. $ ls -l /proc | tail -11 -r--r--r-- 1 root root 0 Feb 9 14:42 stat -r--r--r-- 1 root root 0 Feb 9 14:42 swaps dr-xr-xr-x 11 root root 0 Oct 3 2013 sys --w------- 1 root root 0 Feb 9 14:42 sysrq-trigger dr-xr-xr-x 2 root root 0 Feb 9 14:42 sysvipc dr-xr-xr-x 4 root root 0 Feb 9 14:42 tty -r--r--r-- 1 root root 0 Feb 9 14:42 uptime -r--r--r-- 1 root root 0 Feb 9 14:42 version -r-------- 1 root root 0 Feb 9 14:42 vmcore -r--r--r-- 1 root root 0 Feb 9 14:42 vmstat -r--r--r-- 1 root root 0 Feb 9 14:42 zoneinfo Another file with information that will likely seem familiar is the version file. This file supplies information on your operating system version, much like the output of the uname -a command and undoubtedly tapping the same system resources. $ cat /proc/version Linux version 2.6.18-128.el5 (email@example.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Wed Dec 17 11:41:38 EST 2008 $ uname -a Linux sea-aveksa-1.telecomsys.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux Another file -- the cpuinfo file -- supplies fairly extensive information on your system CPUs. While I don't want to insert all 500+ lines into this post, you can see some of the details below. The second command is simply counting up the number of CPUs. $ head -11 /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 44 model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz stepping : 2 cpu MHz : 2660.126 cache size : 12288 KB physical id : 1 siblings : 12 core id : 0 $ more cpuinfo | grep processor | wc -l 24 The vmstat file provides virtual memory statistics. Want to see what's happening with page swapping? The numbers below represent swapping activity (pages swapped in and out) since the system was booted. $ grep pswp /proc/vmstat pswpin 229269 pswpout 316559 If these names look familiar, you may be remembering them from sar output like that shown below. # sar -W 10 2 Linux 3.14.35-28.38.amzn1.x86_64 (ip-172-30-0-28) 02/10/2016 _x86_64_(1 CPU) 12:17:03 PM pswpin/s pswpout/s 12:17:13 PM 0.00 0.00 12:17:23 PM 0.00 0.00 Average: 0.00 0.00 We can also look at memory statistics. These details can come in very handy if you want to get a very detailed understanding of the memory on your system and how it is being used. $ more /proc/meminfo MemTotal: 37037804 kB MemFree: 18605268 kB Buffers: 323740 kB Cached: 14919556 kB SwapCached: 12068 kB Active: 13878148 kB Inactive: 3846048 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 37037804 kB LowFree: 18605268 kB SwapTotal: 16778232 kB SwapFree: 16309048 kB Dirty: 9896 kB Writeback: 0 kB AnonPages: 2468880 kB Mapped: 7089292 kB Slab: 442900 kB PageTables: 189648 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 35297132 kB Committed_AS: 12768916 kB VmallocTotal: 34359738367 kB VmallocUsed: 271696 kB VmallocChunk: 34359466659 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 Hugepagesize: 2048 kB Want to check on what file system types are supported by your kernel? Take a look at /proc/filesystems. $ head -11 /proc/filesystems nodev sysfs nodev rootfs nodev bdev nodev proc nodev cpuset nodev binfmt_misc nodev debugfs nodev securityfs nodev sockfs nodev usbfs nodev pipefs To view all the mounts used by your system, look at the /proc/mounts file. $ cat /proc/mounts rootfs / rootfs rw 0 0 /dev/root / ext3 rw,data=ordered,usrquota 0 0 /dev /dev tmpfs rw 0 0 /proc /proc proc rw 0 0 /sys /sys sysfs rw 0 0 /proc/bus/usb /proc/bus/usb usbfs rw 0 0 devpts /dev/pts devpts rw 0 0 /dev/sda1 /boot ext3 rw,data=ordered 0 0 tmpfs /dev/shm tmpfs rw 0 0 none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0 /etc/auto.misc /misc autofs rw,fd=6,pgrp=5541,timeout=300,minproto=5,maxproto=5,indirect 0 0 -hosts /net autofs rw,fd=12,pgrp=5541,timeout=300,minproto=5,maxproto=5,indirect 0 0 oracleasmfs /dev/oracleasm oracleasmfs rw 0 0 //windows-server/outgoing /mnt/ActAccts cifs rw,mand,unc=\\windows-server \outgoing,username=xferSvc,uid=0,gid=0,file_mode=02767,dir_mode=0777,rsize=16384,wsize=57344 0 0 The /proc/net directory contains a wealth of network information including data for your network interfaces. $ ls /proc/net anycast6 ip_conntrack netfilter rt6_stats tcp6 arp ip_conntrack_expect netlink rt_acct tr_rif bonding ip_mr_cache netstat rt_cache udp dev ip_mr_vif packet snmp udp6 dev_mcast ip_tables_matches protocols snmp6 unix dev_snmp6 ip_tables_names psched sockstat wireless if_inet6 ip_tables_targets raw sockstat6 igmp ipv6_route raw6 softnet_stat igmp6 mcfilter route stat ip6_flowlabel mcfilter6 rpc tcp Examples of some /proc/net data include your arp cache and routing table. $ cat arp IP address HW type Flags HW address Mask Device 172.30.0.1 0x1 0x2 0a:ee:74:5c:40:bd * eth0 172.30.0.2 0x1 0x2 0a:ee:74:5c:40:bd * eth0 $ cat route Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT eth0 00000000 01001EAC 0003 0 0 0 000000000 0 0 eth0 FEA9FEA9 00000000 0005 0 0 0 FFFFFFFF0 0 0 eth0 00001EAC 00000000 0001 0 0 0 00FFFFFF0 0 0 For some files in /proc, you'll need to use your superpowers. Here we're looking into some aspects of our host-based firewall. $ sudo cat /proc/net/ip_tables_names filter nat You can view arp (address resolution protocol) data that your system has collected using the /proc/net/arp file. This is much the same information that you'd see using the arp command. $ cat /proc/net/arp IP address HW type Flags HW address Mask Device 10.20.30.128 0x1 0x2 00:50:56:B1:2E:01 * bond0 10.20.30.110 0x1 0x2 A4:BA:88:12:2C:5D * bond0 10.20.30.154 0x1 0x2 00:50:56:B3:0E:33 * bond0 10.20.30.1 0x1 0x2 00:00:0C:07:AC:2A * bond0 10.20.30.33 0x1 0x2 00:50:52:B6:32:33 * bond0 Or maybe you want to look into page faults. $ cat vmstat | grep "fault" pgfault 2426152809 pgmajfault 79826 You can examine your swap partitions and swap files through the /proc/swaps file. $ more /proc/swaps Filename Type Size Used Priority /dev/mapper/VolGroup00-LogVol01 partition 16777208 514200 /swapfile file 1024 0 -2 Details about your system's devices are available in the /proc/sys/dev directory. Below, we look at the cdrom and raid devices. # ls -l /proc/sys/dev/cdrom total 0 -rw-r--r-- 1 root root 0 Feb 8 17:59 autoclose -rw-r--r-- 1 root root 0 Feb 8 17:59 autoeject -rw-r--r-- 1 root root 0 Feb 8 17:59 check_media -rw-r--r-- 1 root root 0 Feb 8 17:59 debug -r--r--r-- 1 root root 0 Feb 8 17:59 info -rw-r--r-- 1 root root 0 Feb 8 17:59 lock # ls -l /proc/sys/dev/raid total 0 -rw-r--r-- 1 root root 0 Feb 8 17:59 speed_limit_max -rw-r--r-- 1 root root 0 Feb 8 17:59 speed_limit_min Examining the contents of one of these files, we see the maximum speed (RAID rebuild speed) that is set for the device. # cat /proc/sys/dev/raid/speed_limit_max 200000 A lot of the information available through /proc can also be viewed using commands like arp, netstat, and sar. Still, it's useful to be able to pull data from the kernel in one convenient location and /proc provides a tremendous wealth of stats for anyone who's want to dive deeply into their system. This tour of /proc and some of the extensive information that it provides was just a taste of the detail available to you. The key to making good use of all this data is deciding what kind of information you want to see and devising scripts or aliases to fetch it from the tremendously detailed files always waiting for you in /proc. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:3ab60745-14c5-4384-8b84-dc0bb81254c6>
CC-MAIN-2017-04
http://www.computerworld.com/article/3031656/linux/probing-into-your-systems-with-proc.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00096-ip-10-171-10-70.ec2.internal.warc.gz
en
0.801987
3,916
2.609375
3
As the internet of things (IoT) gains in popularity, so does edge computing — with futurists projecting that they will grow in tandem. An alternative to cloud computing, edge computing has been identified as a way to process and store data closer to end users. But first, what is edge computing? With edge computing, IT professionals can provide data processing power at the edge of a network instead of maintaining it in a cloud or a central data warehouse. That’s why it has been considered a viable option for IoT applications,
<urn:uuid:ca970970-a79a-41a0-a7f7-406e3913204d>
CC-MAIN-2017-04
http://www.lifelinedatacenters.com/cio-strategy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895701
110
2.71875
3
RFID solutions improve accuracy and overall operations across a wide range of industries by providing identification, location and tracking capabilities. But RFID comes in many different frequencies, such as Low Frequency (LF), High Frequency (HF) and Ultra High Frequency (UHF). All RFID technologies work in similar ways, but each has its own set of performance characteristics that determine the applications they’re best suited for. Understanding these characteristics is important when selecting the technology for your RFID applications. Motorola Solutions recommends asking yourself the following questions to help with the selection: - What is the minimum and maximum distance between tags and readers? - Do you need to read one tag at a time or many tags simultaneously? - Are you tracking items that are relatively inexpensive or high cost items? - How much information do you need to store on tags? - Do you need to conduct payments or other transactions with RFID-enabled devices? - Where do you need to read tags? For example, as assets move in and out of the loading dock for receiving and shipping? As assets pass through specific areas? - How sensitive is the data on the tag? What level of security will the data require? Be sure to check back later this week as I take a closer look at each type of RFID technology and the typical applications they’re used for.
<urn:uuid:eef1f839-0e0c-4b62-b6b1-dcd6ad5f6766>
CC-MAIN-2017-04
http://blog.decisionpt.com/selecting-rfid-frequency
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93513
277
2.5625
3
Wireless Local Area Network (WLAN) security is one of the most important aspects of any WLAN design. The same security exposures exist on WLANs as for hard-wired Ethernet LANs. However, WLANs are actually exposed to many, additional vulnerabilities, in addition to those expected with wired Ethernet LANs. For example, someone could park outside a building and pick up the WLAN signals from inside the building, reading, and perhaps copying and stealing, the data. This type of hacking is often called a form of the Man-in-the-Middle attack. As you have learned in your CCNA studies, a WLAN links two or more devices using some wireless distribution method (typically, spread-spectrum or OFDM modulated radio waves) and usually provides a connection through an access point (AP) that is directly connected to a hard-wired Ethernet network. This gives users the mobility to move around within a local coverage area and still be connected to the network. Many businesses are now implementing WLAN segments on their internal LANs because they are easy to set up and there are no additional wires to run. WLANs enable users with laptops and other mobile devices to roam the enterprise and not have to physically plug in wherever they go. Too often, Business Decision Makers (BDMs) think that because the setup of a wireless network is essentially plug-and- play, that everything is functioning properly and securely. However, WLANs are a virtual playground for hackers. WLAN technology is still relatively new, and most network designers and administrators are not sufficiently proficient with security protocols and procedures. Hackers have found wireless networks relatively easy to break into and can even use wireless technology to leap-frog into wired networks.As a result, it is very important that enterprises define effective wireless security policies that guard against unauthorized access to important resources. However, there are a great number of security risks associated with the current wireless protocols and encryption methods. Hacking methods have become much more sophisticated and innovative with wireless. Hacking has also become much easier and more accessible with easy-to-use Windows or Linux-based tools being made available on the Web at no charge. Any wireless access point that is attached to a hard-wired Ethernet network segment is essentially bridging the internal network directly to the surrounding area, in many cases without firewall protection. Without proper security measures for authentication, any laptop with a wireless card can access the network and listen to all network traffic. From a network design and management aspect, it is important to understand the potential for rogue WAPs in an enterprise. WAPs can be purchased at many stores such as Wal-mart or Kmart and hooked up by even a non-technical person. In many cases, the network administrators are not made aware of these unauthorized installations which, unfortunately, are logically located inside the corporate firewall and Demilitarized Zone (DMZ). Some WLAN security vulnerabilities give hackers an opportunity to cause harm by stealing information, accessing hosts in the wired part of the network, or preventing service through a denial-of-service (DoS) attack. Other vulnerabilities may be caused by a well-meaning but uninformed employee who installs an AP without the IT department’s approval, with no security. Several of the most common types of WLAN security issues a CCNA must be familiar with are: - War Drivers: The attacker often wants to gain Internet free of charge. So, this type of hacker drives around, attempting to locate APs that have no or weak security. The success of this type of attack can be enhanced if the attacker uses easily downloaded software tools and, in many cases, high-gain directional antennas, which are also easily purchased and installed. - Hackers: The motivation for hackers is to either find information or, perhaps deny services to network owners. In addition, an attacker’s end goal may be to compromise the hosts, such as servers, inside the wired network. Then, the attacker uses the wireless network as a way to access the Enterprise network, without having to go through Internet connections that have firewalls and Intrusion Detection Systems (IDS). They often do this to continue to improve on their hacking skills, or simply for their own personal enjoyment. - Employees: Employees, at all levels of the organization chart, can unwittingly help hackers gain access to the Enterprise network in several ways. An employee could go to an office supply store and buy an AP for less than $100, install it in their office using the default settings of “no security,” and create their own small WLAN, erroneously think they have a “private” WLAN. However, this WLAN would enable a hacker to gain access to the rest of the Enterprise from their car in the parking lot. This would also be a good example of how a “man-in-the-middle” attack could occur. - Rogue AP: Here, an attacker captures packets in the existing WLAN, finding the Service Set Identifier (SSID) and cracking security keys, if they are used. Then, the attacker can set up their own AP, using the same settings, and get the enterprise’s clients to use it. In turn, this can cause the associated users to enter their usernames and passwords, enabling the next phase of the attacker’s plan. In my next post, I will discuss the WLAN standards most commonly used to implement the authentication and encryption segments of a security policy. Author: David Stahl
<urn:uuid:f33af4cf-8e1e-4aa5-a06d-b76111989695>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/10/21/whos-that-man-in-the-parking-lot-with-the-laptop/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953918
1,137
3.1875
3
Many Americans make sure to turn off the lights when they leave home, which is a good idea considering rising energy costs. What they might not consider, however, is that many of the electronic items still plugged in use energy even when they're turned off. This may not seem like a big deal, but it adds up, according to the Lawrence Berkeley National Laboratory, which reported that these idle electronic household items use 5 percent of our domestic energy and cost consumers more than $3 billion per year. Using energy-efficient electronics is one option, but there are many other ways to reduce energy consumption, which has state policymakers searching for the best solutions. As energy costs continue to rise, state governments search for energy efficiency policies that drive down costs without sacrificing energy benefits. A Place to Turn The Alliance to Save Energy -- a nonprofit coalition of business leaders, consumer leaders, environmental nonprofit organizations and government entities working together to promote energy efficiency worldwide -- created an online resource for state policymakers to research energy efficiency policies enacted on a statewide level. This resource -- the State Energy Efficiency Index -- can be found through the alliance Web site "It's a one-stop option for energy efficiency advocates, companies and legislators who are interested in learning about what's already on the books," said Anna Carmichael, senior policy associate of the alliance. Initially the alliance created a newsletter to highlight pending state energy efficiency legislation. With the information gathered for the newsletter, and the constant evolving status on legislation, it became apparent that a database would really help to sort the information and present it in an organized and timely manner. "We get questions all the time from reporters and our associate companies asking what states have standards, what states have building codes," said Carmichael. "So it just made sense to put all of the information in one place." The index organizes policy information into categories by policy topic, including appliance standards, building codes, greenhouse gas emission cap-and-trade programs, energy-efficiency funds, public benefit funds, tax incentives, transportation initiatives and other legislation. Policies are also searchable by state. An interactive map of the United States allows users to click on any state to get a listing of energy-efficiency laws in effect for that state. "It's a resource for legislators to think of new ideas for their own particular state. They can see what their neighbors are doing and they can look on their state page to see what they're missing," said Carmichael. With a tracking system called NetScan, a search tool for finding information on the Internet, the alliance gathers information on new state energy-efficiency policies to update the index. Carmichael said the alliance also has contacts in the states, particularly in state energy offices, who e-mail policy updates that may have been missed. To help states that aren't sure which energy-efficiency policies to implement, the alliance plans to recommend model legislation that encompasses best practices across the nation. "Of course, any sort of legislative staff can go through legislation from other states and sort of create their own," said Carmichael. "But sometimes it's helpful to have a group like the alliance that has connections with businesses and other public interest groups, can look through the legislation and talk with people on the ground in those states to see which pieces have really worked well." She said a third-party perspective can be helpful for states who want to look at potential policy from the consumer, business and legislative points of view. Classifying a best-case example allows states to benefit from the different perspectives the alliance investigates. The State Energy Efficiency Index is an evolving compilation of data, and the alliance welcomes any input. "We want to work with other organizations to make this a collaborative process," Carmichael said.
<urn:uuid:8f5d02c6-5a80-4991-9013-d928ee63b399>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Saving-Energy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00362-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949221
762
2.96875
3
The structure and makeup of the Internet has adapted as the needs of its community have changed. Today's Internet serves the largest and most diverse community of network users in the computing world. A brief chronology and summary of significant components are provided in this chapter to set the stage for understanding the challenges of interfacing the Internet and the steps to build scalable internetworks. The Internet started as an experiment in the late 1960s by the Advanced Research Projects Agency (ARPA, now called DARPA) of the U.S. Department of Defense. DARPA experimented with the connection of computer networks by giving grants to multiple universities and private companies to get them involved in the research. In December 1969, the experimental network went online with the connection of a four-node network connected via 56 Kbps circuits. This new technology proved to be highly reliable and led to the creation of two similar military networks, MILNET in the U.S. and MINET in Europe. Thousands of hosts and users subsequently connected their private networks (universities and government) to the ARPANET, thus creating the initial "ARPA Internet." ARPANET had an Acceptable Use Policy (AUP), which prohibited the use of the Internet for commercial use. ARPANET was decommissioned in 1989. By 1985, the ARPANET was heavily used and congested. In response, the National Science Foundation (NSF) initiated phase one development of the NSFNET. The NSFNET was composed of multiple regional networks and peer networks (such as the NASA Science Network) connected to a major backbone that constituted the core of the overall NSFNET. In its earliest form, in 1986, the NSFNET created a three-tiered network architecture. The architecture connected campuses and research organizations to regional networks, which in turn connected to a main backbone linking six nationally funded super-computer centers. The original links were 56 Kbps. The links were upgraded in 1988 to faster T1 (1.544 Mbps) links as a result of the NSFNET 1987 competitive solicitation for a faster network service, awarded to Merit Network, Inc. and its partners MCI, IBM, and the state of Michigan. The NSFNET T1 backbone connected a total of 13 sites that included Merit, BARRNET, MIDnet, Westnet, NorthWestNet, SESQUINET, SURANet, NCAR (National Center of Atmospheric Research), and five NSF supercomputer centers. In 1990, Merit, IBM, and MCI started a new organization known as Advanced Network and Services (ANS). Merit Network's Internet engineering group provided a policy routing database and routing consultation and management services for the NSFNET, whereas ANS operated the backbone routers and a Network Operation Center (NOC). By 1991, data traffic had increased tremendously, which necessitated upgrading the NSFNET's backbone network service to T3 (45 Mbps) links. Figure 1-1 illustrates the original NSFNET with respect to the location of its core and regional backbones. As late as the early 1990s, the NSFNET was still reserved for research and educational applications, and government agency backbones were reserved for mission-oriented purposes. But new pressures were being felt by these and other emerging networks. Different agencies needed to interconnect with one another. Commerical and general-purpose interests were clamoring for network access, and Internet service providers (ISPs) were emerging to accommodate those interests, defining an entirely new industry in the process. Networks in places other than the U.S. had developed, along with interest in international connections. As the various new and existing entities pursued their goals, the complexity of connections and infrastructure grew. Government agency networks interconnected at Federal Internet eXchange (FIX) points on both the east and west coasts. Commercial network organizations had formed the Commercial Internet eXchange (CIX) association, which built an interconnect point on the west coast. At the same time, ISPs around the world, particularly in Europe and Asia, had developed substantial infrastructures and connectivity. To begin sorting out the growing complexity, Sprint was appointed by NSFNET to be the International Connections Manager (ICM)--to provide connectivity between the backbone services in the U.S. and European and Asian networks. NSFNET was decommissioned in April 1995. The decommissioning of NSFNET had to be done in specific stages to ensure continuous connectivity to institutions and government agencies that used to be connected to the regional networks. Today's Internet structure is a move from a core network (NSFNET) to a more distributed architecture operated by commercial providers such as Sprint, MCI, BBN, and others connected via major network exchange points. Figure 1-2 illustrates the general form of the Internet today. The contemporary Internet is a collection of providers that have connection points called POP (point of presence) over multiple regions. Its collection of POPs and the way its POPs are interconnected form a provider's network. Customers are connected to providers via the POPs. Customers of providers can be providers themselves. Providers that have POPs throughout the U.S. are called national providers. Providers that cover specific regions (regional providers) connect themselves to other providers at one or multiple points. To enable customers of one provider to reach customers of another provider, Network Access Points (NAPs) are defined as interconnection points. The term ISP is usually used when referring to anyone who provides service, whether directly to end users or to other providers. The term NSP (network service provider) is usually restricted to providers who have NSF funding to manage the Network Access Points, such as Sprint, Ameritech, and MFS. The term NSP, however, is also used more loosely to refer to any provider that connects to all the NAPs. NSFNET has supported data and research on networking needs since 1986. NSFNET also supported the goals of the High Performance Computing and Communications (HPCC) Program, which promoted leading-edge research and science programs. The National Research and Education Network (NREN) Program, which is a subdivision of the HPCC Program, called for Gigabit-per-second networking for research and education to be in place by the mid 1990s. All these needs, in addition to the April 1995 expiration deadline of the Cooperative Agreement for NSFNET Backbone Network Services, lead NSFNET to solicit for NSFNET services. This process is generally referred to as solicitation. The first NSF solicitation, in 1987, lead to the NSFNET backbone upgrade to T3 links by the end of 1993. In 1992, NSF wanted to develop a follow-up solicitation that would accommodate and promote the role of commercial service providers and that would lay down the structure of a new and robust Internet model. At the same time, NSF would step back from the actual operation of the network and focus on research aspects and initiatives. The final NSF solicitation (NSF 93-52) was issued in May 1993. The final solicitation included four separate projects for which proposals were invited: The solicitation for this project was to invite proposals from companies to implement and manage a specific number of NAPs where the vBNS and other appropriate networks may interconnect. These NAPs should enable regional networks, network service providers, and the U.S. research and education community to connect and exchange traffic with one another. They also should provide for the interconnection of networks in an environment that is not subject to the NSF Acceptable Use Policy. (This policy was put in place to restrict the use of the Internet for research and education.) Thus, general usage, including commercial usage, can go through the NAPs also. The NAP is defined as a high-speed network or switch to which a number of routers can be connected for the purpose of traffic exchange. NAPs must operate at speeds of at least 100 Mbps and must be able to be upgraded as required by demand and usage. The NAP could be as simple as an FDDI switch (100 Mbps) or an ATM switch (155 Mbps) passing traffic from one provider to the other. The concept of the NAP is built on the FIX (Federal Internet eXchange) and the CIX (Commercial Internet eXchange), which are built around FDDI rings with attached Internet networks operating at speeds of up to 45 Mbps. The traffic on the NAP should not be restricted to that which is in support of research and education. Networks connected to the NAP are permitted to exchange traffic without violating the use policies of any other networks interconnected to the NAP. There are four NSF-awarded NAPs: The NSFNET backbone service was physically connected to the Sprint NAP on September 13, 1994. It was physically connected to the PacBell NAP and Ameritech NAP in mid-October 1994 and early January 1995, respectively. The NSFNET backbone service was upgraded to the collocated FDDI offered by MFS on March 22, 1995. Additional NAPs are being created around the world as providers keep finding the need to interconnect. Networks attaching to NAPs must operate at speeds commensurate with the speeds of attached networks (1.5 Mbps or greater) and must be upgradable as required by demand, usage, and program goals. NAPs must be able to switch both IP and CLNP (ConnectionLess Networking Protocol). The requirements to switch CLNP packets and to implement IDRP-based (InterDomain Routing Protocol, ISO OSI Exterior Gateway Protocol) procedures may be waived depending on the overall level of service and the U.S. government's desire to foster the use of ISO OSI protocols. A NAP manager should be appointed to each NAP with duties that include the following: The current physical configuration of today's NAPs is a mixture of FDDI/ATM switches with different access methods, ranging from DS3 for dedicated and FR/ATM/SMDS for switched. Figure 1-3 shows a possible configuration, based on some contemporary NAPs. The routers could be managed either by the NSP or the NAP manager. Different configurations, fees, and policies are set by the NAP manager. Connections from different LATA (Local Access and Transport Area) are provided by Inter eXchange Carriers (IXC). Due to the decommissioning of the NSFNET backbone, federal regional networks faced the problem of transitioning to the new infrastructure where they have to be connected to new NSPs. The Federal Networking Council (FNC) Engineering and Planning Group (FEPG) was responsible for making a recommendation on how to transition to the new NAP-NSP operational environment with minimal disruption to users, specifically in federal agency communications with the U.S. academic and research communities. Existing Federal Internet eXchanges (FIX West and FIX East) were to be connected to the major NSPs (MCInet, Sprintlink, ANS). The FIX West backbone formerly was maintained at NASA Ames. Now it is connected to the major NSPs, and route servers were installed to peer with the federal agencies. The FIX East backbone formerly was maintained at SURA (College Park, MD). Now it is connected to the major NSPs and is also bridged to the MAE-East facility (Tyson's Corner, VA) of MFS. The CIX (pronounced Kix) is a nonprofit trade association of Public Data Internetwork Service Providers. The association promotes and encourages the development of the public data communications internetworking services industry in both national and international markets. The CIX provides a neutral forum to exchange ideas, information, and experimental projects among suppliers of internetworking services. Some benefits CIX provides its members include: With increasing ISP connectivity to NAPs, the CIX becomes essential in the coordination of legislative issues between members. In fact, the role of the CIX for physical connectivity is not as important as its role in coordination between parties. With the existence of a number of other high bandwidth connection points such as the NAPs, the CIX plays a minor role in the connectivity game. ISPs who still rely on the CIX as their only physical connection to the Internet are still way behind. On July 13, 1994, the CIX board voted to block traffic from ISPs who are not CIX members. CIX membership costs approximately $7,500 annually. Although NAP connectivity is primarily something ISPs have to worry about, the level of redundancy and diversity of NAP connections affects traffic patterns and trajectories in the whole Internet. As such, the delays or speed of access caused by ISPs' interconnectivity affect the performance of everyone's Internet access. As you will see in the rest of this book, speed of access to the NAPs and the distance an ISP or a customer is from the NAP affects routing behaviors and traffic trajectories. Another project for which NSF solicited services is the Route Arbiter (RA) project, which is charged with providing equitable treatment of the various network service providers with regard to routing administration. The RA will provide for a common database of route information to promote stability and manageability of networks. Multiple providers connecting to the NAP have created a scalability issue because each provider will have to peer with all other providers to exchange routing and policy information. The RA project was developed to reduce the full peering mesh between all providers. Instead of peering among each other, providers will peer with a central system called a route server. The route server will maintain a database of all information needed for providers to set their routing policies. Figure 1-4 shows the physical connectivity and logical peering between a route server and various service providers. The following are the major tasks of the RA per the NSFNET proposal: Today, the RA project is a joint effort of Merit Network, Inc., the University of Southern California Information Sciences Institute (ISI), Cisco Systems, as a subcontractor to ISI, and the University of Michigan ROC, as a subcontractor to Merit. The RA service is comprised of four projects: As you have already seen, the main parts of the Route Arbiter concept are the route server and the RADB. The practical and administrative goals of the RADB apply mainly to service providers connecting to the NAP. Configuring the correct information in the RADB is essential in setting the required routing policies, as explained in Appendix A, "RIPE-181." As a customer of a provider, you may never have to configure such language. What is important, though, is not the language itself but rather understanding the reasoning behind the policies being set. As you will see in this book, policies are the basis of routing behaviors and architectures. On the other hand, the concept of a route server and peering with centralized routers is not restricted to providers and NAPs, and could be implemented in any architecture that needs it. As part of the implementation section of this book, the route server concept will come up as a means of creating a one-to-many relationship between peers. The very high-speed Backbone Network Service (vBNS) project was created to provide a specialized backbone service for the high-performance computing users of the major government-supported SuperComputer Centers (SCCs) and for the research community. The vBNS will continue the tradition that NSFNET has provided in this field. The vBNS will be connected to the NSFNET- specified NAPs. On April 24, 1995, MCI and NSF announced the launch of the vBNS. MCI duties include the following: The five-year, $50-million agreement between MCI and NSFNET will tie together NSF's five major high performance communication centers: The vBNS has been called the R & D lab for the 21st century. The use of advanced switching and fiber optic transmission technologies, Asynchronous Transfer Mode (ATM), and Synchronous Optical Netwok (SONET) will enable very high-speed, high-capacity voice and video signals to be integrated. The NSF is already in the process of authorizing use of the vBNS for "meritorious" high-bandwidth applications, such as using super-computer modeling at NCAR to understand how and where icing occurs on aircraft. Other applications at NCSA consist of building computational models to simulate the workings of biological membranes and how cholesterol inserts into membranes. The vBNS will be accessible to select application sites through four NAPs in New York, San Francisco, Chicago, and Washington, D.C. Figure 1-5 shows the geographical relationships between the centers and NAPs. The vBNS is mainly composed of OC3 /T3 (OC12 is in the process of being deployed) links connected via high-end systems, such as Cisco routers and Cisco ATM switches. The vBNS is a specialized network that emerged due to continuing needs for high-speed connections between members of the research and development community, one of the main charters of the NSFNET. Although the vBNS does not have any bearing on global routing behavior, the preceding brief overview is meant to give the reader background on how NSFNET covered all its bases before being decommissioned in 1995. As part of the NSFNET solicitation for transitioning to the new Internet architecture, NSF requested that regional networks (also called mid-level networks) start transitioning their connections from the NSFNET backbones to other providers. Regional networks have been a part of NSFNET since its creation and have played a major role in the network connectivity of the research and education community. Regional network providers (RNPs) connect a broad base of client/member organizations (such as universities), providing them with multiple networking services and with Inter Regional Connectivity (IRC). The anticipated duties of the Regional network providers per the NSF 93-52 program solicitation follow: In the process of moving the regionals from the NSFNET to new ISP connections, NSF suggested that the regional networks be connected either directly to the NAPs or to providers connected to the NAPs. During the transition, NSF supported, for one year, connection fees that would decrease and eventually cease (after the first term of the NAP Manager/RA Cooperative Agreement, which shall be no more than four years.) Table 1-1 lists some of the old NSFNET regional providers and their new respective providers under the current Internet environment. As you can see, most of the regional providers have shifted to either MCInet or Sprintlink. Moving the regional providers to the new Internet architecture in time for the April 1995 deadline was one of the major milestones that NSFNET had to achieve. |Old Regional Network||New Internet Provider| |Cornell Theory Ctr.||MCInet| In addition to the four main projects relating to architectural aspects of the new Internet, NSF recognized that information services would be a critical component in the even more widespread, freewheeling network. As a result, a solicitation for one or more Network Information Services (NIS) managers for the NSFNET was proposed. This solicitation invites proposals for the following: At the time of the solicitation, the domestic, non-military portion of the Internet included the NSFNET and other federally sponsored networks such as the NASA Science Internet (NSI) and Energy Sciences Network (ESnet). All these networks, as well as some other networks of the Internet, were related to the National Research and Education Network (NREN), which was defined in the President's fiscal 1992 budget. The NSF solicitation for Database Services, Information Services, and Registration services were needed to help the evolution of the NSFNET and the development of the NREN. At the time of the proposal, certain network information services were being offered by a variety of providers; some of these services included the following: Under the new solicitation, NIS managers should provide services to end users and to campus and mid-level network service providers, and should coordinate with mid-level and other network organizations, such as with Merit, Inc. In response to NSF's solicitation for NIS managers, in January 1993 the InterNIC was established as a collaborative project among AT&T, General Atomics, and Network Solutions, Inc. It was to be supported by three five-year cooperative agreements with the NSF. During the second year performance review, funding by the NSF to General Atomics stopped. AT&T was awarded the Database and Directory Services, and Network Solutions was awarded the Registration Services and the NIC Support Services. The NIS manager will act in accordance to RFC 1174, which states the following: The Internet System has employed a central Internet Assigned Numbers Authority (IANA) for the allocation and assignment of various numeric identifiers needed for the operation of the Internet. The IANA function is performed by the University of Southern California's Information Sciences Institute. The IANA has the discretionary authority to delegate portions of this responsibility and, with respect to numeric network and autonomous system identifiers, has lodged this responsibility with an Internet Registry (IR). The NIS manager will either become the IR or a delegate registry authorized by the IR. The Internet registration services to be provided will include: Today, NSI is providing assistance in registering networks, domains, AS numbers, and other entities to the Internet community via telephone, electronic mail, and U.S. postal mail. RS will work closely with domain administrators, network coordinators, ISPs, and other various users to register Internet domains, Autonomous System numbers, and networks. The RS will provide databases and information servers such as WHOIS registry for domains, networks, AS numbers, and their associated Point Of Contacts (POCs). The RS also offers Gopher and Wide Area Information Server (WAIS) interfaces for retrieving information. The documents distributed by the InterNIC registration services include templates, network information, and policies to request network numbers and register domain name servers. Some of the templates include: The implementation of this service should utilize distributed database and other advanced technologies. The NIS manager could coordinate this role with respect to other organizations that have created and maintained relevant directories and databases. AT&T is providing the following services under the NSF agreement. The original solicitation for "Information Services" was granted to General Atomics in 1993 and taken away in February 1995. At that time, Network Solutions, Inc. took over the proposal, and it was renamed NIC Support Services. The goal of this service is to provide a forum for the research and education community, Network Information Centers (NICs) staff, and the academic Internet community, within which the responsibilities, duties, and functions of the InterNIC may be defined. As of now, this service is divided into two components: Other Internet Registries (IR) were created outside the U.S.; these registries perform functions similar to those performed by the InterNIC in the U.S. Created in 1989, RIPE is a collaborative organization that consists of European Internet service providers. It aims to provide the necessary administration and coordination to enable the operation of the European Internet. APNIC is the IR for the Asia Pacific rim. It provides the IP registration and domain name services for that region. Created in 1993, APNIC started as a 10-month pilot project with the goal of providing Internet Registry functions and Routing Register functions (the RR function has not materialized to date). The pilot proved to be successful, and the APNIC is now in full operation serving as an IR. Other Internet Registers are listed on the InternetNIC home page. With the creation of a new breed of ISPs that want to interconnect with one another, offering the required connectivity while maintaining flexibility and control has become more challenging. Each provider has a set of rules, or policies, that describe what to accept and what to advertise to all other neighboring providers. Example policies include determining route filtering from a particular ISP and choosing the preferred path to a specific destination. The potential for the various policies from interconnected providers to conflict with and contradict one another is enormous. To address these challenges, a neutral Routing Registry (RR) for each global domain had to be created. Each RR will maintain a database of routing policies created and updated by each service provider. The collection of these different databases is known as the Internetworking Routing Registries (IRR). The role of the RR is not to determine policies, but rather to act as a repository for routing information and to perform consistency checking on the registered information with the other RRs. This should provide a globally consistent view of all policies used by providers all over the world. Autonomous Systems (ASs) use exterior gateway protocols such as BGP to work with one another. In complex environments, there should be a formal way of describing and communicating policies between different ASs. Maintaining a huge database containing all registered policies for the whole world is cumbersome and difficult to maintain. This is why a more distributed approach was created. Each RR will maintain its own database and will have to coordinate extensively to achieve consistency between the different databases. Some of the different IRR databases in existence today are: Each of the preceding registries serves a limited number of customers except for the Routing Arbiter Database (RADB), which handles all requests not serviced by other registries. As mentioned earlier, the RADB is part of the Routing Arbiter (RA) project, which is a collaboration between Merit and ISI with subcontracts to Cisco Systems and the University of Michigan ROC. The decommissioning of the NSFNET in April 1995 marked the beginning of a new era. The Internet today is a playground for hundreds and thousands of providers competing for market share. For many businesses and organizations, connecting their networks to the global Internet is no longer a luxury but a requirement for staying competitive. The structure of the contemporary Internet has implications for service providers and their customers in terms of speed of access, reliability, and cost of use. Some of the questions organizations that want to connect to the Internet should ask are: Are providers--whether established or relatively new to the business--well-versed with routing behaviors and architectures? For that matter, how much do customers of providers need to know and do with respect to routing architecture? Do we really know what constitutes a stable network? Is the bandwidth of our access line all we need to worry about to have the "fastest" Internet connection? The next chapter is intended to help ISPs and their customers evaluate these questions in a basic way. Later chapters get into details of routing architecture. Interdomain routing is fairly new to everybody and is evolving every day. The rest of this book builds upon this chapter's overview of the structure of the Internet in explaining and demonstrating current routing practices. Q- Are there other NAPs besides the four NSF-awarded NAPs? A- Yes. As connectivity needs keep growing, more NAPs are being created. Many exchange points are spread over North America, Europe, Asia/Pacific, South America, Africa, and the Middle East. Q- If I am a customer of a provider, do I have to connect to a NAP? A- No. NAPs are mainly for interconnection between providers. If you are a customer of a provider, your connection will be to the provider only. But how your provider is connected to one or more NAP can affect the quality of your connection. Q- Is the function of the route server at the NAP to switch traffic between providers? A- No. The route server keeps a database of routing policies used by providers. Providers use the NAP physical media to exchange traffic directly between one another. Q- Do all providers that connect to a NAP have to peer with the route server? A- Although this is the recommended procedure, in some situations, major providers end up peering directly with each other, while smaller providers are required to peer with the route server. Q- What is the difference between IRs and IRRs? A- Internet Registries (IRs) such as the InterNIC are responsible for registration services such as IP address assignment. Internet Routing Registries are responsible for maintaining databases of routing policies for service providers. Q- How are database services different from the Route Arbiter Database? A- Database services are part of the Network Information Services. These databases include communication documents such as RFCs. The RADB is a database of routing policies. RFC 1786; Representation of IP Routing Policies in a Routing Registry (Ripe-81++)
<urn:uuid:85ec2ff7-6951-41d0-a99a-8bebf3df31cc>
CC-MAIN-2017-04
http://www.cisco.com/cpress/cc/td/cpress/design/isp/1ispint.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947445
5,955
3.484375
3
W97M/Osm.A is a companion macro virus discovered at the end of May 1999. It doesn't infect "Normal.dot" or documents. This macro virus replicates attaching a template "Default.dot" in the documents. This template is a separate file that contains the macro virus code. Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action. More scanning & removal options More information on the scanning and removal options available in your F-Secure product can be found in the Help Center. You may also refer to the Knowledge Base on the F-Secure Community site for more information. When an infected document is opened, the virus attempts to copy the "Default.dot" file from the active document's directory to the Word's startup directory with the name "Startup.dot". This way the macro virus code executes every time when Word is opened. Once installed the virus replicates by copying the "Startup.dot" as "Default.dot" to the active document's directory, and attaching this template to the document during the save operation. The virus will not spread further on another computer if the "Default.dot" template is deleted and the user opens the document. W97M/Osm.A virus hides the "Tools/Macro" dialog by creating its own dialog box similar to the original one. Any attempt to change or create a macro fails and the virus shows a message box: You do not have permission to create macros on this computer. The virus code is invisible via "Tools/Macro/Visual Basic Editor", because the template project is originally password protected. Additionally the attached template contains a hidden embedded executable file "A:\osm32.EXE" that contains a dropper of Back Orifice trojan which is infected twice with W95/Marburg.8582 virus. The macro virus executes this embedded file by activating it, using Visual Basic command. Since the embedding of the executable file contains the reference to drive "A:", the virus may cause an error when the macro virus is executed from another drive or directory. However, this doesn't stop the macro virus to replicate and to execute the infected embedded executable file. As a result of all these attached and embedded virus codes, the macro virus will cause infection with tree different viruses/trojans. Technical Details: Katrin Tocheva, Sami Rautiainen and Peter Szor, F-Secure 1999
<urn:uuid:bd3d088b-1cd6-4811-aec2-588f5daec49c>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/osm.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.87293
529
2.578125
3
McAfee released “The Secret Life of Teens,” a survey that surveyed 955 U.S. 13-17 year olds (including 593 teens ages 13-15 and 362 teens aged 16-17) and reveals the online behavior of American teens and areas of concern for parents. “Keeping kids safe no longer only means teaching them about the dangers of alcohol or how to deal with a school bully,” said Tracy Mooney,McAfee Chief Cyber Security Mom. “This report is a wake-up call to the real dangers our teens face when they make themselves vulnerable online.” The study revealed that despite news headlines, teens are providing more information than they should with strangers: - 69 percent of 13-17 year olds have updated their status on social networking sites to include their physical location - 28 percent of teens chat with people they don’t know in the offline world - Girls are more likely than boys to chat with people online that they don’t know in the offline world, (32 percent vs. 24 percent), and 13-15 year old girls (16 percent) are more likely than boys the same age (7 percent) to have given a description of what they look like. Cyberbullying has made media headlines several times this year, with tales of teens and tweens harassing each other online- with tragic consequences. One-in-three teens knows someone who has had mean or hurtful information posted about them online – like sending anonymous emails, spreading rumors online, forwarding private information without someone’s permission or purposely posting mean or hurtful information about someone online. - 14 percent of 13-17 year olds admit to having engaged in some form of cyberbullying behavior in 2010 - 22 percent say they wouldn’t know what to do if they were cyberbullied. Teens have more options to get online than ever before. “It’s almost impossible to keep up with how my kids get online,” continued Mooney. “It’s not like keeping the home computer in the living room is the answer anymore – you have to educate your kids to be safe while they’re accessing the Web from their friends’ houses, or on their phone – away from my supervision.” - 87 percent of teens go online somewhere other than at home - 54 percent access from their friends’ or relatives’ houses - 30 percent of teens access the Web through a phone and 21 percent through a video game system - 23 percent of kids go online anywhere with an open Wi-Fi signal. Approximately two in five teens say they don’t tell their parents what they do while they are online (42 percent) and that they would change their online behavior if they knew their parents were watching (36 percent). In an effort to further conceal online behavior, teens admit to the following: - 38 percent of teens close or minimize the browser when their parents enter the room - 32 percent of teens clear the browser history when they are done using the computer - 55 percent of 13-17 year olds hide what they do online from parents.
<urn:uuid:ca002d6e-f1fb-41a2-9776-75299f95344d>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2010/06/23/teens-share-alarming-amounts-of-personal-info-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949515
655
3
3
The Japanese government is investigating how radioactive concrete ended up in an apartment complex built to house Fukushima residents evacuated from homes near the stricken Fukushima Daiichi Nuclear Power Plant. The contamination was discovered after a high-school student in the city of Nihonmatsu was found to have been exposed to 60 percent more radiation in three months than the government rates as safe for an entire year. Schools in Nihonmatsu, about 40 miles from the Fukushima Daiichi nuclear plant that was eventually destroyed by damage done during a March, 2010 Tsunami, is temporary home to many residents evacuated from the vicinity of the plant. Investigators traced the source of the high-schooler's exposure to radioactive cesium in the concrete structure of the six-month-old apartment building in which the student and a dozen families lived. The building was erected hurriedly, using gravel from a quarry in the town of Namie, less than a dozen miles from the Fukushima plant. That puts the quarry well within the 12-mile "no-go" zone declared by the Japanese government that forbids any material from areas heavily irradiated by the plants from being used outside it. Japanese officials were criticized for having sketchy and inadequate plans for dealing with natural disasters serious enough to affect the two Fukushima power plants. One weakness in that lack of preparation was overoptimistic estimations of how well the plants would contain radiation and how far any contamination might reach. Though the Tsunami struck March 11, it wasn’t until the last week in April that the 12-mile no-go zone was firmly established and enforced. Due to that delay, the gravel found in the three-story Nihonmatsu apartment building was on the ground, exposed to both radiation and radioactive fallout for six weeks before being shipped out to be used in nearby construction projects. The quarry owner told the Japanese Nuclear Emergency Response agency the quarry shipped 5,200 tons of gravel to 19 construction and cement companies between March 14 and April 22, when the 12-mile no-go zone was closed. The agency is tracing other shipments to identify other potential risks. Despite the extra radiation, the Nuclear Emergency Response agency does not plan to tear down the building or order residents to leave. Even in the most intensely irradiated areas within the building, annual exposure would equal about 10.86 millisieverts, not the 20 millisieverts required before the agency requires a building be evacuated, according to the WSJ. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:934f07c3-da60-4a97-a604-bbe5c9b8d63a>
CC-MAIN-2017-04
http://www.itworld.com/article/2731179/security/fukushima-radiation-built-into-apartment-housing-evacuees-from-disaster.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00298-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963096
566
3.171875
3
In light of the recent tragic events in Haiti, it might be a good time to review some of the requirements for a well designed Uninterruptible Power Source (UPS) to be included in all of our critical network installations. As a CCNA, we are called upon to help maintain the continued operation of networks during any type of power outage caused by either supplier failures or what is often referred to as “Acts of God,” such as tornados, hurricanes or, in this case, a 7.0 earthquake. After the devastating earthquake in Haiti, it became obvious that the country had lost most of its ability to provide any type of communications, either within the country or with the rest of the world. This blackout covered all of the most commonly used media, including the Internet, telephone, or radio. Although there were isolated instances of messages getting out, Haiti was essentially isolated, even though surrounded by neighboring countries and possible first responders. Although we are usually only faced with incoming power source problems, they can, by themselves, bring down any normal network operation. Depending on where we live, we normally refer to our power sources as coming from household power, household electricity, power lines, domestic power, wall power, line power, AC power, city power, street power, and grid power. No matter what we call the “power source”, the loss of any normal supply can leave us with dead equipment. It is important to understand the most commonly used terminology when discussing UPS capability. A UPS is usually implemented through the use of a battery backup. It is an electrical apparatus that provides emergency power to a load when the input power source fails. A UPS differs from an auxiliary/emergency power system or standby generator in that it provides instantaneous or near-instantaneous protection from input power interruptions. It does this by means of one or more attached batteries and associated electronic control circuitry. The downside of a battery-implemented UPS is that batteries have a maximum charge and can only provide an on-battery runtime of a relatively short period. These backup times usually range from five to fifteen minutes, which are typical for the most commonly used units. This period provides sufficient power to last until an auxiliary power source is brought on line, or the protected equipment is properly shut down. As such, a UPS is not designed to provide continuous operation until the main power source is reinstated. While not limited to protecting any specific type of equipment, a UPS is typically used to protect computers, data centers, telecommunication equipment, or other electrical equipment where an unexpected power disruption could cause anything from a network outage to actually causing injuries, fatalities, serious business disruption, and/or data loss. UPS units range in size from units designed to protect a single desktop computer without a video monitor, which typically requires around a 200 VA rating, to large units powering entire data centers or buildings. Even though the primary role of any UPS is to provide short-term power when the input power source fails, most UPS units are also capable of correcting other common utility power problems. - Total Loss of Input Voltage – A power outage, which is also known as a power cut, power failure, power loss, or blackout, is usually considered to be a short- or long-term loss of the electric power to an area. There are many causes of power failures in an electricity network, including: faults at power stations; damage to power lines; substations, or other parts of the distribution system; a short circuit; or even the overloading of electricity mains. - Power Surge or Spike – In electrical engineering, spikes are defined as fast-or short-duration electrical transients in voltage. They can be voltage spikes, current spikes, or transferred energy spikes in an electrical circuit. Fast, short-duration electrical transients, or over-voltages in the electric potential of a circuit, are typically caused by lightning strikes, tripped circuit breakers, and power transitions in other large equipment on the same power line. - Power Sag – A power sag is defined as either a momentary or sustained reduction in input voltage. A brownout or sag is a drop in voltage in an electrical power supply. The term “brownout” comes from the dimming experienced by lighting when the voltage sags. - Single Points of Failure – In large business enterprises, where reliability is of great importance, a single huge UPS can also be a single point of failure that can disrupt many other systems. To provide greater reliability, multiple smaller UPS modules and batteries can be integrated together to provide redundant power protection that is equivalent to one very large UPS. Many computer servers offer the option of redundant power supplies, so that in the event of one power supply failing, one or more other power supplies are able to power the load. This is a critical point. Each individual power supply must be able to power the entire server by itself. Redundancy is further enhanced by plugging each power supply into a different circuit and to a different circuit breaker. Redundant protection can be extended further by connecting each power supply to its own individual UPS. This provides double protection from both a power supply failure and a UPS failure, so that continued operation is ensured. This configuration is also referred to as 2N redundancy. If the budget does not allow for two identical UPS units then it is common practice to plug one power supply into mains and the other into the UPS. Through experience, you will note that laptops are not usually protected with a UPS system since they all can function with internal battery power. Of course, the amount of runtime depends upon the type and size of the battery provided. In my next post, I will discuss some of the most frequently used UPS technologies. Author: David Stahl
<urn:uuid:142a3c2e-c453-42f0-9daa-ed0c61fc7fa5>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/01/25/the-importance-of-a-ups/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948252
1,179
3.515625
4
As historic drought spreads in California, environmentalists are turning their attention towards potable water. While this isn’t an entirely new subject for conservation—we’re intimately familiar with water shortages here in Colorado and Wyoming—accelerating buzz and knowledge about climate change is making water shortages a hot topic, and data centers aren’t about to escape unscathed. An investigative report in the Financial Times is shedding some light on the current water situation in industries around the world. The report includes a look at Google’s data centers, with Joe Kava, Google’s head of data center operations, quoted as saying that water is the “big elephant in the room” for data center companies. Why Do Data Centers Need Water? Data centers require vast amounts of water, mostly for cooling, even with the proliferation of advanced, highly efficient evaporative cooling systems. Despite many operators raising their operational temperatures and taking advantage of free cooling, with outside air filtered nearly directly onto the data floor, hundreds of thousands of gallons of water are needed to cool even small operations. For example, Green House Data’s original facility uses highly efficient cooling units and is only 10,000 square feet (tiny compared to the 200,000 square foot facilities operated by large data center companies). In mid-summer, the WY1 facility uses 180,000 gallons of water each month. Our new WY2 facility will be 35,000 square feet, launching with about 10,000 sq ft of operational data center space. We have four super efficient Emerson Liebert EVI indirect evaporative cooling units ready for opening day. On a 75F degree day, operating with just a 1MW load, which is pretty low, the units would evaporate 460 gallons an hour (115 gallons each). Beyond cooling, the data center floor also must stay at an ideal humidity. Inefficiencies and water waste can therefore occur when one computer room is humidifying while another is dehumidifying. That Seems Like a Lot of Water — Is It? In a word, yes. That’s a lot of water. With a 200,000 square foot data center, we can assume up to 30,000,000 gallons of water a month, even if the cooling units are 20% more efficient than ours. However, data centers aren’t the biggest culprits when you consider the industrial world. A power plant with an outdated once-through cooling system will suck in more than a million gallons per minute. For something a little closer to home, a garden hose running nonstop would use about 750 gallons in a month. That doesn’t mean our industry should be ignoring our water usage, especially as the resource becomes more scarce. And we haven’t been, at least not entirely. The Green Grid put out its Water Usage Effectiveness measurement (WUE) back in 2011, recommending that operators start measuring and improving their water efficiency. WUE = Annual Water Usage / IT Equipment Energy Usage. It is measured in liters/kilowatt-hour. They also include a more advanced equation that measures the water use in comparison to total site energy use, which can also be used to compare the water use inside the facility with the water used in the generation of said energy. Few data center operators are reporting on this metric. Large operators like Google, Facebook, and Microsoft have also introduced efforts to increase efficiency through "penthouse" facility designs that include large free cooling plenums, cleaning and reusing grey water, cooling entire facilities with seawater, or even going so far as to overhaul treatment plants. As liquid cooling technologies become more common, more efficient cooling will be within reach and with much less water use. However, the chemicals used for liquid cooling carry their own environmental burden. Will we see a Liquid Cooling Usage Efficiency metric? Probably. In the meantime, more and more data center operators are reporting on energy efficiency. As our neighboring states and countries struggle to keep their crops watered, it’s time we start considering and conserving our water, too. Posted By: Joe Kozlowicz
<urn:uuid:91a537b8-0614-4a6d-9958-4b3fdbb98b8e>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/is-water-the-elephant-in-the-room-for-data-center-efficiency
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934225
849
2.875
3
(Excerpted & condensed from the Cisco Press book Network Security Auditing, written by Chris Jackson available June 4, 2010) To understand security, it is critical that you realize that security is a process, not a product. Security is a broad topic, and one of the few in information technology that literally touches all aspects of a business. To focus security efforts and to make them manageable, it helps to break down the various aspects of security into the five pillars of security. 1. Assessment: Assessments document and identify potential threats, key assets, policies and procedure, and management’s tolerance for risk. Assessments are not something that are done once and then forgotten. As the business needs change and new services and technologies are introduced, regularly scheduled reassessments should be conducted. Doing this gives you an opportunity to test policies and procedures to ensure that they are still relevant and appropriate. 2. Prevention: Prevention is not just accomplished through technology, but also policy, procedure, and awareness. Expect individual security controls to fail, but plan for the event by using multiple levels of prevention. 3. Detection: Detection is how you identify whether or not you have a security breach or intrusion. If you can’t detect a compromise, then you run the risk of having a false sense of trust in your prevention techniques. 4. Reaction: Reaction is the aspect of security that is most concerned with time. The goal is to minimize the time from detection to response so that exposure to the incident is minimized. Fast reaction depends on prevention and detection to provide the data and context needed to recognize a security breach. 5. Recovery: Recovery is where you play detective to determine what went wrong so that you can get the systems back on line without opening up the same vulnerability or condition that caused the problem in the first place. There is also the post-mortem aspect that determines what changes need to be made to processes, procedures, and technologies to reduce the likelihood of this type of vulnerability in the future.
<urn:uuid:9d0a607b-4f96-41b8-8d5a-bef7afc4a737>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/05/27/five-keys-to-security-fundamentals/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00142-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947727
410
2.65625
3
Amr Ibrahim Enan is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt. In the previous post, we discussed the need for VXLAN in the cloud along with the issues it solves. In this post, we will focus more on how VXLAN works. As you can see in the figure above, the packets exchanged between VXLAN enabled devices have four headers that encapsulate the Layer 2 frame. Those headers are: - Ethernet header - IP header - UDP header - VXLAN header The first question that might come to your mind is why we need all of those headers? Why do we not just add the VXLAN header? In order to understand this, we need to understand what is VXLAN, and how does it work? VXLAN (Virtual eXtensible Local Area Network) addresses the requirements of Layer 2 and Layer 3 data center network infrastructure in the presence of VMs in a multitenant environment. It runs across the existing networking infrastructure and provides a means to “stretch” a Layer 2 network. In short, VXLAN is a Layer 2 overlay scheme over a Layer 3 network Only VM’s within the same VXLAN segment can communicate with each other. Each VXLAN segment is scoped through a 24 bit segment ID hereafter termed the VXLAN Network Identifier (VNI). This allows up to 16M VXLAN segments to coexist within the same administrative domain. Hence we have the usage of the VXLAN header in the figure. The VNI scopes the inner MAC frame originated by the individual VM. Thus, you could have overlapping MAC addresses across segments but never have traffic “crossover” since the traffic is isolated using the VNI qualifier. This qualifier is in an outer header envelope over the inner MAC frame originated by the VM. Due to this encapsulation, VXLAN could also be termed a tunneling scheme to overlay Layer 2 networks on top of Layer 3 networks. The tunnels are stateless, so each frame is encapsulated according to a set of rules. The end point of the tunnel (VTEP) is located within the hypervisor on the server which houses the VM. Thus, the VNI and VXLAN related tunnel/outer header encapsulation are known only to the VTEP— the VM never sees it. The VTEP we are talking about here is the Nexus 1000V. Nexus 1000V now fully supports the VXLAN technology. For more information, on that visit: www.cisco.com/go/nexus1000v . VXLAN In Action Consider a VM within a VXLAN overlay network. This VM is unaware of VXLAN. To communicate with a VM on a different host, it sends a MAC frame destined to the target as before. The VTEP on the physical host looks up the VNI to which this VM is associated. It then determines if the destination MAC is on the same segment. If so, an outer header comprising an outer MAC, outer IP address UDP address, and VXLAN header are inserted in front of the original MAC frame. Now you might ask yourself why UDP and not TCP — or even why UDP in the first place? Well, the outer UDP header with a source port is provided by the VTEP, and the destination port is a well-known UDP port obtained by IANA assignment. It is recommended that the source port be a hash of the inner Ethernet frame’s headers to obtain a level of entropy for ECMP/load balancing of the VM to VM traffic across the VXLAN overlay, which, as we discussed earlier, will now use VPC or VSS instead of STP which relies mainly on Portchannels. The final packet is transmitted out to the destination, which is the IP address of the remote VTEP that connects the destination VM addressed by the inner MAC destination address. Upon receipt, the remote VTEP verifies that the VNI is a valid one and is used by the destination VM. If so, the packet is stripped of its outer header and passed on to the destination VM. The destination VM never knows about the VNI or that the frame was transported with a VXLAN encapsulation. In addition to forwarding the packet to the destination VM, the remote VTEP learns the Inner Source MAC to outer Source IP address mapping. It stores this mapping in a table so that when the destination VM sends a response packet, there is no need for an “unknown destination” flooding of the response packet. So to summarize, VXLAN will allow you to increase the number of available layer 2 domains by adding the new VXLAN header. Using this header will allow you to have up to 16 M layer 2 domains, and if two devices hosted on the same physical infrastructure have the same address, it will not be a problem as long as it is configured in a different VXLAN. Also your VXLAN members can be in the same layer 2 domains or in different layer 2 domains since VXLAN can over-relay over layer 3 domains. Devices with the same VXLAN still can communicate with each other. Also, your traffic will be effectively load balanced over the over-relay network as the UDP source port will be different for each VM starting a new connection.
<urn:uuid:f7a6d177-36aa-4ad1-b0f3-8c6f9e85045c>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/03/26/vxlan-what-is-it-and-why-do-we-need-it-the-conclusion/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00142-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907844
1,104
3.03125
3
Can your computer read a Web page without your help? Soon it might. Tim Berners-Lee, the inventor of the Web format, and the organization that keeps the standards of the Web, the World Wide Web Consortium, have recently been promoting the idea of making the Web machine-readable, or a Web of data. What does that mean? After all, at least in one sense, the Web is already being read by a machine -- namely your own computer -- when you surf the Web. At the International Semantic Web Conference, being held this week in Chantilly, Va., Dean Allemang, chief scientist at Semantic Web consulting firm TopQuadrant, offered a solid example of how a machine-readable Web would help us all, in theory anyway. His example was work-related: booking hotels. Say you wanted to attend a conference at some out-of-town location. The conference site itself probably has a Web site. You copy its physical address from its site, and go to an online hotel broker site, such as Hotels.com, to find a nearby hotel. You do a search on hotels, say, by entering that address into the search criteria, to seek hotel within a certain radius. Or you just a get a list of hotels and go to a third Web site, a mapping site such as MapQuest, and enter hotel addresses and the conference center address to see if any hotel is close to the conference center. In Allemang's view, this really is crazy. Why copy some information from one page and paste it to another, using the same computer? Why can't the computer itself do the work? The trick would be to get all the sites to agree on how to represent an address, Allemang said. Then, the addresses can be passed from one site to the next through your browser, automatically, without you having to do anything. The mapping site could check your cache and list any addresses found there, offering you the option of mapping them. Automating such a task (and the countless others we do by hand on our computers), is the point of creating a machine-readable Web. If computer programs can read the Web pages and carry out tasks, we won't have to. Relational databases make the prospect feasible. With databases, you can structure data so each data element is slotted into a predictable location. You can query a database of personnel data to return a birth date of a particular person, because the row of data with that person's info has a dedicated column dedicated to the birth date. This approach wouldn’t work so well for data beyond a single database, however. "The problem is that everyone assumes you will need to build a huge data warehouse, where everything can be compared. This will never happen," Allemang said. Another factor: On the Web, data is not structured in such a way that it can retrieved with any consistency, and the vast number of people who design and maintain Web sites would not all agree on the same format for structuring data. The answer the W3C has come up with comes in a form of a set of interrelated standards, that can be used to embed data on Web sites, as well as to interpret the data that is found there. One standard is the Resource Description Framework. The other is the Web Ontology Language, or OWL. RDF is a way of encoding data so it can be available for a wider audience in such a way that external IT systems can understand it. It is based on making associations. It describes data by breaking each data element into three nodes: a subject, a predicate, and object. For example, consider the fact that Yellowstone National Park offers camping. "Yellowstone" would be subject. "offers" would be the predicate and "camping" would be the "object." (All three elements get uniform resource identifiers, or a globally-recognized Internet addresses). A query against Triple Store, which is what a RDF database is called, can link together disparate facts. If another triple, perhaps located in another Triple Store, contains the fact that Yellowstone contains the Mammoth Hot Springs, a single search across multiple Triple Stores can return both facts. Additional standards can further refine the precision of the data definition. For instance, two parties can agree that the term "Yellowstone" refers "Yellowstone National Park" by using a shared, controlled vocabulary, which can be referenced through a Resource Description Framework schema and RDFS. RDFS also allows inferencing. In RDFS, you can state that Yellowstone is a type of national park. So a search for national parks that offer camping would return Yellowstone. Of course, the Interior Department could build a list of all the national parks and include which services each park offers. But with the semantic Web approach, such a single database would never be needed. The services for each park could maintain their own data, and the results could be compiled only when someone posts some piece of specific data, Allemang pointed out. In essence, with RDF, a user can build a set of data from various sources on the Web that may have not been brought together before. How do you use these triples? One way is through the query language for RDF, called SPARQL (an abbreviation for the humorously recursive SPARQL Protocol and RDF Query Language). With Structured Query language (SQL), you can query multiple database tables through the JOIN function. With a SPARQL query, you specify all the triples you would need, and the query engine will filter down to the answers that fit all of your criteria. For instance, say you are looking for a four-star hotel in New York. You have a query to look for triples specifying for four-star hotels, and for hotels and New York. The query search engine would find all the triples for hotels in New York, as well as all the triples for four-star hotels, and filter the set down to four-star hotels in New York. Even more sophisticated interpretations of RDF Triples can be done through OWL. The logical chain of reason within a RDF Triple is relatively static, and can vary according to who does the encoding. One triple may say that Yellowstone "offers" camping as a service, but another triple may state that camping "is offered" Arcadia National Park. While it may seem obvious to us that both Arcadia and Yellowstone offer camping, it wouldn't be to the computer. A SPARQL query engine, perhaps one embedded in a Web application, could consult OWL and return both entries though. While the idea of a machine-readable Web sounds great, there still requires data holders to render their material in RDF, a tall order for already-overworked Web managers. But the benefits may be worth it — once online, data can be reused in ways that government managers may never have considered. Posted by Joab Jackson on Oct 27, 2009 at 9:39 AM
<urn:uuid:40c450c5-3749-432e-84d7-113a1ec62ff0>
CC-MAIN-2017-04
https://gcn.com/blogs/gcn-tech-blog/2009/10/machine-readable-web.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00142-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939774
1,449
3.0625
3
Cash flow analysis, also known as cash flow projection, considers all factors that affect incoming and outgoing business cash in order to facilitate accurate analysis and predictions. Individual analyses of key cash components can identify problem areas and help mitigate issues. Cash Flow Statements Explained A cash flow statement (CFS) is meant to complement the income statement and balance sheet. The CFS includes: - Company sources and uses of cash. - Beginning and ending cash values during a specified period. - Combined total change in cash from all sources and uses of cash.
<urn:uuid:caa9a4f0-2ae1-4d02-8b04-8ff82aee8ecd>
CC-MAIN-2017-04
https://www.infotech.com/research/effective-decision-making-goes-with-the-cash-flow
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00262-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915529
112
3.09375
3
Adv C++ Programming & the Standard Template Library (STL) (TTE9755) Mastering C++ Programming and STL. The C++ Standard Template Library (STL) is a general-purpose library of generic algorithms and data structures. This course is an intermediate-to-advanced level, hands-on programming course that thoroughly explores all of the STL components. Its purpose is to make a programming task much easier by providing extensive components that can be combined in an application. It also provides a framework into which different programming problems can be dissected.
<urn:uuid:a733f7f8-3481-45f0-bb4d-d3ecff264eac>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/118988/adv-cplusplus-programming-the-standard-template-library-stl-tte9755/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00170-ip-10-171-10-70.ec2.internal.warc.gz
en
0.858487
115
2.734375
3
IBM says it has patented a natural disaster warning system, which uses analytic techniques that accurately and precisely conducts post-event analysis of seismic events, such as earthquakes, as well as provide early warnings for tsunamis, which can follow earthquakes. The invention also provides the ability to rapidly measure and analyze the damage zone of an earthquake to help prioritize emergency response needed following an earthquake. According to Big Blue, the invention would require a piece of software running on each machine in a data center that would gather data generated by vibration sensors, known as MEMS accelerometers, within computer hard disk drives to analyze information generated by seismic events. This technique is enabled by collecting hard drive sensor data and transmitting it via high speed networking to a data processing center, which can analyze the data, classify the events, and enrich the data -- in real time, IBM says. From there, it can be determined exactly when a seismic event started, how long a seismic event lasted, the intensity of a seismic event, the frequency of motion of a seismic event, direction of motion of a seismic event, IBM says. This invention is able to crowd-source important earthquake IBM stated. Information is then delivered to decision makers for action, including the emergency response representatives, such as police, firefighters or the Federal Emergency Management Agency. "Every modern hard drive has an accelerometer built into it and, with this invention, you can take the data off the device, network it together, analyze it, and generate actionable information that tells you in very fine detail what happened during an earthquake," says IBM inventor Robert Friedlander "It is a means to quickly learn what happened, receive a first estimate of what damage has been done, and help first responders determine where they should direct their emergency resources." The invention also uses sensor data to assess and provide early warnings for tsunamis, which can follow earthquakes that occur at the ocean floor. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:7fad6b59-1743-4d67-8b1b-ebbf98019848>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227550/security/ibm-says-software-helps-predict-natural-disasters.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949543
410
3.328125
3
In the last couple of years, malware that hijacks the users’ machines and demands money to “unblock” it has become an often encountered threat. Messages presented by this “ransomware” usually contain warnings that seem to come directly from law enforcement agencies and accuse the user of having downloaded pirated music tracks or movies. The entity behind the warning and the language used in the message are usually well matched, but as Microsoft researchers have shown, that is not always the case. In a very recent example, the ransomware authors made quite an effort with HTML style sheets and content in order to trick the users into believing that GEMA (a German music copyright organization) is the author of the warning, but they unexpectedly used the English language for it: The malware detects the users’ IP address and host name, and tries to threaten them into paying a 100 Euros via Paysafecard, a popular European pre-paid electronic payment method. The message also helpfully notes where such a card can be bought. Once the payment is effected and the password entered, the computer should ideally be “unlocked”, but that outcome is by no means certain.
<urn:uuid:e5954004-c9a8-411a-bd14-05d6f44fc8ce>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/03/19/europeans-targeted-with-new-ransomware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947845
247
2.75
3
As has become apparent to nearly everyone in the HPC community, life beyond petascale supercomputing will be power limited. Many efforts around the world are now underway to address this problem, both by commercial interests and researchers. One such effort that brings both into play is the Mont-Blanc research project at the Barcelona Supercomputing Center (BSC), which is looking to exploit ARM processors, GPUs, and other off-the-shelf technologies to produce radically energy-efficient supercomputers. In this case, radically means using 15 to 30 times less energy than would be the case with current HPC technologies. The idea is to be able to build petascale, and eventually exascale supercomputers that would draw no more than twice the power of the top supercomputers today. (The world champ 10-petaflop K computer chews up 12MW running Linpack.) Specifically, the goal is develop an architecture that can scale to 50 petaflops on just 7MW of power in the 2014 timeframe, and 200 petaflops with 10MW by 2017. The Mont-Blanc project was officially kicked off on October 14, and thanks to 14 million Euros in funding, is already in full swing. Last week, NVIDIA announced that BSC had built and deployed a prototype machine using the GPU maker’s ARM-based Tegra processors that have, up until now, been used only in mobile devices. The power-sipping ARM is increasingly turning up in conversations around energy efficient HPC. At SC11 in Seattle last week, there were a couple of sessions along these lines, including a BoF on Energy Efficient High Performance Computing that featured the advantages of the ARM architecture for this line of work as well as a PGI exhibitor forum on some of the practical aspects of using ARM processors for high performance computing. There was also the recent news by ARM Ltd of its new 64-bit ARM design (ARMv8), which is intended to move the architecture into the server arena. NVIDIA is already sold on ARM, and not just for the Tegra line. In January, the company revealed “Project Denver,” its plan to design processors that integrate NVIDIA-designed ARM CPUs and CUDA GPUs, with the idea of introducing them across their entire portfolio, including the high-end Tesla line. “We think that the momentum is clearly pointing in the direction of more and more ARM infiltration into the HPC space,” said Steve Scott, CTO of NVIDIA’s Tesla Unit. The Mont-Blanc project is certainly an endorsement of this approach. The initial BSC prototype system is a 256-node cluster, with each node pairing a dual-core Tegra 2 with two independent ARM Cortex-A9 processors. The whole machine delivers a meager 512 gigaflops (peak) and an efficiency of about 300 megaflops/watt, which is on par with a current-generation x86-based cluster. The numbers here are somewhat meaningless though. The initial system is a proof of concept platform designed for researchers to begin development of the software stack and port some initial applications. The second BSC prototype, scheduled to be built in the first half of 2012, will employ NVIDIA’s next-generation quad-core Tegra 3 chips hooked up to discrete NVIDIA GPUs, in this case, the GeForce 520MX (a GPU for laptops). This system is also 256 nodes, but will deliver on the order of 38 peak teraflops. Energy efficiency is estimated to be a much more impressive 7.5 gigaflops/watt, or more than three and a half times better than the industry-leading Blue Gene/Q supercomputer. In conjunction with this second prototype, NVIDIA will be releasing a new CUDA toolkit that will include ARM support. The first two prototypes are BSC inventions. The project will subsequently develop its own more advanced prototype. According to Scott, that cluster will be 1,000 nodes, although the internal make-up is still not decided. Given the timeframe though (2013-2014), the system is likely to include NVIDIA processors using Project Denver technology, with the chip maker’s homegrown ARM implementation and much more performant GPUs. By the end of the three-year project, the researchers intend to have a complete software stack, including an operating system, runtime libraries, scientific libraries, cluster management middleware, one or more file systems, and performance tools. They also hope to have 11 full-scale scientific applications running on the architecture, which encompass fluid dynamics, protein folding, weather modeling, quantum chromodynamics, and seismic simulations, among others. Whether Mont-Blanc leads to any commercial HPC products remains to be seen. NVIDIA, for its part, is certainly happy to see this level interest and adoption of its ARM-GPU approach. “We see this as seeding the environment, where people can do software development and experimentation,” said Scott. “We think that it will grow into something larger down the road.”
<urn:uuid:e80c7d1d-b9d9-4eea-85af-809b961cccb3>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/11/23/nvidia_tegra_processors_blaze_the_way_for_arm_in_supercomputing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00291-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943998
1,041
2.953125
3
Sahara Desert of Mauritania / February 19, 2013 In the Sahara desert of Mauritania, this prominent circular feature has attracted attention since the earliest space missions. Why? Because in the otherwise rather featureless expanse of the desert, it forms a conspicuous bull’s-eye. The site was initially interpreted as a meteorite impact structure because of its high degree of circularity, but according to NASA, it is now thought to be "merely a symmetrical uplift that has been laid bare by erosion." Image courtesy of NASA/GSFC/MITI/ERSDAC/JAROS, and U.S./Japan ASTER Science Team
<urn:uuid:ce9d7371-56c5-4ad4-864e-e5d0ba37d00b>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Sahara-desert-of-Mauritania.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00529-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936678
138
3.734375
4
By Kurt Wimmer and Josephine Liu The United Nations Office on Drugs and Crime has released a report warning that terrorists are increasingly using the Internet to spread propaganda, recruit and train supporters, finance their activities, and plan terrorist attacks. Besides providing an overview of the existing legal frameworks to address terrorists’ use of the Internet, the report highlights a number of challenges associated with investigating and prosecuting terrorism cases — and specifically notes that “[o]ne of the major problems confronting all law enforcement agencies is the lack of an internationally agreed framework for retention of data held by ISPs.” As the report notes, some countries already require ISPs to retain certain types of data for a specified time period. But even in the European Union, where Directive 2006/24/EC requires Member States to ensure that regulated providers retain specified communications data for a period between six months and two years, there is no consistent data-retention period. Some Member States require data to be retained for six months, others for two years. In addition, several Member States continue to grapple with implementing the Directive, including Germany (where an attempt to implement it was struck down by the constitutional court). There have been a number of recent attempts to enact or expand data-retention legislation. For example: - Earlier this year, the Australian government asked Parliament to begin an inquiry into whether ISPs should be required to retain data for up to two years. The Attorney General recently clarified that the government is proposing retention of subscriber and traffic data, not the content of communications. - A draft cybercrime law was introduced in Brazil’s Senate that would require Internet intermediaries to retain “electronic address data” associated with the source and timing of an Internet connection for three years. - As chronicled here, a number of data-retention bills have been proposed in the United States. The most recent federal proposal is H.R. 1981, which passed out of committee in December 2011. The bill would require ISPs to retain for at least one year a log of “temporarily assigned network addresses” to enable identification of customers. - The UK Parliament is considering a draft Communications Data Bill that would expand the types of data that telecommunications operators must retain for a year. Telcos would need to retain traffic data — e.g., time, duration, originator, recipient, location of sending device — for communications made via social media, webmail, VoIP, or online gaming. The UN report’s call for the “development of a universally agreed regulatory framework imposing consistent obligations on all ISPs regarding the type and duration of customer usage data to be retained” may prompt law enforcement agencies to push harder for mandatory data retention periods, although we expect that privacy groups will continue to oppose these efforts.
<urn:uuid:a539e419-bd22-45b9-91be-ac67b1f1ed11>
CC-MAIN-2017-04
https://www.insideprivacy.com/international/un-report-calls-for-mandatory-data-retention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00437-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934358
568
2.59375
3
-Encourages healthier lifestyle by showing consequences of an unhealthy one - NEW YORK; May 16, 2006 – Accenture (NYSE: ACN) today unveiled an experimental “mirror” that shows unhealthy eaters what they could look like in the future if they fail to improve their diets. The device – known as the Persuasive Mirror - stems from an Accenture research initiative aimed at developing technologies that encourage people to maintain healthy lifestyles in order to avoid obesity and related health problems. Plans call for it to be used in upcoming research studies at the University of California, San Diego (UCSD). “We see great potential in using the technology available via the Persuasive Mirror not only to assess body image but also to determine how body image might be used to affect positive behavioral change,” said Jeannie Huang, M.D., M.P.H., assistant professor in residence at UCSD. Dr. Huang is also a member of PACE (www.paceproject.org), a multidisciplinary research consortium of more than 40 professionals that conducts a broad array of research aimed at developing tools to help health professionals help their patients make and sustain healthy changes in physical activity, diet and other lifestyle behaviors. The mirror was developed at Accenture Technology Labs in Sophia Antipolis, France, where researchers strive to embed technologies into ordinary household items, thereby allowing users to gain valuable health information just by going about their daily activities. Accordingly, the prototype was built to look like a standard bathroom mirror. Operation requires that users do nothing more than look at their “reflections.” But the operational simplicity belies the device’s complex technology. The “mirror” uses two cameras placed on the sides of a flat-panel display and combines video streams from both cameras to obtain a realistic replication of a mirror reflection. Advanced image processing and proprietary software are used to visually enhance the person’s reflection. Couch Potatoes Beware The mirror is fed information from webcams and sensing devices placed around the house, including images of everyday activities. For example, the monitoring system can be configured to spot visits to the refrigerator, treadmill usage, or time spent on the couch. Software analyzes the data to determine behavior – be it healthy or unhealthy – and how behavior, including overeating, will influence future appearance, including obesity. As a result, a sedentary person, for example, can see his face growing fat before his eyes. The Persuasive Mirror can also be configured to accept other health-related data. For instance, it can show the consequences of too much time spend in the sun, or calculate the benefits of data provided by devices such as a pedometer worn during a brisk walk or run. Future iterations will also calculate the effects of other unhealthy behaviors such as drinking, smoking or drug use. “One of the key solutions experts identify for solving the growing problems caused by poor diet, including obesity, inactivity and smoking is a change in personal habits,” said Martin Illsley, director of the Sophia Antipolis facility, one of three research labs operated by Accenture. “This led us to think about using technology as a persuasion tool, specifically how technology can be used to create the kind of motivation and personal awareness that will change unwanted behaviors.” This is known as the science of captology, defined as the study of computers as persuasive technologies. It includes the design, research and analysis of interactive computing products created for the purpose of changing people’s attitudes or behaviors. Illsley and his team concluded that for any technology dealing with diet and exercise habits to be persuasive, it needed to be highly visual. They realized that a mirror that projects the image of how the individual’s face and body will look in the future if habits are poor – or, conversely, improve – could best drive home the point. “We monitor the individual’s habits in terms of diet and exercise and whether or not they smoke or spend time in the sun. And by focusing on the face and body, visually project how he or she will look in the near future,” said Illsley. “The image can punish them if they have not taken good care of themselves, or can reward them if they are following healthy diet plans and have begun to lose weight.” Intelligent Home Services The mirror fits into Accenture Technology Labs initiative called Intelligent Home Services that merges sensor technologies and artificial intelligence to enable a new class of assistive technologies. It makes use of cameras to track activity and artificial intelligence techniques to learn habits automatically so that deviations can be spotted. Previous prototypes have demonstrated how emerging technologies in the home can bring prolonged independence to the elderly, create a channel for new services and help businesses and governments address the challenge of the aging population. All of the prototypes offer practical benefits to business. In the case of the mirror, which took 18 months to build, Illsley sees potential benefits for companies in such industries as pharmaceuticals, health care services and insurance. “While applications exist for entering a photo of an individual and seeing how he or she is expected to look years later, such as those used to find missing children, this concept is completely different. We are not aware of another company or research firm that has done anything similar,” said Illsley. Illsley cautions that input and monitoring from medical experts is essential. “That’s one reason we’re so excited about working UCSD. By collaborating with them, we can take this prototype to the next stage and ensure that further development takes place with medical expertise. This will ensure the technologies we have identified are used to help people improve their lifestyle in the best way possible.” Accenture is a global management consulting, technology services and outsourcing company. Committed to delivering innovation, Accenture collaborates with its clients to help them become high-performance businesses and governments. With deep industry and business process expertise, broad global resources and a proven track record, Accenture can mobilize the right people, skills and technologies to help clients improve their performance. With more than 129,000 people in 48 countries, the company generated net revenues of US$15.55 billion for the fiscal year ended Aug. 31, 2005. Its home page is www.accenture.com.
<urn:uuid:1cc9e77d-3700-40d6-b8f7-ab47b6cb6f0b>
CC-MAIN-2017-04
https://newsroom.accenture.com/news/high-tech-mirror-from-accenture-offers-look-into-future.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00097-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9418
1,318
2.609375
3
Ding H.-W.,Institute of Hydrogeoloical and Engineering Geology | Ding H.-W.,Geology Engineering Institute of Gansu Province | Li L.,Institute of Hydrogeoloical and Engineering Geology | Li L.,Geology Engineering Institute of Gansu Province | And 8 more authors. Northwestern Geology | Year: 2013 Based on the field survey data and exploration results, this paper comprehensively e-laborated the landslide distribution, structure, type, size and the history of deformation. In addition, factors that may lead to or influence the control of landslide were also thoroughly analyzed, which include the topography, geological structure and earthquake, stratum lithology, precipitation, hydrological geological conditions and human economic activities. According to the slope of typical geological section, transfer coefficient method was applied to calculate and analyze the landslide stability. The result shows that landslide is in a less stable or unstable state under the current situation, while rainfall and earthquake conditions will make the landslide unstable. Finally, landslide project management ideas and solutions were proposed, with a view to be used for the landslide government in similar regions. Source
<urn:uuid:0995b85d-896e-4571-8063-676590854b78>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/institute-of-hydrogeoloical-and-engineering-geology-1229541/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91441
240
2.5625
3
Definition: A balanced and nested grid (BANG) file is a point access method which divides space into a nonperiodic grid. Each spatial dimension is divided by a linear hash. Cells may intersect, and points may be distributed between them. See also twin grid file. Note: After [GG98]. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "BANG file", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/bangfile.html
<urn:uuid:e6fb09df-59a2-4cd5-b0cb-aa8c42cd8c04>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/bangfile.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00575-ip-10-171-10-70.ec2.internal.warc.gz
en
0.871177
179
2.65625
3
Security breaches are on the up – we all know that – and they are set to get worse. In order to interact with suppliers online, organizations will be expected to have stronger authentication, which is where two-factor authentication will play an increasingly bigger role. Before continuing, let’s first take a moment to capture the enormity of the problem. - We want to be able to work wherever we happen to find ourselves rather than be restricted to a physical building - We want to use whatever device we happen to have in our hands - We want to do it 24 hours a day: online banking, go shopping, order repeat prescriptions and complete tax returns – the list goes on To perform each of these tasks, you will need to create a user account. Yet all too often it’s been proven that just using a username, in combination with a password, is inadequate. For organizations the repercussions can be far more damaging. The frequency of data breaches is just one indication that this is a growing problem that many have yet to come to grips with. So, how can organizations strengthen these vulnerable virtual applications and access points? Two Kinds of Evidence From a security perspective, the simple concept is that we typically trust the person accessing an application. However, passwords can be cracked or even guessed, so a stronger model is needed. This is where two-factor authentication has stepped up to the plate. In its very basic sense, it is the combination of two different elements from a choice of three: - Something you know – such as a pin or password - Something you own – such as a key, mobile phone, token or the chip embedded in a credit card - Something specific to the person – such as a fingerprint, or retina I’d like to clarify: entering certain characters from a memorable phrase does not constitute two-factor authentication. It’s still something you know, so it’s just duplicating something you know. Whereas something specific to the person – or biometrics as it’s widely referred – is considerably foolproof, it requires hardware, which often makes this element a non-starter. The reason is a physical reader would need to be installed at every entry point, making it either very expensive or impractical, when you consider the flexibility our technical society demands. There’s also the further complication of designing a solution today that’s capable of accommodating the devices of tomorrow. It’s not surprising, therefore, that when introducing a two-factor authentication solution, it is the first two elements that are the most common combination employed. Although the amalgamation of something you know and something you own seems a no-brainer, the reality is less practical. Many employees who access their corporate network will be familiar with a physical token or key. For consumers, banks are increasingly adopting two-factor authentication for their on-line banking services – HSBC in the UK has just introduced an HSBC Secure Key for every user. If every organization that allows individuals to access its systems first issues them a physical token, then that’s a lot of plastic. In time these could become dozens of tokens weighing you down. Imagine, having one for the bank, your health records, tax returns, utility companies to access and pay bills, employer, and so on. It wouldn’t be long before we became chained down to our multiple token necklace. Additionally, there’s the expense of each of these little pieces of plastic – not just in monetary terms, because they’re not free, but also to the planet. The environmental cost for producing and distributing 4,000 tokens works out at around 4.3 million metric tons of CO2 or, for those who like a visual representation, that’s the equivalent of chopping down 240 million trees. Physical Token Apathy The biggest issue with physical tokens is that end-users simply don’t like them. Organizations already struggle with users either forgetting or losing their physical tokens. Each instance results in a call to a help desk to allow one-time access. In the case of a lost token, a replacement has to be issued, resulting in wasted time, postage and the expense of the device. Imagine this replicated not just for employees, but for every person that accesses your service. What about for all of us, as consumers? Imagine the frustration when you want to pay a bill at work and you’ve left your token at home, and trying to identify which token belongs to which supplier. SMS Technology is the Logical Alternative I’ve made, what I hope you’ll agree, is a compelling case for two-factor authentication. It’s just that I don’t believe that physical tokens are the way forward. Practically every pocket holds the perfect key – SMS technology on your mobile phone. Organizations can easily utilize existing mobile technology – whether corporate or personally owned – to replicate a physical token. A passcode is sent to the user’s mobile phone as a text message, turning the mobile into a ‘soft’ token. When comparing soft against physical tokens, it is estimated that moving to soft token authentication will reduce ongoing costs by 40–60%. And there’s no reason why dozens of soft tokens can’t be carried on a single device. Moreover, another advantage of SMS messaging is that the passcode can be preloaded, as it gets sent immediately to the user once the previous passcode is used and is stored, ready for use the next time. So if there are delays or signal problems, then the user will already have their next passcode ready to go, avoiding any login issues. Finally, if you were to lose a piece of plastic, then you probably wouldn’t notice until the next time you needed it. But, if you’re separated from your mobile phone, then you notice it almost immediately. It makes sense, therefore, that using a mobile phone as ‘something you own’ is the perfect solution. Who in their right mind would opt instead to be strangled by a token necklace? Andrew Kemshall is the co-founder and technical director of SecurEnvoy. Before setting up SecurEnvoy, which specializes in tokenless two-factor authentication, Kemshall was worked for RSA as one of their original technical experts in Europe, clocking over 15 years of experience in user authentication. His particular specialty is two-factor authentication in the fields of architecture, design and development of next-generation authentication software.
<urn:uuid:bab89372-2e6c-48e3-b340-7d484b4ec90e>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/opinions/comment-two-factor-authentication-world-of-the/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00391-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943251
1,356
2.71875
3
The PC of the future will be very different from the computers that have come to dominate so many desktops in the home and office today, according to a broad survey of 895 tech experts by the Pew Research Center's Internet & American Life Project and Elon University's Imagining the Internet Center. By 2020 the majority of these experts expect most people to access their applications online, the basic cloud computing model championed today by Google (NASDAQ: GOOG), Salesforce (NYSE: CRM) and others, versus the traditional model of running software stored on the PC. Likewise, information access and sharing will be online versus relying on what's stored on the local device. However, many of those surveyed also agreed that the PC still has a future working in tandem with cloud-based systems. In one scenario, the PC could prove broadly useful as the primary interface to local networks or private clouds. Some also noted that PCs, even if they're primarily used as Web terminals, will continue to dominate because smartphones and other portable devices have a limited user interface and aren't ideal for the most common productivity applications including word processing and working with spreadsheets. Apple CEO Steve Jobs voiced a different view at a recent conference where he predicted the ascendancy of mobile devices like his company's iPad in an increasingly mobile world. "PCs are going to be like trucks...They are still going to be around," Jobs said at the AllThingsD conference, adding that only "one out of X people will need them." One measure of where the experts in the Pew study see the cloud's impact was in response to the following statements. Some 71 percent agreed with the statement: "By 2020, most people won't do their work with software running on a general-purpose PC. Instead, they will work in Internet-based applications such as Google Docs and in applications run from smartphones. Aspiring application developers will develop for smartphone vendors and companies that provide Internet-based applications because most innovative work will be done in that domain, instead of designing applications that run on a PC operating system." On the flip side, only 27 percent agreed with this statement: "By 2020, most people will still do their work with software running on a general-purpose PC. Internet-based applications like Google Docs and applications run from smartphones will have some functionality, but the most innovative and important applications will run on (and spring from) a PC operating system. Aspiring application designers will write mostly for PCs." But some survey respondents said cloud-computing adoption may also continue to be hampered by security concerns and users' willingness to share personal information on social networks and other cloud-based systems. Beyond individual or consumer concern, some of those surveyed said large businesses are far less likely to put most of their work "in the cloud" anytime soon because of control and security issues. Others predicted low-income people in least-developed areas of the world are most likely to use the cloud because it augments the mobile phone that is likely their only computer device. Survey results represented the individual opinions of representatives from such companies and institutions as Google, Microsoft. Cisco Systems, Yahoo, Intel, IBM, Hewlett-Packard, Nokia, New York Times, O'Reilly Media, Wired magazine, The Economist magazine, Institute for the Future, British Telecom, MITRE and Craigslist.
<urn:uuid:7089671e-5a47-49e1-b9bd-c498952d3e24>
CC-MAIN-2017-04
http://www.cioupdate.com/news/article.php/3887636/Tech-Experts-See-Bright-Future-in-the-Cloud.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946556
685
2.703125
3
Churnside J.H.,Earth System Research Laboratory | Marchbanks R.D.,National Oceanic and Atmospheric Administration Geophysical Research Letters | Year: 2015 The first synoptic measurements of subsurface plankton layers were made in the western Arctic Ocean in July 2014 using airborne lidar. Layers were detected in open water and in pack ice where up to 90% of the surface was covered by ice. Layers under the ice were less prevalent, weaker, and shallower than those in open water. Layers were more prevalent in the Chukchi Sea than in the Beaufort Sea. Three quarters of the layers observed were thinner than 5 m. The presence of these layers, which are not adequately captured in satellite data, will influence primary productivity, secondary productivity, fisheries recruitment, and carbon export to the benthos. © 2015. American Geophysical Union. All Rights Reserved. Source The U.S. Could Make a Fast, Cheap Switch to Clean Energy More Coal-fired power plants are the biggest emitters of greenhouse gases in the United States, but new research finds that existing technology could cheaply slash the nation’s carbon spew nearly 80 percent by 2030. How? By transporting renewable energy from where the sun is shining and the wind is blowing to where it is not, according to the study, which was published on Monday in the journal Nature Climate Change by scientists from the National Oceanic and Atmospheric Administration and the University of Colorado Boulder. NOAA’s highly detailed weather data shows there’s nearly always someplace in the 48 contiguous states where electricity can be generated by solar power stations and wind farms, even if it happens to be hundreds or thousands of miles away from where it’s needed. The quandary: How to move electricity generated by that sun or wind over long distances without losing too much of it in the process. The solution: A proven technology, called high-voltage direct current, already exists and can carry power across long distances more efficiently than alternating current, the standard power transmission mode in the U.S. Utilities could add direct-current infrastructure to alternating-current transmission lines over the next 15 years as part of planned updates and upgrades without breaking the bank, said study coauthor Alexander MacDonald, who recently retired as director of NOAA’s Earth System Research Laboratory. “Almost everybody believes that if we go to wind and solar energy it will be more expensive, or won’t be ready unless we have a big technological breakthrough” in battery storage technology, MacDonald said. “Our study says that with existing transmission technology and use of the whole 48 states with this ‘interstate for electrons,’ we’re ready right now to have a national system that has the same electric costs as today, with as much as 80 percent less carbon, and just as reliable.” The greater reliance on wind and solar power would also cut water use for energy by 65 percent, the study found. That’s because fossil fuel plants, which generate 40 percent of the nation’s carbon emissions, need large volumes of water for cooling. RELATED: Morocco Will Soon Become the World’s Solar Energy Superpower “Our study assumed that the existing U.S. power system, with all of its AC distribution and usage, stays the same,” said MacDonald. “Power can be taken off the HVDC network for use, and put on by generation. To a power provider, let’s say a utility, instead of building a coal plant, they build a connection to the HVDC network. Everything else stays the same.” To test ideas about the most cost-effective means of generating power, MacDonald and his colleagues conducted a complex mathematical analysis that combined finely detailed data on continent-wide weather patterns from 2006 to 2008 with equally detailed data on power demand for the same period. “NOAA folks have known for some time how big weather is,” said mathematician and physicist Christopher Clack of the Cooperative Institute for Research in Environmental Sciences, a collaboration between NOAA and the University of Colorado Boulder. “We built and ran a very sophisticated model that was able to take advantage of [NOAA’s] exceptionally good-quality weather data to look at the situation of the grid, and see if there’s any way of running the grid that would incorporate a really cheap system.” The model was not designed to prioritize low carbon emissions, he said. “We tried to be completely agnostic on which technologies were picked. It turned out the most effective combination we saw was full U.S., 48-state transmission, backed up by gas when solar and wind wasn’t enough.” Using the U.S. Energy Information Administration’s estimate of a 0.7 percent increase in power demand annually between 2015 and 2030, the researchers found that scenarios combining wind, solar, and natural gas power with a nationwide transmission grid cut greenhouse gas emissions from 33 to 78 percent below 1990 levels. If gas was cheaper than solar and wind, the emissions were higher; when renewables beat gas on price, emissions went down. The cost to ratepayers was between $0.086 and $0.10 per kilowatt-hour—comparable to the actual average nationwide cost of $0.094 per kilowatt-hour in 2015 and potentially saving power customers $47.2 billion a year. News Article | May 7, 2015 HOUSTON--(BUSINESS WIRE)--Geospace Technologies (NASDAQ: GEOS) today announced a net loss of $5.2 million, or $0.40 per diluted share, on revenues of $27.9 million for its fiscal quarter ended March 31, 2015. This compares with a net income of $10.8 million, or $0.82 per diluted share, on revenues of $68.6 million for the corresponding quarter in the prior fiscal year. For the six months ended March 31, 2015, the company recorded revenues of $49.1 million and a net loss of $10.6 million, or $0.82 per diluted share. For the comparable period last year, the company recorded revenues of $169.9 million and a net income of $35.0 million, or $2.66 per diluted share. The company noted that its results for the three and six month periods ended March 31, 2015 include the revenue recognition of a $3.0 million non-refundable deposit received from Seafloor Geophysical Solutions AS (SGS) in fiscal year 2014 as a down payment toward the purchase of an OBX system. Due to capital constraints, SGS was unable to take delivery of the system. Walter R. (“Rick”) Wheeler, Geospace Technologies’ President and CEO said, “Depressed market conditions for seismic equipment sales and rentals remained persistent throughout our second quarter. After removing the revenue impact of the SGS deposit, second quarter revenues were sequentially 18% higher than reported in our first quarter; however, when compared to last year’s second quarter, our fiscal year 2015 second quarter revenues fell by $43.6 million or 64%. Adjusted revenues for the six months ended March 31, 2015 declined by $123.8 million or 73% from the same period last year. Comparatively, these year-over-year reductions are a direct consequence of having no performing contracts underway in the current fiscal year for the manufacture of permanent reservoir monitoring (PRM) systems, along with significant lower market demand for all of our other seismic products.” “Traditional seismic product revenues in the second fiscal quarter were $9.6 million, a decrease of $3.5 million or 27% from the previous year. For the six months ended March 31, 2015, revenues were $17.3 million, representing a reduction of $16.2 million or 48% from the prior year period. The revenue decline for both periods is due to unusually large geophone orders that occurred in last year’s first quarter along with much weaker demand in the current year periods for traditional land and marine products in the current seismic industry environment.” “Wireless product revenues of $12.1 million in the second fiscal quarter were similar to those reported for the same period last year. As noted above, we recognized $3.0 million of wireless product revenues in the second quarter in connection with SGS’s inability to take delivery of an OBX system. SGS is continuing their efforts to secure funding for their business plans and, if successful, it may lead to a newly negotiated agreement for the rental or purchase of an OBX system. Excluding the effect of this deposit, our adjusted wireless revenues for the second quarter decreased by $3.4 million or 27% from last year. For the six months ended March 31, 2015, adjusted wireless revenues decreased $43.2 million or 74%. Only 5,400 GSX channels were sold in the first six months of the current fiscal year compared to 77,000 GSX channels in the same period of the prior year. These declines are further evidence of the weak demand for land seismic equipment in today’s market. Amidst these otherwise depressed market conditions, we are actively issuing quotations for our cableless OBX ocean bottom nodal systems and we see increases in the number of applied uses for the OBX and the number of channels utilized in some survey operations. Most of our OBX customers are encountering delays in the awarding of tendered jobs as well as delayed startups for jobs in hand, so there remains some uncertainty for this niche market as it continues to unfold.” “Reservoir product revenues for the second quarter totaled $1.1 million, a decrease of $37.1 million or 97% from last year. For the six months ended March 31, 2015, revenues in this segment were $3.3 million, a drop of $64.1 million or 95% from the previous year. For both periods, the decrease can be mostly attributed to having no contracts underway in the current year for the production of PRM systems. Additionally, our borehole and other reservoir products are also experiencing similar lower demand in the current seismic market. Although we have no PRM contracts currently in hand, we continue to have working discussions with potential customers who are interested in pursuing future PRM systems. We reiterate that no significant revenues associated with PRM contracts are anticipated in fiscal year 2015. However, we believe that our unchallenged expertise, past successes and ongoing research and development in this technology will continue to facilitate significant opportunities for future PRM contracts.” “Despite the depressed conditions in our seismic businesses, we are pleased to report improving profits in our non-seismic businesses. For the three months ended March 31, 2015, our non-seismic businesses reported revenues of $5.0 million and operating income of $0.5 million compared to revenues of $4.8 million and operating income of $0.3 million last year. For the six months ended March 31, 2015, this segment reported revenues of $10.5 million and operating income of $1.4 million compared to revenues of $10.7 million and operating income of $1.0 million last year.” “Broad and ongoing decline in seismic exploration activity has led to a significant reduction in demand for our products. We expect this lowered demand to persist or worsen until our customers see an increase in demand for their seismic exploration services. With low rental fleet utilization and largely curtailed manufacturing activity, gross profits will remain severely challenged by ongoing rental fleet depreciation and fixed factory overhead costs. In coping with these market conditions, we have made adjustments to reduce costs and preserve cash while maintaining critical infrastructure and core competencies within the organization. Factory hours have been cut roughly 60% from a year ago through personnel reductions and other control measures. Plans for further facility consolidation are underway. In addition, both discretionary and planned capital expenditures have been reduced or deferred, including those associated with our Pinemont plant expansion. We further note that significant payments for 2014 property taxes and fiscal year 2014 incentive compensation expenses, together totaling over $10 million, are now behind us. As additional financial strengthening, just this week, we amended and renewed our credit agreement with Frost Bank for a three year period. The amended credit agreement allows us to borrow up to $30.0 million as determined by a borrowing base, whereas our previous agreement significantly restricted our ability to borrow during these difficult market conditions. In consideration of these things, we believe the strength of our balance sheet and the advantages offered by our products and technologies provide us the requisite means to weather the current industry cycle.” Geospace Technologies will host a conference call to review its fiscal year 2015 second quarter financial results on May 8, 2015, at 10:00 a.m. Eastern Time (9 a.m. Central). Participants can access the call at (866) 952-1907 (US) or (785) 424-1826 (International). Please reference the conference ID: GEOSQ215 prior to the start of the conference call. A replay will be available for approximately 60 days and may be accessed through the Investor tab of our website at www.geospace.com. Geospace Technologies Corporation designs and manufactures instruments and equipment used by the oil and gas industry to acquire seismic data in order to locate, characterize and monitor hydrocarbon producing reservoirs. The company also designs and manufactures non-seismic products, including industrial products, offshore cables, thermal printing equipment and film. This press release includes “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. All statements other than statements of historical fact included herein including statements regarding potential future products and markets, our potential future revenues, future financial position, business strategy, future expectations and estimates and other plans and objectives for future operations, are forward-looking statements. We believe our forward-looking statements are reasonable. However, they are based on certain assumptions about our industry and our business that may in the future prove to be inaccurate. Important factors that could cause actual results to differ materially from our expectations include the level of seismic exploration worldwide, which is influenced primarily by prevailing prices for oil and gas, the extent to which our new products are accepted in the market, the availability of competitive products that may be more technologically advanced or otherwise preferable to our products, tensions in the Middle East and other factors disclosed under the heading “Risk Factors” and elsewhere in our most recent Annual Report on Form 10-K and Quarterly Report on Form 10-Q, which are on file with the Securities and Exchange Commission. Further, all written and verbal forward-looking statements attributable to us or persons acting on our behalf are expressly qualified in their entirety by such factors. We assume no obligation to revise or update any forward-looking statement, whether written or oral, that we may make from time to time, whether as a result of new information, future developments or otherwise. News Article | September 30, 2014 Big-name telecom providers and networking manufacturers, like Brocade and Cisco, have joined together under the auspices of the Linux Foundation to help develop a standardized open-source framework for network functions virtualization (NFV). The new organization, called the Open Platform for NFV Project (OPNFV), aims to bring a standard way of using NFV technology to the mainstream so that carriers and other companies can build new high-tech networking products faster. As the OpenDaylight project, which just released the second version of its codebase, aims to bring a uniform standard to software defined networking (SDN) by creating a software controller that everyone can agree upon, OPNFV wants to take it a step further and try to standardize a way of virtualizing the entire network, not just one piece, explained Jim Zemlin, the executive director of Linux. This idea of virtualizing every part of the network, not just the software controller, is what separates NFV from SDN, said Prodip Sen, the board chair of OPNFV and a Hewlett-Packard CTO. During Sen’s time working at his previous job at Verizon, he said he learned how difficult it was for major operators to adopt SDN because of the legacy equipment telcos use to power their vast and complicated networks. Given the complexity and scale of these carriers’ network infrastructure, it would make more sense to virtualize all of the hardware gear in their networks — including load balancers, firewalls and even the gear designed to facilitate networking communication activity like the IP Multimedia Subsystem, which enables voice over IP. “Each [networking capability] represents an expensive hardware component that is difficult to replace and manage,” Zemlin said. “That represents billions of dollars of stuff replaced by software that’s easier to maintain and is far-less costly.” While the idea for NFV was to help carriers create a more well-managed network through software virtualization, the same idea could be used by enterprises with complex infrastructure as well, said Sen. The first task of OPNFV will be to take the many proof-of-concept NFV technology proposals submitted by the participating companies and consolidate them so as to lay the groundwork for a standard NFV platform that can be built upon, said Sen. The organization is hoping that the participating companies’ enthusiasm bleeds over to the development community as a whole, which could then potentially lead to an active open-source community that could also contribute, Zemlin added. The OPNFV will also incorporate the different open-source technology out there that pertains to networking, like the OpenDaylight’s codebase and OpenStack software, said Sen; however, it will be up to the OPNFV community, once it gets going, to determine which pieces of technology fits into the OPNFV framework. OPNFV is gearing for a potential release of its platform by next year, Zemlin said. It’s worth pointing out that while the OPNFV is working on standardizing a networking concept that views software as the answer for streamlining the complexity of networks, software in itself can be difficult to manage and is notoriously error prone; there’s a reason why configuration management vendors like Ansible and Chef as well as upstarts like Docker are important nowadays and that’s because people want to make working with software and IT a less-burdensome task. Currently, Hewlett Packard, China Mobile, Intel, Juniper Networks, Nokia Networks, NEC, IBM and Red Hat are among the 38 members of OPNFV. Given the amount of companies involved, Zemlin is hoping that the greater good of developing a standard will overshadow the vendor interests that could end up dominating the platform. “All of us are smarter than any one of us,” said Zemlin. “Once the snow ball starts rolling, it is unstoppable.” This story has been updated. A new scientific study says that rapidly warming waters off the New England coast have had a severe consequence — the collapse of a cod fishery that saw too many catches even as overall cod numbers declined due to warmer seas. It’s just the latest in a series of findings and occurrences — ranging from gigantic snows in Boston last winter, which scientists partly linked with warm seas, to a sudden and “extreme” sea level rise event in 2009-2010 — suggesting that this particular stretch of water is undergoing profound changes. “2004 to 2013, we ended up warming faster than really any other marine ecosystem has ever experienced over a 10 year period,” says Andrew Pershing of the Gulf of Maine Research Institute, lead author of the new study just out in the journal Science. Pershing conducted the work with researchers from his institution and several others in the U.S. including NOAA’s Earth System Research Laboratory in Boulder, Colo., and Stony Brook University in New York. The paper reports that during the decade-long period in question, the Gulf of Maine, the ocean region extending from Cape Cod northeast to the southern tip of Nova Scotia, warmed up by a stunning 0.23 degrees Celsius per year (0.41 degrees Fahrenheit). That’s faster warming than occurred in 99.9 percent of the rest of the world ocean, the scientists say. [No, global warming is not going to take away your fish and chips] During the same time period, this fishery’s managers did reduce cod quotas, but not enough — presumably because of a lack of realization about the rapidly warming waters and their stark effects on fish. As a consequence, the overall cod stock now stands at just 4 percent of its optimum size. Last November, the National Oceanic and Atmospheric Administration announced sharp restrictions on cod fishing in the area, with harsh consequences for fishing dependent communities like Gloucester, Mass. “The Gulf of Maine cod stock, a historic icon of the New England fishery, is in the worst shape we have seen in the 40 years that we have been monitoring it,” said John Bullard, NOAA Fisheries regional administrator for the greater Atlantic region, at the time. At the center of the new study is a demonstration of just how tightly all of this is related to warm waters. Here’s a figure the researchers created to describe their findings: The effect of warm waters on Atlantic cod likely occurs because of a harmful effect on larvae and juvenile fish. But the scientists say they don’t fully understand whether it is related to changes in cod predators or prey, or simply the temperature itself. Warmer temperatures also pose a metabolic challenge to cod as they reach critical reproductive age. The disaster for the fishery wasn’t caused by temperatures alone, however — it was also caused by how humans failed to take them into account, the researchers charge. “Ignoring the influence of temperature produces recruitment estimates that are on average 100% and up to 360% higher than if temperature is included,” the study authors write. Thus, in effect, cod were overfished because ocean warming wasn’t adequately considered in fishing quotas. The effect has not, to be sure, been the same for all species. Take lobsters, for instance, which are now thriving in the same waters. “They’re the flipside of cod,” says Pershing. “They are booming now, especially off the coast of Maine, and that’s due in part to the fact that there are fewer cod which eat lobsters, but also due to the warmer water, which helps them grow faster.” The consequence of the dramatic downturn for the cod fishery has likely been significant for some fishing communities, although there are no definitive data on the matter, says Pershing’s colleague Jen Levin, who manages the sustainable seafood program at the Gulf of Maine Research Institute. But Levin says that the availability of cod on people’s plates hasn’t changed much, since globally, other fisheries are doing far better, such as in the Bering Sea. “From an industry perspective, seafood is one of the most traded commodities on the planet, so as far as what’s available on the marketplace, you can still find cod, it’s just not from here, it’s from other parts of the world,” says Levin. What’s most intriguing is what is causing the dramatically warm waters — and how this may relate to other observed changes in the region. Clearly, part of the cause is the overall ocean warming trend that has been seen around the globe due to climate change. But at the same time, the researchers say, the warm and salty Gulf Stream has also moved northward over the course of the last century. In late 2011, in fact, there was a dramatic northward movement that led to the warming of some New England lobster traps by more than 6 degrees Celsius, or over 10 degrees Fahrenheit. [Global warming is now slowing down the circulation of the oceans — with potentially dire consequences] This change has also been linked to a warming climate, but for more complex reasons. Among other factors, the northward shift of the Gulf Stream appears tied to a larger change in Atlantic ocean circulation — the slowing of the Atlantic Meridional Overturning Circulation, or AMOC, which carries warm surface water northward and cold water southward at depth, and is driven by differences in temperature and salinity of these waters. “AMOC interacts with the bottom of the ocean and when it slows, the interaction with the bottom causes the Gulf stream to shift north,” says Michael Alexander, one of the study authors and a researcher at NOAA’s Earth System Research Laboratory in Boulder, Colo., by e-mail. “Once the Gulf Stream shifts north some of the warm water it carries is able to work its way into coastal waters, including the Gulf of Maine.” “There are long-term changes in ocean circulation in the North Atlantic, most likely driven by anthropogenic climate change, that have led to a ‘cold blo[b]‘ in the sub-polar central North Atlantic, but might actually be responsible at least in part for the anomalous warmth in the far western North Atlantic,” adds Penn State University climate researcher Michael Mann, who reviewed the new study for the Post. In effect, the idea is that as less warm water moves north into the waters below Greenland, there’s more that can linger off the U.S. east coast. [Why some scientists are so worried about a cold ‘blob’ in the North Atlantic ocean] The consequences of these changes may be affecting far more than cod and the people who fish for them. For instance, a slowing of the AMOC was also recently associated by scientists with a sudden and dramatic 4 inch East Coast sea level rise event in 2009 and 2010. Slowing the circulation is expected to cause U.S. sea level rise because it weakens the contrast between warm waters to the right (or European side) of the Gulf Stream and cooler waters on its left (or American side). Warm water is less dense than cold water. It thus takes up more space. Scientists like Mann have also linked warm ocean temperatures off New England to the dramatic snowfalls that Boston experienced earlier this year — noting that warmer water means there is more moisture in the atmosphere above it. And this moisture, if swept up in a storm, can produce more precipitation. In sum, it’s all part of a bigger picture, Mann says: And research to understand the other consequences of such stark ocean warming in the Gulf of Maine and off of the coast of New England has only begun. “We’re seeing an ecosystem going through a really massive change, and I really want my colleagues to look at this. We need to understand what it means,” says Pershing. How super low natural gas prices are reshaping how we get our power Scientists confirm that East Antarctica’s biggest glacier is melting from below Congressional skeptic on global warming demands records from U.S. climate scientists For more, you can sign up for our weekly newsletter here, and follow us on Twitter here.
<urn:uuid:39ab4e4c-2802-4ba7-8812-c664575cc028>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/earth-system-research-laboratory-157973/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00537-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951304
5,623
3.296875
3
The acknowledgement bit in a TCP packet. (ACKnowledgment code) - Code that communicates that a system is ready to receive data from a remote transmitting station, or code that acknowledges the error-free transmission of data. ARCNET is a local area network (LAN) protocol, similar in purpose to Ethernet or Token Ring. ARCNET was the first widely available networking system for microcomputers and became popular in the 1980s for office automation tasks. It has since gained a following in the embedded systems market, where certain features of the protocol are especially useful. Brute-force search is a trivial but very general problem-solving technique, that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement. For example, a brute force password attack would attempt to discover the password to a secure service by trying all known passwords one after another. Error in a program that cause problems. The CA is an authority trusted by one or more users to issue and manage certificates. The CA is the security solution for conducting business on the Internet. The CA ensures that electronic transactions are conducted with confidentiality, data integrity, proper user authentication, and protection against repudiation. or CTR is a way of measuring the success of an online advertising campaign. A CTR is obtained by dividing the number of users who clicked on an ad on a web page by the number of times the ad was delivered (impressions). For example, if your banner ad was delivered 100 times (impressions delivered) and one person clicked on it (clicks recorded), then the resulting CTR would be 1 percent. CVSS refers to the Common Vulnerability Scoring System and is a vendor-neutral, industry standard that conveys vulnerability severity and helps determine urgency and priority of response. It solves the problem of multiple, incompatible scoring systems and is usable and understandable by anyone. The CVSS can be understood form the CVSS Base Vectors and CVSS Temporal Vectors CVSS vectors containing only base metrics take the following form: The letters within brackets represent possible values of a CVSS metric. Exactly one option must be chosen for each set of brackets. Letters not within brackets are mandatory and must be included in order to create a valid CVSS vector. Each letter or pair of letters is an abbreviation for a metric or metric value within CVSS. These abbreviations are defined below. Metric: AV = AccessVector (Related exploit range) Possible Values: R = Remote, L = Local Metric: AC = AccessComplexity (Required attack complexity) Possible Values: H = High, L = Low Metric: Au = Authentication (Level of authentication needed to exploit) Possible Values: R = Required, NR = Not Required Metric: C = ConfImpact (Confidentiality impact) Possible Values: N = None, P = Partial, C = Complete Metric: I = IntegImpact (Integrity impact) Possible Values: N = None, P = Partial, C = Complete Metric: A = AvailImpact (Availability impact) Possible Values: N = None, P = Partial, C = Complete Metric: B = ImpactBias (Impact value weighting) Possible Values: N = Normal, C = Confidentiality, I = Integrity, A = Availability CVSS vectors containing temporal metrics are formed by appending the temporal metrics to the base vector. The temporal metrics appended to the base vector take the following form: Metric: E = Exploitability (Availability of exploit) Possible Values: U = Unproven, P = Proof-of-concept, F = Functional, H = High Metric: RL = RemediationLevel (Type of fix available) Possible Values: O = Official-fix, T = Temporary-fix, W = Workaround, U = Unavailable Metric: RC = ReportConfidence (Level of verification that the vulnerability exists) Possible Values: U = Unconfirmed, Uc = Uncorroborated, C = Confirmed Dynamic Host Configuration Protocol (DHCP) is a communications protocol that lets network administrators manage and automate the assignment of Internet Protocol (IP) addresses in an organization's network. DHCP allows devices to connect to a network and be automatically assigned an IP address. The process of identifying a program error and the circumstances in which the error occurs, locating the source(s) of the error in the program and fixing the error. A device is a group of target for scan IP Addresses and/or domains. The procedure of allocating temporary IP addresses as they are needed. Dynamic IP's are often, though not exclusively, used for dial-up modems. The person who uses a program after it's been compiled and distributed. Ethernet is a frame-based computer networking technology for local area networks (LANs). The name comes from the physical concept of ether. It defines wiring and signaling for the physical layer, and frame formats and protocols for the media access control (MAC)/data link layer of the OSI model. Ethernet is mostly standardized as IEEEs 802.3. It has become the most widespread LAN technology in use during the 1990s to the present, and has largely replaced all other LAN standards such as token ring, FDDI, and ARCNET. Provides a standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). The FDDI protocol uses as its basis the token ring protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As a standard underlying medium it uses optical fiber (though it can use copper cable, in which case one can refer to CDDI). FDDI uses a dual-attached, counter-rotating token-ring topology. type of file system. File Transfer Protocol. This is the language used for file transfer from computer to computer across the WWW. An anonymous FTP is a file transfer between locations that does not require users to identify themselves with a password or log-in. An anonymous FTP is not secure, because it can be accessed by any other user of the WWW. In Simple words, the protocol used on the Internet for exchanging files. FTP uses the Internet's TCP/IP protocols to enable data transfer. FTP is most commonly used to download a file from a server using the Internet or to upload a file to a server (eg, uploading a Web page file to a server. An access method in HTTP. The visual symbols and choices to control a program. Most GUI's use windows, menus, and toolbars. Most operating systems use GUI's because most users are uncomfortable with a less user friendly interface like a command line. is the daily server vulnerability assessment and certification service that delivers essential, real time verification of your security credentials directly to your website customers. HTTP (Hypertext Transfer Protocol) is the foundation protocol of the World Wide Web. It sets the rules for exchanges between browser and server. It provides for the transfer of hypertext and hypermedia, for recognition of file types, and other functions. The Internet Protocol (IP) is a data-oriented protocol used by source and destination hosts for communicating data across a packet-switched internetwork. An IP address is a numeric address that is used to identify a network interface on a specific network or subnetwork. Every computer or server on the Internet has an IP address. It is a unique number consisting of four parts separated by dots. For example, 22.214.171.124. The address contains two pieces of information : the network portion, known as the IP network address, and the local portion, known as the local or host address. A company or organization that provides the connection between a local computer or network, and the larger Internet. Internet Message Access Protocol'. IMAP is a method of distributing e-mail. It is different from the standard POP3 method in that with IMAP, e-mail messages are stored on the server, while in POP3, the messages are transferred to the client's computer when they are read. Thus, using IMAP allows you to access your e-mail from more than one machine, while POP3 does not. This is important because some email servers only work with some protocols. Software/hardware that detects and logs inappropriate, incorrect, or anomalous activity. IDS are typically characterized based on the source of the data they monitor: host or network. A host-based IDS uses system log files and other electronic audit data to identify suspicious activity. A network-based IDS uses a sensor to monitor packets on the network to which it is attached. An information security exposure is a mistake in software that allows access to information or capabilities that can be used by a hacker as a stepping-stone into a system or network. In cryptography, an algorithm's key space refers to all possible keys that can be used to initialize it. Put in its most simplistic terms, the possibilities in the series A,B,C...Z represent a much smaller key space than AAA,AAB,AAC...ZZZ. A well-designed cryptographic algorithm should be highly computationally expensive when trying to brute-force through all possible key values. A tarpit is a computer entity that will intentionally respond slowly to incoming requests. The goal is to delude clients so that unauthorized or illicit use of a fake service might be logged and slowed down. Note that some purists do not really consider a tarpit to be a honeypot, though it is certainly a fake information system resource that can delay any incoming aggressors. For example, to fight off spammers, some people run tarpits that look like open mail relays, but instead answer very slowly to SMTP commands. These are layer 7 tarpits. Other known tarpits are those that play with the TCP/IP stack in order to hold the incoming client's network socket open while forbidding any traffic over it. The Labrea Tarpit is an excellent example that plays with the TCP/IP stack and has been used to slow down the spread of worms over the Internet. To achieve this tarpit state, iptables accepts an incoming TCP/IP connection and then immediately switches to a window size of zero. This prohibits the attacker from sending any more data. Any attempt to close the connection is ignored because no data can be sent by the attacker to the target. Therefore the connection remains active. This consumes resources on the attacker's system but not on the Linux server or the firewall running the tarpit. A local area network (LAN) is a computer network covering a small local area, like a home, office, or small group of buildings such as a home, office, or college. Current LANs are most likely to be based on switched Ethernet or Wi-Fi technology running at 10, 100 or 1,000 Mbit/s (1,000 Mbit/s is also known as 1 Gbit/s). Short for Media Access Control address, a hardware address that uniquely identifies each node of a network. A concise, bulleted list of actions that you need to take to achieve PCI compliance. Network News Transfer Protocol - Refers to the standard protocol used for transferring Usenet news from machine to machine. A protocol is simply a format used to transfer data to two different machines. A protocol will set out terms to indicate what error checking method will be used, how the sending machine will indicate when it is has finished sending the data, and how the receiving machine will indicate that it has received the data. Netstat is a command-line tool that displays a list of the active network connections the computer currently has, both incoming and outgoing. It is available on Unix, Unix-like, and Windows NT-based operating systems. Networking is the scientific and engineering discipline concerned with communication between computer systems. Such networks involves at least two computers, which can be separated by a few inches (e.g. via Bluetooth) or thousands of miles (e.g. via the Internet). Computer networking is sometimes considered a sub-discipline of telecommunications. Nessus is a comprehensive open-source vulnerability scanning program. It consists of nessusd, the Nessus daemon, which does the scanning, and nessus, the client, which presents the results to the user. NIDS - Network-Based Intrusion Detection System. Detects intrusions based upon suspicious network traffic. A network intrusion detection system (NIDS) is a system that tries to detect malicious activity such as denial of service attacks, port-scans or even attempts to crack into computers by monitoring network traffic. Nmap is free port scanning software designed to detect open ports on a target computer, determine which services are running on those ports, and infer which operating system the computer is running (this is also known as fingerprinting). It has become one of the de-facto tools in any network administrator's toolbox, and is used for penetration testing and general computer security. The essential software to control both the hardware and other software of a computer. An operating system's most obvious features are managing files and applications. An OS also manages a computer's connection to a network, if one exists. Microsoft Windows, Macintosh OS, and Linux are operating systems. Open Vulnerability and Assessment Language (OVAL) is an international, information security community baseline standard for how to check for the presence of vulnerabilities and configuration issues on computer systems. OVAL standardizes the three main steps of the process: - collecting system characteristics and configuration information from systems for testing; - testing the systems for the presence of specific vulnerabilities, configuration issues, and/or patches; - presenting the results of the tests. "OVAL-ID Compatible" means that a Web site, database, archive, or security advisory includes both of the following: - OVAL-IDs used as references for security issues. - The capability is searchable by OVAL-ID. While it is important to the OVAL and information security communities that these types of capabilities include references to OVAL-IDs, for example, "OVAL8127", for the testing of the issues that they describe to their customers in their advisories, databases, etc., verbatim replication of OVAL definitions is not encouraged because any changes in the definition by the original author may not be brought forward to the copied version in a timely manner. For this reason, the capability must reference only OVAL-IDs and not the text of the definitions in order to be considered OVAL-ID compatible. Additionally, the ability to search through collections is required for a capability to be considered OVAL-ID compatible. The Payment Card Industry Data Security Standards ( PCI DSS ) are a set of 12 regulations developed jointly by Visa, MasterCard, Discover and American Express to prevent consumer data theft and reduce online fraud. Compliance with these standards is mandatory for any organization that stores, transmits or processes credit card transactions. Payment Card Industry (PCI) Compliance is an initiative which is being strongly enforced by the four major credit card companies (Visa, MasterCard, Discover and American Express). Currently, being PCI compliant means that YOU are in compliance with the four major credit card companies. Ping is a computer network tool used to test whether a particular host is reachable across an IP network. A program that allows a Web browser to display a wider range of content than originally intended. For example: the Flash plugin allows Web browsers to display Flash content. There are two versions of POP. The first, called POP2, became a standard in the mid-80's and requires SMTP to send messages. The newer version, POP3, can be used with or without SMTP. POP3 is the abbreviation for Post Office Protocol - a data format for delivery of emails across the Internet. Privacy Enhanced Mail (PEM) is a standard for message encryption and authentication of senders. A control bit (reset), occupying no sequence space, indicating that the receiver should delete the connection without further interaction. The receiver can determine, based on the sequence number and acknowledgment fields of the incoming segment, whether it should honor the reset command or ignore it. In no case does receipt of a segment containing RST give rise to a RST in response. A message format used by DOS and Windows to share files, directories and devices. NetBIOS is based on the SMB format, and many network products use SMB. These SMB-based networks include Lan Manager, Windows for Workgroups, Windows NT, and Lan Server. There are also a number of products that use SMB to enable file sharing among different operating system platforms. Simple Mail Transfer Protocol is the de facto standard for e-mail transmission across the Internet. SMTP is a relatively simple, text-based protocol, where one or more recipients of a message are specified (and in most cases verified to exist) and then the message text is transferred. Simple Network Management Protocol. The network management protocol used almost exclusively in TCP/IP networks. SNMP provides a means to monitor and control network devices, and to manage configurations, statistics collection, performance, and security. Secure Sockets Layer is commonly used protocol for managing the security of a message transmission on the internet. Sockets refers to the sockets method of passing data back and forth between a client and a server program in a network or between program layers in the same computer. SSL uses the public- and private-key encryption system, which includes the use of a digital certificate. SYN (synchronize) is a type of packet used by the Transmission Control Protocol (TCP) when initiating a new connection to synchronize the sequence numbers on two connecting computers. The SYN is acknowledged by a SYN/ACK by the responding computer. An IP address which is the same every time you log on to the Internet. See IP for more information. TCP stands for Transmission Control Protocol. TCP is one of the main protocols in TCP/IP networks. Whereas the IP protocol deals only with packets, TCP enables two hosts to establish a connection and exchange streams of data. TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent. LAN technology was developed and promoted by IBM in the early 1980s and standardised as IEEE 802.5 by the Institute of Electrical and Electronics Engineers. Initially very successful, it went into steep decline after the introduction of 10BASE-T for Ethernet and the EIA/TIA 568 cabling standard in the early 1990s. A fierce marketing effort led by IBM sought to claim better performance and reliability over Ethernet for critical applications due to its deterministic access method, but was no more successful than similar battles in the same era over their Micro Channel architecture. IBM no longer uses or promotes Token-Ring. Madge Networks, a one time competitor to IBM, is now considered to be the market leader in Token Ring. A person who uses a computer, including a programmer or end user. How the user controls a program. Perhaps the simplest UI is a keyboard and command line, to enter text commands. Sometimes called a "console". In network security, a vulnerability refers to any flaw or weakness in the network defense that could be exploited to gain unauthorized access to, damage or otherwise affect the network. The term Web server can mean one of two things: 1. A computer that is responsible for accepting HTTP requests from clients, which are known as Web browsers, and serving them Web pages, which are usually HTML documents and linked objects (images, etc.). 2. A computer program that provides the functionality described in the first sense of the term. Wildcards are symbols that add flexibility to a keyword search by extending the parameters of a search word. This can help if you are not certain of spelling, or only know part of a term, or want all available spellings of a word (British and American English, for example). * stands for one-or-more characters (useful for all suffixes or prefixes), # stands for a single character, and ? stands for zero-to-nine characters. Short for World-Wide Web. It is a global information space which people can read-from and write-to via a large number of different Internet-connected devices.
<urn:uuid:c138cd22-1f34-48e5-b76d-4da8d6e978ab>
CC-MAIN-2017-04
https://www.hackerguardian.com/help/glossary.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00355-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913942
4,234
2.734375
3
|To turn on the device encryption, here are the simple steps:| About a year ago since iOS 8 was released, Apple has included a new device encryption feature. The meaning behind this was to protect and secure device data from hackers, thieves, and also the government agencies. After the company was wrongly accused of cooperating with the US government’s PRISM surveillance program, Apple wanted to be sure that it couldn’t be forced to hand over data in their customers’ devices. So, Apple wanted to give their customers encryption keys to their device’s data. Most iPhones and iPads are running the latest iOS 8 are probably already protected but some reports show that about one-third have never set a four-digit passcode before. To turn on the device encryption, here are the simple steps: Go to your iPad or iPhone settings - From the home screen you can usually find your settings, go there. - Then go to Touch ID & Passcode (iPhone 5 and older will just say “Passcode”) - Turn on passcode - Once in passcode area, check to make sure the passcode is on and you have strong passcode. - Once passcode is on, you are set! - Going back to the setting menu, scroll down to the bottom page and you will see “Data protection is enabled” This now means your device is encrypted and nobody else can gain access to the data on the device.
<urn:uuid:d0167043-3717-4869-95a0-ef7040afb0c9>
CC-MAIN-2017-04
http://www.bvainc.com/how-to-enable-ipad-iphone-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947995
307
2.578125
3
Cloud computing has become mainstream in today’s HPC world. Although there is no consensus on the definition of cloud computing, it is typically perceived as a set of shared and scalable commodity computing resources that are geographically located throughout the world and are available on-demand over the web. There has been a great amount of confusion over whether cloud computing is a new infrastructure or the same old HPC that we know, wrapped in a new name as a marketing gimmick. Review of the literature points to the fact that a large section of the academic community still debates this question. Buyya et al. argue that cloud computing appears to be similar to grid computing at a cursory glance. However, a closer observation presents a different case. Armburst et al. supports the claim of Buyya et al., adding that the cloud computing platform uniquely provides an illusion of infinite available resources. Lee advocates the difference by employing the case of Hurricane Katrina in 2005 to conclude that the only answer to the scientific and operational grand challenge problem is enormous computer power. However, it is not economically possible to dedicate the required amount of resources for this single purpose. Therefore resources must be shared and available on-demand, the platform should be scalable on-demand, and resources should be easily accessible in a user friendly way over the web. The grid computing platform or any other large compute cluster cannot adapt to these guidelines. Foster et. al presents a comprehensive comparison of the grid computing and the cloud computing platforms. The authors recognize the similarity in the two platforms in terms of the vision and challenges, but the authors also make a solid case to differentiate the two platforms in terms of scale of operation. The authors agree that the more massive scale being offered by the cloud computing platform can demand fundamentally different approaches to tackle a gamut of problems. Such confusion has hampered the curious nature of researchers to explore cloud computing. With a vague assumption that there aren’t any challenges that have not been previously posed by various distributed computing platforms such as compute-clusters and grid computing, many of the HPC researchers have not been motivated enough to explore the newer research challenges and opportunities in offering computing as a utility. Furthermore, the lack of universal development standards for cloud computing platforms mandate the eScience developers to rewrite their respective applications from scratch for every cloud offering. Although cloud computing has many aspects closely similar to the traditional parallel and distributed computing platforms, it poses a new set of its own challenges. Traditional large-scale computing resources were not targeted at enabling end-users to rent compute hours with provisioning time being in minutes. On the contrary, cloud computing facilitates experimentation with an idea on a massive platform without investing the capital in owning the resources. Therefore, it has the potential to target a much bigger set of users not necessarily familiar with the parallel or distributed computing aspects. In order to enable the HPC researchers who currently work with large distributed computing systems, but do not work with cloud computing, to bring their expertise to cloud computing, it is essential to provide them with easier means of applying their knowledge. One way of doing this is by allowing them familiar frameworks from a traditional HPC setting. If all cloud platforms supported frameworks and runtimes such as BSP, MPI, and Map-reduce, it would have been so much easier for their adoption. Our research concentrated on the problem of bringing the frameworks such as bag-of-tasks and MPI to cloud platforms. Our implementation on Microsoft’s Azure cloud platform provided some positive results. We were able to create (simple) applications from scratch and deploy them in less than 2 hours. The lines of code were about 200-300 lines that includes the file handling, processing, and reporting. These applications were not very complex but they serve as proofs of concept that such frameworks can help motivate developers to create applications for cloud platforms. We did not beat the performance of the Azure APIs with our frameworks, but that was never the vision. We have increased the programmer productivity by manifold without sacrificing the performance. Our results show performance at par with applications that employ native Azure APIs. Our major contribution is that it is now easier for anyone to write an MPI-style application on the Azure platform without learning a single Azure API and without understanding the idiosyncrasies of the Azure cloud platform. We envision future research to concentrate on unifying the theme of cloud computing by offering seamless portability among various cloud vendors, rich set of resources to suit a large user base (multi-core, many-cores etc.), better resource management, faster time to provision resources, and improved debugging interface. About Dinesh Agarwal: Dinesh Agarwal recently graduated with a Ph.D. from Georgia State University. He is currently pursuing a career as an entrepreneur working with both HPC in the cloud and to solve a pesky problem that bothered him as a student with Bookup. You can find him on LinkedIn, @Twitter, or simply email him at dinwal at gmail dot com.
<urn:uuid:1691254d-90fb-4aba-90d9-339ba394fbaf>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/06/10/hpc_in_the_cloud_old_wine_in_new_bottles/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948936
1,019
2.625
3
New research application for humidity monitoring: Pipe organs Tuesday, Mar 12th 2013 University researchers know all about the necessity of humidity monitoring, as having non-ideal environmental conditions can wreak havoc on an experiment and make its results worthless. In a somewhat unusual take on this paradigm, one professor at Ohio's Oberlin College has applied this principle to restoring classic pipe organs. In an effort to discover why and how historic pipe organs corrode over time, Oberlin chemistry professor Catherine Oertel and others around the world are turning to humidity monitoring and other environment control systems, the Cleveland Plain Dealer reported. Pipe organs, large piano-like instruments that feature dozens of large metal tubes, have long been a staple of Europe's musical and religious tradition. However, due to their size and the number of parts used, old pipe organs are prone to breakage. Considering that even the smallest fissure can dramatically affect an instrument's sound, even minor issues can be majorly devastating. The Plain Dealer reported that while modern replacement parts can easily be installed on an antique organ, some scoff at this fix because it alters the instrument's sound and ruins its historic appeal. "All these incremental changes take away from what was heard and composed for by historic composers," Oertel said, according to the news source. How humidity monitoring helps In order to get a better idea of what was causing problems with historic pipe organs, researchers from around the world have implemented humidity monitoring solutions to determine exactly where issues were first appearing and what the likely culprit was. For starters, Oertel and others were able to determine that most issues were related to internal structural damage. In particular, the newspaper noted that as internal metalwork was exposed to various levels of humidity over time, certain corrosive physical qualities appeared in the pipes. While external humidity was one source for this moisture, researchers discovered another surprising one: the wood used to make the instrument. As a natural material, the wooden frame used was found to be a source of moisture that was destroying pipe organs from the inside, the source reported. As fresh supplies and modern glues were used to repair old organs, the amount of internal humidity-related damage observed by researchers increased dramatically. "We have a quite strong correlation between these problems and restoration and repairs 10, 20, 30 years ago, where new wood was introduced into the organ," said Carl Johan Bergsten, an organ players and a research engineer at the University of Gothenburg's Organ Art Center in Sweden, according to the newspaper. Thanks to the research team's findings, pipe organ owners and players are now equipped with more actionable knowledge to keep their beloved instruments free from harm. For one, churches and other locations should use humidity monitoring to keep a close eye on internal conditions. In addition, the Plain Dealer reported that organists should not use humidifiers and that oak in particular should be avoided when building or repairing an instrument. Oertel told the newspaper that while there used to be hundreds of pipe organs across Western and Central Europe, their numbers are diminishing rapidly. By using humidity monitoring technology, preservationists and music lovers can better maintain a piece of history.
<urn:uuid:08b29412-d52d-4be0-ae6f-819be7a2d16d>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/new-research-application-for-humidity-monitoring-pipe-organs-402845
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00016-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954766
647
2.890625
3
A large number of Americans still fail to use basic Internet security tools and there remains a substantial gap between the protections people think they have and what is actually installed on their computers, according to a new cyber security study released by the National Cyber Security Alliance (NCSA) and Symantec, makers of Norton security software. The NCSA-Symantec Online Safety Study found that more than 80 percent of Americans claim to have a firewall -- designed to prevent hackers and criminals from stealing personal information -- installed on their computer. Yet, in reality only 42 percent had adequate firewall protection according to the study. Americans do seem to have heeded the computer virus warnings as 95 percent of those checked had anti-virus software installed. "As we begin National Cyber Security Month, this national study of America's cyber security protections provides us with a critical baseline of understanding of how we conduct ourselves and protect ourselves online," said NCSA Executive Director Michael Kaiser. "Great strides have been made but our citizens, economy and national infrastructure will remain at unnecessary risk until every computer user in America has anti-virus, anti-spyware and firewall software on their computers." On the bright side, users' perceptions matched closely with reality in the realm of anti-spyware software. The study revealed virtually no difference between the percentage of Americans who had anti-spyware software installed (82 percent) and the percentage that said they had it installed (83 percent). Still, close to one-fifth of all users are not running adequate spyware defenses. Spam filters, however, were a different story. Seventy-five percent of poll respondents said they were using spam filters, compared to only 52 percent who had them installed to prevent unwanted e-mail. While many Americans still struggle to understand basic cyber security tools and practices, they do recognize that security is a major issue. Only 26 percent of Americans polled said they felt their computers were "very safe" from viruses, and only 21 percent said their computers were "very safe" from hacker attacks. "We must redouble our efforts to ensure that Americans know how to use all of the tools necessary to protect their computers, themselves and their families from harm," Kaiser said. "Too often, cyber security has been made to seem complicated and inaccessible. We want to help all Americans get to the point where following basic cyber security practices become as natural as looking both ways before crossing the street.
<urn:uuid:91bf09bc-6231-4df7-8872-8f40ee3ad848>
CC-MAIN-2017-04
http://www.govtech.com/security/Americans-Fail-to-Use.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00438-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972693
494
2.78125
3
Environments will become more aware, responsive and connected Tweet this By 2025, a range of objects, from cars to refrigerators to coffee cups, will be instrumented with unique identifiers like RFID chips, computation and communication systems to connect to the Internet. The diffusion of sensors, communication devices and processing power into everyday objects and environments will activate the previously stagnant environment into an aware, responsive and informed world. Ad Hoc Network Commutes Cohda Wireless designs systems that will allow cars to form ad hoc mesh networks while on the road. Cars within those networks will communicate critical safety information such as their speed and direction and link up with roadside sensor nodes and eventually a larger cloud-based intelligence, crucial to the development of self-driving cars. Revealing the invisible Lapka is a tiny personal environment monitor that connects your phone to measure, collect and analyze the hidden qualities of your surroundings. Lapka’s sensors respond to the invisible world of particles, ions, molecules and waves. But Lapka doesn’t just quantify what it measures. It provides results that are specific to the present conditions, empowering people to make more informed decisions. As objects become embedded with sensing capacities and connect to the Internet and each other, our environments will become substantially more transparent and responsive. Our homes, and the objects inside them, will fundamentally change as they become networked and connected. For instance, our bed could anticipate when we will wake up and then pass that information on to the coffee maker to brew a fresh pot before work. Cyberspace will become an overlay on top of our existing reality. Most of the physical spaces in our lives are shared by at least two people—in most cases, many more than that. In environments with the power to give us granular information and respond to our needs and desires, people will place different demands on space. If we’re not mindful about how we optimize our spaces, we could inadvertently perpetuate inequality or create new forms of discrimination. As data streams continuously form ubiquitous sensors, and location-based technologies and online platforms unlock latent value in people, places, and things, opportunities abound for promising new services and systems. For a system, whether for traffic patterns or infectious disease tracing, to succeed, it must be open and participatory. To give us seamless experiences and to avoid being locked into fragmented ecosystems, our devices and the software running on them, must be able to share information with other objects and systems. As more people, program and devices participate, the value of each component expands exponentially. Your data will be bought and sold on an open economyLearn more » Information will become a sensory experience.Learn more » New tools will put privacy controls into consumers' hands.Coming soon Decisions will be made by artificial intelligence.Coming soon
<urn:uuid:0d21c3cd-75c2-4d4c-a5ce-4a9aa6534dfe>
CC-MAIN-2017-04
https://www.emc.com/information-generation/networked-ecosystems.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00438-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926118
576
3.234375
3
Apache Spark - an open source project - is an application framework for doing highly iterative analysis that scales to large volumes of data. Through its powerful engine and tooling, Apache Spark significantly lowers the barrier to entry for building analytics applications. This brief introduces the Apache Spark platform and explains how it can be used to create analytics applications based on machine learning. Phil Muncaster reports on China and beyond Jon Collins’ in-depth look at tech and society Kathryn Cave looks at the big trends in tech
<urn:uuid:9dc6abbb-8e08-40cb-b562-dad08bfb79c3>
CC-MAIN-2017-04
http://www.idgconnect.com/view_abstract/35014/the-next-wave-intelligent-applications-powered-apache-spark
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00254-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891172
105
2.640625
3
(config-if)#ip address 10.1.17.0 255.255.240.0 What will appear in the routing table? - A static route to 10.1.16.0. - A static route to 10.1.17.0. - A connected route to 10.1.16.0. - A connected route to 10.1.17.0. The correct answer is 3. The routing table will have a connected route to 10.1.16.0. It is connected directly to the router; it is not a static route via another router. The router calculates that the given address is on subnet 10.1.16.0/20; it has host bits 0001 00000000.
<urn:uuid:86eb9865-ec83-457c-bfa1-be34c94b2950>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/06/28/ccna-exam-prep-question-of-the-week-10/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00162-ip-10-171-10-70.ec2.internal.warc.gz
en
0.824384
159
3.140625
3
The Real Wakeup Call From HeartbleedThere's nothing special about Heartbleed. It's another flaw in a popular library that exposed a lot of servers to attack. The danger lies in the way software libraries are built and whether they can be trusted. In case you live under a rock, a serious security flaw was disclosed last week in the widely used OpenSSL library. On a threat scale of 1 to 10, well known security expert Bruce Schneier rated it an 11. Essentially, an attacker can send a "heartbeat" request that tricks the server into sending random memory contents back to the attacker. If the attacker gets lucky, that memory contains interesting secrets like passwords, session IDs, Social Security numbers, or even the server’s private SSL key. Peeking under the covers of the software world This is not yet another article discussing how to check yourself for Heartbleed, how to remediate it, or trying to figure out how this could have possibly happened. I’m more interested in whether we learn anything from this wakeup call, or whether we just hit the snooze button again. There was nothing particularly special about Heartbleed. It’s just another flaw in a popular library that exposed a lot of servers to attack. Let’s take a look at how these libraries are built and ask whether they can be trusted with our finances, businesses, healthcare, defense, government, energy, travel, relationships, and even happiness. Libraries are eating the world In 2011, Marc Andreessen, who founded Netscape, wrote an excellent essay, "Software Is Eating the World," in which he describes how whole industries, like photography, film, marketing, and telecom are being devoured by software-based companies. I credit the widespread availability of powerful libraries with enabling developers to create incredible software much more quickly than they could on their own. In fact, new tools that provide what is known as “automated dependency resolution” allow libraries to build on other libraries, magnifying the “standing-on-the-shoulders of giants” effect. Today, there are 648,740 different libraries in the Central Repository, a sort of open-source clearinghouse where developers can download software components for use in their applications. A typical web application or web service will typically use between a few dozen and a few hundred of these components. Remember, all of these components have the ability to do anything that the application can do. A component that is supposed to draw buttons is capable of accessing the database. A math library is capable of reading and writing files on the server. So, a vulnerability in any of these libraries can expose the entire enterprise. The zero-assurance software supply chain You can think of all this code as a sort of supply chain for software. Modern applications are assembled from components, custom business logic, and a lot of "glue" code. In the real world, supply chain management is used to ensure that components used in making products actually meet certain standards. They come with material data safety sheets, test results, and other ratings. This whole process is managed to ensure that the final product will work as expected and be safe to use But there is no assurance in today’s software supply chain. There are plenty of security features, but that’s not assurance. Assurance evidence comes from activities that tell you if the defenses are any good. Direct evidence is derived from verification or testing of the application itself. Indirect evidence tells you about the people, process, and technology that created the code. Wouldn’t it be nice if it were possible to choose components based on whether the project takes security seriously and can prove it? Today, that’s impossible. There simply is no framework for capturing and communicating assurance. Don’t hate the playa – hate the game It’s tempting to think that Heartbleed is an isolated incident created by a single developer mistake. In fact, Theo de Raadt, the founder of OpenBSD, writes that wrongheaded attempts to improve performance prevented standard security protections from working, and concludes that “OpenSSL is not developed by a responsible team.” I don’t believe in blaming a team of volunteer developers who build software and give it away for free. Actually, I’d like to take this opportunity to thank the OpenSSL team for its hard work and offer my support. Our challenge is how to help all software projects to be more like OpenBSD, whose security page provides considerably more evidence than most projects. It’s time to admit it – we have a library security problem Please don’t misunderstand. This isn’t about open or closed source. I am a huge supporter of open-source. I’ve written it, donated my work, and run a large international open-source foundation for years. Open-source has the opportunity for a better assurance case, but it’s just not good enough to say that something is secure solely because the source is available. The fact that a bug like #heartbleed can exist for years without being discovered is all the proof you should need. There are three serious kinds of problems with libraries that everyone should be concerned about: - Known vulnerabilities. These are problems discovered by researchers and disclosed to the public. All you have to do is make sure you monitor and keep up with the latest versions of your libraries. Read more… - Unknown vulnerabilities. These are the latent problems that have not yet been discovered or disclosed publically. For these, you should select libraries written by teams with the best assurance case, including evidence about design, implementation, process, tools, people, and testing. Would you trust your business to them? - Hazards. These are powerful library features that have a legitimate use, but can expose your enterprise if used incorrectly. For these, developers need guidance on using the library safely. Look for libraries that provide guidance on safe use. Unfortunately, the information required to address the library security challenge isn’t widely available. That means architects and developers can’t make informed decisions about what components to include in applications. I think we all need to do a better job of asking software projects to provide the assurance evidence we need. As software continues to eat the world and becomes even more critical in everyone’s lives, we will either figure out a way to generate assurance and communicate it to those who need it, or we’ll keep making bad choices and experiencing increasingly damaging breaches. The FDA "Nutrition Facts" label was initially scoffed at and took decades to become popular. What do you think? Would a “Software Facts” label catch on? A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control ... View Full Bio
<urn:uuid:639bf453-e03d-4d71-98ad-2ccb8e4ca989>
CC-MAIN-2017-04
http://www.darkreading.com/the-real-wakeup-call-from-heartbleed/d/d-id/1204487?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948406
1,438
2.515625
3
Passwords have been a weakness of network security since the development of computer networks. Through guessing weak passwords, exploiting weak passwords, acquiring passwords through social engineering, or more recently using malicious software like Advanced Persistent Threats (APT), attackers have focused on compromising passwords to gain access to the network. The traditional approach to defending against password attacks has focused on user awareness training, ever increasing password complexity requirements, certificate based authentication, and multi-factor authentication. Defenses that rely on the user are often subject to apathy, non-compliance from the user, and lack of enforcement of company policies that render them ineffective. Two-factor authentication technologies have suffered from poor adoption because of high costs, resistance from the user community, and in some cases, vulnerabilities in the two-factor technology that attackers can exploit. Current trends in APT malware have targeted both password collection and two-factor authentication, which have further reduced their effectiveness. [HELP IS ON THE WAY: 15 free security tools you should try] Further complicating the job of protecting the network is an explosion in mobile devices requiring access anywhere, and a strong focus on international business. The days of having a contained network that only uses company-managed devices on secured networks are largely over. Today's network is global, persistent across devices, and must be available to the user from any device at any location. If the organization does not provide this capability, in most cases the user will work around the organization. Defending user access to network resources in today's information requires a defense-in-depth approach that consists of understanding the company's risk tolerance, understanding the company's user base, and deploying technology solutions that align with the users and the business. The first step in developing an effective defense is to understand how the company uses the network and what the expectations for usage are. This requires the network architect to go beyond what is written in the policy documents and observes what users are actually doing. An effective approach to identify this is to meet with non-IT business staff and discuss how they use technology. Additionally, walking around business locations can provide great insight into how people are using technology. Many IT departments that have "banned" mobile devices or remote access from home are surprised to find that users bring their own devices in spite of policies. Understanding how employees use technology to do their jobs is also essential. The requirements for a sales department may be much different than those of a data entry clerk. Manufacturing personnel may already be using unapproved devices through their tendency to solve technical problems and get the job done. Finally, understanding the culture of the organization will help determine what technology is acceptable. Are users free roaming creative professionals that stress art over function? Are the users very conservative and professional? Each of these could drive very different solutions. At the end of the day, if the user does not accept the technology, they will find ways around it. Today, technical solutions to protect the network beyond passwords fall back to two classic concepts in information security that are "least privilege" and Authentication Authorization & Accounting (AAA). All technical mechanisms must take the approach of allowing the least amount of access that users need to do their job, make reasonably sure the users are who they say they are, make sure they are assigned access to limited resources, and their activities are accounted for and anomalies are identified. Least privilege must be applied based on more than the user's identification. Different levels of access should be applied based on the type of device being used to access the network, when the network is accessed, and where the network is being accessed from. User access profiles should be developed for the most common access scenarios that users utilize to access the network. For example most organizations will have the following categories (most to least secure): " User on the internal network on a managed device " User on the external network on a managed device " User on the external network on a non-managed device " User on the internal network on a non-managed device Each of these categories should be assigned a set of resources that they are allowed to access, which could include restrictions to certain server or services. Unmanaged devices should be directed to services that provide abstract access that limit the volume of activity a user can access. For example, a Citrix Xen App or Microsoft Terminal Services access could be allowed to limit the amount of information an attacker could retrieve from the network. Access controls should be designed to contain a compromised account to the least amount of access and the least amount of data loss possible. This concept can be extended to internal network segmentation to protect sensitive internal networks such as process control, financial and manufacturing systems. Technologies such as Network Admission Control, SSL VPN with posture assessment, Mobile Device Management (MDM), and virtual desktop/application presentation applications have matured to a point where they provide network designers effective tools to control network access. The network should be designed in a way that leverages the technologies to provide users the least privilege while at the same time enabling them to leverage technology. Most network vendors are heavily focused at integrating these technologies into their products. Implementing least privilege is designed under the assumption an account will be inevitably compromised. Even though a compromised account should be expected, steps should be taken to reduce the probability of a compromise occurring and detecting abuse as rapidly as possible. Classic password policies and user awareness training provide a basic level of protection that most organizations will need to implement. Password policies should be implemented in a way that is accepted by the user base. Requiring overcomplicated or frequently changing passwords in most cases will result in users repeating passwords or writing them down. Multi-factor authentication is another line of defense that can be implemented to protect authentication. While effective in reducing risk, most organizations limit multi-factor to external access to the network due to the cost of the technology and limited user acceptance of the technology. Organizations should focus on deploying multi-factor authentication for systems that provide external access to sensitive applications or massive amounts of data. It should be remembered that no multi-factor authentication method is invincible, but is another tool to reduce risk. Password authentication is a weakness that we will have to live with for the foreseeable future. But through defense-in-depth security architectures that address authentication as a holistic system of people, processes and technologies, a company's risk can be reduced. Reducing risk to a level that allows the organization to function in the most efficient way possible should be the goal of all network and security professions. Alexander Open Systems (AOS) the premier systems integrator in the Midwest. Read more about wide area network in Network World's Wide Area Network section. This story, "Securing the Network Beyond Passwords" was originally published by Network World.
<urn:uuid:51739974-fa9d-4294-ace4-eb191402f4b5>
CC-MAIN-2017-04
http://www.cio.com/article/2387902/security0/securing-the-network-beyond-passwords.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00466-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94079
1,377
2.515625
3
All right all you Star Wars fans, the day you’ve been long waiting for may be within sight. Admit it, you have pretended to slay the villains with a lightsaber, making that vuuuummmm whhhnnnn sound. Do you still have a plastic toy lightsaber stashed in the back of a closet? Do you get a little too psyched when your kids want you to play with theirs? If any of that is true, this news is for you. A team of physicists from MIT and Harvard have teamed up to study light and challenge accepted theories about it. In the process, they’ve come to understand the physics of lightsabers. OK, they haven’t actually built a lightsaber, made famous as the weapon of choice for Jedi Knights in the Star Wars franchise, yet. But this is a big step toward the day when they can. Let me try to break it down for you. Photons, which are the fundamental particles of light, have long been thought to not be able to interact with each other. Wave two laser beams at each other and they’ll simply pass through each other. No vuuuummmm. No whhhnnnn. No clashing weapons. However, this group of scientists from the Harvard-MIT Center for Ultracold Atoms have figured out how to coax photons to bind together to form molecules that behave less like light and more like lightsabers. “Most of the properties of light we know about originate from the fact that photons are massless, and that they do not interact with each other,” said Mikhail Lukin, a professor of physics at Harvard, in a statement. “What we have done is create a special type of medium in which photons interact with each other so strongly that they begin to act as though they have mass, and they bind together to form molecules.” Scientists had theorized about this for a while, noted Lukin. This, though, is the first time they’ve been able to observe it. “It’s not an inapt analogy to compare this to light sabers,” he added. “When these photons interact with each other, they’re pushing against and deflecting each other. The physics of what’s happening in these molecules is similar to what we see in the movies.” So how did scientists make this happen? It would be so much fun to say the Force was with them. Sadly no. Instead, they pumped rubidium atoms into a vacuum chamber, then used lasers to cool the cloud of atoms to just a few degrees above absolute zero, Harvard explained. They then shot single photons into the atom cloud. The photons, shooting through the cloud, affect the atoms it touches, causing them to slow dramatically. That energy is passed from atom to atom. And when scientists fired two photons into the cloud, they exited it as a single molecule, according to Harvard. While working on someday building a lightsaber is a lot of fun, the research also has some potential applications in the computer science field. Lukin said the work could affect the way they build quantum computers or how chip makers deal with power-dissipation challenges. “What it will be useful for we don’t know yet,” he added. “But it’s a new state of matter, so we are hopeful that new applications may emerge as we continue to investigate these photonic molecules’ properties.”
<urn:uuid:5faa723a-9886-4c4b-8685-c1f96d37847c>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475060/emerging-technology/scientists-getting-closer-to-building-star-wars-like-lightsabers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00466-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951047
734
3.109375
3
The competitor is not a skier or a figure skater -- or a person at all, but a new technology for wireless broadband Internet access developed in South Korea, called WiBro. And the race in question? How fast a country can provide its population with high-speed Internet access. After ranking as high as third worldwide in 2000, the United States dropped to 16th last year for its number of high-speed Internet subscribers per capita, according to the International Telecommunication Union, with 11.4 broadband subscribers per 100 inhabitants. South Korea, the global leader, has 24.9 subscribers per 100 inhabitants, trailed by Hong Kong, the Netherlands, Denmark, Canada and Switzerland, which all have at least 17 subscribers per 100 inhabitants. Updated data is expected soon for 2006, and some predict the United States will fall out of the top 20. The WiBro technology on display in Turin by Samsung Electronics Co. can transmit 30 megabits per second to a wireless tablet in what eventually could be a residential service. "They're already on to the next generation," says Jonathan Taplin, a professor at the Annenberg School for Communication at the University of Southern California. Other countries may have higher percentages of people using broadband because of their dense populations, which makes it easier to build the necessary network infrastructure, but Canada's strong showing proves that government policy is another reason the United States has begun to lag behind.
<urn:uuid:fac0a707-24ed-4291-9faf-9f5a713bb093>
CC-MAIN-2017-04
http://www.networkcomputing.com/wireless/americans-online-slow-lane/866074746
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00466-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942644
287
2.78125
3
First off, what is it? Well, for those of you who may not know, Ruby on Rails is an open source Web frame work that has been around since 2003. It was first developed by David Heinemeier Hansson and has since gone on to be used in thousands of Web applications such as Basecamp, Twitter and Github. Ruby on Rails has released their newest versions of their software. These iterations, versionRails 3.2.18, 4.0.5 and 4.1.1, are available for download from their website as of May 6, 2014. The reason for this update is a vulnerability that is found in CVE-2014-0130 which is a directory traversal vulnerability that affects all previous versions of Ruby on Rails. From the advisory: The implicit render functionality allows controllers to render a template, even if there is no explicit action with the corresponding name. A This module does not perform adequate input sanitization which could allow an attacker to use a specially crafted request to retrieve arbitrary files from the rails application server.A In order to be vulnerable an application must specifically use globbing routes in combination with the :action parameter. A The purpose of the route globbing feature is to allow parameters to contain characters which would otherwise be regarded as separators, for example '/' and '.'. A As these characters have semantic meaning within template filenames, it is highly unlikely that applications are deliberately combining these functions.A To determine if you are vulnerable, search your application's routes files for '*action' You can download the latest version or Ruby on Rails here. This story, "Ruby on Rails Security Update Available" was originally published by CSO.
<urn:uuid:0e1604d4-56dd-46c2-a3a0-65c88de842fc>
CC-MAIN-2017-04
http://www.cio.com/article/2376480/security0/ruby-on-rails-security-update-available.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00006-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942146
349
2.515625
3
CATV, namely Cable TV is also known as community antenna television. In addition to bringing television programs to those millions of people throughout the world who are connected to a community antenna, cable TV will likely become a popular way to interact with the World Wide Web and other new forms of multimedia information and entertainment services. CATV generally use coaxial cable to transmit TV programs, and actually the fibers bring a lot of benefits in data communication in CATV systems. In Cable TV, channels are divided into a number of frequencies and modulated over a single cable enabling the cable operator to propagate and distribute many channels on a fiber optic and coaxial cable direct-to-home. CATV works by spreading TV channels, FM radio, Data services and Telephony over a single wire. - CATV services enable the viewers to choose from a list of TV shows, such as movies, sports or other preferences etc. and watch according to their wish. - There is no overbuying of channels with using CATV. Because CATV is more suitable for those subscribers who wants only a few favorite channels because CATV allows the users to choose their favorite channels instead of paying for many unwanted channels which come in a package as a whole. - CATV has made possible telephony services along the same cable. - Though CATV remains uninterrupted due the presence of the coaxial cables and optic fibers, picture quality may sometime be affected. However, it is hardly affected by bad weather. - No converter needed - Easy installation - Using many amplifiers that reduce signal quality and not easy to repair - Having EMI distortion - Using Tree-Branch Structure - Not easy to expand
<urn:uuid:21b512b5-58d1-4181-9078-bbfa25ce123b>
CC-MAIN-2017-04
http://www.fs.com/blog/catv-cable-tv.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00520-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943551
344
2.8125
3
This course teaches you how to discover components that might cause a performance problem in the WebSphere infrastructure. WebSphere Application Server for z/OS provides a Java2 Enterprise Edition (J2EE) runtime environment for Enterprise JavaBeans (EJB), along with servlets and JavaServer Pages (JSP) in web applications. The course begins by showing you how to log on to z/OS, find the WebSphere components, and determine the current system status. The course then describes where WebSphere should fit among its dependent products such as DB2, IMS, and CICS, and introduces you to memory and subsystem considerations. After familiarizing you with the overall environment, the course presents a series of tuning demonstrations on the sample z/OS infrastructure. The demonstrations cover topics such as z/OS workload management, adding goals and report capability to the WebSphere cell, and gathering data about the cell by using resource monitoring. You then learn how to use the collected data to determine processor usage per transaction within the WebSphere cell. Finally, you learn how to install and use the IBM Support Assistant and the svcdump.jar tool. You see how to use the IBM Support Assistant to gather and analyze garbage collection data, and create a service dump of the WebSphere address spaces with svcdump.jar. You then learn how to use svcdump.jar to print the threads from the dump and examine what was happening at the instant the dump was taken.
<urn:uuid:ef8ce5cc-f280-467e-b69f-4cec6c71e5e8>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/118940/websphere-application-server-for-zos-performance-tuning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00062-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887643
300
2.5625
3
NASA is making final preparations to launch a robotic probe in early September to study the moon and its atmosphere. Scientists hope the information will help them better understand Mercury, asteroids and the moons orbiting other planets. "The moon's tenuous atmosphere may be more common in the solar system than we thought," said John Grunsfeld, NASA's associate administrator for science. "Further understanding of the moon's atmosphere may also help us better understand our diverse solar system and its evolution." The probe, nick named LADEE for Lunar Atmosphere and Dust Environment Explorer, is set to blast off from NASA's Wallops Flight Facility on Wallops Island, Va. at 11:27 p.m. ET Sept. 6 -- two weeks from today. LADEE will lift off on board a U.S. Air Force Minotaur V rocket, which started out as a ballistic missile but was converted into a space launch vehicle. The robotic probe, which is about the size of a small car, will orbit the moon for an expected four to five-month mission. About a month after launch, the spacecraft will enter a 40-day test phase. During the first 30 days of that period, LADEE will be focused on testing a high-data-rate laser communication system. If that system works as planned, similar systems are expected to be used to speed up future satellite communications. After that test period, the probe will begin a 100-day science mission, using three instruments to collect data about the chemical makeup of the lunar atmosphere and variations in its composition. The probe also will capture and analyze lunar dust particles it finds in the atmosphere. This mission will be the first to launch a spacecraft beyond Earth orbit from NASA's Virginia Space Coast launch facility. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org. This story, "To the Moon Or Bust! NASA Preps to Launch Lunar Probe" was originally published by Computerworld.
<urn:uuid:b028544f-d9c3-474a-af44-e7c1438006d6>
CC-MAIN-2017-04
http://www.cio.com/article/2383062/government/to-the-moon-or-bust--nasa-preps-to-launch-lunar-probe.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00576-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916431
446
3.921875
4
This resource is no longer available BlackBerry NFC Security Overview Near Field Communications (NFC) is a short-range wireless technology that is used in applications required to use 4 centimeters or less to transfer information – allowing users to send/receive signals with a simple tapping motion. This technology is commonly seen in access cards to buildings, or to pay by tapping a credit card at cash registers. Tune into this webcast to learn about the NFC capabilities of BlackBerry smartphones – giving users the ability to share and exchange information simply by tapping the devices together. Uncover what NFC allows the smartphones to do – from using Bluetooth technologies to sharing documents, pictures, and web content. Learn more about this technology and the security, IT policies, and applications controls that are available on your BlackBerry.
<urn:uuid:a52dd30f-cb3d-49e4-881c-f4d0c80e28ff>
CC-MAIN-2017-04
http://www.bitpipe.com/detail/RES/1332189292_32.html?asrc=RSS_BP_TERM
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89979
159
2.671875
3
CoPP – Control Plane Protection or better Control Plain Policing. It is the only option to make some sort of flood protection or QoS for traffic going to control plane. In the router normal operation the most important traffic is control plain traffic. Control plane traffic is traffic originated on router itself by protocol services running on it, destined to other router device on the network. In order to run properly, routers need to speak with each other. They speak with each other by rules defined in protocols and protocols are running in shape of router services. Examples for this kind of protocols are routing protocols like BGP, EIGRP, OSPF or some other non-routing protocols like CDP etc.. When router is making BGP neighbour adjacency with the neighbouring router, it means that both routers are running BGP protocol service on them. BGP service is generating control plane traffic, sending that traffic to BGP neighbour and receiving control plane traffic back from the neighbour. Usage of Control Plane Protection is important on routers receiving heavy traffic of which to many packets are forwarded to Control Plane. In that case, we can filter traffic based on predefined priority classes that we are free to define based on our specific traffic pattern.
<urn:uuid:9d3239b8-9b33-4ca4-a119-cd72d4fd1567>
CC-MAIN-2017-04
https://howdoesinternetwork.com/tag/copp
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95135
250
3.234375
3
decryption The process of recovering a plaintext from a ciphertext using a private key. deterministic A function or algorithm is deterministic if the output is uniquely determined by the input. Compare to randomized. encoding method A normally randomized operation applied to a message before the encryption primitive is applied. Originally, the most widely employed encoding methods were quite simple padding operations (one example is the method in PKCS #1 v1.5). Nowadays, encoding methods (for example, OAEP) tend to be more sophisticated and are designed with a specific security goal in mind. encryption The process of transforming a plaintext into a ciphertext using a public key. The encryption process is required to be one-way in the strong sense, and it can be either deterministic or randomized. hash function See cryptographic hash function. mask generation function (MGF) A pseudo-random function taking a bit string of any length (and the desired length of the output) as input and returning a new bit string of desired bit length. In theoretical models, MGFs are treated as random oracles. In practice, mask generation functions are often based on a secure cryptographic hash function such as SHA-1. private key Private part of the key pair. Only the owner of the key pair is allowed to see the private key. The private key is used to decrypt ciphertexts obtained via encryption of plaintexts with the public key. randomized A function or algorithm is randomized if the output depends not only on the input but also on some random element. For example, the output from OAEP is a function of the input and a random seed. Compare to deterministic. EM = [ H([P||M] + G(S)) + S ] || [ [P||M] + G(S) ], where S is a randomly generated seed, P is some padding, and G, H are mask generation functions. M is easily derived from EM, but it is difficult to predict anything nontrivial about EM from M without knowing S. (+ denotes bitwise addition and || denotes concatenation of strings; the length of the output from G is equal to the length of [P||M], while the length of the output from H is equal to the length of S.) private exponent A large integer denoted d, part of the RSA private key. Satisfies m = med (mod n), where e is the public exponent. public exponent A small integer denoted e, part of the RSA public key. Often a prime close to a power of 2, for example, 3, 5, 7, 17, 257, or 65537. RSADP The RSA decryption primitive. Takes a ciphertext representative c and outputs a plaintext representative m, where m = cd (mod n); RSAEP The RSA encryption primitive. Takes a plaintext representative m and outputs a ciphertext representative c, where c = me (mod n); RSAES-OAEP Public-key encryption scheme combining the encoding method OAEP with the encryption primitive RSAEP. RSAES-OAEP takes a plaintext as input, transforms it into an encoded message via OAEP and apply RSAEP to the result (interpreted as an integer) using an RSA public key. Note that the explanations given in this section are very brief and incomplete. Unless otherwise stated, we consider a public-key encryption scheme consisting of a first step where the message to be encrypted is encoded (e.g., via OAEP) and a second step where an encryption primitive (e.g., RSAEP) is applied to the result. The encoding method consists of one or a few applications of a mask generation function, which we idealize as a random oracle. All security considerations are in a complexity-theoretic sense rather than in an information-theoretic sense. adaptive chosen ciphertext attack A chosen ciphertext attack where the adversary is allowed to send queries to a decryption oracle before as well as after she is given the challenge ciphertext (except that she is not allowed to ask for the decryption of the challenge ciphertext after she is given it). adversary Deterministic or randomized algorithm aiming to break a cryptographic scheme or parts of it. (Referred to as feminine on these pages due to the popular interpretation of the adversary as 'Eve' or 'Carol'; for similar reasons, the sender ('Alice') is feminine, while the receiver and holder of the private key ('Bob') is masculine.) chosen ciphertext attack An attack where the adversary is given access to a decryption oracle. There are two major kinds of chosen ciphertext attacks, adaptive attacks and indifferent attacks. Normally the goal in both these attacks is to find the decryption of a challenge ciphertext, but see also non-malleability. chosen plaintext attack The most primitive kind of attack where the adversary only knows the public key. Using the public key, the adversary is able to construct as many plaintext-ciphertext pairs as she wants, but she is not allowed to ask for the decryption of ciphertexts. Normally the goal of the attack is to determine the decryption of challenge ciphertext, but see also non-malleability. computationally indistinguishable Two algorithms are computationally indistinguishable if there is no efficient adversary who is able to distinguish between outputs from the two algorithms with better chance than 1/2. decryption oracle An oracle decrypting ciphertexts for an adversary. It is not always required that the decryption oracle output the correct plaintext, but its outputs must be computationally indistinguishable from the correct outputs. (In particular, if the scheme is deterministic, then the decryption oracle must output the correct plaintext.) encryption oracle An oracle encrypting plaintexts for an adversary. Such an oracle might appear as pointless, since any reasonable adversary is able to encrypt messages herself. However, in some theoretical models (e.g., consider plaintext awareness), the concept is useful as it allows us to model attack scenarios where an adversary is able to intercept ciphertexts via eavesdropping. In such instances, the adversary does not need any random oracle queries and random seed needed to get messages encrypted - a subtle distinction that may be of crucial importance for the model. indifferent chosen ciphertext attack A chosen ciphertext attack where the adversary is not allowed to send queries to the decryption oracle after she has been given the challenge ciphertext. This type of attack is sometimes referred to as a "lunch-break" attack. non-malleability In chosen plaintext and chosen ciphertext attacks, instead of asking for the encryption of a challenge ciphertext, one may ask for new ciphertexts "related" (in a way specified by the adversary) to the challenge ciphertext. For example, the adversary may try to output a new ciphertext such that the bitwise sum of the two underlying plaintexts is equal to a particular (nonzero) value. If any adversary is successful only with a negligible probability, then the encryption scheme is non-malleable ("tamper-resistant"). The concept is related to (but far from equivalent to) plaintext awareness. Trivially, non-malleability implies security against the corresponding ordinary attack (i.e., if an adversary is able to decrypt challenge ciphertexts, then the scheme is vulnerable to this malleability attack as well). one-way A function or algorithm H, deterministic or randomized, is one-way if the task of inverting it is infeasible. Inverting means finding a solution to an equation of the form H(x) = y in x, where y is randomly chosen. RSAEP is widely believed to be one-way (given that the private key is kept secret), as are well-trusted cryptographic hash functions such as SHA-1. There are stronger notions of one-wayness, for example partial one-wayness. one-way, partially A concept introduced in reference . A function is partially one-way with respect to some parameter k if the first k bits of the input cannot be determined from the output. The terminology is somewhat confusing, because partial one-wayness is a stronger - not weaker - concept than ordinary one-wayness. A result that is crucial for the security of RSAES-OAEP is that the concepts are equivalent for reasonably large values on k for the encryption primitive RSAEP; see . one-way, stronger notions of Ordinary one-wayness is not sufficient for a cryptographic scheme to be secure; a one-way function may give outputs that leak partial information about the corresponding inputs. For this reason, a stronger notion of one-wayness is often desirable. In such instances, one typically requires that the output do not leak any useful information about the input (other definitions appear in the literature). Encryption schemes and mask generation functions must be one-way in this strong sense, and any reasonably versatile cryptographic hash function should have this property as well. For encryption primitives, however, ordinary one-wayness is sufficient, given that the primitive is combined with an encoding method with certain properties. oracle A "sub-component" S of an adversary A living its own life independent of the adversary; A interacts with the oracle but cannot control its behavior. Typically, S takes some parameters as input and outputs some other parameters (such as a bit string). For example, S can be a random oracle or a decryption oracle simulating the decryption primitive. plaintext awareness, strong A concept introduced in . Consider a scheme that is secure against chosen plaintext attacks. We are given an adversary who knows the public key and who is given access to a random oracle and an encryption oracle. All her queries to the random oracle are recorded, as are all outputs from both oracles. The goal for the adversary is to construct a valid ciphertext, but it must not be in the recorded list of outputs from the encryption oracle, and somebody who is given the recorded information along with the public key must not be able to decrypt the ciphertext. If the adversary fails with overwhelming probability, then the scheme is (strongly) plaintext-aware. Plaintext awareness implies security against adaptive chosen ciphertext attacks. Compare to weak plaintext awareness. plaintext awareness, weak A concept introduced in . Similar to strong plaintext awareness, except that the adversary is not given access to an encryption oracle. In particular, she will have to record more random oracle queries than an adversary in the stronger model in order to get the same amount of information. This difference is subtle but makes the adversary considerably weaker. Weak plaintext awareness implies security against indifferent chosen ciphertext attacks but not against adaptive attacks. random oracle An oracle taking as input an element and returning a completely random and unpredictable element chosen from a certain set (for example, the set of all possible bit strings of a certain length). If the random oracle is supposed to simulate a deterministic function (such as a mask generation function), then it must output the same element every time it is given a certain input.
<urn:uuid:c9d8e36f-99e1-4608-a9a8-e30385c94c2e>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/historical/rsaes-oaep-dictionary.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00052-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90266
2,321
3.625
4
Despite the fact that the cloud has increased enterprise sustainability, data center facilities around the globe are expected to consume 19 percent more energy in 2012 than in 2011, according to new research from Datacenter Dynamics. This is a bit concerning, considering that 29 percent of data center operators fear that there will not be enough energy to meet the growing demands of today’s high-consumption data centers within the next three years. This is a scary fact, especially because data centers have established themselves as the backbone of our digital world. In response to this increasing demand for energy, data center operators are gearing up for a green revolution. This revolution consists of more than just using clean fuels, though. Data center operators must build their facilities in an energy-efficient manner and implement technologies that help decrease the amount of energy needed to run a facility. On top of that, industry bodies must set strict standards to help enterprises select the greenest facilities, encouraging green practices for both suppliers and customers. With these changes, data center operators will be able to conserve energy and reduce the environmental and economic cost of data centers around the world. Working From the Ground Up Starting at the most basic level, energy-efficient data center design can greatly enhance facilities’ sustainability. Modular design, for instance, enables data center operators to optimize energy use by only building to suit immediate operational needs, and then expanding the space, power and cooling in accordance with demand. This approach drastically increases utilization rates and builds on the basis of immediate need, rather than projected consumption. A prime example of the effectiveness of modular design is in how data centers plan for growth. Often a new data center is built to meet forecasted space requirements by adding extra square footage and servers before they are needed. But these additional servers must be cooled, lit and powered up without actually being used. This wasted utilization has been estimated to cost $19 billion each year. Modular design, on the other hand, takes a phased approach to data center growth, allowing companies to expand as needed and avoid wasting power or space. Heating and Cooling Best Practices for the Data Center On top of initial design elements, many best practices can be implemented to optimize the efficiency and sustainability of a data center facility. One growing practice is recycling a data center’s hot air, which is produced in mass quantities around the clock. Although waste heat from servers cannot be recycled within a data center, it can be redistributed as hot air to the local community to reduce the amount of heat needed by surrounding neighborhoods. Data centers throughout Europe and the U.S. are already implementing clever ways to use waste heat throughout the community in the form of central heat, heated pools and warm arboretums. This strategy lowers energy usage and costs to the local community, providing a method to recycle the large amount of waste energy that data centers produce. To offset all of the heat that is produced by the thousands of servers in each data center, most data centers also require a strong cooling system. More energy is needed for cooling purposes than for actual data storage and processing, however, making this a very expensive process. Rather than constantly running the air conditioning system, many data center operators are turning to fans as a means of keeping servers cool. Even better, new research from The Green Grid, a non-profit consortium to improve the resource efficiency of data centers and business computing ecosystems, has revealed that direct and indirect use of natural cooling, such as fresh air in locations with cool and mild climates, is increasingly common in data centers. In fact, nearly half of data centers use some natural air method to cool units. With the implementation of fans and natural cooling, data center operators can realize energy savings of 50 percent on average throughout the year. Alternate Energy Sources Regardless of how many heating or cooling recycling programs a data center implements, however, there will always be a need for energy. Rather than purchasing energy produced from dirty sources such as coal, data center operators can elect to purchase sustainable energy from clean sources, including tidal, hydro, wind and geothermic energy. Geothermic energy may in some cases be ideal for data centers because, even in mass quantities, it releases zero emissions. With clean energy sources such as geothermic energy, data centers can reduce the toll that their energy production takes on the environment. But we still must be conscious of the impact of widely-used geothermic energy, as nothing is unlimited. Therefore, even with the implementation of clean energy sources, data centers must aim to lower energy consumption and streamline data center operations. Standards Must Pave the Way The final piece of the data center green revolution is industry standardization. Even though data centers around the world have integrated many sustainable best practices into their facilities, industry-wide standards and measurement tools are necessary to regulate and improve efficiency. The Power Usage Effectiveness (PUE) ratio is the most widely accepted data center energy-efficiency metric. Developed by The Green Grid, PUE is designed to quantify the ratio between the overall energy used divided by the IT energy used. If the industry is able to slowly lower what is considered a “good” PUE for data centers, data center facilities will be able to considerably reduce the amount of energy they consume and work toward achieving a PUE of 1.0—the best PUE value possible, representing 100 percent efficiency. Since the introduction of PUE measurement, there is already greater awareness of the proportion of power required to support data center functions versus operations. Interxion started measuring and improving its energy ratio since 2003, and such industry awareness has slowed the increase in energy usage by data centers, as demonstrated by Stanford professor Jonathan G. Koomey. In 2009, the average PUE for data centers was 2.0, with a 100 percent increase in energy expected by 2011. Instead, the average data center PUE today is 1.8, indicating that sustainable practices are already making vast improvements to data center facilities. Where Do We Go From Here The large data centers built today are much more efficient than the first ones that were built many years ago. With the growing array of sustainable practices and technologies to help optimize energy usage and costs, combined with industry standards, the Green Revolution is at the doorstep of the data center industry. About the Author Lex Coors is Vice President of the Data Center Technology and Engineering Group at Interxion. Lex has supervised the design, build and upgrade of more than 55,000m² of data center space in 28 locations in 11 countries. During the past 25 years, he has built exceptionally strong credentials in the design of versatile, cost-effective and energy-efficient data center infrastructure. Lex has pioneered several new approaches to data center design and management, including the improvement of power ratio efficiency between server load and transformer load, and the industry’s first ever modular approach to data center architecture. Lex is a founder member of the Uptime Institute, a member of the European Commission DG Joint Research Committee on Sustainability and the European Data Center Code of Conduct Metrics Group. He also acts as Liaison Officer for The Green Grid in their collaboration with the European Commission, and he was a member of the Executive Advisory Board for the Uptime Institute’s recent symposium in New York, “Data Center Efficiency & Green Enterprise IT.” Photo courtesy of miyukiutada.
<urn:uuid:dd5b64bc-b9bb-45ab-a18a-b7532e4fa213>
CC-MAIN-2017-04
http://www.datacenterjournal.com/how-to-bring-the-green-revolution-to-your-data-center-door/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942347
1,509
3.25
3
Manufacturing Breakthrough Blog Friday April 29, 2016 In my last post we completed our series of posts on variation by discussing the basics of a Queuing System, two very important “laws of variability” and finally, a ten point summary of primary points, principles, and conclusions relative to understanding variability. These ten included: - Variability always degrades performance. - Variability buffering is a fact of manufacturing life. - Flexible buffers are more effective than fixed buffers. - Material is conserved. - Releases are always less than capacity in the long run. - Variability early in a line is more disruptive than variability late in a line. - Cycle time increases nonlinearity in utilization and efficiency. - Process batch sizes affect capacity. - Cycle times increase proportionally with transfer batch size. - Matching can be an important source of delay in assembly systems. In today’s post, I will present the first of three posts on a subject I refer to as Paths of Variation along with a real case study to demonstrate the teachings of paths of variation. Paths of Variation We’re all familiar with the positive effects of implementing Cellular Manufacturing (CM) in our workplaces such as the improved flow through the process, overall cycle time reduction, throughput gains as well as other benefits. But there is one other positive effect that can result from implementing CM that isn’t discussed much. This potential positive impact is what CM can do to reduce variation. But before we reveal how this works, let’s first discuss the concept of paths of variation. When multiple machines performing the same function are used to produce identical products, there are potentially multiple paths that parts can take from beginning to end as we progress through the entire process. There are, therefore, potential multiple paths of variation. These multiple paths of variation can significantly increase the overall variability of the process. Even with focused reductions in variation, real improvement might not be achieved because of the number of paths of variation that exist within a process. Paths of variation, in this context, are simply the number of potential opportunities for variation to occur within a process because of potential multiple machines processing the parts. And the paths of variation of a process are increased by the number of individual process steps and/or the complexity of the steps (i.e. number of sub-processes within a process). The answer to reducing the effects of paths of variation should lie in the process and product design stage of manufacturing processes. That is, processes should/must be designed with reduced complexity and products should/must be designed that are more robust. The payback for reducing the number of paths of variation is an overall reduction in the amount of process variation and ultimately more consistent and robust products. Let’s look at a real case study. Many years ago I had the opportunity to consult for a French pinion manufacturer located in Southern France. For those of you who are not familiar with pinions (i.e. pignons in French), a pinion is a round gear used in several applications: usually the smaller gear in a gear drive train. Here is a drawing of what a pinion might look like and as you might suspect, pinions require a complicated process to fabricate. When our team arrived at this company, based on our initial observations, it was very clear that this plant was being run according to a mass production mindset. I say this because there were many very large containers of various sized pinions stacked everywhere. The actual process for making one particular size and shape pinion was a series of integrated steps from beginning to end as depicted in the figure below. The company received metal blanks from an outside supplier which were fabricated in the general shape of the final product. The blanks were then passed through a series of turning, drilling, hobbing, etc. process steps to finally achieve the finished product. The process for this particular pinion was highly automated with two basic process paths, one on each side of this piece of equipment. There was an automated gating operation that directed each pinion to the next available process step as it traversed the entire process which consisted of fourteen (14) steps. It was not unusual for a pinion to start its path on one side of the machine, move to the other side and then move back again which meant that the pinion being produced was free to move from side to side in random fashion. Because of this configuration, the number of possible combinations of individual process steps, or paths of variation, used to make these pinions was very high. In my next post, we’ll introduce you to the multiple paths of variation that these pinions could traverse and discuss how these paths can significantly increase the overall variability of processes. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond. Until next time.
<urn:uuid:94ae9d5c-aa4f-4147-8593-dece609ddd79>
CC-MAIN-2017-04
http://manufacturing.ecisolutions.com/blog/posts/2016/april/paths-of-variation-part-1.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958675
1,016
2.65625
3
Before the world became digital, the automotive industry heavily relied on dummies, cadavers, animals and even human volunteers to perform crash testing. Thanks to the power of computing, these less reliable physical crash tests are being replaced with more humane and accurate methods such as virtual simulation. But as cars become more complex and powerful, how accurate can simulations predict the injury outcome of a virtual crash? Very accurate with Big Data! Jaguar Land Rover (JLR) is an example of an automotive company that leveraged the power of Big Data to fuel its simulation operations to deliver Motor Trend’s SUV of the Year for 2012 – the Range Rover Evoque. By plugging EMC Isilon storage into JLR’s Computer Aided Engineering (CAE) infrastructure, engineers can quickly collaborate and iterate across massive amounts of detailed data for superior product design and more accurate crash testing. As a result, JLR was able to deliver a high quality product in a short time frame – all within a small carbon footprint compared to other automotive manufacturers. This is one of many Big Data stories EMC and Intel share through a new Internet show called “At The Intersection“. The goal of the show is to showcase trailblazing new technologies and the fascinating people driving them. Ken Jennings, all-time Jeopardy! champion, hosts the show’s discussion with the technology gurus driving innovation within their organization. Click here to watch the first show with JLR and to participate in a live Twitter conversation. Click here, for more technical information on JLR’s Big Data use case.
<urn:uuid:a73e8655-b146-4838-8040-37d3a8ae5b95>
CC-MAIN-2017-04
http://bigdatablog.emc.com/2012/09/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00172-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930005
331
2.5625
3
New site aims to teach visitors all about cookies. A new website has been launched by Google to help Internet users understand more about cookies, files stored on computers that remember websites visited to help targeted ads find the right place. The site, CookieChoices.org, also provides a code that website developers can implement into their own websites, which will notify visitors when they are being tracked and what information cookies can reveal about their browsing habits. The notifications will come in the form of pop-up alerts, and the site is designed to meet EU internet laws that demand website owners to give their visitors information about how their cookies are being used. The EU’s cookie directive was implemented on May 26, 2012, and means that website publishers have to get their visitors consent before placing a cookie on their machine. The landing page for the new website reads: "We offer two basic tools for websites. The first tool will create a splash screen, which you may wish to use for your landing page. The second tool can be used to overlay a notification bar on your landing page. If you decide that a splash screen or a notification bar are the right approach for your site, you are welcome to use the tools provided here. The notifactions can also be made visible for visitors outside of the EU unless website owners impose their own geographic restrictions. The website comes amidst a landmark ‘right to be forgotten‘ case in the EU which ruled that people should be able to ask Google to erase information about them that they want to be forgotten from the search engine’s results, even if that information is published on third party sites.
<urn:uuid:fa01785f-59f2-42c2-90f7-38777d37f858>
CC-MAIN-2017-04
http://www.cbronline.com/news/social-media/google-turns-on-transparency-website-for-the-eu-4307275
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00172-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95056
334
2.515625
3
Consider the following programming test: this data file defines a maze with diagonal walls made up of sections with a length the square root of 2. In 45 minutes, design and implement a solution in Pascal to count the number of enclosed areas and the area of the largest enclosed space. Got it? Ready... set... go! If you were able to do it, congratulations! That means you could pass the Google interview process. It also means that you have the programming skills of a Vietnamese eleventh grader. That's according to Google engineer Neil Fraser, who recently visited schools in that country and came away quite impressed with the commitment to teaching computer science starting at an early age. As he wrote earlier this month, it all starts early. In third grade they learn how to use Windows, in fourth grade they learn Logo commands and loops and by fifth grade they can write procedures. He witnessed a class of eleventh graders take the above programming test, which he said is on par with some of the questions during a Google interview. Most of the students had no trouble completing the task in the time allotted. He concluded that, “There is no question that half of the students in that grade 11 class could pass the Google interview process.” While the decision to start teaching computer science at an early age is relatively recent in Vietnam, it already puts them well ahead of the average student in the United States, which Fraser bemoans. In fact, he goes on to paint a pretty bleak picture of computer science education in the United States, saying that eleventh grade students in the U.S. have trouble with HTML tags. He attributes the lack of computer science education in the U.S. to: School districts not wanting to devote resources away from traditional subjects (e.g. English), so as not to threaten funding Lack of teachers qualified to teach computer science Students not wanting to learn computer science and be labeled “geeks” “The result in America is a perfect storm of opposition from every level,” Fraser writes. “Effecting meaningful change is virtually impossible.” If by "meaningful change" he means getting to the point where U.S. eleventh graders could pass a Google interview, then, yes, he's probably right. U.S. students are behind their counterparts in Vietnam and Estonia in learning programming. It's not clear that computer science will ever become a required subject in grade school or even high school in this country. However, I'm not sure that's the end of the world. As I wrote last year, there is indeed a clear gap in this country between the number of computer related jobs and the number of students graduating with the computer science degrees. But there is some good news to report on that front. Yesterday the Computing Research Association reported that, in the 2011-12 academic year, the number of students at U.S. universities choosing to major in computer science increased by 29%, the fifth year in row of growth. There was also a 20% increase in the number of bachelor’s degrees awarded in computer science, the third year in a row of double digit percentage growth. This shows that, at least for people entering the workforce (which is what's most important, in my opinion), we’re trending in the right direction. In the meantime, if Google is looking for 17 year old engineers, they know where to look. Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:fc88897e-98ab-4abf-bb31-2069c1f7343e>
CC-MAIN-2017-04
http://www.itworld.com/article/2714017/cloud-computing/google-engineers-not-smarter-than-vietnamese-11th-graders.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959103
761
2.53125
3
WASHINGTON, Sept. 27, 2012 /PRNewswire/ -- Public health officials and medical experts practiced what they preach by getting vaccinated against influenza during a National Foundation for Infectious Diseases' (NFID) news conference today. Urging the public to follow suit, experts cautioned that influenza is unpredictable and that last year's mild season is not necessarily an indication of what can be expected in 2012-2013 and, even during mild seasons, flu takes a serious toll. Assistant Secretary for Health at the U.S. Department of Health and Human Services (HHS) Howard K. Koh, M.D., announced the latest influenza vaccination coverage rates among children and adults, which reinforced the need for ongoing, collaborative efforts to improve influenza immunization. "I urge everyone to join me and get a flu vaccine this year," said Dr. Koh, who was the first to receive his flu vaccine during the news conference, held at the National Press Club in Washington, D.C. He was joined by leaders from the American Medical Association, American Academy of Pediatrics, American College of Obstetricians and Gynecologists, American Pharmacists Association, AARP, National Medical Association, Centers for Disease Control and Prevention (CDC) and NFID, in partnership with the National Influenza Vaccine Summit, and called on everyone 6 months of age and older to follow CDC's universal recommendation by getting vaccinated against influenza each year. According to the CDC data, which was published in today's issue of CDC's Morbidity and Mortality Weekly Report, influenza vaccination rates remained steady with an estimated 128 million people, or about 42 percent of the U.S. population receiving the influenza vaccine during the 2011-2012 season. However, rates varied widely between age groups and among states. "The past three years have demonstrated that influenza is predictably unpredictable," Dr. Koh added. "When it comes to flu, we can't look to the past to predict the future. Stay healthy -- get vaccinated!" More than 85 million doses of influenza vaccine have been distributed as of September 14. Manufacturers project that about 135 million doses of influenza vaccine will be available this season in doctors' offices, public health clinics, pharmacies, retail stores, and other venues. "In this election year, there are many important national health issues that are up for debate, but this is one that's easy to agree on," said William Schaffner, M.D., immediate past-president of NFID and chair of the Department of Preventive Medicine at Vanderbilt University School of Medicine, who led the news conference. "We should all be voting 'yes' for influenza and pneumococcal prevention. It is every individual's responsibility to put prevention to good use and make vaccination part of their routine healthcare." 2011-2012 Influenza Vaccination Coverage Shows Steady Progress, Highlights Gaps Steady gains have been made, particularly since influenza vaccine recommendations were expanded to include all healthy adults just two years ago. However, the CDC report indicates that coverage remains lower than the public health goals of 80 percent for people between the ages of 6 months and 65 years and 90 percent for people older than 65 years. Vaccination rates among children age 6 months to 17 years remained steady at 52 percent, with the greatest increase among children age 6 to 23 months (approximately 75 percent were vaccinated in 2011-2012, an increase of slightly more than 6 percentage points higher than the previous year). Coverage for children decreased with age; vaccination among adolescents age 13 to 17 years remains low at 34 percent. Adults age 65 years and older - the group with the longest standing recommendation to receive the influenza vaccine - had the highest coverage rates among all adults (approximately 65 percent), but showed a continued decline over the past few years (from approximately 74 percent in 2008-2009). While pediatric vaccination coverage among race/ethnic groups was comparable, similar to the previous season, disparities still remain in the adult populations. In addition to data on vaccination rates by age and race, Dr. Koh also released information about influenza immunization among pregnant women and healthcare personnel. Vaccination coverage among pregnant women remained consistent (47 percent), but still significantly higher than rates prior to the 2008-2009 influenza season which were regularly lower than 30 percent. The American College of Obstetricians and Gynecologists (ACOG) also recommends the influenza vaccine for pregnant women. "Influenza is five times more likely to cause severe illness in pregnant women than women who are not pregnant," said Laura Riley, M.D., director of Obstetrics and Gynecology Infectious Disease, Massachusetts General Hospital, representing ACOG. The flu vaccine is safe and offers protection for the mother. Research shows it can decrease the baby's risk of getting the flu for up to six months after birth." Among healthcare personnel, influenza vaccination rates increased slightly from the previous season (approximately 64 percent in 2010-2011 to 67 percent in 2011-2012), with highest rates among physicians (approximately 86 percent). By work setting, hospitals were associated with the highest vaccination coverage for healthcare professionals; coverage was lowest among healthcare professionals -- other than physicians and nurses -- working in long-term care facilities. Healthcare Community Plays A Critical Role In Motivating The Public Research has consistently shown that a recommendation from a healthcare professional will greatly help to improve vaccination rates among all populations. "It is critical for physicians to protect themselves from the flu and to also encourage their patients to get vaccinated," said Litjen Tan, M.S., Ph.D., director of Medicine and Public Health at the American Medical Association. "For example, pregnant women whose physician recommended the flu vaccine were five times more likely to get vaccinated, so we want to get the message out to all physicians that they can encourage patients to get vaccinated." Location of vaccination has also changed slightly over the last few years. With an increasing number of venues offering vaccines, more people are opting to get vaccinated outside of traditional medical settings. All 50 states, D.C., and Puerto Rico now allow pharmacists to administer influenza vaccine, according to pharmacist Mitchel Rothholz, chief strategy officer, American Pharmacists Association, and more than 20 million doses were administered by pharmacists last year. "Pharmacists have been offering flu vaccines for nearly two decades, but the 2009 pandemic prompted greater collaboration throughout the immunization neighborhood, resulting in sustained public health gains. Pharmacists and pharmacies are playing a greater role within the immunization neighborhood in making vaccines and vaccine information more accessible to all community residents." 2012-2013 Influenza Outlook The seasonal influenza vaccine protects against the three viral strains most likely to cause the flu in the upcoming year. This year's seasonal influenza vaccine has one strain in common with last year's vaccine, A/California/7/2009 (H1N1)-like virus, plus two new viral strains, A/Victoria/361/2011 (H3N2)-like virus and B/Wisconsin/1/2010-like virus. Four influenza vaccine options are available to meet the needs of various populations: a nasal spray; the traditional intramuscular injected vaccine; a high-dose injection for people age 65 years and older; and an intradermal vaccine that features a smaller needle. While vaccination is the first line of defense against influenza, at the news conference, CDC outlined its three-step approach to fighting influenza. Vaccination is the first and most important step, coupled with everyday preventive actions such as good hand and cough hygiene. For those who do get infected, appropriate use of influenza antiviral drugs can help reduce the risk of serious complications from the infection. CDC recommends either oseltamivir or zanamivir for treatment and prevention of influenza. Dr. Schaffner advised that the influenza season is also an opportune time for older adults to ask their doctors about pneumococcal disease and the status of their vaccination needs. Both vaccines can be administered at the same time. Pneumococcal infection is a common complication of influenza, although it can occur any time of year. Pneumococcal disease can be severe, leading to pneumonia, meningitis, and other serious infections. People age 65 years and older are recommended to receive the pneumococcal vaccine once. It is also recommended for adults 18 and older with certain health conditions, such as heart, lung and liver problems, diabetes, and asthma or those who smoke. Unfortunately, about 73 million U.S. adults who are recommended to receive the pneumococcal vaccine have not received it. Leading by Example At the news conference, NFID issued an influenza prevention commitment statement calling on healthcare professionals, business, and community leaders to "lead by example," by making influenza prevention a health priority. More than 30 companies and organizations have already signed on to show their support. Additional information is available at: nfid.org/leadingbyexample. About the National Foundation for Infectious Diseases The National Foundation for Infectious Diseases (NFID) is a non-profit, tax-exempt (501c3) organization founded in 1973 dedicated to educating the public and healthcare professionals about the causes, treatment, and prevention of infectious diseases. This news conference is sponsored by NFID in partnership with the National Influenza Vaccine Summit and is supported, in part, by the U.S. Centers for Disease Control and Prevention, MedStar Health Visiting Nurses Association, and through unrestricted educational grants to NFID from Genentech, Health Industry Distributors Association, MedImmune, Merck and Co., Inc., Pfizer Inc, and Sanofi Pasteur.
<urn:uuid:45d3d450-c9cd-4708-b554-be16ea851b49>
CC-MAIN-2017-04
http://www.continuityinsights.com/news/2012/10/2012-flu-outlook-unpredictable-season-plenty-vaccine-available
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00382-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954864
1,963
2.796875
3
If you’re a fan of the Terminator franchise, the idea of a learning machine probably evokes images of Skynet, hunter-killers prowling for human prey, and Arnold Schwarzenegger saying something to the effect of “I’ll be back.” A learning or thinking machine is one of the holy grails of computer science, and it is the subject of numerous research projects. One such project involves a neural network constructed by Google’s X Lab, results from which are slated for presentation soon (for the technically minded, a paper describing the results is available online: “Building high-level features using large scale unsupervised learning”). The results of this project hint at the possibility of vast computer networks that can learn or think, but is this the likely future of data centers? For Cat’s Sake Maybe it’s human laziness, or maybe it’s a more noble quest of some form, but the idea of a thinking machine is fascinating to many (including sci-fi creators). And a headline like “Google scientists find evidence of machine learning” is certain to capture attention. The Google scientists constructed a neural network using 16,000 processors. According to the New York Times (“How Many Computers to Identify a Cat? 16,000”), “The neural network taught itself to recognize cats...It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.” The article quoted lead scientist Andrew Y. Ng of Stanford University as saying, “The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data.” According to the paper published by the scientists, the algorithm was fed thousands of images—some containing cats and some not—from which the network extracted data about cats. The system then achieved a roughly 75% successful detection rate when tested on images containing cats (or not). But the devil may be in the details (and extracting plain-English details from a scientific paper is about as simple as getting a computer to identify a cat in an image). Is This Really a Step Toward Machine Intelligence? Whether a computer can really ever become “intelligent” is a matter that, to be discussed thoroughly and cogently, requires careful thought on a number of fronts—philosophical as well as technological. Computers can obviously do things that could easily give the appearance of intelligence: they can run complex simulations, find specific information out of reams of junk on the Internet (although they may be more or less successful at this task), run game characters that respond to your actions as you play and so on. But is this smarts, or just the appearance of smarts? The key to knowing whether a machine can actually think is to develop a test that only a thinking machine could pass. But sticking to the realm of image recognition, what might such a test look like? Interestingly, a young child doesn’t need to see thousands of different cats to be able to identify a cat when he or she sees one. It may only take one or two cats and one or two corrections (“no, honey, that’s a dog”). The child doesn’t perform any algorithm (in the computer sense of the term) when a cat walks by, whereas a computer system performs a variety of mathematical comparisons and calculations—actions a computer is undoubtedly good at. One might very understandably think, then, that a computer is thus “faking” intelligence. Part of the problem in pursuing machine intelligence is a lack of true understanding about human intelligence. To be sure, claims in the area of neuroscience are a dime a dozen—we’re just 10 years away from understanding X or Y or Z (not unlike how we are always 10 or so years away from a cure for this or that cancer). And although progress is being made, one could easily (and rather convincingly) argue that a full understanding of the brain is impossible—after all, we think using a brain, so can we really ever gain a full appreciation for it as an outside observer would? One might easily wonder if we really have even a basic understanding of how any of the brain works at all. To be sure, these are complex questions—questions that no short article can address adequately. But they are worth raising. Obviously, computers can do amazing things, and data centers and the networks that connect them have employed large amounts of computer power to produce many important benefits to society (and some downsides). And as processors get faster, smaller and cheaper, leading to a proliferation of computing power, questions naturally arise as to the limits of that power. A Level-Headed Assessment? Computer intelligence seems to be just as far off now as it was 10 or 20 years ago. Computers can do more, faster, but the nature of what they do really hasn’t changed: they follow a set of instructions fed to them by programmers, and they do so to the letter (or, perhaps more accurately, to the bit). Indeed, the programs become more complex as computing power and memory capacity increase, but they have not fundamentally changed. That leads to what might be the most salient question: can more of the same truly generate something new? Let’s be honest: computers are stupid machines (the moment of truth: the blue screen). They do certain tasks and do them well—they convert one set of ones and zeros into another set of ones and zeros, and nothing more. Naturally, if you think that is basically what the human brain does, then you might conclude that a computer—given sufficient development—could eventually produce all the complex responses and characteristics of the brain. But phenomena like self-consciousness cannot be explained purely on the basis of ones and zeros—whether in the brain or in a computer. And, sure, materialist philosophers will try, but they invariably explain away self-consciousness. Self-consciousness is not material, so at best, a materialist understanding of it would lead to an explanation whereby humans simply act as though they are self-conscious. A philosopher might even do a pretty convincing job of maintaining this position, but it in no way corresponds to or explains an individual’s own experience of self-consciousness. Okay, so maybe we’ve gone off the deep end here. The brief discussion above doesn’t do justice to the topic, but it does hint at all the different lines of reasoning that must be pursued—not just to develop a thinking or learning computer, but to first figure out what such a machine would even look like. The answers to such questions are far from clear, but in addressing them carefully—even if incompletely—we can take several important steps toward understanding the role and capabilities of computers, as well as how we should assess claims of computer intelligence. For my own part, I believe computers are great tools, but they’re still just tools. I don’t care how intricately you design a hammer, it’s still going to be a hammer; and although it may be able to “fake it” to some extent as another tool (say, a screwdriver), it will always be primarily for jobs that require a simple hammer. Over the decades, computers have become faster, cheaper and better at what they do—but they still basically do the same tasks. You can string a bunch of them together, but ultimately, they still just convert one set of ones and zeros to another. I feel quite safe in predicting that computers will ever remain stupid (but useful) machines—for whatever that’s worth. Photo courtesy of hfb
<urn:uuid:74041d23-7181-4893-a7f9-c2e25cf9843a>
CC-MAIN-2017-04
http://www.datacenterjournal.com/will-the-data-center-of-the-future-be-a-learning-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00293-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948938
1,627
3.4375
3
Internet telephony – What is it and how does it work? The terms IP telephony (IP = Internet Protocol) and Voice over IP (in short: VoIP) refer to making telephone calls via a computer network, whereby the data is transferred according to the IP Standard. This form of telephony is better known as internet telephony. It is necessary to prepare data for transfer via the internet to comply with the rules of the Internet Protocol. The transfer routes used here are the same as those employed for standard data transferral via the internet. With internet telephony from NFON it is possible to integrate Unified Communication (UC) as well as fax solutions via XCAPI. Internet telephony hardware In order to use internet telephony one requires the appropriate hardware. Four different alternatives are available here. The first option is to use a standard computer with microphone for user voice capturing, as well as a loudspeaker or headphones in order to listen to the conversation participants. Added to this comes the application and installation of special software on the PC. Secondly, it is possible to use specific VoIP end devices as well as IP and SIP telephones. These only differ from a standard telephone in terms of the technology that enables the data transfer via the internet. As a further option it is possible to use the conventional telephone and connect this to a special adapter, which converts the analogue telephone data into digital signals. Finally, it is also possible to use a mobile phone by connecting through the telephone system via an FMC client. The advantage of the last three options lies in the operation of the device, which can be used in the same way as a conventional telephone. A further benefit is that the user is also attainable when the PC is switched off. How VoIP works First, the acoustic signals are digitalised during the data transfer, and divided up into individual data packages. These data packages are subsequently labelled with a so-called header. These headers contain information about the identity of the sender and recipient, or regarding the status of the message. It is now necessary to establish a connection. To do so one uses a Session Initiation Protocol Address (in short: SIP address). This is only assigned once, so that it is possible to uniquely identify the address. Activating the device results in this logging into a server. The server then registers the login. If this SIP address is called up by another participant then this request is passed on to the server that the user is registered with. The server passes the call on to the end device and therefore establishes a telephone conversation. Because the SIP address is not bound to a certain connection - in the manner of a standard telephone number - the user is connected with the internet by means of the corresponding end device and is therefore attainable anywhere in the world. It is now also possible to connect internet telephony with the standard telephone network. This takes place via certain gateways. This provides the user with the option of using a conventional telephone in order to call a VoIP telephone and vice versa. The so-called Media Gateway can be used for example with an ISDN connection or likewise with an analogue telephone connection. The advantages of VoIP The first noteworthy advantage here is that internet telephony is a particularly low-cost option, because almost every household has flat rate internet nowadays. Therefore, no further costs arise because IP telephony simply accesses this. This means that the standard telephone connection is superfluous and can even be removed. The costs of the individual telephone calls are usually also lower here than with analogue telephony. Telephone conversations with participants using the same VoIP provider are usually free of charge. It is also often the case that calling participants using alternative VoIP providers is also free; only in a few cases are fees charged here. However, calls placed within the standard telephone network are always subject to charges, although these are very low with many providers.
<urn:uuid:4c079bc6-f95b-4ce0-9dc1-f9b5b6c285a5>
CC-MAIN-2017-04
https://www.nfon.com/gb/solutions/resources/glossary/internet-telephony/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00293-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938339
794
3.734375
4
"My primary goal of hacking was the intellectual curiosity, the seduction of adventure.” – Kevin Mitnick “A white-hat hacker is someone who enjoys thinking of innovative new ways to make, break and use anything to create a better world.” – Nico Sell, r00tz Asylum Honor Code Kids are naturally curious. They are born to hack their environment in order to learn how and why the world works as it does. See the story of the 9th-Grader after taking a homemade clock to school. They don’t even realize they could be getting into trouble. This leads to potentially dangerous situations, especially with the distributed, online world of cyberspace. We need to direct kids to be white-hat hackers. Cyber competitions provide safe havens where kids have an outlet for their curiosity. (See the post Cybersecurity competitions – Make a difference.) Another way is kid-centered hacking conferences. We’re turning kids into cons. This doesn’t mean convicts, but conference attenders. A growing trend in cybersecurity conferences is to include kids with a separate track/area just for them. Kids benefit from conference experiences like adults. It’s their opportunity to learn something new, practice their skills, and network with others. rootz Asylum, Hak4Kidz, and HacKid conferences are kid-centered events to spark their curiosity as ethical hackers in a safe and rewarding environment. All of these were started by cybersecurity professionals and parents looking to create a fun and safe place for kids to learn and practice various hacking skills. These conferences aren’t solely about cybersecurity, but include many forms of general life hacking. They focus on areas kids care about at a level they understand. Topics include robotics, online gaming, martial arts, medieval weapons, soldering, 3D printing, lock picking, and drones. This is in addition to traditional cybersecurity topics of programming, online safety, cyberbullies, cryptography, computer hardware engineering, and hacking contests. They use non-traditional methods to engage kids with as much hands-on learning as possible. The intent is to allow the kids to “get dirty” playing with the technologies without a fear of breaking things or getting into trouble. They have “junkyards” full of old PCs, cell phones, network routers, circuit boards, etc. that allow kids to understand bare-bones technology. Contests (with prizes!) challenge kids to Capture the Flag (CTF) in a virtual environment, solve crypto puzzles (using math), and develop games. Each of these conferences maintains a strict code of ethics to help kids (and adults) know their boundaries when hacking. r00tz Asylum exemplifies this with their Honor Code. Originated by Nico Sell in 2010 as DefCon Kids, R00tz Asylum gets kids learning cybersecurity from the best in the industry. There’s no better place than the Black Hat and DefCon conferences in Las Vegas every summer. In the 2014 opening address, Nico describes the origin. “’r00tz’ came from the idea that getting ‘root’ of a computer means taking full control of it.” At cons like r00tz Asylum, kids take control of their learning through multiple hands-on hacking sessions delivered by cybersecurity luminaries. The summary says it all, “r00tz is about creating a better world. You have the power and responsibility to do so. Now go do it! We are here to help you.” Started by David “Heal” Schwartzberg, Hak4Kidz is a series of kid conferences with the goal of developing a community of cyber kids with common interests, objectives, and a sense of belonging. Many kids who are into computers are still seen as socially-awkward introverts. Hak4Kidz gives those kids their own space where they can learn together. “Hak4Kidz is a conference for youth focused on internet safety and best practices with an opportunity to safely explore computer science and cybersecurity.” HacKid was started by Christopher Hoff with similar goals as r00tz and Hak4Kidz. “Kids are our future, why not give them that spark that will set them on a journey that only ‘hacking’ can inspire?” HacKid took a break in 2015, but check the website for future events. These Kids Cons have reached thousands of kids (and equal number of parents) over the past five years. They are having a positive impact by engaging kids in the world of white-hat hacking. Call to Action – Part 3: Volunteer at a kid’s security conference or camp. If one doesn’t exist in your area, consider starting one. Let’s work on replicating these events to cover as much territory as possible. Helping one kid makes it all worthwhile. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:39a74f49-1e4c-45ed-b754-377226875b18>
CC-MAIN-2017-04
http://www.csoonline.com/article/2984391/it-careers/kid-cyber-conferences-allow-them-to-go-hack-yourself.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955911
1,028
2.796875
3
|Collection Frameworks||Callback Frameworks| Object COBOL has a mechanism by which you can send messages to intrinsic data. This chapter explains how you can send messages to the types of intrinsic data supported by the Class Library, and how you can write classes to support other types. The Object COBOL Class Library includes a set of classes corresponding to some of COBOL's intrinsic data types. Objects from these classes correspond to COBOL data items. This enables you to store and manipulate intrinsic data in object-oriented ways. The intrinsic data mechanism is used by the collection classes to store intrinsic data without creating objects for every item of intrinsic data in the collection. You can think of intrinsic data as being static objects whose data is allocated by the Compiler at compile time; all other objects in Micro Focus Object COBOL are dynamic and their data is allocated at run-time. The three COBOL data types supported by the Class Library are: You can use the intrinsic data classes in several different ways: The supplied intrinsic classes are only capable of storing data of preset length. If you want to use them for intrinsic data of any other length, you must first clone the class, creating a new class for the length of data you require. There are three classes for representing intrinsic data: COBOLPICX can store data one byte in length; COBOLCOMP5 and COBOLCOMPX can store data four bytes in length. To store different length data you need to clone the class for a different length. To do this, send the message "newClass" to one of the intrinsic data classes, supplying the length as a parameter. It returns a class capable of storing data of the length given with the "newClass" method. move 6 to aLength invoke cobolPICX "newClass" using aLength returning aNewPicXClass where the parameters are : |aLength||Declared as a PIC X(4) COMP-5.| |aNewPicXClass||Declared as an OBJECT REFERENCE. You can use cloned classes as templates for INVOKE...AS, and for creating collections of intrinsic values. You can also create instances of intrinsic classes using the "new" method. You can send a message to COBOL intrinsic data by using INVOKE...AS. 01 aNumber pic x(6) comp-5 01 comp5Length6 object reference. ... invoke aNumber as comp5Length6 "hash" returning aHashValue In the example above, COMP5LENGTH6 is a cloned class for COMP-5 items of length 6. See the section Cloning an Intrinsic Data Class for an explanation of how to clone classes. The effect is that the "hash" message is sent to a static object which has the instance data in intrinsic data item ANUMBER. Just as with any object, if the instance doesn't understand the message, it is passed up the inheritance chain to its superclasses. If you want to use a type of intrinsic data not supported by any of the classes in the Class Library, you can create your own new intrinsic class. You do this by writing a class which inherits from Intrinsic, with data. The next two sections deal with the code you need to write for the: The class initialization code for your intrinsic class must set a default size in bytes for the data to be represented by instances, and put it in data item STORAGEREQUIREMENTS. This is declared in Intrinsic as follows: 01 storageRequirements pic x(4) comp-5. The class cloning mechanism enables users of your intrinsic class to handle data of different lengths. You must also code the following class methods: Returns the object handle to this class object. Returns the maximum number of bytes allowable for this type of intrinsic data. For example, a COMP-X data item can't be more than eight bytes long, so the "maximumSize" method in the COBOLCOMPX class returns eight. Code the method interface for "baseClass" like this: method-id. "baseClass" linkage-section. 01 lnkHandle object reference. procedure division returning lnkHandle. * Substitute the class-id of your class for * nameOfThisClass in the following statement. set lnkHandle to nameOfThisClass exit method. end method "baseClass" The "baseClass" method returns the object handle of the named class, rather than SELF, so that the correct handle is returned for the baseClass even when the message is sent to a clone of your intrinsic class. Code the method interface for "maximumSize" like this: method-id. "maximumSize". linkage-section. 01 lnkSize pic x(4) comp-x. procedure division returning lnkSize. * Code to return the maximum allowable size. exit method. end method "maximumSize". You must code comparison methods for instances of an intrinsic class, for use by the Collection class. There are two separate methods for each type of comparison; one compares this intrinsic object to another object, the other compares it to a value. You need to provide the following methods for comparing intrinsic objects: These are the methods for comparing objects with intrinsic data items: The value of an intrinsic instance is held in an inherited instance data item, INSTANCEDATA. This is declared as: 01 instanceData pic x. Although it is only declared as a single byte in length, it always references a memory area of the correct length for your intrinsic data. To get at the data in it, use reference modification. For instance, to see the first four bytes of instance data, you can refer to: Code the method interface to any of the object comparison methods as follows: method-id. "equal". linkage section. 01 lnkBoolean pic x comp-x. 03 isTrue value 1. 03 isFalse value 0. 01 lnkIntrinsic object reference. procedure division using lnkIntrinsic returning lnkBoolean. * Code to compare the value in lnkIntrinsic to * the value in this instance. Set isTrue if * the result of the comparison is true, otherwise * set isFalse. exit method. end method "equal". Code the interface to any of the intrinsic data comparison methods as follows: method-id. "equalByLengthValue". linkage section. 01 lnkBoolean pic x comp-x. 03 isTrue value 1. 03 isFalse value 0. 01 lnkLength pic x(4) comp-x. 01 lnkValue pic x occurs 1 to maxSize procedure division using lnkLength lnkValue returning lnkBoolean. * Code to compare the value in lnkValue to * the value in this instance. Set isTrue if * the result of the comparison is true, otherwise * set isFalse. exit method. end method "equalByLengthValue". Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law. |Collection Frameworks||Callback Frameworks|
<urn:uuid:54e9e5e6-7f99-4708-bfe0-cc435e6947b8>
CC-MAIN-2017-04
https://supportline.microfocus.com/documentation/books/sx20books/opfwin.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.806073
1,526
2.625
3
Nezan E.,FREMER | Siano R.,French Research Institute for Exploitation of the Sea | Boulben S.,FREMER | Six C.,University Pierre and Marie Curie | And 7 more authors. Harmful Algae | Year: 2014 The family Kareniaceae is mostly known in France for recurrent blooms of Karenia mikimotoi in the Atlantic, English Channel, and Mediterranean Sea and for the unusual green discoloration in the saltwater lagoon of Diana (Corsica) caused by Karlodinium corsicum in April 1994. In terms of diversity, this taxonomic group was long overlooked owing to the difficult identification of these small unarmored dinoflagellates. In this study, thanks to the molecular characterization performed on single cells from field samples and cultures, twelve taxonomic units were assigned to the known genera Karenia, Karlodinium and Takayama, whereas one could not be affiliated to any described genus. The molecular phylogeny inferred from the D1-D2 region of the LSU rDNA showed that five of them formed a sister taxon of a known species, and could not be identified at species-level, on the basis of molecular analysis only. Among these latter taxa, one Karlodinium which was successfully cultured was investigated by studying the external morphological features (using two procedures for cells fixation), ultrastructure, pigment composition, and haemolytic activity. The results of our analyses corroborate the genetic results in favour of the erection of Karlodinium gentienii sp. nov., which possesses an internal complex system of trichocysts connected to external micro-processes particularly abundant in the epicone, and a peculiar pigment composition. In addition, preliminary assays showed a haemolytic activity. © 2014 Elsevier B.V. Source
<urn:uuid:5998ee8b-9d07-4cff-aa85-f6a1489b5e78>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/fremer-739127/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00017-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917293
389
2.734375
3
An estimated 1.4 million jobs will open up in the computer-related fields in the next ten years. It’s time more girls got their shot Where did all the women go? That’s the question being asked at Google, Yahoo, Microsoft, and other tech companies, as well as in university computer labs. Over the past three decades, the number of women involved in coding and programming has plummeted. In 1984, 37 percent of all computer science college graduates were women. Today, that figure stands at a mere 12 percent. “There are some very strict stereotypes within the traditional education system.” It’s tempting to blame underfunded schools, which lack the resources to keep pace with staffing and new technology, as one cause for the decline. Resistance to curriculum changes also come into play. But those issues do not take into account a larger societal problem that plagues girls specifically—namely, they are often discouraged from pursuing an interest in coding and programming. According to studies, 74 percent of girls in middle school express interest in the STEM subjects (Science, Technology, Engineering and Math). But by senior year of high school, just 0.3 percent select computer science as a college major. “There are some very strict stereotypes within the traditional education system,” says Katy Campen, lead instructor at Tennessee Code Academy’s 100 Girls of Code initiative. “Just saying ‘girls probably aren’t good at math’ or ‘they would rather take English classes’ or something like that puts them down before they’ve even tried.” Fortunately, initiatives like 100 Girls of Code are addressing the issue. In daylong workshops held in various cities around Tennessee, and co-sponsored by Scripps Networks Interactive, girls ages 12-18 get the opportunity to learn introductory programming skills. They create their own rudimentary websites and use a simple app developed by MIT to design their own online games. Fourteen-year-old Inara Abernathy is one of those girls. “Making the game was the most fun part of the day,” she says. Since camp, Abernathy has continued to refine her design and claims an interest in coding as a potential future job. “The goal of 100 Girls is to raise awareness of computing,” says Campen, who taught the June 30 workshop in Knoxville. “We want to specifically encourage Tennessee females at a younger age so they get a jumpstart and hopefully pursue an education or career in a programming or a technology field.” A freelance programmer herself, Campen majored in advertising at the University of Tennessee before later studying programming. She offered her own perspective on working in such a male-dominated field. “A lot of times you can be placed in these characterized roles,” she says. “I’ve worked on projects where I’d automatically be put in charge of the design or the aesthetics because I’m a female. But I’m not very good at that. I’m better at the actual programming and the application of the language.” Another organization working to bridge the gender gap is Girls Who Code, a multi-city program with a goal of exposing 1 million young women to computer science by 2020. The organization recognizes the challenges women face who want to join computing fields—be it lack of education or lack of support or mentoring—and works to remedy those problems. Girls Who Code clubs launched in 2013 in Chicago, New York, Boston, Detroit, and San Francisco; the programs cater to girls in sixth through twelfth grades during the academic year. Other groups like Black Girls CODE, a San Francisco based group, has worked since 2011 to “increase the number of women of color in the digital space by empowering girls of color ages 7 to 17 to become innovators in STEM fields.” As of 2011, women of color represented less than 3% of those working in technology fields. Colleges are joining the party as well. Some big name schools like Carnegie Mellon and the University of Washington are changing their engineering and computer science programs to become more female-friendly. Some are dropping requirements around prior programming experience for computer science majors and providing more female academic support through mentoring programs. Men often have access to informal support networks from colleagues that women lack. College students are smart to jump on the coding train. The U.S. Department of Labor estimates there will be 1.4 million job openings for computer-related occupations in the next ten years. Of the STEM fields, computer science and computer engineering have the highest median earnings for recent college graduates without advanced degrees. Campen is optimistic about women getting into the field and about how the trend will benefit everyone. “Each day we have tons and tons of problems as a society and we need a very diverse group of people to solve them,” she says. “We need everyone—females and males—to help generate the creativity to get it done.”
<urn:uuid:bfe123d3-3b34-4069-a897-24519381ff10>
CC-MAIN-2017-04
https://www.ncta.com/platform/industry-news/wanted-girls-who-code/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959087
1,043
3.015625
3
Chip giant Intel practically introduced the term "economies of scale" to the high-tech industry and the channel. Now the chip maker hopes to continue using those capabilities to hone, fine-tune and advance its product lineup. By next year, the Santa Clara, Calif.-based company says it will move to boost its economies of scale even further by rolling out a new manufacturing process that will enable it to produce smaller, better-performing chips in a more efficient manner. The advancement will bring Intel's manufacturing process from its current 90 nanometers to a sleeker, smaller 65 nanometers. The technology change is significant for Intel. With the new 65-nanometer process, the company will be able to boost the number of transistors placed on a single chip to nearly a half-billion. This, in turn, will help Intel produce multicore processors in addition to adding power-saving function to the CPU, according to the company. Among other things, Intel is shooting to provide all-day battery life for notebooks by 2010--a development that will require heavy leverage of Moore's Law's demand that the number of transistors on a chip doubles about every two years. In addition to the new process, Intel said it has also designed "sleep transistors" and implemented them into its 65-nanometer SRAM memory. These sleep transistors are designed to shut off the flow of electrical current to blocks of SRAM when they are not being used, which can eliminate a significant source of chip power consumption. Economy of scale alone, however, isn't enough to dominate the entire market. Rival Advanced Micro Devices, Sunnyvale, Calif., has managed to build its market share and presence even while trailing Intel in manufacturing capability. AMD's advances have led some solution providers to take a wait-and-see approach to Intel's new manufacturing process. "[AMD] has a better price point; the performance seems to outperform Intel," said Patrick McNicholas, president of Maverick Computers, a Loxahatchee, Fla.-based system builder. "It's only been recently--the past couple of weeks--that we've been considering possibly using the Intel chipset."
<urn:uuid:604249b4-e0e6-4723-9c5a-8c382efb3f7f>
CC-MAIN-2017-04
http://www.crn.com/news/components-peripherals/46802550/intel-new-manufacturing-process-will-give-product-lineup-some-juice.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00227-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944439
448
2.578125
3
SERVQAL gathers three data sets: minimum, desired, and perceived service quality. These three data sets combine to create a zone of tolerance. Think of the zone of tolerance as the boundaries representing the upper (desired) and lower (minimum) boundaries of service quality. Plotting the third data set, perceived quality, in relation to the zone of tolerance lets IT suppliers and consumers instantly visualize service quality. It also shows where to improve, and can function as a continuous improvement model. Consider the image in Figure 2 below. It shows how to use SERVQUAL to show IT performance based on the job IT needs to do as measured by customer requirements. Figure 2. Example SERVQAUL SLM Report Figure 2 makes it is very easy to understand IT performance. Even non-technical business managers will have no problem at all understanding IT performance. There is no translation from speeds, feeds, MIPS and so on into human terms. The value of the measure means nothing; its the relationship of the measure to the boundaries that is meaningful. SERVQUAL offers two metrics: measure of service adequacy (MSA) and measure of service superiority (MSS.) MSA represents perceived quality less adequate quality while MSS perceived quality less desired quality. These metrics are internal to the provider. MSS > 0 means the provider is over-servicing based on requirements. MSA < 0 means the provider is under-servicing, as shown in figure 3. Figure 3. MSA and MSS Scores This model drives IT operations based on customer and business requirements. Using MSS and MSA values help the IT service provider prioritize work, balance resources and select improvement targets. Select for improvement those services not operating within the zone. Initiate corrective action for services moving down and out of the zone. Market services that move up in the zone. Stop investing when balanced in the zone. This model also provides a roadmap for improvement and remediation. The four SERVQUAL gaps identify where poor service quality (Gap #5) originates. It also shows where you dont have to focusan important and often overlooked item. Using Figure 3 as an example, it would seem we probably need to focus on Assurance, defined as knowledge and courtesy of employees and their ability to inspire trust and confidence, and Reliability or ability to perform the promised service dependably and accurately. Managers can quickly see they ought to reallocate and balance resources, perhaps moving funding or resources from Tangibles to Assurance. Based on MSS and MSA values, examine Gaps 1 to 4. Shrinking some gaps requires training, others usually require software support tools, and others require modifications to process. The resolution depends on the gap, and the tools required depend on the resolution. Here are some examples viewed from the perspective of the service supplier. Gaps 1 to 4 reflect where in the supplier organization the poor quality arises. Gap 5 is the Service Quality Gap measured as q=e-p. SERVQUAL not only provides a roadmap to measure and improve service quality, it also provides a model to present service quality metrics, and a means for quantifying and prioritizing supplier operations and improvement projects. Figure 4. SERVQAUL Mapping to ITIL v3 Gap 1 occurs when there is a discontinuity between customer expectations and managements understanding of customers expectations. Reasons here include insufficient research into, or understanding of, customer needs; inadequate use of the research; lack of interaction between management and customers; insufficient communication between staff and managers; etc. Resolutions include conducting research, making senior IT managers interact with customers, making senior managers occasionally perform customer contact roles, encouraging upward communication from customer contact employees, and so on. ITIL, Six Sigma and other solution sets can help here, along with good old-fashioned attention to customers. ITIL v3 Service Strategy directly addresses this gap. Gap 2 arises within the provider organization when there is a misunderstanding between management perceptions of customers expectations, and service quality specifications used by provider staff. Causes of this gap include: inadequate management commitment to service quality, absence of formal process for setting service quality goals, inadequate standardization of tasks, and a perception of infeasibility or that customer expectations cannot be met. Gap 2 resolutions include using tools like CMMI and ITIL to define process, clarify roles, and to document and measure service delivery goals and performances. ITIL v3 Service Strategy, and the SLP (Service Level Package) passes to Service Design, and the SDP (Service Design Package) Service Design passes to Service Transition directly address this gap. This Gap appears between service quality specifications and service delivery. Key factors here are lack of teamwork, poor employee/job fit, poor technology/job fit, lack of perceived control by contact personnel, inappropriate evaluation and compensation systems, role conflict and ambiguity among contact employees. Ways to address Gap 3 include: investing in employee training, supporting employees with appropriate technology and information systems, giving customer-contact employees sufficient flexibility to respond, reducing role conflict and ambiguity, recognizing and rewarding employees who deliver superior service. ITIL v3 Service Transition, and the early life support provided to Service Operation directly addresses this gap.
<urn:uuid:3d456224-e159-4510-8cc1-658a630a39a8>
CC-MAIN-2017-04
http://www.cioupdate.com/reports/article.php/11050_3782181_3/Why-IT-Service-Level-Management-Fails-And-How-to-Fix-It.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943083
1,071
2.609375
3
In the early days of fiber optics, fusion-splicing was an exacting, demanding task. Today — although care is needed — the splicing procedure is straightforward, with key steps fully automated. There are many occasions when fibre optic splices are needed. One of the most common occurs when a fibre optic cable that is available is not sufficiently long for the required run. In this case it is possible to splice together two cables to make a permanent connection. As fibre optic cables are generally only manufactured in lengths up to about 5 km, when lengths of 10 km are required, for example, then it is necessary to splice two lengths together. Mechanical and fusion splicing are two broad categories that describe the techniques used for fiber splicing. A mechanical splice is a fiber splice where mechanical fixtures and materials perform fiber alignment and connection. A fusion splice is a fiber splice where localized heat fuses or melts the ends of two optical fibers together. Each splicing technique seeks to optimize splice performance and reduce splice loss. Low-loss fiber splicing results from proper fiber end preparation and alignment. Fiber splice alignment can involve passive or active fiber core alignment. Passive alignment relies on precision reference surfaces, either grooves or cylindrical holes, to align fiber cores during splicing. Active alignment involves the use of light for accurate fiber alignment. Active alignment may consist of either monitoring the loss through the splice during splice alignment or by using a microscope to accurately align the fiber cores for splicing. To monitor loss either an optical source and optical power meter or an optical time domain reflectometer (OTDR) are used. Active alignment procedures produce low-loss fiber splices.
<urn:uuid:6be5289e-7c19-480c-a4f2-76a253e1b003>
CC-MAIN-2017-04
http://www.fs.com/blog/the-techniques-of-fibre-optic-splicing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00403-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920512
348
3.546875
4
What Level of SSL or TLS is Required by HIPAA? SSL and TLS are not actually monolithic encryption entities that you either use or do not use to connect securely to email servers, web sites, and other systems. SSL and TLS are evolving protocols which have many nuances to how they may be configured. The “version” of the protocol you are using and the nuances of the configuration directly affect the security achievable through your connections. Some people use the terms SSL and TLS interchangeably, but TLS (version 1.0 and beyond) is actually the successor of SSL (version 3.0). … see SSL versus TLS – what is the difference? In 2014 we have seen that SSL v3 is very weak and should not be used going forward by anyone (see the POODLE attacks, for example), TLS v1.0 or higher should be used. Among the many configuration nuances of SSL and TLS, which “ciphers” are permitted have the greatest impact on security. A “cipher” defines the specific encryption algorithm to be used, the secure hashing (message fingerprinting / authentication) algorithm to be used, and other related things. Some ciphers that have long been used, such as RC4, have become weak over time and should not be used in secure environments. Given these nuances, people are often at a loss as to what is specifically needed for HIPAA compliance or any kind of effective level TLS security. What HIPAA Says about TLS and SSL: Health and Human Services has published guidance for the use of TLS for securing health information in transit. In particular, they say: Electronic PHI has been encrypted as specified in the HIPAA Security Rule by “the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key” (45 CFR 164.304 definition of encryption) and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard. They go on to specifically state what valid encryption processes for HIPAA compliance are: Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800-52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800-77, Guide to IPsec VPNs; or 800-113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140-2 validated. In other words, SSL and TLS usage must comply with the details set out in NIST 800-52. This implies that other encryption processes, especially those weaker than recommended by this publication, are not valid. If you are using a level of encryption weaker than recommended, it is not valid, and thus for all intents and purposes your transmitted ePHI is unsecured and in violation (breach) of HIPAA. So, What does NIST 800-52 Say? NIST 800-52 is a long and detailed document that covers what is needed for strong TLS for government use. In addition to many small nuances, the biggest things to get out of this document are: - SSL v3 must not be used - TLS v1.0+ is OK to be used - Only ciphers in a specific, recommended list are OK to use The list of technically allowed ciphers (converted into the names used by openssl) are: - Turning OFF SSL v2 and SSL v3 - Enabling TLS 1.0 and higher - Restrict the ciphers you will be using to ONLY those in the CBC-free above list. One thing that is interesting to note is that there are many ciphers included in this list that are not 256-bit. E.g. 128bit AES is allowed for HIPAA and high-security government use. We often hear people stating that 256-bit encryption is a requirement of HIPAA … it is not (that answer is “too simple” — it comes down to which specific algorithms are used, for example). What Does LuxSci Do? LuxSci’s services use TLS for secure web site, MySQL, POP, IMAP, and SMTP connections. LuxSci enables you to use TLS in a HIPAA compliant way by: - Only allowing TLS v1.0+ (no SSL v3) - Only allowing connections using a subset of ciphers in the above recommended list Furthermore, LuxSci allows HIPAA-compliant customers to have email delivered to recipients using “TLS Only” secured connections to recipients servers that support TLS for SMTP. For many customers, the easy-of-use of TLS for secure email delivery is a great solution when available. LuxSci’s systems auto-check all of the recipient’s inbound email servers to ensure that all of them support TLS v1.0+ and at least of the recommended ciphers …. only in this case do we permit use of “TLS Only”. E.g. only in this case can we deliver messages to them in a compliant manner. We do observe some (a small subset) email servers on the Internet that only support SSL v3 or which only support old, weak ciphers. We do not allow our HIPAA customers to communicate with them using only TLS (SSL), as that would place them out of compliance. We recommend that these servers either upgrade their software configurations or that something like SecureLine Escrow is used to ensure compliant communications with them. Use our TLS Checker Tool to see if a domain supports SMTP TLS and if its support is “good enough” for HIPAA-compliant email delivery. - How to Tell Who Supports SMTP TLS for Email Transmission - How Can You Tell if an Email Was Transmitted Using TLS Encryption? - 256-bit AES Encryption for SSL and TLS: Maximal Security - Is SSL/TLS Really Broken by the BEAST attack? What is the Real Story? What Should I Do? - Infographic – SSL vs TLS: What is the Difference?
<urn:uuid:f0796e31-1384-4e4b-af62-e7066b43dcda>
CC-MAIN-2017-04
https://luxsci.com/blog/level-ssl-tls-required-hipaa.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00035-ip-10-171-10-70.ec2.internal.warc.gz
en
0.835849
1,349
2.65625
3
The role of technology in the Chile aid effort: Establishing an internet connection following the earthquake The Télécoms Sans Frontières (TSF) team arrived in Santiago early in the morning on 1 March. Here a team member uses a beacon to establish an internet connection. These beacons are vital for TSF during the first hours of an emergency. When a natural disaster strikes, one of the essential services that fail is the communications network. Telecoms Sans Frontieres (TSF) is a key part of the operation to re-establish links between people and help aid agencies co-ordinate their efforts. Currently, its work in Chile is providing crucial support to those affected by the 8.8-magnitude earthquake that struck the country on 27 February 2010. These pictures illustrate how technology is helping in the aid effort.
<urn:uuid:ebab5301-8b2d-4209-968e-08065a73931c>
CC-MAIN-2017-04
http://www.computerweekly.com/photostory/2240109152/Photos-The-role-of-technology-in-the-Chile-aid-effort/1/Establishing-an-internet-connection-following-the-earthquake
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00091-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949191
176
2.796875
3
Shells are like editors: Everyone has a favorite and vehemently defends that choice (and tells you why you should switch). True, shells can offer different capabilities, but they all implement core ideas that were developed decades ago. My first experience with a modern shell came in the 1980s, when I was developing software on SunOS. Once I learned the capability to apply output from one program as input to another (even doing this multiple times in a chain), I had a simple and efficient way to create filters and transformations. The core idea provided a way to build simple tools that were flexible enough to be applied with other tools in useful combinations. In this way, shells provided not only a way to interact with the kernel and devices but also integrated services (such as pipes and filters) that are now common design patterns in software development. Let's begin with a short history of modern shells, and then explore some of the useful and exotic shells available for Linux today. A history of shells Shells—or command-line interpreters—have a long history, but this discussion begins with the first UNIX® shell. Ken Thompson (of Bell Labs) developed the first shell for UNIX called the V6 shell in 1971. Similar to its predecessor in Multics, this shell (/bin/sh) was an independent user program that executed outside of the kernel. Concepts like globbing (pattern matching for parameter expansion, such as *.txt) were implemented in a separate utility called glob, as was the if command to evaluate conditional expressions. This separation kept the shell small, at under 900 lines of source (see Resources for a link to the original source). The shell introduced a compact syntax for redirection ( >>) and piping ( ^) that has survived into modern shells. You can also find support for invoking sequential commands (with and asynchronous commands (with What the Thompson shell lacked was the ability to script. Its sole purpose was as an interactive shell (command interpreter) to invoke commands and view results. UNIX shells since 1977 Beyond the Thompson shell, we begin our look at modern shells in 1977, when the Bourne shell was introduced. The Bourne shell, created by Stephen Bourne at AT&T Bell Labs for V7 UNIX, remains a useful shell today (in some cases, as the default root shell). The author developed the Bourne shell after working on an ALGOL68 compiler, so you'll find its grammar more similar to the Algorithmic Language (ALGOL) than other shells. The source code itself, although developed in C, even made use of macros to give it an ALGOL68 flavor. The Bourne shell had two primary goals: serve as a command interpreter to interactively execute commands for the operating system and for scripting (writing reusable scripts that could be invoked through the shell). In addition to replacing the Thompson shell, the Bourne shell offered several advantages over its predecessors. Bourne introduced control flows, loops, and variables into scripts, providing a more functional language to interact with the operating system (both interactively and noninteractively). The shell also permitted you to use shell scripts as filters, providing integrated support for handling signals, but lacked the ability to define functions. Finally, it incorporated a number of features we use today, including command substitution (using back quotes) and HERE documents to embed preserved string literals within a script. The Bourne shell was not only an important step forward but also the anchor for numerous derivative shells, many of which are used today in typical Linux systems. Figure 1 illustrates the lineage of important shells. The Bourne shell led to the development of the Korn shell (ksh), Almquist shell (ash), and the popular Bourne Again Shell (or Bash). The C shell (csh) was under development at the time the Bourne shell was being released. Figure 1 shows the primary lineage but not all influences; there was significant contribution across shells that isn't depicted. Figure 1. Linux shells since 1977 We'll explore some of these shells later and see examples of the language and features that contributed to their advancement. Basic shell architecture The fundamental architecture of a hypothetical shell is simple (as evidenced by Bourne's shell). As you can see in Figure 2, the basic architecture looks similar to a pipeline, where input is analyzed and parsed, symbols are expanded (using a variety of methods such as brace, tilde, variable and parameter expansion and substitution, and file name generation), and finally commands are executed (using shell built-in commands, or external commands). Figure 2. Simple architecture of a hypothetical shell In the Resources section, you can find links to learn about the architecture of the open source Bash shell. Exploring Linux shells Let's now explore a few of these shells to review their contribution and examine an example script in each. This review includes the the Korn shell, and Bash. The Tenex C shell C shell was developed for Berkeley Software Distribution (BSD) UNIX systems by Bill Joy while he was a graduate student at the University of California, Berkeley, in 1978. Five years later, the shell introduced functionality from the Tenex system (popular on DEC PDP systems). Tenex introduced file name and command completion in addition to command-line editing features. The Tenex C shell (tcsh) remains backward-compatible with csh but improved its overall interactive features. The tcsh was developed by Ken Greer at Carnegie Mellon University. One of the key design objectives for the C shell was to create a scripting language that looked similar to the language. This was a useful goal, given that the primary language in use (in addition to the operating system being developed A useful feature introduced by Bill Joy in the was command history. This feature maintained a history of the previously executed commands and allowed the user to review and easily select previous commands to execute. For example, typing the command would show the previously executed commands. The up and down arrow keys could be used to select a command, or the previous command could be executed !!. It's also possible to refer to arguments of the prior command; for example, !* refers to all arguments of the prior command, where to the last argument of the prior command. Take a look at a short example of a tcsh script (Listing 1). This script takes a single argument (a directory name) and emits all executable files in that directory along with the number of files found. I reuse this script design in each example to illustrate differences. The tcsh script is divided into three basic sections. First, note that I use the shebang, or hashbang symbol, to declare this file as interpretable by the defined shell executable (in this case, the tcsh binary). This allows me to execute the file as a regular executable rather than precede it with the interpreter binary. It maintains a count of the executable files found, so I initialize this count with zero. Listing 1. File all executable files script in tcsh #!/bin/tcsh # find all executables set count=0 # Test arguments if ($#argv != 1) then echo "Usage is $0 <dir>" exit 1 endif # Ensure argument is a directory if (! -d $1) then echo "$1 is not a directory." exit 1 endif # Iterate the directory, emit executable files foreach filename ($1/*) if (-x $filename) then echo $filename @ count = $count + 1 endif end echo echo "$count executable files found." exit 0 The first section tests the arguments passed by the user. The variable represents the number of arguments passed in (excluding the command name itself). You can access these arguments by specifying their index: For #1 refers to the first argument (which is argv). The script is expecting one argument; if it doesn't find it, it emits an error message, using $0 to indicate the command name that was typed at the console ( The second section ensures that the argument passed in was a directory. The -d operator returns True if the argument is a directory. But note that I specify a ! symbol first, which means negate. This way, the expression says that if the argument is not a directory, emit an error message. The final section iterates the files in the directory to test whether they're executable. I use the convenient foreach iterator, which loops through each entry in the parentheses (in this case, the directory), and then tests each as part of the loop. This step uses the to test whether the file is an executable; if it is, the file is emitted and the count increased. I end the script by emitting the count of executables. The Korn shell (ksh), designed by David Korn, was introduced around the same time as the Tenex C shell. One of the most interesting features of the Korn shell was its use as a scripting language in addition to being backward-compatible with the original Bourne shell. The Korn shell was proprietary software until the year 2000, when it was released as open source (under the Common Public License). In addition to providing strong backward-compatibility with the Bourne shell, the Korn shell includes features from other shells (such as history from csh). The shell also provides several more advanced features found in modern scripting languages like Ruby and Python—for example, associative arrays and floating point arithmetic. The Korn shell is available in a number of operating systems, including IBM® AIX® and HP-UX, and strives to support the Portable Operating System Interface for UNIX (POSIX) shell language standard. The Korn shell is a derivative of the Bourne shell and looks more similar to it and Bash than to the C shell. Let's look at an example of the Korn shell for finding executables (Listing 2). Listing 2. Find all executable files script in ksh #!/usr/bin/ksh # find all executables count=0 # Test arguments if [ $# -ne 1 ] ; then echo "Usage is $0 <dir>" exit 1 fi # Ensure argument is a directory if [ ! -d "$1" ] ; then echo "$1 is not a directory." exit 1 fi # Iterate the directory, emit executable files for filename in "$1"/* do if [ -x "$filename" ] ; then echo $filename count=$((count+1)) fi done echo echo "$count executable files found." exit 0 The first thing you'll notice in Listing 2 is its similarity to Listing 1. Structurally, the script is almost identical, but key differences are evident in the way conditionals, expressions, and iteration are performed. Instead of C-like test operators, ksh adopts the typical Bourne-style operators ( -lt, and so on). The Korn shell also has some differences related to iteration. In the Korn shell, the for in structure is used, with command substitution to represent the list of files created from standard output for the command ls '$1/* representing the contents of the named In addition to the other features defined above, Korn supports the alias feature (to replace a word with a user-defined string). Korn has many other features that are disabled by default (such as file name completion) but can be enabled by the user. The Bourne-Again Shell The Bourne-Again Shell, or Bash, is an open source GNU project intended to replace the Bourne shell. Bash was developed by Brian Fox and has become one of the most ubiquitous shells available (appearing in Linux, Darwin, Windows®, Cygwin, Novell, Haiku, and more). As its name implies, Bash is a superset of the Bourne shell, and most Bourne scripts can be executed unchanged. In addition to supporting backward-compatibility for scripting, Bash has incorporated features from the Korn and You'll find command history, command-line editing, a directory stack useful environment variables, command completion, and more. Bash has continued to evolve, with new features, support for regular expressions (similar to Perl), and associative arrays. Although some of these features may not be present in other scripting languages, it's possible to write scripts that are compatible with other languages. To this point, the sample script shown in Listing 3 is identical to the Korn shell script (from Listing 2) except for the shebang difference (/bin/bash). Listing 3. Find all executable files script in Bash #!/bin/bash # find all executables count=0 # Test arguments if [ $# -ne 1 ] ; then echo "Usage is $0 <dir>" exit 1 fi # Ensure argument is a directory if [ ! -d "$1" ] ; then echo "$1 is not a directory." exit 1 fi # Iterate the directory, emit executable files for filename in "$1"/* do if [ -x "$filename" ] ; then echo $filename count=$((count+1)) fi done echo echo "$count executable files found." exit 0 One key difference among these shells is the licenses under which they are released. Bash, as you would expect, having been developed by the GNU project, is released under the GPL, but csh, tcsh, zsh, ash, and scsh are all released under the BSD or a BSD-like license. The Korn shell is available under the Common Public License. For the adventurous, alternative shells can be used based on your needs or taste. The Scheme shell (scsh) offers a scripting environment using Scheme (a derivative of the Lisp language). The Pyshell is an attempt to create a similar script that uses the Python language. Finally, for embedded systems, there's BusyBox, which incorporates a shell and all commands into a single binary to simplify its distribution and management. Listing 4 provides a look at the find-all-executables script within the Scheme shell (scsh). This script may appear foreign, but it implements similar functionality to the scripts provided thus far. This script includes three functions and directly executable code (at the end) to test the argument count. The real meat of the script is within the which iterates a list (constructed after write-ln after each element of the list. This list is generated by iterating the named directory and filtering it for files that Listing 4. File all executable files script in scsh #!/usr/bin/scsh -s !# (define argc (length command-line-arguments)) (define (write-ln x) (display x) (newline)) (define (showfiles dir) (for-each write-ln (with-cwd dir (filter file-executable? (directory-files "." #t))))) (if (not (= argc 1)) (write-ln "Usage is fae.scsh dir") (showfiles (argv 1))) Many of the ideas and much of the interface of the early shells remain the same almost 35 years later—a tremendous testament to the original authors of the early shells. In an industry that continually reinvents itself, the shell has been improved upon but not substantially changed. Although there have been attempts to create specialized shells, the Bourne shell derivatives continue to be the primary shells in use. - The V6 Thompson Shell Port (osh), developed and maintained by J.A. Neitzel, is a great resource for the osh source as well as the external shell utilities that it relies on (such as goto). You can also find an archive of utilities written in the Thompson shell in addition to the original source code itself. - Goosh is the unofficial Google shell, which implements a shell interface over the commonly used Google search interface. Goosh is an interesting example of how shells can be applied to nontraditional interfaces. The Bourne shell is the anchor from which our current shells were derived. The files have a certain ALGOL68 flavor that was accomplished through the use The Bourne-Again Shell is the most commonly used shell in Linux, combining features of the Bourne shell, Korn shell, Cshell. For a great read, learn about the structure and internals of Bash in the third chapter of "The Architecture of Open Source Applications." - Check out additional developerWorks articles on shell scripting, such as Daniel Robbins' "Bash by example" Part 1 (March 2000), Part 2 (April 2000), and Part 3 (May 2000). You can also learn about Korn shell scripting (June 2008) and Tcsh shell variables (August 2008). - At the kornshell site, get the latest news on the Korn shell, including documentation and other resources. - Wikipedia includes a great comparison of shells, including general characteristics, interactive features, programming features, syntax, data types, and IPC mechanisms. - Tim's article "BusyBox simplifies embedded Linux systems" (developerWorks, August 2006) explores the BusyBox application and how to add new commands to this static shell architecture. - In the developerWorks Linux zone, find hundreds of how-to articles and tutorials, as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators. - Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics. - Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools, as well as IT industry trends. - Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers. - Follow Tim on Twitter. You can also follow developerWorks on Twitter, or subscribe to a feed of Linux tweets on developerWorks. Get products and technologies - Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently. - Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
<urn:uuid:59900dd9-42ba-4454-9cdc-fc9eb98f2bf9>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/linux/library/l-linux-shells/index.html?ca=drs-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911641
3,898
2.96875
3
The US Air Force will launch a second secretive spaceship, the X-37B, tomorrow if the weather holds and all systems are go. The first X-37B, known as Orbital Test Vehicle 1, launched April 22 last year and stayed in space conducting experiments for some 220 days. The ship fired its orbital maneuver engine in low-earth orbit to perform an autonomous reentry before landing, the Air Force stated. The X-37B carries a super-secret payload, but provides what the Air Force calls a flexible space test platform to conduct various experiments with network satellite sensors, subsystems, components and associated technology, according to the Air Force. According to the Air Force the spacecraft is based on NASA's X-37 design (NASA's X-37 system was never built) and is designed for vertical launch to low Earth orbit altitudes where it can perform long duration space technology experimentation and testing. Upon command from the ground, the orbital test vehicle autonomously re-enters the atmosphere, descends and lands horizontally on a runway. The X-37B is the first vehicle since NASA's Shuttle Orbiter with the ability to return experiments to Earth for further inspection and analysis, but with an on-orbit time of 270 days can stay in space for much longer, the Air Force states. More outer space news: What's hot in space? Technologies being tested in the program include advanced guidance, navigation and control, thermal protection systems, avionics, high temperature structures and seals, conformal reusable insulation, and lightweight electromechanical flight systems, the Air Force stated. The Air Force lists the following as the basic description of the X-37B: Primary Mission: Experimental test vehicle Prime Contractor: Boeing Height: 9 feet, 6 inches (2.9 meters) Length: 29 feet, 3 inches (8.9 meters) Wingspan: 14 feet, 11 inches (4.5 meters) Launch Weight: 11,000 pounds (4,990 kilograms) Power: Gallium Arsenide Solar Cells with lithium-Ion batteries Launch Vehicle: Lockheed-Martin Atlas V (501) Launch specialists at the Air Force Space Command's 45th Space Wing at Patrick Air Force Base, Fla., will launch the vehicle from Cape Canaveral Air Force Station on an Atlas V rocket from Space Launch Complex-41. The vehicle will land at Vandenberg Air Force Base, Calif., and will be recovered by the 30th Space Wing. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:c4354eb9-349e-4147-92db-e52a4a398e2f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2228671/security/air-force-set-for-second-super-secret-spacecraft-blast-off.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.888704
533
3.09375
3
Harnessing Creativity to Make Powerful Decisions The creative process is driven by imagination. The more imaginative one is, the greater one's potential for creativity. The injection of creativity into problem analysis broadens the base of information and ideas that are ultimately incorporated into the selection of a solution. If problem analysis is to be imaginative, then it must therefore be imposed on the decision-making process. A major problem is that too few of us are imaginative. In his seminal study on creativity, Why Didn't I Think of That?, Charles W. McCoy Jr. reports that children lose one-half of their creativity between the ages of five and seven, and adults over forty retain less than two percent of the creative drive they had as children. The implications are that the average person of working age is not very creative. This is where creativity tools come into play. These tools are simple techniques that help ensure the analysis of a problem has greater breadth and depth than it might otherwise possess. Learn the necessary skills to build a creative toolbox and improve your decision-making skills. Creativity is Imagination The creative process is driven by imagination. The more imaginative one is, the greater one’s potential for creativity. George Bernard Shaw said, “Imagination is the beginning of creation. We imagine what we desire; we will what we imagine; and at last we create what we will.” Imagination is the internal process that drives the external expression which is perceived as creativity. Unfortunately, the problem is that too few of us are imaginative. In his seminal study on creativity, Why Didn’t I Think of That?, Charles W. McCoy Jr. reports children lose one-half of their creativity between the ages of five and seven, and adults over forty retain less than two percent of what they had as children. The implication is that the average working age person is not very creative. If problem analysis is to be imaginative, then it must therefore be imposed on the decision-making process. This is where creativity tools come into play. These tools are simple techniques that help ensure the analysis of a problem has greater breadth and depth than it might otherwise. Judgments are the Problem It is judgmental thinking that the tools of creativity are helping to overcome. Judgments are limits that hold back our thinking. People become increasingly judgmental as they age, and we presume to understand and “know” things. Over time, we become more opinionated and increasingly rigid in our perceptions of ourselves, others, and everything around us. Being judgmental is the basis of bigotry and preconceived notions about anyone or anything. It is the reason so many decision makers are narrow minded in their perspective or analysis of a problem and its solutions. Judgments are what keep us all “in a box.” They are why one person will consider a particular act “reasonable” while another would label it “outrageous.” Judgments are boundaries that we impose on ourselves; they are the limits on our imaginations and, therefore, our creativity. Self-judgment stems from fear of embarrassment or a rigid mindset that does not believe the imagination should be permitted to wander. Left to atrophy, the imagination eventually becomes unable to be spontaneous. The techniques described below are tools that help to get the creative juices flowing. Regular practice is needed in order for them to work well. Imagination takes time to do its magic. If you want creative solutions, you need to allow time for the imagination to perform. The optimum solution can only be discovered if imaginative thinking is given the time and tools to conceive it. Charles W. McCoy Jr. writes, “Imagination plays a crucial role in all genuine creative thinking, because it allows the mind to see the unseen, envision the invisible, and transform ideas into reality.” The more time and technique that is applied to the creative side of problem analysis, the more likely you are to fully understand a problem before arriving at a decision. The key to being truly creative is the ability and willingness to recognize the assumptions and beliefs that underlie perceptions of a problem and to think beyond them. Questioning the “norm” is an act of courage. To imagine courageously is to question tradition, to defy logic, and to refuse to conform. Imagining courageously is about openly questioning what we, as well as others, believe to be true about a situation or issue. It is about suggesting the outrageous. Being courageous can be controversial and even dangerous. It takes courage to recognize what is conventional wisdom and to then think beyond it in a creative and productive way. According to Charles W. McCoy Jr., “Genuine creativity requires raw courage; never flees from adversity, frustration or even failure; challenges conventional wisdom; and vigorously explores beyond the first workable answer to find the very best solution imaginable.” Imagining courageously is all about suspending judgment. Do not let “group think” control your thought processes. Actively and openly look for the boundaries of colleagues’ mental boxes as well as your own, and then cast your imagination outside those boundaries—even if doing so might offend.
<urn:uuid:71c4a2a9-c298-4ad9-9432-b3be04cfb2fa>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/harnessing-creativity-to-make-powerful-decisions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962458
1,067
3.375
3
Why do robots become astronauts? Simple: Because they like space. Kirobo, a Japanese robot, blasted off for the International Space Station last week. The robot, which can speak and recognize faces, is going to space with the primary mission of keeping astronauts company. Despite this, poor Kirobo will be traveling to the ISS alone, and will be unloaded and stowed away until astronaut Koichi Wakata, who Kirobo will be speaking to and interacting with, arrives in November. Of course, one part of Kirobo’s purpose is also to provide companionship to people living alone in general. So, naturally, the best place to test the 13-in. robot out… is in space. Kirobo is the result of collaboration between Dentsu, the University of Tokyo’s Research Center for Advanced Science and Technology, Robo Garage and Toyota. Robo Garage and the University of Tokyo worked on the hardware, Toyota worked on the voice recognition and Dentsu created the conversation content. From what I can tell, talking to Kirobo is just like talking to a real person. I mean, a 2.2 pound real person with a limited number of topics to discuss…
<urn:uuid:1ecb66a9-755c-4795-adc0-977e8d6b7dd0>
CC-MAIN-2017-04
http://www.computerworld.com/article/2474411/emerging-technology/one-small-step-for-kirobo--one-giant-leap-for-robot-kind.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00412-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915263
251
2.953125
3
Sometimes we want to perform the coverage analysis of the input file: to find areas of the program not exercised by a set of test cases. These test cases may come from a test suit or you could be trying to to find a vulnerability in the program by ‘fuzzing’ it. A nice feedback in the form of a list of ‘not-yet-executed’ instructions would be a nice addition to blind fuzzing. The straightforward way of creating such an analyzer in IDA would be to use the built-it instruction tracer. It would work for small (toy-size) programs but would be too slow for real-world programs. Besides, multi-threaded applications can not be handled by the tracer. To tell the truth, we do not really need to trace every single instruction. Noting that the instruction at the beginning of a basic block gets executed would be enough. (A basic block is a sequence of instructions without any jumps into the middle). Thanks to the cross-reference and name information in IDA, we can discover them quite reliably, especially in the compiler generated code. So, a more clever approach would be to set a breakpoint at the beginning of each basic block. We would keep a breakpoint at place until it fires. As soon the breakpoint gets triggered, we remove it and let the program continue. This gives us tremendous speed boost, but the speed is still not acceptable. Since an average program contains many thousands basic blocks, just setting or removing breakpoints for them is too slow, especially over a network link (for remote debugging). To make the analyzer to work even faster, we have to abandon IDA-controlled breakpoints and handle them ourselves. It seems difficult and laborious. In practice, it turns out to be very easy. Since we do not have ‘real’ breakpoints that have to be kept intact after firing, the logic becomes very simple (note that the most difficult part of breakpoint handling is resuming the program execution after it: you have to remove the breakpoint, single step, put the breakpoint back and resume the execution – and the debugged program can return something unexpected at any time, like an event from another thread or another exception). Here is the logic for simple one-shot breakpoints: if we get a software breakpoint exception and it’s address is in the breakpoint list remove the breakpoint by restoring the original program byte update EIP with the exception address resume the program execution This algorithm requires 2 arrays: the breakpoint list and the original program bytes. The breakpoint list can be kept as vector<bool>, i.e. one bit per address. Anyway, enough details. Here are some pictures. This view of the imported function thunks gives as live view of executed functions from Windows API (green means executed): If you continue to run the program, more and more lines will be painted green. In the following picture we see what instructions were executed and what were not: We see that the jump at 40158A was taken and therefore ESI was always 1 or less. If we collapse all functions of the program (View, Hide all), then this ‘bird eye’ view will tell us about the executed functions The last picture was taken while running IDA in IDA itself. We see the names of user-interface functions. It is obvious that I pressed the Down key but haven’t tried to press the Up/Left/Right keys yet. I see how the plugin can be useful for in-house IDA testing… Here is the plugin: http://www.hexblog.com/ida_pro/files/coverit.zip As usual, it comes with the source code. IDA v5.0 is required to run it. There are many possible improvements for it: - Track function execution instead of basic block execution. - Create a nice list of executed/not-executed functions. - Create something like a navigation band to display the results (in fact it is not very difficult, just create a window and draw on it pixel by pixel or, rather, rectangle by rectangle) - Count the number of executions. Currently the plugin detects only the fact of the instruction execution but does not count how many times it gets executed. Counting will slow things down but I’m sure that it can still be made acceptably fast. - Monitor several segments/dlls at once. The current version handles only the first code segment coming from the input file (so called loader segment). It can be made to monitor the whole memory process excluding the windows kernel code that handles exceptions. - Port to other platforms and processors. For the moment the code is MS Windows-oriented (the exception and breakpoint codes are hardcoded). Seems to be easy. - Make the plugin to remember the basic block list between runs. This will improve the startup speed of subsequent runs. - Add customization dialog box (the color, function/bb selector), in short, everything said above can be parameterized. This plugin demonstrates how to do some tricky things in IDA: how to refresh the screen only when it is really necessary, to hook low-level debugger functions, to find basic blocks, etc. Have fun and nice coverage!
<urn:uuid:24f8e42c-eb69-4704-8f11-bfc108f4d1f2>
CC-MAIN-2017-04
http://www.hexblog.com/?p=34
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00072-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935302
1,108
2.53125
3
XML Encryption Flaw Leaves Web Services VulnerableApache, Red Hat, IBM, Microsoft, and other major XML framework providers will need to adopt new standard, say German researchers who found the flaw. 10 Companies Driving Mobile Security (click image for larger view and for slideshow) Watch your Web Services: the official XML Encryption Syntax and Processing standard can be broken. So say two researchers from Ruhr-University Bochum in Germany, who have demonstrated a practical attack against XML's cipher block chaining (CBC) mode. "We were able to decrypt data by sending modified ciphertexts to the server, by gathering information from the received error messages," according to a statement released by the researchers, Juraj Somorovsky and Tibor Jager. They presented their findings in detail at last week's ACM Conference on Computer and Communications Security in Chicago. XML, aka "eXtensible Markup Language," is a widely used technique for storing and transporting data, and is a fundamental Web Services component. "XML Encryption was standardized by W3C in 2002, and is implemented in XML frameworks of major commercial and open-source organizations like Apache, Red Hat, IBM, and Microsoft," the researchers said in their paper. "It is employed in a large number of major Web-based applications, ranging from business communications, e-commerce, and financial services [and] healthcare applications, to governmental and military infrastructures." [ How do we make things more secure? Top FBI Cyber Cop Recommends New Secure Internet. ] Before releasing their paper, the researchers said they notified all affected XML framework providers--including Amazon.com, IBM, Microsoft, and Red Hat Linux--via the W3C mailing list (following responsible disclosure practices) and with some, engaged in "intensive discussions on workarounds." The potential exploit resembles a padding oracle attack--referring not to Oracle the vendor, but rather to a cryptographic concept, for which attacks were first introduced in 2002. Padding oracle attacks involve submitting bogus messages to a targeted system, then using the information returned by that system to ultimately crack its encryption. This attack, similarly, "exploits a subtle correlation between the block cipher mode of operation, the character encoding of encrypted text, and the response behavior of a Web Service if an XML message cannot be parsed correctly," said the researchers. In other words, by sending ciphertext to a targeted Web Service, and then evaluating the response returned by the server, the encryption scheme may be deduced. "We show that an adversary can decrypt a ciphertext by performing only 14 requests per plaintext byte on average," they said. "This poses a serious and truly practical security threat on all currently used implementations of XML Encryption." Amazon.com on Thursday acknowledged the researchers' work, saying that it had fixed the underlying XML-based messaging protocol--known as SOAP (for simple object access protocol)--vulnerabilities in its Elastic Compute Cloud (EC2), and that no customers had been affected by the potential attacks. "The research showed that errors in SOAP parsing may have resulted in specially crafted SOAP requests with duplicate message elements and / or missing cryptographic signatures being processed," according to the Amazon Web Services security bulletin. "If this were to occur, an attacker who had access to an unencrypted SOAP message could potentially take actions as another valid user and perform invalid EC2 actions." What's the best way to eliminate the XML Encryption vulnerability? Unfortunately, "there is no simple patch for this problem," said Somorovsky at Ruhr-University Bochum. "We therefore propose to change the standard as soon as possible." The most likely fix, according to the researchers, will involve replacing the CBC mode in XML Encryption with something that provides not just message confidentiality--as it does now--but also message integrity. "Adequate choices have for instance been standardized in ISO/IEC 19772:2009," they said. "We consider this solution as very recommendable for future versions of the XML Encryption standard." But adopting a new approach in future versions of the XML Encryption standard would likely have side effectives, including "deployment and backwards compatibility issues," they said.
<urn:uuid:2fa923a0-51b2-4401-a145-6d80820f26e4>
CC-MAIN-2017-04
http://www.darkreading.com/vulnerabilities-and-threats/xml-encryption-flaw-leaves-web-services-vulnerable/d/d-id/1100912
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00560-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94255
879
2.625
3
Google engineers are trying to make sure that their autonomous cars are extra cautious around children. The company explained in a blog post over the weekend that engineers had asked employees and their little trick-or-treating ghosts and goblins to trek around some parked autonomous cars. That gave the cars' sensors and software extra practice time recognizing kids in different shapes and sizes -- and even costumes and masks. "Halloween's a great time to get some extra learning done," wrote the team from Google's Self-Driving Car Project. "We teach our cars to drive more cautiously around children. When our sensors detect children -- costumed or not -- in the vicinity, our software understands that they may behave differently." Engineers want the cars to recognize children and be aware that they are more apt than adults to dart into the road, be obscured by parked cars or run down a sidewalk, chasing a ball. Safety's important: Even more of Google's self-driving cars are on the road today than there were this past spring. By late September, Google had licenses for 73 autonomous cars in its fleet. That's more than triple the 23 licenses it had last May. The company also has been talking with executives at car companies in Detroit, looking for a partner to build its autonomous cars one day. Googlers may be working on the software and artificial intelligence to run driverless cars, but the company doesn't necessarily want to get into the automobile manufacturing business. For that, Google would like to team up with an experienced manufacturer. However, it looks like Google is going to have some competition. Early in September, it was reported that Toyota is teaming up with Stanford University and MIT to work on the artificial intelligence needed to make the auto manufacturer's cars more autonomous. Toyota is investing up to $50 million in the project over the course of five years. This story, "Google to autonomous cars: Brake for kids!" was originally published by Computerworld.
<urn:uuid:7eac2516-c14e-4e6c-ae90-d8945636d312>
CC-MAIN-2017-04
http://www.itnews.com/article/2999987/emerging-technology/google-to-autonomous-cars-brake-for-kids.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00376-ip-10-171-10-70.ec2.internal.warc.gz
en
0.978512
403
2.640625
3
Fan Z.L.,CAS Institute of Subtropical Agriculture | Fan Z.L.,University of Chinese Academy of Sciences | Wang Y.,CAS Institute of Subtropical Agriculture | Sun Q.,University of Sichuan | And 6 more authors. Shengtai Xuebao/ Acta Ecologica Sinica | Year: 2015 Hunshandake Sandland is one of the biggest areas of semi-desert in China. It is also the primary source of sandstorms affecting Beijing and Tianjin. In recent years, scientists and the Chinese government have paid increased attention to the ecological restoration and reconstruction of the Hunshandake Sandland. To restore the vegetation of this region, the primary problem is how to control the populations of rodents.The striped hamster (Cricetulus barabensis) is a widely distributed species in the grasslands of Inner Mongolia. Striped hamsters like to eat plant seeds, which account for 70% of all their food. As a result, this animal has a negative impact on the restoration of vegetation on sandy land. It is necessary to find an effective way to control their population. Traditional chemical rat poisons can kill most individuals of a population in short time. However, thanks to their high reproductive ability, rodent populations are able to recover rapidly. Moreover, traditional chemical rat poisons pollute the environment and hurt nontarget animals at the same time.EP-1 is a new type of contraceptive compound, the main ingredients of which are levonorgestrel and quinestrol. EP-1 does not kill its target animals; instead, it affects the female reproductive system but the animals are able to recover from the damage. Laboratory experiments have shown that EP-1 has a remarkable impact on controlling the reproduction of rodents. Moreover, EP-1 has little effect on nontarget animals. Environmental pollution from EP-1 is also less than that from traditional chemical rat poisons.To test the effect of EP-1 on reproduction in the striped hamster, EP-1 baits were placed in the Hunshandake Sandland in April 2003. We chose four plots: two were baited and the others were control areas. Monthly trapping censuses were conducted to monitor the reproductive parameters of the rodent population during June to October. EP-1 did not influence the sex ratio of the striped hamster, and no significant differences were observed in the proportions of male hamsters between the baited and control areas (Student's t test, P < 0.05). EP-1 baiting obviously influenced the age structure of the striped hamster population, as the proportion of juvenile animals found in the baited area was only 40%-50% of that in the control area (Student's t test, P < 0.05). This impact lasted for more than 4 months. EP-1 baiting obviously influenced reproductive parameters. In the baited areas, EP-1 caused damage to the uteri of 70%-80% adult female hamsters, and in June 100% of the uteri were damaged. The organs turned black and uterine cysts were evident. This impact lasted for more than 5 months. Moreover, EP-1 baiting significantly reduced female fertility as no pregnant females were found in baited areas in June. The pregnancy rates in the baited areas were also very low in July and August: significantly different from the control area (Student's t test, P < 0.055). Litter sizes in the baited area were also influenced by EP-1 and were significantly lower than in the control area (pooled Student's t tests, P < 0.05). The impact of EP-1 on these rodents lasted for more than 4 months after a single baiting in spring, suggesting that it can influence the whole breeding season of this species. This fertility control effect might be related to the foraging behavior of striped hamsters. The impact of EP-1 baiting on the hamster populations declined with time, suggesting that female hamsters might be able to recover from the damage caused by EP-1. This recovery might also be explained by dispersal of the hamsters. © 2015, Ecological Society of China. All rights reserved. Source
<urn:uuid:80a536fa-4999-4951-9942-454c33d7682d>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/forest-bureau-station-of-xilin-gol-league-668581/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961244
859
3.140625
3
Blomqvist M.,Karolinska Institutet | Ahadi S.,Karolinska Institutet | Fernell E.,Autism Center for Young Children | Fernell E.,Gothenburg University | And 2 more authors. European Journal of Oral Sciences | Year: 2011 This study tested the hypothesis that adolescents with attention deficit hyperactivity disorder (ADHD) exhibit a higher prevalence of caries than adolescents in a control group. Thirty-two adolescents with ADHD and a control group of 55 adolescents from a population-based sample, all 17yr of age, underwent a clinical and radiographic dental examination. The mean±SD number of decayed surfaces (DS) was 2.0±2.2 in adolescents with ADHD and 0.9±1.4 in adolescents of the control group. Thirty-one per cent of the adolescents in the ADHD group had no new caries lesions (DS=0) compared with 62% in the control group. Six per cent of the adolescents in the ADHD group were caries free [decayed, missing or filled surfaces (DMFS)=0] compared with 29% in the control group. Adolescents with ADHD also had a higher percentage of gingival sites that exhibited bleeding on probing compared with the control group: 35±39% vs. 16±24% (mean±SD), respectively. At 17yr of age, adolescents with ADHD exhibited a statistically significantly higher prevalence of caries compared with an age-matched control group. Adolescents with ADHD need more support regarding oral hygiene and dietary habits. They should be followed up with shorter intervals between dental examinations to prevent caries progression during adulthood. © 2011 Eur J Oral Sci. Source Klintwall L.,Autism Center for Young Children | Holm A.,Autism Center for Young Children | Holm A.,Karolinska University Hospital | Eriksson M.,Autism Center for Young Children | And 8 more authors. Research in Developmental Disabilities | Year: 2011 Sensory abnormalities were assessed in a population-based group of 208 20-54-month-old children, diagnosed with autism spectrum disorder (ASD) and referred to a specialized habilitation centre for early intervention. The children were subgrouped based upon degree of autistic symptoms and cognitive level by a research team at the centre. Parents were interviewed systematically about any abnormal sensory reactions in the child. In the whole group, pain and hearing were the most commonly affected modalities. Children in the most typical autism subgroup (nuclear autism with no learning disability) had the highest number of affected modalities. The children who were classified in an " autistic features" subgroup had the lowest number of affected modalities. There were no group differences in number of affected sensory modalities between groups of different cognitive levels or level of expressive speech. The findings provide support for the notion that sensory abnormality is very common in young children with autism. This symptom has been proposed for inclusion among the diagnostic criteria for ASD in the upcoming DSM-V. © 2010 Elsevier Ltd. Source
<urn:uuid:3e5a8017-7db9-485e-820f-5760d2b222ce>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/autism-center-for-young-children-1586089/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936919
638
2.90625
3
As a systems programmer, from time to time you probably write small utility programs to make your job easier. You might write them in REXX or even Assembler. But have you considered Java*? Java has become well accepted as an application programming language on z/OS. IBM’s Java Batch Launcher and Toolkit (JZOS) also makes it simple to run Java in batch and provides classes to access various system services. This makes Java a real alternative for system programmer batch utilities. Java has some major advantages over REXX and Assembler. Built-in libraries like Java Collections simplify programming problems. Portable code means that samples published on the internet for other platforms can be easily adapted to z/OS*. Free and open source libraries provide solutions for common problems. As a result, many problems are more easily solved in Java than other languages on z/OS. The Collections Framework is possibly the most important tool in the Java toolbox. Two of the most useful collections are the ArrayList and the HashMap. The ArrayList is an array that resizes as required. The HashMap is a collection of items accessed by key. HashMap simplifies all sorts of tasks. For example, when comparing two lists of items with the same keys (e.g., catalog entries), you can store one list in a HashMap and then compare items from the other list, with no dependency on the order of items. You can also use a HashMap to accumulate statistics by key, which makes calculating group totals simple—again with no requirement that the data be in order. Other more specialized collections implement sets, queues and stacks as well as other types of lists and maps. Sample Code and Open Source Libraries Whatever your problem, chances are that someone else has been there before—but not necessarily on z/OS. The ability to use samples published for other platforms is a great time saver. For example, sending email from z/OS through Gmail, with Transport Layer Security and user authentication took about 30 minutes to implement using samples posted on the internet. There are also free and open source libraries providing more comprehensive solutions to various common problems. Apache Commons and Google Guava are well-known examples. Why not give Java a try, and see what problems it can solve for you? For more information, Java on z/OS examples and a Java API for SMF data visit:
<urn:uuid:e4e139c5-d976-4b6d-a69a-d886b84720cc>
CC-MAIN-2017-04
http://ibmsystemsmag.com/buyersguide/blackhillsoftware/Systems-Programmer-Productivity-Tools-using-Java-o/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00274-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928383
491
2.578125
3
Technology is being tested that allows heat generated by computers to warm offices and homes. IBM has launched a trial in Switzerland that could see the heat produced by large datacentres being recycled to heat offices. The three-year trial of the Aquarsar system could reduce carbon emissions by 85% because of lower demand for central heating, and less energy being needed to cool processors inside PCs. By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. Datacentres are responsible for a large share of global energy consumption. This is growing as use of the internet grows and developing countries strengthen their technology industries and infrastructures. The Guardian reported that in 2005 datacentres were responsible for 1% of global electricity consumption - double the figure of five years earlier. The figure is thought to be rising rapidly but it is not totally clear by how much because companies often will not disclose how many datacentres they run and how much energy they use. Tom Dowdall, green electronics campaign co-ordinator at Greenpeace, said the IBM trial was a good example of what could be achieved. But he added there are not enough incentives for companies to improve the efficiency of their datacentres. "The main driver for change is the price of electricity because companies want to cut their bills. But in the last couple of years the price has fallen. There's no regulation - there should be more incentives for companies to cut electricity use. This is a good example but it's not enough." Market analysis firm Datamonitor says green IT could jump ahead during the economic downturn. The company released research showing that flat IT budgets in 2009 have provided a new motivation for cost cutting green measures. It said, "Flat IT budget growth means that organisations that face critical datacentre limitations, such as a shortage of floor or rack space, are looking to software or outsourcing alternatives to building new datacentres or upgrading existing facilities."
<urn:uuid:f336c67f-f934-4d8a-8679-711311a7efb8>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280090029/Computers-could-soon-be-heating-buildings-IBM
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00000-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966335
409
3.1875
3
An attacker needs to destroy evidence of his presence and activities for several reasons like being able to maintain access and evade detection (and the resulting punishment). Erasing evidence of a compromise is a requirement for any attacker who wants to remain obscure and evade trace back. This usually starts with erasing the contaminated logins and any possible error messages that may have been generated from the attack process. For instance, a buffer overflow attack usually leaves a message in the system logs. Next, attention is turned to affecting changes so that future logins are not logged. By manipulating and tweaking the event logs, the system administrator can be convinced that the output of her system is correct and no intrusion or compromise actually took place. Since, the first thing a system administrator does to monitor unusual activity is check the system log files, it is common for intruders to use a utility to modify the system logs. In some extreme cases, rootkits can disable logging altogether and discard all existing logs. This happens if the intruders intend to use the system for a longer period of time as a launch base for future intrusions. They remove only those portions of logs that can reveal their presence. It is imperative for attackers to make the system look like it did before they gained access and established backdoors for their use. Any files that were modified need to be changed back to their original attributes. Trojans such as ps or netcat come in handy for any attacker who wants to destroy the evidence from the log files or replace the system binaries with the same. Once the Trojans are in place, the attacker can be assumed to have gained total control of the system. Rootkits are automated tools designed to hide the presence of the attacker. By executing the script, a variety of critical files are replaced with trojanned versions, hiding the attacker with ease. Other techniques include Steganography and tunneling. Steganography is the process of hiding the data, for instance in images and sound files. Tunneling takes advantage of the transmission protocol by carrying one protocol over another. Even the extra space (e.g. unused bits) in the TCP and IP headers can be used for hiding information. An attacker can use the system as a cover to launch fresh attacks against other systems or use it as a means of reaching another system on the network without being detected. Thus, this phase of attack can turn into a new cycle of attack by using reconnaissance techniques all over again. Certified Ethical Hacker v7
<urn:uuid:ee6db90c-0826-4a86-ac2b-1e340aa34a9d>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/08/30/the-5-phases-of-hacking-covering-your-tracks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94592
505
3.0625
3
Every day, people, machines and the world’s growing multitude of sensors create more than 2.5 exabytes of data – that’s a 2.5 followed by 18 zeros – a bonanza of bits and bytes that is in many ways a double-edged sword. On one hand, private sector companies and the government are able to collect more data than ever for analysis – ideally, that’s a great thing. Never in human history has humanity had access to the kinds of data it does now. Yet big data sets are also attractive to hackers and malicious actors who see more data as more money or intelligence to steal. The two disciplines – cybersecurity and big data – are beginning to meld so that it’s difficult to talk about one without the other. Agencies across government are learning to better detect and analyze cyber threats, and one of the ways they are doing so involves big data. For example, agencies might sift through huge piles of data as they monitor traffic in and out of a network in real time to detect potentially adversarial anomalies. It takes a lot of technological horsepower to analyze that information, but the insight it provides could be the difference between a massive leak or media frenzy and business as usual. How else are cybersecurity and big data are linked today, and what might those roles look like in the future? On Tuesday, June 3, Nextgov will host a trio of speakers to discuss these issues at the Ronald Reagan Building in Washington, D.C. The panel is comprised of: Roberta Stempfley, Deputy Assistance Secretary for Cybersecurity Strategy and Emergency Communications, Office of Cybersecurity and Communications, Department of Homeland Security Diana Burley, Professor, George Washington University’s Graduate School of Education and Human Development Roger Hockenberry, CEO, Cognitio and Former CTO, Central Intelligence Agency Expect an interactive conversation surrounding big data and cybersecurity that touches on real-time threat detection, what agencies are (or should be) doing with data breach information and how the government can make use of existing technologies to prepare for future cyber adversaries. Register for the event here.
<urn:uuid:bcec90a6-ccb2-49c3-b107-b823b196d4f5>
CC-MAIN-2017-04
http://www.nextgov.com/technology-news/tech-insider/2014/06/big-datas-coming-role-cybersecurity/85588/?oref=ng-relatedstories
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938789
442
2.546875
3
While texting has become a popular mode of communication, enabling public safety answering points to join the conversation in times of emergencies has proven difficult. Durham, N.C., began accepting text messages sent to its 911 dispatch center by Verizon Wireless customers on a pilot basis on Aug. 3. The test is expected to run through January 2012. Local public safety officials see texting 911 as a way to reach hard-of-hearing individuals and people in situations where making noise could put them in greater danger. In addition, media reports have highlighted instances in which disaster survivors were able to send text messages when their wireless phones did not have enough signal to complete a call. As part of the pilot, Verizon Wireless configured its system to allow text messages to be sent to the Durham Emergency Communications Center, which installed software that recognizes that a text message sent to 911 is an emergency message. This means a text message sent to 911 by a Verizon Wireless subscriber within Durham is routed to the appropriate call center. Both the city’s communications center and Verizon are using Intrado systems to handle the messages. Calls from cell phones to the center are accompanied by the caller’s phone number and an approximate location based on the nearest cell tower. However, text messages are not routed through Verizon Wireless’ enhanced-911 infrastructure in the same way, spokeswoman Debra Lewis wrote in an e-mail. Because of this, a text message sent to the 911 call center would not be recognized by Verizon as an emergency message, so location information would not be sent. When a message comes in on Durham’s Intrado next-generation 911 system, an icon on the dispatcher’s screen lights up and the dispatcher hears a ringing sound. Clicking on the icon retrieves the message and begins the exchange. “The first question they’ll ask is ‘Where are you?’” said James Soukup, emergency communications director for Durham. “Unless they tell us that, we can’t help them.” From there the dispatcher can get other details from the subscriber to pass on to responders. From a technical perspective, two things need to happen for public safety answering points (PSAPs) to receive text messages sent from 911, according to Dami Hummel, vice president and general manager of Intrado’s mobility division. First, the carrier needs to set up its network to route all short message service (SMS) text messages to its 911 services vendor, which aggregates the messages and routes them to the appropriate PSAP. PSAPs need to have software installed that knows what to do with them. “The beauty with the SMS solution is that the protocols and interfaces for SMS are already developed and there are standards today for that,” Hummel said. “Now the vendors have to develop how they’re going to receive SMS and how the guts of the systems work.” So far, several factors have prevented PSAPs from accepting text messages during emergencies. These include a lack of federal regulations governing the use of SMS standards in relation to 911 and a lack of funds for PSAPs to upgrade their systems. However, on Wednesday, Aug. 10, the FCC unveiled a five-step plan to move the nation onto next-generation 911, which includes facilitating the completion and implementation of technical standards. Issues to be evaluated in the Durham pilot include the speed of message delivery, the use of the three-digit number instead of a standard telephone number, and if text messages can be sent when a subscriber’s phone doesn’t have a strong enough signal for a voice call . Other jurisdictions, such as Marion County, Fla., have set up text-to-911 systems that require messages be sent to 10-digit numbers. Hummel said a variety of systems have routed messages in ways that didn’t always get messages to call-takers in a timely fashion. Prior to the public pilot, the Durham Emergency Communications Center conducted internal testing that looked at scenarios including how it would handle multiple text messages that are received at once and what impact that could have on response. “Will it just take us longer to respond if you’re No. 30?” Soukup said. “That’s one aspect of it.” As of Aug. 5, the center had not received any text messages from Verizon subscribers. “If one message means we saved a life, it’s been effective,” Soukup said.
<urn:uuid:fd771214-f233-4b58-a6f6-4420aa90418f>
CC-MAIN-2017-04
http://www.govtech.com/public-safety/Durham-NC-Public-Pilot-Texting-911.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00210-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942644
942
2.53125
3
GCN LAB IMPRESSIONS Heat sink revolution: Sandia Cooler smaller, quieter, 30X more efficient - By Greg Crowe - Jul 11, 2012 We all know that the main enemy to computer components' lifespan and performance is heat. Keeping heat out of the parts that make the computer work -- in particular the processor -- and sending it away has always been a challenge. It would be easy if you had the resources to sink your systems into stuff like liquid nitrogen, of course, but how many of us do? If you aren’t a Bond villain, the odds of that are pretty low. But Sandia National Laboratories may have come up with an answer: a new type of air-cooled heat exchanger for processors and other chips that normal folks and government agencies can use. Hot enough for you: Data center cooling system heats buildings The typical approach to cooling a computer is to have a heat sink made up of metal foils in physical contact with the chip, and a circular fan positioned to draw the hot air away from the heat sink. Unfortunately, this can create pockets of dead air among the foils, which of course just keep getting hotter. Researchers at Sandia have managed to combine the heat sink and fan into one with a rotating fin structure. Dubbed the Sandia Cooler, it looks like a set of curved heat sink foils that spiral out from the center in a clockwise pattern. When the array is spun counterclockwise, a mini vortex is created in the middle that draws air down into the structure and pushes it out along the curved channels between the foils. This, of course, cools the foils and keeps any air from forming pockets. Sandia said the cooler is 10 times smaller that current CPU coolers and 30 times more efficient. And it's more energy-efficient and significantly quieter to boot. More details about the project can be found in Sandia's presentation or in the video below.
<urn:uuid:547a7269-bd92-434a-9016-73ee3164a5d3>
CC-MAIN-2017-04
https://gcn.com/articles/2012/07/11/sandia-cooler-30-times-more-efficient-at-cooling-pcs.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00118-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925787
406
3.140625
3
With many businesses looking to expand their number of data sources to improve their analytics projects, one area that's seeing a great deal of focus is how companies capture and evaluate data gleaned from social media. As Facebook claims to have over a billion users, and Twitter in excess of 300 million active users, there is clearly a huge amount of potential data on these platforms that can give businesses an insight into what their customers are thinking. At the same time, affordable big data analytics tools such as Hadoop that are capable of handling large amounts of this unstructured data have allowed many more enterprises to take advantage of the opportunities this opens up. But are companies relying too heavily on the information they gain from social media? A new study by Northwestern University has suggested that in many cases, businesses may not be accounting for the systemic biases that these platforms have. Researcher at the institution professor Eszter Hargittai, who heads the Web Use Project, explained that the key thing businesses must bear in mind when analysing social media data is that their subjects are self-selecting. That is to say, they do not use sites such as Facebook and Twitter randomly, but make a conscious choice to engage. This means the data they produce may be potentially biased in terms of demographics, socioeconomic background or internet skills, the research stated. This can have significant implications for businesses and other organisations that use big data, because it excludes certain segments of the population and could lead to unwarranted or faulty conclusions. Prof Hargittai said: "Many data sets that use so-called 'big data' rely on social network sites such as Facebook and Twitter. But studies rarely discuss that people who select into using Facebook and Twitter don't necessarily represent larger populations." For example, a local authority may turn to Twitter to collect local opinions about how to improve the community. But in cases like this, it will be vital for them to understand what sort of cross-section of people will be likely to respond to the question. "You could be missing half the population, if not more. The same holds true for companies who only use Twitter and Facebook and are looking for feedback about their products," prof Hargittai said. "It really has implications for every kind of group." The data examined by the research revealed there there are several factors that influence what social media sites consumers choose to interact on, such as age and gender. Prof Hargittai said: "Even among young adults who are generally thought of as the most active on social network sites, we see socioeconomic differences when it comes to Twitter and Tumblr. We also see gender and skill differences on who is on what site." Therefore, these biases will have to be taken into account when businesses are incorporating social media into their big data projects. By being aware of the potential for the results to be skewed, companies will be able to adjust their operations accordingly, add other sources to create a more complete picture and ensure their projects stand the best chance of success.
<urn:uuid:92c8ab19-fa34-4b47-858a-277fc901146a>
CC-MAIN-2017-04
http://kognitio.com/businesses-must-be-aware-of-social-media-big-data-biases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00514-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956543
614
2.828125
3
On Wednesday, the Nobel Prize in Chemistry was awarded to three scientists for pioneering methods in computational chemistry that have brought a deeper understanding of complex chemical structure and reactions in biochemical systems. These methods can precisely calculate how very complex molecules work and even predict the outcomes of very complex chemical reactions. One of the laureates—Martin Karplus of Harvard University—has been using supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) since 1998. The other laureates were Michael Levitt of Stanford University and Arieh Warshel of the University of Southern California. According to the Royal Swedish Academy, these accomplishments have opened up an important collaboration between theory and experiment that has made many otherwise unsolvable problems solvable. “Today the computer is just as important a tool for chemists as the test tube. Simulations are so realistic that they predict the outcome of traditional experiments,” writes the Royal Academy in its announcement of the winners. Supercomputers and Modern Chemistry Long gone are the days when chemists used plastic balls and sticks to create models of molecules. Today, modeling is carried out on computers, and Karplus’ work helped lay the foundation for the powerful programs that are used to understand and predict chemical processes. These models are crucial for most of the advances made in chemistry today. Because chemical reactions happen at lightning speed, it is impossible to observe every step in a chemical process. To understand the mechanics of a reaction, chemists build computer models of these events to study them in detail. The models also allow researchers to look at these reactions at different scales, from electrons and nuclei at sub-atomic scale to large molecules. Karplus, Levitt and Warshel, revolutionized the field of computational chemistry by making Newton’s classical physics work side-by-side with fundamentally different quantum physics. Previously, researchers could only model one or the other. Classical physics models were ideal for modeling large molecules, but they couldn’t capture chemical reactions. For that purpose, researchers instead had to use quantum physics. But these calculations required so much computing power that researchers could only simulate small molecules. By combining the best from both physics worlds, researchers can now run simulations to understand complex processes like how drugs couple to its target proteins in the body. For example, quantum theoretical calculations show how atoms in the target protein interact with the drug. Meanwhile, less computationally demanding classical physics is used to simulate the rest of the large protein. Karplus and NERSC Karplus began computing at NERSC in 1998, with an award from Department of Energy’s Grand Challenges competition. The Grand Challenges applications addressed computation-intensive fundamental problems in science and engineering, whose solution could be advanced by applying high performance computing and communications technologies and resources. At the time, Karplus and his colleague, Paul Bash who was at Northwestern University, were looking to understand chemical mechanisms in enzyme catalysis, which they couldn’t investigate experimentally. So they ran computer simulations at NERSC to gain a complete understanding of the relationship between biomolecular dynamics, structure and function. One of the enzymes they looked at was a class called beta-lactamases. Researchers knew that these enzymes were responsible for the increasing resistance of bacteria to antibiotics, but the precise chemical resistance mechanisms were unknown. So Karplus and Bash ran simulations on NERSC supercomputers to investigate this mechanism at an atomic-level of detail. In his 15 years as a NERSC investigator, Karplus and his research group have explored everything from how the molecule ATP synthase acts a motor that fuels cells, to how myosin, the molecular engine behind muscles, operates. Today, Karplus’ group is tackling the science behind molecular machines, which may someday power man-made systems, for example by converting sunlight into biofuels; working as tiny “molecular motors” capable of performing chemical analyses or other tests for “lab-on-chip” devices; or even “manufacturing” nanodevices. Here’s a sampling of his work at NERSC over the last two decades:
<urn:uuid:917c0055-bf27-4e98-97be-9fb1915a63b2>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/10/10/nersc-user-martin-karplus-wins-nobel-prize-in-chemistry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00422-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948685
858
3.453125
3
A Dark Web experiment shows just how dangerous stolen email credentials can be. With a stolen Gmail username and password combo, hackers showed that they could access bank accounts and more. In Bitglass’ second annual Where’s Your Data experiment, researchers created a digital identity for an employee of a fictitious bank, a functional web portal for the bank and a Google Drive account. The team then leaked “phished” Google Apps credentials to the Dark Web and tracked activity across the fictitious employee’s online accounts. Hackers on the Dark Web found they could gain access to the employee’s Google Drive account, and with a little more digging, access the employee’s bank accounts with login credentials that were stolen. During the month-long experiment, more than 1,400 visits were recorded to the Dark Web credentials and the fictitious bank’s web portal; there were five attempted bank logins and three attempted Google Drive logins within the first 24 hours; and the first file was downloaded within 48 hours of leaking the credentials. Overall, almost all (94%) of hackers who accessed the Google Drive uncovered the victim’s other online accounts and attempted to log into the bank web portal. About 12% of hackers who successfully accessed the Google Drive attempted to download files with sensitive content. And several cracked encrypted files after download. And, showing the popularity of the onion network, 68% of all logins came from Tor-anonymized IP addresses, suggesting that hackers are becoming more security conscious, and are realizing that they need to mask IPs when possible to avoid getting caught. What a difference a year makes. Last year, the Bitglass team leaked watermarked documents onto the Dark Web. The files were viewed 200 times in the first few days, but the frequency of downloads quickly decreased. In the prior experiment, few downloads used any form of anonymization via Tor, which made them easy to track. "Our second data-tracking experiment reveals the dangers of reusing passwords and shows just how quickly phished credentials can spread, exposing sensitive corporate and personal data," said Nat Kausik, CEO, Bitglass. "Organizations need a comprehensive solution that provides a more secure means of authenticating users and enables IT to quickly identify breaches and control access to sensitive data.” In case you were wondering where the denizens of the Dark Web reside, Bitglass uncovered that the hackers came from more than 30 countries across six continents. In terms of the percentages of the countries with non-Tor visits to the bank web portal, Russia accounted for 34.85%, followed by the US at 15.67%, China at 3.5%, and Japan at 2%. Photo © TheaDesigns
<urn:uuid:0597d732-313a-49a1-b359-9d3032e786a1>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/dark-web-hackers-use-stolen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00146-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934519
558
2.640625
3
Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, makes that argument in The Ethical Frontiers of Robotics, an article in the December 19 issue of Science. Sharkey is calling for international guidelines to establish how robots can operate safely and ethically. Citing widespread use of robots used for elder-care, like Secom's "My Spoon" automatic feeding robot and the Mitsubishi Wakamura medicine-reminder robot, Sharkey says service robots are becoming increasingly common. But he worries that no international regulations or policy guidelines exist beyond laws designed to punish negligence. In a paper published earlier this year, "2084: Big robot is watching you," Sharkey describes how robots are now being used around the world for policing. The most dangerous, he says, are the robot border guards in South Korea. As robots become more affordable and more capable, they will be used more frequently for law enforcement. Sharkey finds that troubling. "It is undeniable that robots are a safe way to reduce future crime," he writes. "However, the price for our protection may be too great. The progressive growth of robot policing poses some serious technological dystopian threats to our society. There is a trade-off between crime prevention and our privacy, our civil liberties and our basic human rights. All of these will be eroded by the development of new robot technologies for monitoring, checking, tagging, and following us." Sharkey believes there's no need to worry about "these robot being 'super-intelligent overlords taking over the planet and killing or enslaving all humans.'" Artificial intelligence has been a flop. "There is absolutely no evidence of machines becoming any more intelligent than they were 30 years ago," he writes. The danger is people equipped with this powerful technology. "As long as authorities are benign, caring, and don't make mistakes, such powerful policing could be of great benefit to mankind," he writes. "But, as we all know, absolute power corrupts. Those in control of the machines will control society."
<urn:uuid:adb54fe9-4c68-4bd9-b38a-21235d621425>
CC-MAIN-2017-04
http://www.networkcomputing.com/government/robot-ethics-urged/1398162721
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00174-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945565
420
2.734375
3