Who's Hijacking Internet Routes? Attacks Increase, But There's No Easy Fix in Sight

Information security experts warn that Internet routes are being hijacked to serve malware and spam, and there's little you can do about it, simply because many aspects of the Internet were never designed to be secure.

The Internet hijacking problem relates to Border Gateway Protocol, which is responsible for routing all Internet traffic. In the words of Dan Hubbard, CTO of OpenDNS Security Labs: "BGP distributes routing information and makes sure all routers on the Internet know how to get to a certain IP address." BGP provides critical Internet infrastructure functionality, because the Internet isn't a single network, but rather a collection of many different networks. Accordingly, BGP routing tables give the different networks a way to hand off data and route it to its intended destination. That assumes, of course, that no one tampers with BGP routing, in which case they could reroute traffic or disguise malicious activity.

"The trouble is it ... all relies on trust between networks, so if someone hijacks an ISP router, you wouldn't know," Alan Woodward, a visiting professor at the department of computing at England's University of Surrey, and cybersecurity adviser to Europol, tells Information Security Media Group. "It's just another example of how people are forgetting that the Internet was never built to be a secure infrastructure, and we need to be mindful of that when relying upon it."

Spam, Malware, Bitcoins

Hijacking router tables could allow an attacker to spoof IP addresses and potentially intercept data being sent to a targeted IP address. Thankfully, Woodward says, that is "not a trivial task," and Internet service providers have some related defenses in place.

But some attacks get through. One four-month campaign, spotted by Dell SecureWorks in 2014, involved redirecting traffic from major Internet service providers to fool bitcoin-mining pools into sharing their processing power - which is used to generate bitcoins - with the attacker. Dell SecureWorks estimates that the attacker netted about $84,000 in bitcoins, although it's not clear that such attacks are widespread.

What has been on the increase, however, are incidents in which malware and spam purveyors hijack an organization's autonomous system numbers, or ASNs, which indicate how traffic should move within and between multiple networks, says Doug Madory, director of Internet analysis at Dyn Research, which was formed after Dyn last year acquired global Internet monitoring firm Renesys. In a blog post, Madory describes six recent examples of bogus routing announcement campaigns, some of which remain under way, and all of which have been launched from Europe or Russia. By using bogus routing, attackers with IP addresses that have been labeled as malicious - for example by the Zeus abuse tracker, which catalogs botnet command-and-control servers - can hijack legitimate IP address space and trick targeted autonomous systems on the Internet into thinking the attack traffic is legitimate.

"These are not isolated incidents," Madory says of the recent attacks that he has documented. "First, these bogus routes are being circulated at a near-constant rate, and many separate entities are engaged in this practice, although with subtle differences in approach. Second, these techniques aren't solely for the relatively benign purpose of sending spam. Some of this host address space is known to circulate malware."
One takeaway, Madory says, is that any information security analysts who review alert logs should know that the IP addresses attached to alerts may often have been spoofed via BGP hijacking. "For example, an attack that appeared to come from a Comcast IP located in New Jersey may have really been from a hijacker located in Eastern Europe, briefly commandeering Comcast IP space," he says.

The security flaws associated with BGP that allow such attacks to occur haven't gone unnoticed. In January, the EU cybersecurity agency ENISA urged all Internet infrastructure providers to configure Border Gateway Protocol to ensure that only legitimate traffic flows over their own networks. But ENISA's advice glosses over the fact that while BGP can be fixed, it can't be fixed quickly. "There are efforts to cryptographically sign IP address announcements," Madory says. "However, these techniques aren't foolproof and until they achieve a critical mass of adoption, they won't make much difference."

No Quick Fix

"Why Is It Taking So Long to Secure Internet Routing?" is the title of a recent research paper from Boston University computer science professor Sharon Goldberg, who notes that any fix will require not just a critical mass, but coordinating thousands of different groups. "BGP is a global protocol, running across organizational and national borders," the paper notes. "As such, it lacks a single centralized authority that can mandate the deployment of a security solution; instead, every organization can autonomously decide which routing security solutions it will deploy in its own network."

That's one reason why BGP hasn't gotten a security makeover, despite weaknesses in the protocol having been well known to network-savvy engineers for the past two decades. Lately, however, BGP abuse has been rising. "It appears to be more systematized now," Dyn's Madory warns. Pending a full fix, he says that service providers might combat these attacks by banding together and temporarily blocking Internet traffic from organizations that repeatedly fail to secure their infrastructure and thus allow BGP attackers to subvert it.

In the meantime, keep an eye on security logs for signs of related attacks. "There's no easy defense, but it is kind of possible [to spot attacks] by monitoring and watching for unexpected changes in routing," Woodward says.
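As an illustration of the monitoring Woodward describes, one common approach is to compare the origin autonomous system seen in BGP announcements for your own prefixes against an expected baseline and flag any mismatch. The sketch below assumes a hypothetical announcement feed, and the prefixes and ASNs are made up; in practice the observations would come from a route collector service or your own routers.

```python
# Minimal sketch: flag BGP announcements whose origin AS differs from the
# expected origin for a monitored prefix. All data below is illustrative only.

EXPECTED_ORIGINS = {
    "198.51.100.0/24": 64500,   # hypothetical prefix -> expected origin ASN
    "203.0.113.0/24": 64501,
}

def check_announcements(observed_announcements):
    """observed_announcements: iterable of (prefix, origin_asn) tuples,
    e.g. parsed from a route collector dump or a live BGP feed."""
    alerts = []
    for prefix, origin_asn in observed_announcements:
        expected = EXPECTED_ORIGINS.get(prefix)
        if expected is not None and origin_asn != expected:
            alerts.append(
                f"Possible hijack: {prefix} announced by AS{origin_asn}, "
                f"expected AS{expected}"
            )
    return alerts

if __name__ == "__main__":
    # Example feed: the second entry simulates a hijacked announcement.
    feed = [("198.51.100.0/24", 64500), ("203.0.113.0/24", 64999)]
    for alert in check_announcements(feed):
        print(alert)
```

A real deployment would also watch for unexpected more-specific prefixes, since hijackers often attract traffic by announcing a longer prefix than the legitimate one.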
The Internet of Things will be an ever-increasing threat.
We have already seen massive botnets of home routers and security cameras used for devastating DDoS attacks such as the one on Dyn. The source code to do this has been published, and there is no way to systematically upgrade all IoT devices to protect against it. We can expect to see DDoS attacks increasing in magnitude to the level where entire countries or regions may be taken offline for substantial periods. In addition, the more creative cybercriminals will be finding other ways to use IoT devices, including click fraud, bitcoin mining, and spamming.

Email hacking will be the new normal in politics and cybercrime.
As we have seen in the 2016 US election, email hacking is a powerful weapon for disrupting or discrediting an opponent. It does not require sophisticated use of a zero-day vulnerability, just a successful phishing attack, and there is no apparent downside to anonymously publishing stolen data. Political operations need a Chief Information Security Officer just as much as business. Two-factor authentication should be the norm for all email systems, and the use of private email accounts for business or government purposes has to stop.

Zero day prices will continue to escalate far beyond bug bounties, leaving us all less secure.
Zero day vulnerabilities are selling for increasing amounts, with up to $1,500,000 being offered for the ability to hack iPhones. While some software companies offer five and even six figure bug bounties, even the richest software companies are not offering anything to compare with this. For example, Apple's bug bounty program has a maximum reward of $200,000. The main market for these zero days seems to be nation state actors who wish to spy on their own citizens or conduct espionage on other countries. However, even if the purchaser of a zero day is a friendly government, that does not mean that other less benign hackers will not discover the bug and use it maliciously. So long as the US and other governments continue to tolerate and participate in the trade in zero days we will all continue to be more vulnerable to criminals and foreign spies.

Ransomware is here to stay. Make sure you have a current backup.
While some forms of cybercrime such as credit card fraud require a fair amount of effort to monetize, the use of Tor and Bitcoin makes ransomware an easy way for a criminal hacker to make a living. All you need is a way of infecting machines using spam or malvertising, and some encryption code you can download from Github. There are a number of different strains of ransomware, all evolving as their operators try new approaches when the old ones are blocked. A recent ransomware attack on the San Francisco Muni public transport system took down ticket machines, so that the streetcars had to operate for a few hours without charging fares. However, Muni had a backup of their servers and was able to recover without paying ransom. A current backup is the best defense against ransomware.

As EMV credit cards become the standard in the US, credit card fraud will move towards online purchases.
The more secure EMV credit cards with embedded chips are gradually replacing the older and less secure magnetic stripe cards in the US. This will make physical credit cards harder to forge, so monetization of stolen credit card information will increasingly move to online and phone purchases where the card does not have to be physically present.
Smaller banks and credit unions are lagging behind the big players in the adoption of this standard. While the customer is not responsible for credit card fraud, debit card fraud can create difficulties because the customer's checking account may be emptied and other bill payments fail. For the best security, use an EMV card from an issuer that allows you to set up email alerts for online, phone, or high-value regular purchases.
High-Performance Computers Are Crucial in Science, Engineering

"High-performance computers like the IBM Blue Gene/P are critical in virtually every discipline of science and engineering, and we are grateful for IBM's help in bringing this resource to Rice," Provost McLendon said in a statement. "For individual faculty, the supercomputer will open the door to new areas of research. The Blue Gene also opens doors for Rice as the university seeks to establish institutional relationships both in our home city and with critical international partners like USP."

Unlike the typical desktop or laptop computer, which has a single microprocessor, supercomputers typically contain thousands of processors. This makes them ideal for scientists who study complex problems, because jobs can be divided among all the processors and run in a matter of seconds rather than weeks or months. Supercomputers are used to simulate things that cannot be reproduced in a laboratory, like the Earth's climate or the collision of galaxies, and to examine vast databases like those used to map underground oil reservoirs or to develop personalized medical treatments.

"This significant investment by IBM is the result of a long-standing collaborative initiative with Rice where together we have developed a unique and substantial computational resource for the research community in Houston, across the country and around the world," said Tony Befi, IBM's senior state executive for Texas, in a statement. "This new computing capability will speed the search for new sources of energy, new ways of maximizing current energy sources, new cancer drugs and new routes to personalized medicine. So we're excited that Rice has now joined an exclusive club of the world's top research organizations who use powerful and energy-efficient Blue Gene supercomputers to solve some of the world's most pressing problems."

In 2009, President Obama recognized IBM and its Blue Gene family of supercomputers with the National Medal of Technology and Innovation, the most prestigious award in the United States given to leading innovators for technological achievement, IBM said. Including the Blue Gene/P, Rice has partnered with IBM to launch three supercomputers during the past two years that have more than quadrupled Rice's high-performance computing (HPC) capabilities. The addition of the Blue Gene/P doubles the number of supercomputing CPU hours that Rice can offer. The six-rack system contains nearly 25,000 processor cores that are capable of conducting about 84 trillion mathematical computations each second. When fully operational, the system is expected to rank among the world's 300 fastest supercomputers as measured by the TOP500 supercomputer rankings.

Meanwhile, on March 27, Rutgers teamed with IBM to launch an HPC center at the university focused on the application of big data analytics in life sciences, finance and other industries. The center is aimed at improving the economic competitiveness of New Jersey's public and private research organizations. The HPC center will be part of the newly created Rutgers Discovery Informatics Institute (RDI2) and will use supercomputing equipment and software provided by IBM in the project's first phase. Rutgers anticipates future expansion of the center will lead to the university having one of the world's most powerful academic supercomputers. The institute, powered by an IBM Blue Gene/P supercomputer, has several goals.
They include creating an HPC resource, with expert support, for industry in New Jersey and the surrounding region; educating the New Jersey workforce and Rutgers students in working with advanced analytics and a state-of-the-art HPC center; and providing HPC resources to Rutgers faculty members and regional organizations that are expanding their use of extremely large data sets.

"There is immense potential here because Rutgers and IBM have some of the best minds in high-performance computing," said Michael J. Pazzani, vice president for research and economic development and professor of computer science at Rutgers, in a statement. "The ability to conduct data analysis on a large scale, leveraging the power of big data, has become increasingly essential to research and development. Just as important is the valuable new resource that we are creating for industry," Pazzani said. The institute will collaborate with businesses that need high-performance computing capabilities but can't justify the cost of building their own system. The collaboration involving Rutgers and IBM scientists and engineers is expected to extend beyond computer science and engineering, to encompass fields such as cancer and genetic research, medical imaging and informatics, advanced manufacturing, environmental and climate research and materials science.

"The application of analytics to big data has quickly emerged as the new foundry of the 21st century economy," said Phil Guido, IBM's general manager for North America, in a statement. "IBM is eager to work with Rutgers to help improve New Jersey's economic competitiveness through this center. IBM firmly believes that public-private collaboration and research can be critical in ensuring our workforce is equipped and empowered with next-generation skills like analytics."

The IBM Blue Gene supercomputer, housed in the Hill Center for Mathematics on Rutgers' Busch Campus in Piscataway, N.J., will be the only supercomputer available to commercial users in the state. Only eight of the nation's 62 scientific computation centers have industrial partnership programs. The two Blue Gene/P racks at Rutgers will be far more powerful than any computer at the university today. Excalibur is the name Rutgers has chosen for it, playing off the university's sports mascot, the Scarlet Knight. Rutgers has agreed to purchase hardware and software from IBM, as well as entering into a three-year maintenance agreement for the equipment. As future funding becomes available, Rutgers expects to add the latest-generation Blue Gene/Q system by the end of the year. Rutgers also envisions building an expanded facility on the Busch campus in 2013 as the system and center grows.

USP officials said they expect their faculty to use the supercomputer for research, ranging from astronomy and weather prediction to particle physics and biotechnology.
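For readers who want to sanity-check the "84 trillion mathematical computations each second" figure, it is consistent with simple peak-performance arithmetic. The numbers below (4,096 cores per rack, and Blue Gene/P's 850 MHz PowerPC 450 cores performing 4 floating-point operations per cycle) are assumptions drawn from published Blue Gene/P specifications, not from the article itself.

```python
# Back-of-the-envelope peak performance for the six-rack Blue Gene/P at Rice.
# Core count per rack, clock rate, and flops-per-cycle are assumed from
# Blue Gene/P specifications rather than taken from the article.

cores = 6 * 4096            # "nearly 25,000 processor cores" (24,576)
clock_hz = 850e6            # PowerPC 450 core clock, 850 MHz
flops_per_cycle = 4         # dual floating-point unit with fused multiply-add

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.1f} teraflops")  # ~83.6 teraflops
```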
-by Van Wiles, Lead Integration Engineer

ITIL version 3 introduced the term Configuration Record to describe "A record containing the details of a Configuration Item...Configuration Records are stored in a Configuration Management Database." A Configuration Item (CI) is defined as "Any component that needs to be managed in order to deliver an IT Service. Information about each CI is recorded in a Configuration Record..."

So, most of the time (both in documentation and in communications) when we are talking about CIs in the CMDB, we are really talking about configuration records. The CI is the actual thing that is managed, not the record in CMDB. Other valid terms for configuration records include "CI records" and "instances (of a CMDB class)". If you are writing about Configuration Items or Configuration Records, it is important to make this distinction.

Unfortunately, Configuration Record has no abbreviation (CR is an overloaded term already), so I usually just add "record" after "CI" to indicate I am talking about a record in CMDB, not the actual CI. For example, if I say "delete a computer system CI", technically I'm talking about picking up the box and physically removing it. To describe deleting the record in CMDB, I should say "delete the computer system CI record" instead, or "delete the instance of the ComputerSystem class."

This is my first post on the new BMC Developer Network blog so I hope you see it. The UI is sure easier to use, so maybe I'll post more often! The postings in this blog are my own and don't necessarily represent BMC's opinion or position.
Data center of the future: Google Glass, internal power plants
Monday, Jan 20th 2014

Data centers are some of the most technically advanced, innovative facilities on the planet. These establishments utilize the latest and greatest technology to boost capacity and improve processes. This trend shows no signs of slowing, as experts predict vast changes in the industry in the next few years, shaping the data center of the future.

Operators don Google Glass

Data Center Knowledge contributor Jeffrey Dutschke predicted that Google's newest invention, Google Glass, will become an integral part of how data center operators carry out their daily duties. For those unfamiliar with the technology, Dutschke describes Google Glass as "a portable computer worn like a pair of glasses." It allows wearers to access and view applications literally right before their eyes. The device includes a camera, microphone and speaker, and Wi-Fi capabilities with Bluetooth communication.

According to Dutschke, researchers are developing data center maintenance and repair applications for Google Glass, which could revolutionize the inner workings of computing facilities. Operators will also be able to send and receive SMS and email messages, enabling quick communication throughout the data center. "For apprentices and new hires, this will be an amazing way of interacting with their supervisor and getting advice while on the job," Dutschke noted. Additionally, the device will provide means for quickly and easily creating work orders through its voice command feature. Operators will be able to scan parts using the item's QR code and the Google Glass camera. Workers will also have the ability to rapidly gather data as they move about the facility during their daily routines.

One of the most innovative uses of Google Glass in data centers is the ability to access real-time heat mapping to better gauge the data center temperature. Dutschke predicted that later Google Glass models will be equipped with an infrared camera, which would allow the wearer to easily see areas of high temperatures. While this feature is not available yet, there is technology on today's market that can oversee server room temperatures for optimal server uptime. Temperature monitoring systems prevent IT equipment from reaching unsafe levels that can lead to costly downtime.

Data center/power plant

Scientific American contributor David Wogan recently reported that Microsoft researchers are working on creating arrangements which would "bring the power plant into the data center itself to improve efficiency and reliability using fuel cell technology." In other words, instead of depending on the utilities available in the area, data center facilities would have the means to generate their own supply of electricity. Sean James, Microsoft Global Foundation Services senior research program manager, said the company is currently working on an arrangement that would power data center facilities through fuel cells integrated into the server racks. "This brings the power plant inside the data center, effectively eliminating energy loss that otherwise occurs in the energy supply chain and doubling the efficiency of traditional data centers," James wrote. James said that this power-saving technology could improve efficiency by as much as 40 percent. However, with more equipment in the server room, there is an added need for temperature monitoring to prevent overheating.
A temperature monitoring system is essential to ensure that excess heat is eliminated and the data center temperature remains in the target zone.
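As a concrete illustration of the threshold alerting such monitoring systems perform, here is a minimal sketch; the sensor-reading function and the 27 C ceiling are illustrative assumptions rather than details from the article.

```python
# Minimal sketch of server-room temperature threshold alerting.
# read_sensor() and the 27 C ceiling are illustrative placeholders; a real
# system would poll actual probes and page an operator instead of printing.

import time

ALERT_THRESHOLD_C = 27.0  # assumed ceiling for safe equipment intake temperature

def read_sensor():
    """Placeholder: return the current intake temperature in degrees Celsius."""
    return 24.5

def monitor(poll_seconds=60, cycles=5):
    for _ in range(cycles):
        temperature = read_sensor()
        if temperature > ALERT_THRESHOLD_C:
            print(f"ALERT: intake temperature {temperature:.1f} C exceeds "
                  f"{ALERT_THRESHOLD_C:.1f} C threshold")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor(poll_seconds=1, cycles=3)
```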
This section deals with user privacy. Systems that deal with private user information such as social security numbers, addresses, telephone numbers, medical records or account details typically need to take additional steps to ensure the users' privacy is maintained. In some countries and under certain circumstances there may be legal or regulatory requirements to protect users' privacy.

All systems should clearly and prominently warn users of the dangers of sharing common PCs such as those found in Internet cafes or libraries. The warning should include appropriate education about:

- the possibility of pages being retained in the browser cache
- a recommendation to log out and close the browser to kill session cookies
- the fact that temp files may still remain
- the fact that proxy servers and other LAN users may be able to intercept traffic

Sites should not be designed with the assumption that any part of a client is secure, and should not make assumptions about the integrity of the client.
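To make the cache and session-cookie guidance concrete, here is one possible sketch of how a web application might mark sensitive responses as non-cacheable and keep its session cookie scoped to the browser session. Flask is used purely as an example framework, and the application itself is hypothetical; these headers address only the shared-PC risks listed above, not interception by proxies or other LAN users.

```python
# Minimal sketch: discourage caching of sensitive pages and use a session
# cookie that expires when the browser closes. Flask is an illustrative
# framework choice; the route and secret below are placeholders.

from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-random-secret"

# Session cookie flags: Flask's session cookie has no Expires/Max-Age by
# default, so it dies with the browser session; Secure and HttpOnly limit
# exposure over plain HTTP and to client-side scripts.
app.config.update(
    SESSION_COOKIE_SECURE=True,
    SESSION_COOKIE_HTTPONLY=True,
)

@app.after_request
def no_store(response):
    # Ask browsers and intermediaries not to retain sensitive pages.
    response.headers["Cache-Control"] = "no-store"
    response.headers["Pragma"] = "no-cache"
    return response

@app.route("/account")
def account():
    session["last_viewed"] = "account"
    return "Sensitive account details would be rendered here."
```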
Confusion has come along with the associated taxonomy of VoIP technology and IP telephony. Both terms refer to using the same IP network to send voice. The main difference is that VoIP connects old-fashioned analog phones to gateway devices that convert analog voice into digital bits and send them across the Internet, bypassing the expensive PSTN telephone networks. In the case of IP telephony, the phones themselves are digital devices; they encode the user's voice directly into a digital signal and send it across the IP network using communication manager devices that enable the technology to work. IP telephony resides on the IP network and natively uses the IP network for communication.

Vishing and Toll Fraud

Vishing is quite similar to the term phishing, and it means collecting private information over the telephone system. The term phishing is a relatively recent addition to the technical vocabulary. The main concept behind phishing is that e-mail is sent to a user by an attacker. The e-mail looks as if it comes from a legitimate business. The user is asked to confirm his or her information by entering it on a web page: a social security number, bank or credit card account number, birth date, or mother's name. The attacker can then use this information for unethical purposes.

The Attack of SIP Protocol

We previously discussed the SIP protocol in this blog. We have also said that Session Initiation Protocol (SIP) is becoming popular quite fast and has achieved quick acceptance in mixed-vendor VoIP networks. One of the most striking properties of SIP is its use of existing protocols, and by default SIP messages are sometimes sent as plain text. This is unfortunate, because the very properties that make SIP attractive can also be leveraged by attackers to compromise the security of a SIP network.

Unless your e-mail account is well guarded by a spam filter, you most likely receive unwanted e-mails. Spam is annoying and irritating for anyone who uses e-mail. VoIP supervisors and administrators should be familiar with VoIP spam, generally known as SPIT (spam over IP telephony). Because IP phones are now easily obtainable and abundant in many corporate environments, they have become attractive targets for attackers. VoIP administrators should also watch for the VoIP variants of phishing and spam, both of which are very popular in the e-mail world, as well as toll fraud, which is rather frequent in PBX environments. This article surveys the popular attack targets in a VoIP network and how these attacks are deployed.
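To illustrate the earlier point that SIP messages travel as plain text, here is a minimal sketch that assembles and prints a skeleton SIP INVITE; the addresses, tags, and Call-ID are made-up placeholders rather than values from any real deployment.

```python
# Minimal sketch of a SIP INVITE request as it appears on the wire when sent
# unencrypted. All addresses, tags, and identifiers are made-up placeholders,
# and the SDP body is omitted (hence Content-Length: 0).

invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "To: Bob <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@192.0.2.10",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@192.0.2.10:5060>",
    "Content-Length: 0",
    "",
    "",
])

# Everything above is human-readable text: an eavesdropper on the path can see
# who is calling whom, which is one reason SIP traffic is often wrapped in TLS.
print(invite)
```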
Another computing expert has weighed in on Moore's Law's predicted demise. Speaking at a press roundtable on Wednesday in San Francisco, Henry Samueli, co-founder, chairman and CTO of Broadcom, said that the famous observation that ushered in five decades of ever smaller, faster and cheaper chips will not hold out much longer. This is yet another in a long line of similar claims from people with first-hand knowledge of the principles involved. IT World was first to report on the story.

Initially proposed by Intel co-founder Gordon Moore in the 1960s, Moore's Law anticipated that the number of transistors on an integrated chip would double every 24 months. The observation became a kind of short-hand for seemingly endless generations of cheaper, more powerful processors.

There are two main issues with regard to the continued shrinking of silicon-based CMOS transistors: one is based on the limits of physics and the other is economic in nature. Most experts agree that the economic limits of scaling transistors will be reached before the ultimate physical limitation. Essentially, Moore's Law has been on extended life support, with manufacturers going to ever greater lengths with each generation of chip architecture. Denser chips used to be cheaper to make because of manufacturing economies of scale, but at a certain point the expense of developing and manufacturing smaller, more powerful chips cancels out the expected cost savings. Many, like Samueli, believe that that tipping point is at hand. "The cost curves are kind of getting flat," Samueli told reporters at the Broadcom event. Where before, chipmakers could count on faster processors, less power consumption and lower cost; now they must choose two out of three.

As process nodes approach the atomic scale, chip designers must contend with the strange behavior of the quantum world. There is still room for further miniaturization, but the challenge grows more difficult with each shrink. Samueli believes that the industry will reach a fundamental limit in another three generations or so, at the 5nm point, which is about 15 years away. At 5nm, the transistor gate is only 10 atoms wide. "You can't build a transistor with one atom," Samueli said. "As of yet, we have not seen a viable replacement for the CMOS transistor as we've known it for the last 50 years."
Pan J., Northwest University, China | Ma J., Northwest University, China | Gao T., Station Energy | Qiu L., Northwest University, China | And 2 more authors. Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering | Year: 2016

There were approximately 20 billion chickens in the world in 2010, with 23% of the chicken population found in China. In 2009, the rapid development of chicken farms in China resulted in the production of 126 million tons of poultry manure. If managed inappropriately, poultry manure can cause serious environmental problems by polluting water, air, and soil. Anaerobic digestion is a complex bioconversion process that can treat organic waste and generate biogas with a high methane content that can be used as renewable energy, hence reducing the consumption of fossil fuel and curtailing greenhouse gas emissions. Ca²⁺-bentonite is an environmentally friendly material and is widely used in compost, heavy metal removal, pollutant adsorption, etc. In order to investigate the effect of bentonite addition on anaerobic digestion of poultry manure, an orthogonal experiment L8(2³) was conducted to determine the effects of the amount of poultry manure (organic loading rate, OLR), the amount of bentonite and the inoculum concentration on the characteristics of methane production and organic acid production during anaerobic batch digestion of poultry manure under mesophilic conditions (35±1) ℃. The results showed that bentonite addition could significantly increase methane production per VS by 22.72% and 27.72% with 3.0% and 1.5% bentonite addition (based on poultry manure total solids), respectively, compared to the control group under the low OLR condition. Methane production from poultry manure with 3.0% or 1.5% bentonite addition had no significant (P>0.05) difference under the low OLR condition. Specifically, methane production could be very significantly (P<0.05) increased by adding bentonite under the high OLR condition. Methane production was increased by 78.68% and 55.41% with 3.0% and 1.5% bentonite addition, respectively, compared to the control group under the high OLR condition. Methane production from poultry manure with 3.0% or 1.5% bentonite addition had a significant (P<0.05) difference under the high OLR condition. In the treatment with 19.91 g VS (volatile solid) poultry manure, 3.0% bentonite addition and 20% inoculum concentration, the highest methane production was observed, at 301.92 mL/g, very significantly higher (P<0.05) than the control group (87.8% more), and its variable cost was also the lowest among all treatments, at 2.43 Yuan per m³ of methane. Variable costs of methane production from anaerobic digestion of poultry manure with bentonite were 0.40 to 1.68 Yuan per m³ lower than from anaerobic digestion of poultry manure only. Peak values of dissolved organic carbon (DOC) appeared five days earlier and were lower than in the control group under the low OLR condition. The variance of DOC, pH value and dissolved inorganic carbon (DIC) in the control group was higher than in the treatment groups with bentonite, showing that the stability of poultry manure anaerobic digestion could be improved by bentonite addition through more consistent DOC degradation. Interestingly, formic acid and propionic acid were not found during the whole anaerobic digestion process of poultry manure with bentonite. The variance of acetate, lactate and n-butyrate in the treatment groups with bentonite was lower than in the control group, showing that bentonite addition could enhance the stability of the anaerobic digestion process of poultry manure. Organic loading rate was the key factor in anaerobic digestion of poultry manure with bentonite under the low OLR condition. The amount of bentonite was the key factor in poultry manure anaerobic digestion with bentonite under the high OLR condition. Inoculum concentration and OLR had a significant interaction effect on acetate concentration. Organic loading rate had a significant effect on lactate concentration. No interactions of these three factors were found on lactate concentration. © 2016, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.

Hu L., Zhejiang University | Hu L., Zhejiang Provincial Key Laboratory of Subtropic Soil and Plant Nutrition | Hu L., Cornell University | McBride M.B., Cornell University | And 14 more authors. Environmental Research | Year: 2011

Our aim was to investigate rhizosphere effects on the chemical behavior of Cd. This was done in a glasshouse experiment, where two rice cultivars (Zhenong54 and Sixizhan) were grown in soil spiked with cadmium (Cd) at two levels, 3.9±0.5 and 8.3±0.5 mg kg⁻¹ soil, placed in a rhizobox until the ripening stage. Chemical forms of cadmium near the root surface were then assessed using a sequential extraction procedure (SEP). There were significant differences in Cd species, especially exchangeable Cd (EXC-Cd), between the two rice cultivars as affected by rice roots. The lowest EXC-Cd with Zhenong54 appeared in the near-rhizosphere area, with little difference between the tillering stage and the ripening stage, while Sixizhan had its lowest EXC-Cd concentration in the root compartment. Both cultivars had slight changes in the Fe/Mn oxide-bound fraction of Cd (FMO-Cd) at the grain ripening stage, while the control treatments without plants had a significant increase in FMO-Cd at the same time, indicating a transformation from a less bioavailable form (FMO-Cd) to more bioavailable forms (EXC-Cd). Soil microbial biomass in the vicinity of the root surface showed, to some extent, trends opposite to EXC-Cd, partly because of the root-induced changes to bioavailable Cd. Unlike Zhenong54, Sixizhan had a higher Cd concentration in the root, but only a small proportion of Cd translocated from the root to the grain. © 2011 Elsevier Inc.
Most companies relied on varied data such as text data, machine data, and social media data, among others, for analytical purposes in order to remain competitive. Geospatial analytics is a revolutionary technology for gathering and displaying imagery, GPS, satellite photography, and historical data. It integrates the simultaneous application of statistics, computer programming, and operations research to measure the gathered data.

The risks involved in business processes are growing with the expanding Strategic Business Units (SBU) in vast geographies. This technology emerged as a vital source of information to plan and execute projects in order to minimize workload, make successful business predictions, evaluate business operations with minimum risk, and develop sustainable strategies. Many organizations are using geospatial analytics tools, which have a unique ability to go beyond standard data analysis by integrating, viewing, and analysing data using geography. These tools help organizations make informed business decisions by collecting and analysing parameters such as customer purchasing behaviour, geographic trends, and other geography-related information.

In the North American region, many countries have ample geographic data for analysis, and governments are also making GIS datasets publicly available. Government as well as many private organizations have also started relying more on geospatial data to support business decisions and infrastructural developments such as power, land, urban development, and natural resources. Several other industries, such as road maintenance and construction, public utility services, health, and education, are also using geospatial technology and tools for planning, management, and decision-making by gathering and analysing spatial information.

Geospatial technology involves Global Positioning Systems, Geographic Information Systems, Remote Sensing, and others such as Global Navigation Satellite Systems (GNSS), Light Detection and Ranging (LIDAR), Location-Based Services (LBS) and many more. The main system comprising geospatial analytics solutions is the Geographic Information System (GIS), which is used to predict, manage, and learn about various phenomena affecting the earth's systems and its inhabitants. The rising impact of geographic business units and a dispersed customer base has led to an increased need for geospatial analytics solutions in various business domains.

In 2014, the North America Geospatial Analytics market was dominated by Esri, Trimble Navigation, Ltd., and WS Atkins Plc. WS Atkins Plc. is one of the dominating players in the North America Geospatial Analytics market; its wide range of services enables the company to enhance its top-line performance.
Scope of the Report

This research report categorizes the North America Geospatial Analytics market into the following segments and sub-segments:

North America Geospatial Analytics Market Size and Forecast by Vertical
- Natural Resource
- Utility and Communication
- Defense and Intelligence

North America Geospatial Analytics Market Size and Forecast by Application
- Medicine and Public Safety
- Disaster Risk Reduction and Management
- Climate Change Adaptation

North America Geospatial Analytics Market Size and Forecast by Type
- Surface Analysis
- Network Analysis

North America Geospatial Analytics Market Size and Forecast by Technology
- Global Positioning Systems (GPS)
- Remote Sensing (RS)
- Geographical Information Systems (GIS)

North America Geospatial Analytics Market Size and Forecast by Country
- Rest of North America

Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirement
During four decades in operation, NASA's Landsat satellite system has tracked the destruction of tropical rainforests, the loss of conservation areas to forest fires, and the shrinking of lakes and inland seas due to irrigation. The longest-running Earth-imaging satellite also has provided free data for researchers studying everything from climate change to urbanization and population growth and provided many of the base images for Google Earth and other mapping tools.

NASA and the U.S. Geological Survey, which jointly manage the Landsat program, presented some of the satellite system's most stunning images during an event Monday to commemorate the 40th anniversary of its launch. One time-lapse image of the Columbia Glacier, taken between 1986 and 2011, for example, shows its swift retreat as a result of changing global weather patterns. Landsat told political and social stories as well: a 1972 to 2010 time lapse shows the massive growth of Beijing's municipal footprint, marking China's rise as an economic superpower.

Landsat satellites orbit Earth every 90 minutes and over a period of 16 days capture a 360-degree view of the planet, said James Irons, Landsat's data continuity mission project scientist at NASA's Goddard Space Flight Center. Seven Landsat satellites have been launched throughout the program's history. The eighth is scheduled to launch in early 2013, according to Waleed Abdalati, NASA chief scientist.
Definition: A solution to a problem that is better than all other solutions that are slightly different, but worse than the global optimum.

See also prisoner's dilemma, optimization problem.

Note: Some search methods may get trapped in a local optimum and miss the global optimum.

Cite this as: Paul E. Black, "local optimum", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/localoptimum.html
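To illustrate the note about search methods getting trapped, here is a minimal hill-climbing sketch on a small one-dimensional objective (the function, step size, and starting points are arbitrary choices for illustration): started near the left peak, the search halts at the local optimum even though a better global optimum exists elsewhere.

```python
# Minimal hill-climbing sketch: greedy ascent halts at a local optimum.
# The objective function, step size, and starting points are arbitrary
# illustrations, not part of the dictionary entry.

def f(x):
    # Two peaks: a local optimum at x = -1 (f = 1) and the global optimum
    # at x = 2 (f = 3).
    if x < 0.5:
        return -(x + 1) ** 2 + 1
    return -(x - 2) ** 2 + 3

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        # Move to the best neighboring solution, but only if it improves f.
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break  # no neighbor is better: stuck at an optimum (maybe local)
        x = best
    return x

print(hill_climb(-2.0))  # stops near -1: a local optimum, not the best solution
print(hill_climb(1.0))   # stops near 2: the global optimum
```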
Higgs Boson Discovery: Why It's Important to All of Us

NEWS ANALYSIS: Even though there are no apparent practical applications for the discovery of the elusive Higgs Boson, a subatomic particle that gives mass to everything else in the universe, there is no telling what role it may play in the future. It will certainly expand our understanding of how the universe took shape.

It's impossible to overstate the importance of the Higgs Boson, the discovery of which was announced at CERN (Conseil Européen pour la Recherche Nucléaire). Without this particle, you wouldn't exist. For that matter, the universe wouldn't exist. That's because the Higgs Boson creates a field that gives other particles mass, giving weight and shape to all the matter we see in the universe.

Or, to describe the indescribable, what the Higgs Boson does is create a field of virtual particles that pop in and out of existence, and while they're in existence those virtual particles provide mass to other particles that are able to interact with them. Photons, which are being created by the bazillion (to use the precise scientific term) by the monitor in front of you, are the particle manifestation of the electro-magnetic field we call light. Photons are very real and depending on how they're observed can appear as either particles or waves. Photons have little mass, so little in fact that it takes something as massive as a star (or a former star in the form of a black hole) to affect them. In the rest of the universe, the photon can be considered massless.

Another particle with which you're familiar is the electron, which does have mass, although not a lot. I mention the electron, because it illustrates why you may want to pay attention to the Higgs Boson. The electron was discovered in 1897, but while it was understood to play a role in electricity and in electrical transmission, the use of the electron itself didn't really happen until the development of electronics. And some of the electron's more interesting capabilities weren't understood until much more recently.

For example, a major cause of power loss in electronic devices is due to a characteristic called electron tunneling. Because an observer can't know precisely where an electron is physically located, its position is a statistical probability. But the electron doesn't care where it probably is. The electron can be anywhere in a range of 1 to 3 nanometers. This means it can pass through a barrier or it can appear on an adjacent wire in an integrated circuit. But engineers can make use of this characteristic to create devices such as tunnel diodes. These are devices that have a number of useful characteristics, but the bottom line is that they are widely used in frequency converters and detectors.
SQL Server Standard Uses All Available Memory

When using SQL Server Standard, SQL Server uses all or most of the available memory on the server. In order to run databases as fast as possible, SQL Server caches as much of the database and queries into memory as possible. SQL Server does this by design in order to increase performance. While this should not cause any problems, if you would like to limit the amount of RAM that SQL Server can use on your dedicated server, please follow the steps below:

1) Log into your server through Remote Desktop.
2) Open SQL Server Management Studio and log in.
3) In Management Studio, right-click on the server and select Properties.
4) Click on the Memory section on the left-hand side.
5) Specify the maximum amount of memory that you want SQL Server to use (1GB = 1024MB).
6) In Management Studio, right-click on the server and select Restart.

Article ID: 501
Created: April 10, 2012 at 7:07 AM
Modified: August 26, 2014 at 9:17 AM
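For administrators who prefer to script the change rather than click through the Management Studio dialog, the same limit can be set with the sp_configure stored procedure. The sketch below shows one possible way to do that from Python via pyodbc; the driver name, server, credentials, and the 4096 MB cap are placeholder assumptions.

```python
# Minimal sketch: cap SQL Server's "max server memory" via sp_configure.
# Driver name, server, credentials, and the 4096 MB value are placeholders.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;UID=sa;PWD=your-password",
    autocommit=True,   # RECONFIGURE should not run inside an open transaction
)
cursor = conn.cursor()

# 'max server memory (MB)' is an advanced option, so expose it first.
cursor.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cursor.execute("EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;")

print("SQL Server max memory capped at 4096 MB (1 GB = 1024 MB).")
conn.close()
```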
KANSAS CITY, KS – The KCK Public Library announced that an agreement was signed to examine the potential for the library system to use new unlicensed radio spectrum known as "Super Wi-Fi" to supply wireless broadband connectivity to a branch library and other public spaces in Kansas City, Kansas.

"The project is intended to both provide broadband to a remote environmental learning library as well as to help extend wireless library Internet access beyond the library walls," said KCK Public Library Director, Carol Levers. "Our primary objective is to upgrade connectivity at our Mr. & Mrs. F.L. Schlagle Environmental Library but we believe the halo effect will result in giving people more places to find convenient access," she said.

Schlagle is now only connected via a pair of aging T1 lines compared to the high performance connections at other schools and library facilities in KCK. To close or at least substantially lessen this bandwidth gap, a pilot project utilizing newly FCC-approved unlicensed radio spectrum, so-called Super Wi-Fi, also known as Television White Space (TVWS), will be tested. TVWS can carry data signals for miles while being capable of passing through walls and other obstructions that normally limit wireless connectivity.

Cross Institutional Collaborations

For backhaul, the project will use the fastest available wireline connection. In addition to addressing the connectivity gap with the Schlagle site, there is also a desire to explore using this unlicensed spectrum capability to increase connectivity options in other local public spaces. "If successful, we hope that by situating TVWS base stations at all five KCK library branches we will be able to wirelessly feed 'satellite' library hotspots with traditional no-fee library Internet access at helpful locations around town," said Director Levers.

The project was hatched as part of a new local consortium of KC metro area school, public and academic librarians, called the "KC K20-Librarians", also initiated by KCKPL to develop cross institutional collaborations. The TVWS project is similar to what is being advocated by New America Foundation's AIR.U and the Gig.U project initiative, a partnership among higher education associations to promote connectivity in rural college communities.

"KCKPL has an opportunity here to create a model that leverages big bandwidth and newly available TVWS technologies to extend satellite library hotspots for after-hours connectivity and other services to other locations in their communities," said Michael Calabrese, director of the Wireless Future Project at New America Foundation and co-founder of AIR.U.

Advising on the project is Don Means of Digital Village in Sausalito, CA, initiator of the national "Fiber to the Library" (FTTL) project and director of the Gigabit Libraries Network, a global collaboration of hi-tech innovation libraries. "Interestingly," said Means, "the typical available range of TVWS roughly approximates the average distance between public libraries, from a few urban blocks to a few rural miles. Libraries, as natural community technology hubs, should be able to embrace this as a deployment strategy and provide an answer to the question of 'What to do with a gig? Share it!'"

To reach that point, libraries need far faster Internet connections if the country is to achieve the goal of gigabit connectivity to libraries and other anchor institutions as articulated in the National Broadband Plan.
National Broadband Plan principal architect Blair Levin, former chief of staff to the FCC chairman and now CEO of Gig.U, a national consortium of 37 research universities and a partner in AIR.U, says, "Hopefully this transaction will become another potential solution for extending next generation broadband networks from areas served by Gig.U related deployments and other high performance fiber projects into surrounding areas."

The KCKPL TVWS Pilot is set for its first phase deployment in early summer with the intention to eventually provide the same base station capability at all KCKPL branches, feeding even more library hotspots across the city.
It may come as no surprise that Property and Casualty insurance varies from state to state, but were you aware that these differences aren't just circumstantial? In fact, most states go as far as to enact specific laws they believe will best protect their citizens or aid insurance providers that create employment and economic revenue. Such legislative actions have significant impacts on how insurance products are ultimately sold and managed.

Are the Differences Major?

In general, Property and Casualty insurance offers protection against a range of property risks, such as fire, flooding, earthquakes and boiler leaks. One thing you might have noticed when examining contracts, however, is the fact that some risk situations are outright excluded from coverage. For instance, if a consumer lives in a state like Massachusetts, their insurance may automatically come with a storm damage clause. Because the likelihood of storm damage is generally perceived to be rare, insurance company lobbyists may not have campaigned against the inclusion of such terms. In states like South Carolina, on the other hand, the routine occurrence of severe weather systems may mean that consumers have to purchase separate hurricane coverage for such events. Notably, Florida has enacted laws designed to change the way insurance works and support state-run providers in light of local proclivities for natural disasters. Some private insurance firms have even quit offering coverage in these areas as a result, and the corpus of legislation impacting how products may be sold is continually expanding.

Defining Key Terms

Also remember that although they're commonly grouped together, Property insurance and Casualty insurance are different. Property insurance is designed to protect businesses or individuals who have invested in the property itself, while Casualty insurance provides them with legal liability protection in case someone else incurs a property loss or an injury. Because state laws vary drastically when it comes to tort law and liability proceedings, it's quite possible that a state may require specific endorsements and minimum deductibles for policies to be valid. Quantified minimums are common, and they may also be accompanied by special stipulations pertaining to business consumers, such as New Jersey's Temporary Disability Benefits Law and various worker compensation laws enacted throughout the nation. Due to the unique history of insurance laws in any given state, it's usually critical to study specific codes and statutes in order to gain a better understanding of the variances.
SSS is a general term that is more applicable because memory-based storage can come in a variety of configurations. SSD is a very specific term that should be limited to memory-based storage that happens to be in a hard drive form factor. The advantage of SSD, of course, is that it, for the most part, can be installed in anything that formerly accepted a hard drive. If you want to upgrade the hard drive on your laptop or server, you can pull out the hard drive and replace it with an SSD.

SSD technology is also the path that many storage systems manufacturers have chosen to quickly deliver memory-based storage to their customers. They can use their existing storage shelves that used to hold hard drives and now place SSD into them. There are issues with integrating SSD into legacy architectures, as we detail in our article "SSD in Legacy Storage Systems," which include that the shelf or the controller architecture may not be able to sustain the performance capabilities of a whole shelf full of SSD. In essence, memory-based storage acts as a bottleneck exposer.

This performance concern has led to the growth of a wide range of options in the SSS market. There are PCIe-based SSS devices. They eliminate many of the bottleneck variables by removing the latency caused by the storage network. There are challenges sharing PCIe SSS across multiple servers, which applications like clustering and virtualization require. Another option is to use SSS appliance-based systems that put memory-based storage into an appliance and make it available like any other block device attached to the storage network via Fibre Channel or 10-Gbps Ethernet. These systems often don't use the SSD form factor, but instead use memory modules or custom boards so that greater density can be achieved. Shared SSS appliances of course have to deal with the storage network as well as developing their own internal switching architectures so that the packaging of the appliance itself does not become an inhibitor to performance.

As we will discuss in our upcoming webcast "Understanding SSD Performance," there is more to performance than how fast the devices are. How SSS is packaged will be one of the key factors in determining performance. We have made the device, because it is memory, so fast that it now exposes all the other weaknesses in the performance chain, and it is up to suppliers to develop technology that removes those bottlenecks.
<urn:uuid:8b2571d6-735b-4d02-8436-195d4ea304de>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/solid-state-disk-or-solid-state-storage/300506920?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00172-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962884
480
2.890625
3
In biometrics, iris and retinal scanning are known as “ocular-based” identification technologies, meaning they rely on unique physiological characteristics of the eye to identify an individual. Even though they both share part of the eye for identification purposes, these biometric modalities are quite different in how they work. Let’s take a closer look at both and then explain the similarities and differences in detail: Retinal Scanning: The human retina is a thin tissue composed of neural cells that is located in the posterior portion of the eye. Because of the complex structure of the capillaries that supply the retina with blood, each person’s retina is unique. The network of blood vessels in the retina is so complex that even identical twins do not share a similar pattern. Although retinal patterns may be altered in cases of diabetes, glaucoma or retinal degenerative disorders, the retina typically remains unchanged from birth until death. (Source: Wikipedia) A biometric identifier known as a retinal scan is used to map the unique patterns of a person’s retina. The blood vessels within the retina absorb light more readily than the surrounding tissue and are easily identified with appropriate lighting. A retinal scan is performed by casting an unperceived beam of low-energy infrared light into a person’s eye as they look through the scanner’s eyepiece. This beam of light traces a standardized path on the retina. Because retinal blood vessels are more absorbent of this light than the rest of the eye, the amount of reflection varies during the scan. The pattern of variations is converted to computer code and stored in a database. Retinal scanning also has medical applications. Communicable illnesses such as AIDS, syphilis, malaria and chicken pox, as well as hereditary diseases like leukemia, lymphoma, and sickle cell anemia, impact the eyes. Pregnancy also affects the eyes. Likewise, indications of chronic health conditions such as congestive heart failure, atherosclerosis, and cholesterol issues first appear in the eyes. Iris Scanning: The iris (plural: irides or irises) is a thin, circular structure in the eye, responsible for controlling the diameter and size of the pupils and thus the amount of light reaching the retina. “Eye color” is the color of the iris, which can be green, blue, or brown. In some cases it can be hazel (a combination of light brown, green and gold), grey, violet, or even pink. In response to the amount of light entering the eye, muscles attached to the iris expand or contract the aperture at the center of the iris, known as the pupil. The larger the pupil, the more light can enter. Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of the irides of an individual’s eyes, whose complex random patterns are unique and can be seen from some distance. Unlike retina scanning, iris recognition uses camera technology with subtle infrared illumination to acquire images of the detail-rich, intricate structures of the iris. Digital templates encoded from these patterns by mathematical and statistical algorithms allow unambiguous positive identification of an individual. Databases of enrolled templates are searched by matcher engines at speeds measured in the millions of templates per second per (single-core) CPU, and with infinitesimally small False Match rates. 
Hundreds of millions of persons in countries around the world have been enrolled in iris recognition systems, for convenience purposes such as passport-free automated border-crossings, and some national ID systems based on this technology are being deployed. A key advantage of iris recognition, besides its speed of matching and its extreme resistance to False Matches, is the stability of the iris as an internal, protected, yet externally visible organ of the eye. Similarities and Differences: While both iris and retina scanning are ocular-based biometric technologies, there are distinct similarities and differences that differentiate the two modalities. Iris Recognition uses a camera, which is similar to any digital camera, to capture an image of the Iris. The Iris is the colored ring around the pupil of the eye and is the only internal organ visible from outside the body. This allows for a non-intrusive method of capturing an image since you can simply take a picture of the iris from a distance of 3 to 10 inches away. Retinal Scanning requires a very close encounter with a scanning device that sends a beam of light deep inside the eye to capture an image of the Retina. Since the Retina is located on the back of the eye, retinal scanning was not widely accepted due to the intrusive process required to capture an image. Here is an overview of some similarities and differences between iris and retina scanning:
- Low occurrence of false positives
- Extremely low (almost 0%) false negative rates
- Highly reliable because no two people have the same iris or retinal pattern
- Speedy results: Identity of the subject is verified very quickly
- The capillaries in the iris and retina decompose too rapidly to use an amputated eye to gain access
- Retinal scan measurement accuracy can be affected by disease; iris fine texture remains remarkably stable
- An iris scan is no different than taking a normal photograph of a person and can be performed at a distance; for retinal scanning the eye must be brought very close to an eyepiece (like looking into a microscope)
- Iris scanning is more widely accepted as a commercial modality than retinal scanning
- Retinal scanning is considered to be invasive, iris is not
Chart: Iris vs. Retinal Scanning: What are the similarities and differences?
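The matching step described above comes down to a bitwise comparison of fixed-length templates. Purely as a hypothetical sketch (the template size, noise level, and implied decision threshold below are invented, and this is not any vendor's actual matcher), a normalized Hamming distance between two binary iris codes with occlusion masks could be computed like this:

    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fraction of mutually usable bits on which two iris codes disagree."""
        usable = mask_a & mask_b              # bits not hidden by lids, lashes, glare
        if usable.sum() == 0:
            return 1.0                        # nothing to compare
        disagree = (code_a ^ code_b) & usable
        return disagree.sum() / usable.sum()

    # Toy demo with random 2048-bit templates; real codes come from filtered iris images.
    rng = np.random.default_rng(0)
    probe = rng.random(2048) < 0.5
    same_eye = probe.copy()
    same_eye[rng.choice(2048, 60, replace=False)] ^= True   # a little sensor noise
    other_eye = rng.random(2048) < 0.5
    mask = np.ones(2048, dtype=bool)

    print(hamming_distance(probe, same_eye, mask, mask))    # ~0.03: same iris
    print(hamming_distance(probe, other_eye, mask, mask))   # ~0.5: different irises

Codes from the same eye disagree on only a few percent of their bits, while unrelated codes disagree on roughly half, which is why a simple distance threshold separates matches from non-matches and why searches can run at millions of comparisons per second.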
<urn:uuid:c447450a-ca16-4d8c-bf1e-fb9b5d804974>
CC-MAIN-2017-04
http://blog.m2sys.com/biometric-hardware/iris-recognition-vs-retina-scanning-what-are-the-differences/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931078
1,182
3.484375
3
Imagine you’re driving down a street in your town, and as you pass through an intersection you see a flash out of the corner of your eye just before a car running the red light broadsides you. Now, imagine that your vehicle was in communication with the other vehicle, and your car automatically stopped or took evasive action to avoid the accident. That would be pretty amazing—and that is just the sort of car-to-car communication technology the Department of Transportation wants to make mandatory for all passenger vehicles. However, the technology may also invade your privacy and put you at risk. It’s really just a next step in the evolution of safety. We require safety belts because they keep you secured in your seat during an accident. We require airbags because the airbag can deploy in the blink of an eye—much faster than you can possibly react in a crash. If we have the technology for vehicles to proactively communicate with one another and simply avoid the accidents in the first place, then of course we should use it, right? The US Department of Transportation estimates that V2V (vehicle-to-vehicle) communication could prevent four out of five accidents. According to data from the NHTSA (National Highway Traffic Safety Administration), there were 33,561 fatalities in 2012 from motor vehicle crashes. Granted, some of those were drivers, and some were passengers, so it doesn’t translate directly, but just using rough math reducing the total crashes by 80 percent could potentially save more than 25,000 lives. The car-to-car communication transponder technology that the DoT has in mind would communicate a car’s location, direction, and speed to nearby vehicles. The system could then alert the driver of potential danger and/or automatically slow or stop the car to avoid a crash. There are a couple concerns to address with such a system, though. First, there is the question of privacy, and whether or not that data could be used against you. If your car is sending detailed speed data to nearby vehicles, and you pass a police car, would that police officer be able to pull you over and write you a ticket simply based on the fact that your own car announced that you were speeding? The second concern is that the system could be hacked, and somebody could override your vehicle and force it to stop when there is no impending accident. Security researchers demonstrated a hack at the 2013 Black Hat conference last summer that enabled them to remotely control computer-operated functions in modern vehicles. The hackers were able to sound the horn, slam on the brakes, spoof the GPS coordinates, or even move the steering wheel simply by issuing commands from a computer. This second concern exists with or without the proposed V2V transponder technology. It is already possible as a function of just how dependent our vehicles are on computer systems. However, tying those computer-aided functions into a system that communicated over WiFi may just make it that much easier for would-be hackers to remotely access and control your vehicle’s behavior. I’m all for cutting down on accidents by 80 percent, and possibly saving tens of thousands of lives. It just needs to be done in a way that addresses these privacy and security concerns.
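The article doesn't specify the message format the DoT has in mind, so the following is a purely hypothetical sketch (field names, units, and thresholds are all invented): each car periodically broadcasts its position, heading, and speed, and a receiving car dead-reckons both trajectories a few seconds ahead to decide whether to warn the driver or brake.

    import math
    from dataclasses import dataclass

    @dataclass
    class SafetyMessage:          # hypothetical fields, not the actual V2V/DSRC spec
        vehicle_id: str
        x_m: float                # meters east of a shared reference point
        y_m: float                # meters north of it
        heading_deg: float        # 0 = north, 90 = east
        speed_mps: float

    def position_at(msg, t):
        """Dead-reckon the vehicle t seconds ahead along its current heading."""
        rad = math.radians(msg.heading_deg)
        return (msg.x_m + msg.speed_mps * t * math.sin(rad),
                msg.y_m + msg.speed_mps * t * math.cos(rad))

    def collision_risk(own, other, horizon_s=4.0, step_s=0.1, danger_radius_m=3.0):
        """True if the two projected paths pass within danger_radius_m of each other."""
        steps = int(horizon_s / step_s) + 1
        for i in range(steps):
            t = i * step_s
            ox, oy = position_at(own, t)
            px, py = position_at(other, t)
            if math.hypot(ox - px, oy - py) < danger_radius_m:
                return True
        return False

    me = SafetyMessage("me", 0.0, -40.0, heading_deg=0.0, speed_mps=15.0)
    red_light_runner = SafetyMessage("other", -45.0, 0.0, heading_deg=90.0, speed_mps=17.0)
    print(collision_risk(me, red_light_runner))   # True: both reach the intersection at ~2.6 s

A real deployment would also have to handle GPS error, dropped messages and, crucially, authentication of the broadcasts themselves, which is exactly where the privacy and spoofing concerns above come in.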
<urn:uuid:2bb115a9-e81a-47f6-9fed-b8a05e88309c>
CC-MAIN-2017-04
http://www.csoonline.com/article/2151441/privacy/interconnected-cars-add-unique-privacy-concerns.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00226-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967386
668
3
3
Reinventing the laptop is often left to Apple, HP and other tech giants. But in 2009, a team of three Stanford University graduate students joined them in the ranks. The Bloom is a recyclable laptop that can be disassembled without tools in less than 45 seconds. The Stanford students — Aaron Engel-Hall, Rohan Bhobe and Kirstin Gail — created the laptop in a graduate mechanical engineering course that challenged the students to address a real-world problem: e-waste. They were paired with four students from Aalto University in Finland to build the laptop. 3-D design software company Autodesk, the group’s sponsor, tasked the students with building a recyclable consumer electronic device. After nine months of brainstorming, research and designing, the students completed the prototype. While conducting research, the team learned that most users don’t know how to recycle their laptops, Engel-Hall said, so it was important to design a laptop that made the process easy. “The truth is that in any electronic device, there are some very hazardous materials if they’re not dealt with properly,” Engel-Hall said, “and many times will end up in landfills.” E-waste is already an issue, and it will continue to worsen until 2015, when volume will peak at 73 million metric tons, according to Pike Research. Global volumes will decline in 2016 and beyond, however, as a number of key e-waste initiatives begin to turn the tide, the firm reports. And Bloom may play a role in solving the problem. “Our goal for this laptop at the very beginning of the class was to alleviate e-waste as best we could,” said Engel-Hall, adding that the team chose to build a recyclable laptop because it posed the biggest challenge. “It was the most difficult device we could choose to make recyclable because it contains basically every ‘bad apple’ — every hazardous material and component that requires special handling to recycle than any other device has.” Computer companies have shown interest in the laptop and its documentation, Engel-Hall said, but plans to manufacture it have not yet been made. Parts like printed circuit boards, battery, hard drive and screen — which require special handling — can be inserted into a prepaid envelope that’s stored behind the laptop’s screen. The parts are sent to a facility that can properly recycle them.
<urn:uuid:d28d267b-118d-42de-b611-c341c357d9a7>
CC-MAIN-2017-04
http://www.govtech.com/technology/Stanford-Researchers-Make-Recyclable-Laptop.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00438-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947531
516
3.375
3
The RSA Key Kerfuffle: Why Randomness Is Hard
Just how much of a problem is the RSA key kerfuffle? Two research teams weigh in about encryption schemes.
Two separate research teams recently raised concerns about the security of the encryption schemes used by popular security protocols such as secure sockets layer (SSL). According to a report issued by a team of U.S.- and EU-based researchers, two out of every 1,000 keys generated by the popular RSA encryption algorithm are "insecure." A separate group of researchers, working primarily out of the University of Michigan (UM), says the risk is higher: as many as four out of every 1,000 RSA keys are insecure. The two research teams came to different conclusions about the impact of their findings, however. The UM-based group limits the scope of its advisory primarily to embedded devices; the first group, on the other hand, sees the problem as much more widespread. RSA Security Inc., for its part, claims that neither team has discovered a flaw in the RSA algorithm itself. The problems, RSA maintains, are strictly on the implementation side. "[T]he data does not point to a flaw in the algorithm, but instead points to the importance of proper implementation, especially regarding the exploding number of embedded devices that are connected to the Internet today," said RSA in a statement. The report prepared by the multinational team is based in part on information collected by the SSL Observatory project of the Electronic Frontier Foundation (EFF). EFF describes SSL Observatory as an attempt to collect and study the certificates used to encrypt HTTPS traffic on the IPv4 Internet; as part of its SSL Observatory effort, EFF maintains a dataset of all publicly-visible SSL certificates. The multinational team used these certificates, along with a separate dataset of other credentials (chiefly PGP keys), as the basis for its research. The title of the team's report, Ron Was Wrong, Whit Is Right, is a cheeky allusion to two giants of public key cryptography: Ron Rivest and Whitfield Diffie. The former is the "R" in RSA. According to the report, there's every reason to panic. "[A]mong the 4.7 million distinct 1024-bit RSA moduli that we had originally collected, 12,720 have a single large prime factor in common," notes the report, a collaboration of security researchers Maxime Augier, Arjen K. Lenstra, James P. Hughes, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. "[I]t does not seem to be a disappearing trend: in our current collection of 11.4 million RSA moduli[,] 26,965 are vulnerable, including ten 2048-bit ones," the report continues. According to researchers, successful exploitation "could affect the expectation of security that the public key infrastructure is intended to achieve." The team's findings seem to be at odds with the underlying math of the RSA algorithm, which is premised on the idea that the output of (i.e., the keys produced by) an input process using multiple (pseudo-)random values should be prohibitively difficult to factor. In other words, the keys produced by encryption schemes such as RSA should be, practically speaking, "secure." The team doesn't necessarily take issue with the math underlying the RSA algorithm. It instead focuses on the practical difficulty of implementing any encryption algorithm that requires a pseudo-random value as input. 
It does, however, contrast the RSA algorithm -- which (in its default configuration) uses multiple (pseudo-)random values to produce its output -- with those of other schemes, such as Diffie-Hellman, ElGamal, and the digital signature algorithm (DSA), which instead use a single (pseudo-)random value. "We do not question the validity of this conclusion, but found that it can only be valid if each output is considered in isolation. When combined[,] some outputs are easy to factor because the above assumption sometimes fails," the report explains. "Cryptosystems such as RSA that require ... multiple secrets are more affected by the apparent difficulty to generate proper random values than systems ... that require a single secret." Connect the dots and you have a widespread problem, the research team concludes. "We were surprised ... by the extent to which public keys are shared among unrelated parties. For [encryption schemes such as] ElGamal and DSA[,] sharing is rare, but for RSA[,] the frequency of sharing may be a cause for concern," the report notes, suggesting that its findings won't come as a surprise to "agencies and parties that are known for their curiosity in such matters." The National Institute of Standards and Technology (NIST) proposed DSA for the digital signature standard (DSS) in 1991. At the time, this move was seen as controversial. According to the report, however, there may have been more to this decision than was then known. The report does emphasize that DSA, ElGamal, and similar schemes likewise require a sufficient degree of randomness, although -- unlike default-RSA -- they require only a single (pseudo-random) input value to generate a key.
The Zero Effect
How might an attacker go about exploiting this vulnerability? It's easier than you might think. As part of its testing, for example, the UM-based team developed a tool that can generate private keys for "all the hosts vulnerable to ... attack ... in only a few hours." (See below.) What's more, says a CISSP with a prominent public sector-oriented services firm, the problem itself isn't unknown. "It's called a birthday attack. It's called that because if you pick a room of 'X' number of people, it's almost guaranteed that a higher percentage of them than you might expect will have the same birthdays," this security professional explains. The findings show that the algorithms or methods used to generate keys in the affected implementations are insufficiently random, this CISSP says. ("Randomness" in this context applies to the selection of the large prime numbers used to generate keys in the first place.) In other words, some keys -- i.e., a higher proportion than one might reasonably expect -- share the same birthdays. This CISSP summarizes the problem by quoting a line from the circa-2000 film The Zero Effect. "If you're searching for something specific," he quotes, "your chances of finding it are very low. If you're searching for anything at all, your chances of finding it are very high." In this case, "anything at all" refers to the set of known (or predictable) prime numbers. A posting on the group blog "Freedom to Tinker" by Nadiah Heninger, post-doctoral fellow in the department of computer science and engineering at U.C. San Diego, and a member of the UM-centered team, describes just such a Zero Effect-like scenario. "The keys we were able to compromise were generated incorrectly -- using predictable 'random' numbers that were sometimes repeated. 
There were two kinds of problems: keys that were generated with predictable randomness, and a subset of these, where the lack of randomness allows a remote attacker to efficiently factor the public key and obtain the private key," she writes. "With the private key, an attacker can impersonate a [W]eb site or possibly decrypt encrypted traffic to that [W]eb site. We've developed a tool that can factor these keys and give us the private keys to all the hosts vulnerable to this attack on the Internet in only a few hours." Unlike the multi-national team, Heninger and the UM-based team see the scope of the problem as comparatively limited. "[T]here's no need to panic as this problem mainly affects various kinds of embedded devices such as routers and VPN devices, not full-blown [W]eb servers," she writes, dismissing speculation -- in The New York Times and elsewhere -- that the findings could or should undermine confidence in Web commerce. "Unfortunately," she concedes, "we've found vulnerable devices from nearly every major manufacturer and we suspect that more than 200,000 devices, representing 4.1 percent of the SSL keys in our dataset, were generated with poor entropy. Any weak keys found to be generated by a device suggests that the entire class of devices may be vulnerable upon further analysis."
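The reason shared factors are so damaging is that "efficiently factor" really does mean efficient: when two RSA moduli share a prime, recovering it is a single greatest-common-divisor computation, fast even for 1024- or 2048-bit numbers. A toy illustration with deliberately tiny primes (the researchers ran pairwise-GCD computations over millions of collected keys using batch algorithms):

    from math import gcd

    # Two "RSA moduli" generated with poor entropy that ended up sharing a prime.
    # Tiny primes are used here purely for readability.
    p, q1, q2 = 65537, 99991, 100003
    n1, n2 = p * q1, p * q2

    shared = gcd(n1, n2)          # cheap even when n1 and n2 are 2048-bit numbers
    if shared > 1:
        # Both moduli are now fully factored, so both private keys can be rebuilt.
        print("shared prime:", shared)
        print("n1 =", shared, "*", n1 // shared)
        print("n2 =", shared, "*", n2 // shared)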
<urn:uuid:7bd7d5ad-7591-4c62-bc89-591b1e8d0ce4>
CC-MAIN-2017-04
https://esj.com/articles/2012/02/27/rsa-key-kerfuffle.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00464-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954689
1,737
2.59375
3
Definition: A binary relation R for which a R b and b R a implies a = b. See also symmetric, irreflexive, partial order. Note: The relation "less than or equal to" is antisymmetric: if a ≤ b and b ≤ a, then a=b. The relation "is married to" is symmetric, but not antisymmetric: if Paul is married to Marlena, then Marlena is married to Paul (symmetric), but Paul and Marlena are not the same person. Equals (=) is antisymmetric because a = b and b = a implies a = b. Less than (<) is also antisymmetric because a < b and b < a is always false, and false implies anything.
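As a quick supplement to the definition (not part of the original dictionary entry), a finite relation given as a set of ordered pairs can be checked for antisymmetry mechanically:

    def is_antisymmetric(relation):
        """relation: a set of (a, b) pairs. Antisymmetric iff a R b and b R a imply a == b."""
        return all(a == b for (a, b) in relation if (b, a) in relation)

    elements = [1, 2, 3, 4]
    less_equal = {(a, b) for a in elements for b in elements if a <= b}
    less_than = {(a, b) for a in elements for b in elements if a < b}
    married_to = {("Paul", "Marlena"), ("Marlena", "Paul")}

    print(is_antisymmetric(less_equal))   # True
    print(is_antisymmetric(less_than))    # True (vacuously: a < b and b < a never both hold)
    print(is_antisymmetric(married_to))   # False: symmetric but not antisymmetric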
<urn:uuid:b8ca31a9-fd51-406c-ae07-87ec74bcb0b6>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/antisymmetric.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00126-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901354
286
3.484375
3
The IT Infrastructure Library® (ITIL®) encompasses the following six areas:
- Problem Management
- Incident Management
- Change Management
- Configuration Management
- Service Level Management
- Release Management
To gain a further understanding of ITIL, download a Giva ITIL whitepaper. Giva eKnowledgeManager specifically addresses Problem Management. Problem Management helps ensure the stability of IT infrastructure and IT services. It requires maintaining a database of Problems and Known Errors. A Problem is an unknown, underlying cause of one or more Incidents representing Configuration Item(s) (e.g. Software, Hardware, documentation, etc.). Once the Configuration Item and the underlying cause are known, the Problem becomes a Known Error. The Giva knowledge management software maintains a centralized database of Problems and Known Errors, streamlining creation, categorization, and retrieval of this information. Giva eKnowledgeManager helps you capture Problems and Known Errors as they are created, so that you can distribute and share the information with the right individuals at a later date. By tracking Known Errors, it helps the Change and Release processes correct the errors, improve the IT infrastructure, and eliminate further Incidents.
<urn:uuid:a76676ca-6a98-4cf8-a1a6-154e55092f05>
CC-MAIN-2017-04
https://www.givainc.com/knowledge-base-software/itil-problem-incident-service-level-cloud-hosted.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.816178
244
2.53125
3
Exascale computing is going to require chipmakers to build extremely efficient microprocessors. This has been the focus of the Green500 list, which forgoes talking about the world’s fastest clusters in favor of those with the best performance per watt rating. In this brave new world of high performance computing, and increasingly any kind of computing, chip efficiency is now under intense scrutiny. Yesterday, Real World Technologies posted an article detailing the computational efficiency of CPUs and GPUs, and how those designs have evolved over the last three years. In the analysis, author David Kanter looks at both the computational performance per watt as well as performance per physical die area. He also compares how the chip architectures have fared since 2009, when Kanter did his initial analysis. The evaluations were based on double precision floating point performance. In 2009, the standout in performance per watt and physical space was AMD’s RV770 GPU. The processor was able to perform 1.6 gigaflops/watt. It was also capable of performing just under one gigaflop per mm2. Intel’s Silverthorne processor was slightly less efficient than the RV770, but had far less density than AMD’s GPU. Subsequently renamed “Atom,” the chip was able to perform between 1.5 and 1.6 gigaflops/watt and was primarily tasked with powering mobile consumer devices. While the RV770 appeared to be the clear winner in 2009, GPUs were not widely accepted as compute engines and suffered a number of challenges. Many were unable to deliver double precision floating point calculations, and those that could often did so with limited performance. Programming GPUs was also difficult, as APIs were in their early stages. GPU technology has improved significantly over the past three years though. Almost all these “graphic” processors can perform double precision calculations and have become simpler to program, thanks to more mature programming frameworks like CUDA and OpenCL. CPUs have also improved over the interval, including new vector extensions like x86 AVX. So what does the landscape look like today? From Kanter’s analysis: IBM currently takes the energy efficiency crown with their Blue Gene/Q (BG/Q) processor, which just so happens to power the most powerful supercomputer in the world. The chip can perform roughly 3.75 gigaflops/watt and is represented in the top 20 systems on the current Green500 list. Not far behind in efficiency is NVIDIA’s Fermi GPU, which performs close to 3 gigaflops/watt. The K computer’s SPARC64 chip is just a little further behind at 2.2 gigaflops/watt. All other mainstream CPUs in use for HPC – Intel’s Sandy Bridge, AMD’s Interlagos and IBM’s POWER7 – are further back, below 1.5 gigaflops/watt. Kanter says this divergence reflects a fundamental difference between traditional processors (x86 CPUs, POWER, and others) and throughput processors (GPUs and BG/Q). But, he notes, the difference in efficiency between the two groups has narrowed since 2009, and he expects them to eventually converge. Probably not in the short-term though. Before the end of this year, Intel will release its first Many Integrated Core (MIC) coprocessor, now rebranded as Xeon Phi, which promises over 1 teraflop of absolute performance. It will directly compete with NVIDIA’s Kepler K20 GPU, also due out later this year. Both chips will probably best BG/Q silicon on performance/watt. Further out, it should get even more interesting. 
Server-capable 64-bit ARM processors, low-power x86 CPUs from Intel, heterogeneous CPU-GPU chips from (at least) AMD and NVIDIA, and whatever IBM is planning as a sequel to BG/Q are all in the pipeline for 2013-2014.
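Both figures of merit in this comparison are simple ratios of peak double precision throughput to power draw or to die area. A trivial illustration with made-up numbers (not measurements of any chip named above):

    def efficiency(peak_gflops, tdp_watts, die_mm2):
        """Return (GFLOPS per watt, GFLOPS per square millimeter) for a hypothetical chip."""
        return peak_gflops / tdp_watts, peak_gflops / die_mm2

    per_watt, per_mm2 = efficiency(peak_gflops=200.0, tdp_watts=55.0, die_mm2=360.0)
    print(f"{per_watt:.2f} GFLOPS/W, {per_mm2:.2f} GFLOPS/mm^2")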
<urn:uuid:3c1d3497-8587-4e3d-9cdd-6f6becc09bbc>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/07/26/the_2012_performance_per_watt_wars/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00548-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955982
831
2.90625
3
The impact of IT and cloud computing on the economy is often not given enough attention. Cloud computing will not only affect the roles and lives of people within the IT domain but will have an effect on people outside of it as well. In fact, it will have a greater impact on the latter. The effect will start with a business moving IT into a third party data center, which will be followed by business processes also being moved in that direction. This will also reduce the need for manpower because, with the help of the cloud, fewer people are required for tasks like automating a process. According to CSC, an IT services provider, the acceleration in business process outsourcing through the cloud is a big factor in causing economic disruption. For instance, IDC predicted that cloud computing is in a position to create 14 million new jobs in another 3 years. The question, however, is whether these new jobs will merely replace a fraction of the jobs that have already been lost.
<urn:uuid:782fce3b-6e28-4952-a860-2c629b3a70b1>
CC-MAIN-2017-04
http://www.datacenterjournal.com/cloud-computing-and-its-economic-effects/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00485-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964807
210
2.59375
3
“Traceroute” is a utility that’s commonly used when troubleshooting IP networks, but many network managers at the CCNA level and beyond aren’t really sure how it works or what you can do with it. One reason for this might be that, unlike most things in the IP world, there are no standards documents describing how “Traceroute” functions. Thus the implementations are vendor-specific, and not even the utilities’ names are standardized. With Cisco IOS and Unix it’s called “traceroute”, in the Microsoft world, it’s “tracert”, and other operating systems have similar utilities, such as “tracepath” for Linux. There is RFC 1393 Traceroute Using an IP Option, but as far as I know no vendor implements this, so we’ll talk about how “trace” programs work in the real world. Now suppose that we have two host computers in an IP internetwork that also contains several routers, as shown in Figure 1: Let’s pretend that we don’t know the topology, and we’d like to determine it. Specifically, we’re interested in the path taken by a packet going from Host 1 (H1, with IP address 18.104.22.168) to Host 2 (H2, with IP address 22.214.171.124). We’ll use a “trace” program on H1 to determine this. When we do, what appears on H1’s screen looks something like this:
H1#trace ip 22.214.171.124 probe 1
Type escape sequence to abort.
Tracing the route to 22.214.171.124
At this point we invoked the trace program from H1, tracing towards the destination IP address 22.214.171.124 (H2) and told it to send one “probe” packet for each hop. The probe is a unicast IP packet in which H1 has set the IP source address to its own, the IP destination address to H2, and the IP TTL field to a value of one (an artificially low number that wouldn’t be used for normal data). H1 will then encapsulate the probe packet in a frame and send the frame to R1, its default gateway. When H1 sends the probe packet, it also starts a timer. When the packet arrives at R1 several things occur, and RFC 1812 Requirements for IP Routers describes what those things are. Here’s a quote from section 5.3.1 of that RFC: The Time-to-Live (TTL) field of the IP header is defined to be a timer limiting the lifetime of a datagram. It is an 8-bit field, and the units are seconds. Each router (or other module) that handles a packet MUST decrement the TTL by at least one, even if the elapsed time was much less than a second. Since this is very often the case, the TTL is effectively a hop count limit on how far a datagram can propagate through the Internet. You might remember that in the IP world a “datagram” is the same thing as a “packet”. So, according to the RFC, when a router forwards a packet, it must decrement the TTL. By “elapsed time” we mean the amount of time that the packet spent sitting in the router between its arrival and being forwarded towards the destination. Okay, let’s continue on with section 5.3.1 of the RFC: When a router forwards a packet, it MUST reduce the TTL by at least one. If it holds a packet for more than one second, it MAY decrement the TTL by one for each second. Sounds reasonable … the TTL (Time To Live) was originally envisioned as a seconds counter (not a hop counter), so if the packet stays in the router for more than a second, decrement the TTL by the number of seconds that the packet sits in the router, and if it stays for a second or less, decrement the TTL by one. 
Since modern routers generally process packets pretty quickly (think about the per-packet latency of a router that’s forwarding a million or more packets per second), we’ll assume that the TTL is generally decremented by one for each router hop. And now, back to section 5.3.1: If the TTL is reduced to zero (or less), the packet MUST be discarded, and if the destination is not a multicast address the router MUST send an ICMP Time Exceeded message, Code 0 (TTL Exceeded in Transit) message (sic) to the source. Okay, so if a router decrements a packet’s TTL to zero or less, it must discard the packet. By default it also sends a packet containing an ICMP “TTL Exceeded” message (TEM) back to the original packet’s source. In closing, let us quote from the RFC one last time: The IP TTL is used, somewhat schizophrenically, as both a hop count limit and a time limit. Next time we’ll investigate how this RFC impacts the behavior of “trace” programs.
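Putting the pieces together: a trace program sends probes with deliberately small TTL values and listens for the ICMP Time Exceeded messages that routers return. Below is a simplified, hypothetical sketch of the classic UDP-probe approach; it needs root or administrator privileges for the raw ICMP socket, and unlike a real traceroute it does not distinguish a router's Time Exceeded reply from the Port Unreachable message the destination eventually sends back.

    import socket
    import time

    def traceroute(dest_name, max_hops=30, timeout=2.0, port=33434):
        dest_addr = socket.gethostbyname(dest_name)
        for ttl in range(1, max_hops + 1):
            # UDP socket for the outgoing probe, with a deliberately small TTL
            send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            # Raw ICMP socket to catch the router's "Time Exceeded" reply (needs root)
            recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
            recv_sock.settimeout(timeout)
            start = time.time()
            send_sock.sendto(b"", (dest_addr, port + ttl))
            hop_addr = None
            try:
                _, (hop_addr, _) = recv_sock.recvfrom(512)
                rtt_ms = (time.time() - start) * 1000
                print(f"{ttl:2d}  {hop_addr}  {rtt_ms:.1f} ms")
            except socket.timeout:
                print(f"{ttl:2d}  *")
            finally:
                send_sock.close()
                recv_sock.close()
            if hop_addr == dest_addr:
                break   # the reply came from the destination itself, so we're done

    if __name__ == "__main__":
        traceroute("example.com")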
<urn:uuid:36bf0914-1240-4c0e-a357-5ab06683421b>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/01/19/traceroute-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00117-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923314
1,125
3.53125
4
Cambridge, UK, September 4, 2000 - Kaspersky Lab Int., an international anti-virus software development company, announces the discovery of the W2K.Stream virus, which represents a new generation of malicious programs for Windows 2000. This virus uses a new breakthrough technology based on the "Stream Companion" method for self-embedding into the NTFS file system. The virus originates from the Czech Republic and was created at the end of August by hackers going by the pseudonyms of Benny and Ratter. To date, Kaspersky Lab has not registered any infections resulting from this virus; however, the virus is fully functional and clearly capable of spreading "in the wild." "Certainly, this virus begins a new era in computer virus creation," said Eugene Kaspersky, Head of Anti-Virus Research at Kaspersky Lab. "The 'Stream Companion' technology the virus uses to plant itself into files makes its detection and disinfection extremely difficult to complete." Unlike previously known methods of file infection (adding the virus body at the beginning, end or any other part of a host file), the "Stream" virus exploits the NTFS file system (Windows NT/2000) feature that allows multiple data streams. For instance, in Windows 95/98 (FAT) files, there is only one data stream - the program code itself. Windows NT/2000 (NTFS) enables users to create any number of data streams within the file: independent executable program modules, as well as various service streams (file access rights, encryption data, processing time etc.). This makes NTFS files very flexible, allowing for the creation of user-defined data streams aimed at completing specific tasks. "Stream" is the first known virus that uses the feature of creating multiple data streams for infecting files of the NTFS file system (see picture 1). To complete this, the virus creates an additional data stream named "STR" and moves the original content of the host program there. Then, it replaces the main data stream with the virus code. As a result, when the infected program is run, the virus takes control, completes the replicating procedure and then passes control to the host program. [Picture 1: the "Stream" infection procedure, showing the file before and after infection.] "By default, anti-virus programs check only the main data stream. There will be no problems protecting users from this particular virus," Eugene Kaspersky continues. "However, the viruses can move to additional data streams. In this case, many anti-virus products will become obsolete, and their vendors will be forced to urgently redesign their anti-virus engines." Protection against the "Stream" virus has already been added to the daily update of AntiViral Toolkit Pro (AVP). Please update your anti-virus. AntiViral Toolkit Pro can be purchased in the Kaspersky Lab online store at the following address: http://www.digitalriver.com/dr/v2/ec_Main.Entry?SP=10007&SID=25571&CID=0.
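For readers unfamiliar with the NTFS feature being abused here, alternate data streams can be created with ordinary file APIs by appending a colon and a stream name to the file name on an NTFS volume. A small illustration (Windows and NTFS only; it merely hides some text and has nothing to do with the virus itself):

    from pathlib import Path

    host = Path("host.txt")
    host.write_text("visible main stream\n")        # the default, unnamed $DATA stream

    # Write to a named alternate data stream; directory listings and the file size
    # still reflect only the main stream. (Fails on non-NTFS filesystems.)
    with open("host.txt:hidden", "w") as ads:
        ads.write("payload tucked away in an alternate stream\n")

    print(host.read_text())                # shows only the main stream
    with open("host.txt:hidden") as ads:   # the extra stream must be opened by name
        print(ads.read())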
<urn:uuid:253dac93-7bb2-4810-a3fd-7eafe072ce64>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2000/A_New_Generation_of_Windows_2000_Viruses_is_Streaming_Towards_PC_Users
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00419-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912847
645
2.75
3
Why do techies talk about layer numbers like, “That’s a layer 2 problem or a layer 3 problem”? The short answer is maybe they expect everyone to know what they know. So let me share what they know about the layers, the numbers, and what happens at each layer. In the last entry, I gave you some generic layers and the standards that apply to them. Here we’ll talk about the most accepted model for explaining layers: ISO’s Open Systems Interconnect (OSI) model. Let’s start at the bottom and work our way up the layers. Layer 1 is the Physical layer. It includes the cabling, connectors, wall jacks, and interfaces in the equipment from computers to switches to routers in wired networks as well as the radios, lasers, satellites, dishes, and microwaves used in wireless networks. It supports physical connections without addresses. Layer 2 is called the Data Link Layer. It links the Physical layer to the Network layer for sending and receiving frames of data. To do that, it uses Logical Link Control (LLC) functions and Media Access Control (MAC) addresses to identify the source and target devices on the LAN. The Network Interface Card (NIC) in your computer has a unique burned-in MAC or physical address that the switch uses to connect it to the rest of the local network devices. Layer 2 has error detection to keep from wasting the rest of the stack’s time with bad frames. MAC addresses only work on a local network. The Network Layer is the busiest layer in the model. Layer 3 includes logical addresses to identify the network, subnet, and interface of the source and target of the datagrams being sent and received beyond the LAN into the rest of the Internet. It also uses those logical addresses to route packets through all networks. There are other protocols at the Network layer to do network diagnostics and identify logical errors. There is also a protocol to match the destination logical address with the target system’s MAC or physical address. Basic security also happens here, including packet filtering and access control. Layer 3’s main device is a router, though, except for routing, the things the network layer does also happen in computers, servers, smart phones, tablets, and other “end devices” which are the senders and receivers of network communications. Layer 4 or the Transport layer’s task is to get the data from one end device or host to another. It identifies the application and a return socket, detects errors, separates the message into pieces small enough for the application and the network to handle, and makes it possible to have more than one session over one physical link. The Session layer picks up where the Transport Layer left off by offering logging on and logging out of a network application. Layer 5 works with an application to set up, manage, and shut down a virtual connection. It acknowledges data received and retransmits data as needed. Some protocols combine the Transport and Session layer functions. Layer 6 is officially known as the Presentation Layer. It works with the application to format the data before sending it down the stack to go out onto the network. This includes enciphering and deciphering for security, compression and decompression for efficiency, graphics formatting, and any format translation to make it possible for different systems to understand the data. The Application Layer provides a way for the end user to work with the network application. 
Layer 7 is what the user sees and how users enter data or destinations for applications like file transfers, network printing, messaging, Web browsing, and email. Some people go beyond the official OSI model. They add some layers of their own that, they believe, have an effect on network applications.
- Layer 8 — Office Politics
- Layer 9 — IT Department
- Layer 10 — Users
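Joking aside, one way to make the real seven layers tangible is to notice which of them an application programmer actually touches. In the purely illustrative sketch below, the program works at layers 5 through 7 and asks the operating system for a layer 4 connection; layers 1 through 3 are handled by the OS and the network gear.

    import socket

    # Layers 5-7: the application composes an HTTP request (formatting, session)
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    # Layer 4: ask the OS for a TCP connection (ports, segmentation, reliability)
    with socket.create_connection(("example.com", 80), timeout=5) as s:
        # Layer 3: the OS chose source/destination IP addresses and a route
        # Layers 2-1: the NIC framed the bytes and put the bits on the wire
        s.sendall(request)
        reply = s.recv(300)

    print(reply.decode(errors="replace"))   # the far end's layer 7 response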
<urn:uuid:e063e912-24c8-45a7-bf33-7df0b3002571>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/04/18/layers-and-numbers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00355-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917193
803
4.15625
4
Tucked away in a garage-like room on the former Castle Air Force Base, a team of undergraduate and graduate students toils at making drones better at collecting data, and a recent development could be a big step toward that goal. Though there are 18 projects running parallel with each other, none has a military application, and that’s the way the team likes it. “We’re not spying on you,” said Brandon Stark, the lab manager. “We’re spying on cows and plants.” Drones have a dubious reputation, but they don’t have to, he said. UC Merced’s team is working to use drones to make data collection easier for farmers, environmentalists and firefighters, to name a few who could benefit from the unmanned aerial vehicle. The lab recently gained clearance from the Federal Aviation Administration to fly its AggieAir Minion DC1 drone at the Merced County Radio Control Club’s field south of Atwater. Stark, a 29-year-old doctoral student from Tracy who’s majoring in electrical engineering and computer science, said he expects to receive permission to do the same over the university’s protected land, which includes 6,500 acres of grassland and vernal pools. A sizable amount of work was required to get the FAA’s permission, Stark said, and having it makes UC Merced the only school in the University of California system with its own airspace. Now the team can do its research at will. Their research could be important in agriculture-heavy Merced County, as some drones are used to survey crop land. Eventually, they will be used regularly to survey individual plants. The drones fly over fields taking traditional photos and “near infrared” photos, a type of infrared that is particularly good for studying vegetation. Taken about every five seconds, the photos are put together to make a three-dimensional map of the area. The researchers can already use the near infrared photography and other data to identify plant species. The next goal is to take readings on which parts of an orchard are not doing well to identify the cause – water stress, high nitrates or pests. “Having a more detailed scope on our images and more detailed look into our field, we can look to help to alleviate some of that variability and give a more consistent crop yield,” Stark said. The laboratory is also a hands-on training ground for mechanical engineering students. Brendan Smith, 24, a doctoral student from Los Angeles, works primarily with the Aquacopter, a four-propellered drone made to land on a body of water and take water samples. Life forms in the water leave behind DNA and researchers can use it to estimate animal populations in the water. Smith has interned in Silicon Valley and China, among other places. He was thinking about working full time, but then ran across the drone lab and decided to pursue his doctoral degree. “I found this lab and fell in love with it,” he said. “I actually had no idea about (drones) until I joined the lab.” His next project will be using a drone to collect air samples, which can be tested for their likelihood to cause Valley fever. Dan Hirleman, dean of UC Merced’s School of Engineering, said the university’s use of drones and development of new technology could set it apart from other schools. “We’re kind of at the ground zero for a lot of what’s going on in those areas,” he said. 
“It’s just a perfect fit with our sustainability theme and the application area.” The research could also become a factor in attracting elite students to the university, he said, because it’s advanced and “captures your imagination.” The drones will likely become more common in any industry needing to collect environmental data. Amanda Carvajal, executive director of the Merced County Farm Bureau, wouldn’t speculate on the potential for the use of drones in Merced County agriculture. But she can remember questions about global-positioning systems and their usefulness for farmers more than a decade ago, and said they are now an essential piece of farming. ©2014 the Merced Sun-Star (Merced, Calif.)
<urn:uuid:38d862f1-5e61-490f-907f-5f129871f870>
CC-MAIN-2017-04
http://www.govtech.com/education/UC-Merced-Drone-Gains-FAA-Clearance.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00355-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955118
908
2.796875
3
Color Symphony - Quiz Questions with Answers
- Through which six countries does the “Blue Danube” flow?
- Which is the Red Planet?
- What four seas are named after colors?
- What is the color of mourning in China?
- Who was the artist who gave his name to a shade of red, often used to describe red hair?
- Which reptile is proverbially known for its power of changing color?
- What color are Siamese kittens at birth?
- What was the color of the boat of the owl and the pussy cat?
- A plant supplied the blue dye with which the Ancient Britons used to stain their bodies. What is its name?
- What is the name of the chemical substance which causes the green color of plants?
- What is the color of an Irish kilt?
- What do the terms signify? (a) White Collar (b) White Feather
- Who or what are “Blue Babies”?
- In which part of the world will you find the highly-colored ‘Bird of Paradise’?
- What is meant by the following colorful phrases? (a) to paint the town red (b) a blue-blooded person
- What colours are the following gems:
- What are the three pigments responsible for the colour of a human being?
- What breed of dog has a blue tongue?
- Does the zebra have (a) black stripes on white, or (b) white stripes on black
- What was the original colour of post-boxes?
Answers:
- Germany, Austria, Czechoslovakia, Hungary, Yugoslavia and Romania
- White Sea, Black Sea, Red Sea and Yellow Sea
- The Chameleon
- (a) An office worker
- Babies born with defective hearts
- New Guinea and adjacent islands
- (a) to indulge in riotous revelry (b) an aristocrat
- (a) Yellow (b) dark red (d) purple or violet
- Melanin, Carotene and Hemoglobin
- Black stripes on white
<urn:uuid:b63c3b0e-00ad-415b-8cc3-2f121ff3eb55>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-715.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00383-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931338
543
2.65625
3
What Is a Digital Asset?
By Baselinemag | Posted 2003-02-01
Do you know everything you need to know about digital assets and metadata? Check out this PDF for a brief rundown. A digital asset is a computer file containing "unstructured" data (such as an image, a video or an audio clip) or "structured" data (such as a document or spreadsheet) that has been tagged with descriptive information. This "metadata" helps define such assets as the cover, chapters, author's bio and pictures in a book. Then, it helps make it possible to retrieve and reuse such digital assets. The AOL Time Warner Book Group creates about three-dozen types of assets from a book such as James Patterson's Four Blind Mice.
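As a purely illustrative example (the field names below are invented, not the publisher's actual schema), a single tagged asset might be represented like this:

    # A digital asset = the content file plus the descriptive metadata
    # that makes it easy to find and reuse later.
    cover_asset = {
        "file": "four_blind_mice_cover.tif",   # the unstructured content
        "metadata": {                          # the descriptive tags
            "title": "Four Blind Mice",
            "author": "James Patterson",
            "asset_type": "cover image",
            "rights_holder": "AOL Time Warner Book Group",
            "keywords": ["thriller", "front cover"],
        },
    }
    print(cover_asset["metadata"]["asset_type"])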
<urn:uuid:1a70827a-fbba-4777-bb3f-f9b1a95406fb>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Integration/What-Is-a-Digital-Asset
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00411-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891975
154
2.84375
3
Internet shoppers this past holiday season spent an estimated $2.35 billion online, according to The Commerce Threshold, published by Forrester Research. Forrester predicts that global Internet sales will reach $3.2 trillion in 2003 if businesses and government cooperate to develop electronic commerce. With all this money changing hands, concern turns to transaction security. Unless consumers are assured of security and privacy, Internet transactions will not become mainstream. For many vendors and Web-site operators, the primary concern is the confidence that their site will not be vandalized by crackers or used as a gateway to break into their local area networks. According to the Clinton administration's 1997 report, "A Framework for Global Electronic Commerce," there are five basic principles of information security: privacy, integrity, authenticity, confidentiality and nonrepudiation. Privacy involves keeping transaction information between agency and consumer. Integrity is a guarantee that the message is not altered, erased or intercepted by a third party. Authentication lets both sender and receiver know they're dealing with whom they think they're dealing with. Nonrepudiation ensures that parties involved can't deny that they actually sent the transmission. Cryptography enables confidential information to be transmitted across unsecured networks without the risk of interception or tampering, essentially by putting the data into code. The receiver has a secret key used to decrypt the message. Only those having the correct key can decode the document. It may be foolproof, but not expert-proof. Unauthorized users may decrypt a message by figuring out the key themselves. One way is to find a pattern that can be used to reconstruct the original message or the key used to encrypt it. Another is a full-frontal assault in which crackers try to break the code by guessing millions of possible keys until the right one is found. A fast computer is capable of trying millions of guesses in seconds, but the process is still no walk in the park. In symmetrical encryption, the same secret key is used to both encrypt and decrypt a message. Symmetric algorithms present problems for users who have never met or exchanged keys.
Public Key Cryptography
Public key cryptography is also known as asymmetric cryptography. Keys come in pairs. One key is public, widely available. The other, private key is a closely guarded secret. To send a secure message, one looks up the recipient's public key and uses it to encrypt the message. The message then can be sent over an unsecured channel without fear of interception. The private key is necessary to decode it. The advantage of public key encryption is that no arrangements need to be made in advance. Another benefit of public key cryptography is that it allows users to create digital signatures. Digital signatures are a reversal of the public key encryption/decryption scheme. A digest of the text is encrypted and sent with the text message. A "message digest function," or "one-way hash," takes a plain text message and transforms it into something that looks random. Message digest functions generate short, fixed-length values known as "hashes." The hash is much shorter than the original message. There is no known way to create two different messages that generate the same hash. The recipient decrypts the signature and recomputes the digest from the received text. 
If the two digests match, the message is authenticated, verifying that the text has not been altered in transit. Messages encrypted using an individual's private key can only be deciphered with the public key. Both symmetric and public key cryptography provide integrity-checking. If a message is modified in transit, either because of a communication error or deliberate intervention, the message won't decrypt correctly. While public key encryption systems seem ideal for the Internet, they are much slower than symmetric systems, making them unsuitable for transferring large documents. The solution is to combine the two systems. First, a secret key is generated at random. This secret key, or session key, is discarded after the communication session. Second, using symmetric algorithms and the session key, the message is encrypted. Third, the session key is encrypted with the recipient's public key. This becomes the "digital envelope." The digital envelope is a code within a code. The public key method is used to exchange the secret key, and the secret key is used to encrypt and decrypt the message. The encrypted message and digital envelope are sent to the recipient. The recipient's private key decrypts the digital envelope, recovering the session key. The session key decrypts the message. The message is secure because it is encrypted using a symmetric session key that only the recipient and sender know. In public key encryption, a large networked database keeps track of everyone's public keys and distributes them on demand. Certifying authorities are third-party commercial enterprises that vouch for the identities of individuals and organizations. They provide users with a digital certificate that has been signed by one of these authorities. From the certificate, the sender can verify the recipient's identity and recover his or her public key. There are a variety of cryptographic protocols on the Internet, each specialized for a different task.
The Protocol of Protocols
Some protocols are designed to secure specific applications such as e-mail and remote login. Others are for more general applications, providing cryptographic services to multiple communications modes. SSL (Secure Sockets Layer) is the dominant Web protocol for encrypting general communication between server and browser. SSL was introduced by Netscape, which has released four versions. Microsoft released its PCT protocol in 1996, with its first release of Internet Explorer. Microsoft supports SSL in all versions of its Internet software, in addition to PCT. SSL 3.0 is implemented in all newer Explorer versions. Secure Electronic Transaction (SET) is a specialized protocol for safeguarding credit card transactions. It was jointly developed by Visa, Mastercard, Netscape and Microsoft. Unlike SSL, a general-purpose system of encrypting communications, SET is highly specific, used only to secure credit and debit card transactions between customers and merchants. Although a large number of software vendors announced support for the protocol, only Verifone Corp. released a SET product. It is predicted that Web browsers will eventually provide direct support for SET, either by incorporating the protocol in the browser software itself or by having users download it in the form of an ActiveX control, Java applet or plug-in. It is likely to assume a major role in Web financial transactions this year. 
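To make the digital envelope described above concrete, here is a minimal sketch using the third-party Python cryptography package: Fernet supplies the symmetric session-key cipher, and RSA-OAEP seals the session key under the recipient's public key. It illustrates the concept only, not SSL, SET, or any other specific protocol.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Recipient's key pair; the public half would normally come from a certificate.
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_pub = recipient_key.public_key()

    # Sender: encrypt the bulk message with a random symmetric session key,
    # then seal the session key with the recipient's public key (the "envelope").
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"order #1234, card ending 4242")
    envelope = recipient_pub.encrypt(session_key, oaep)

    # Recipient: open the envelope with the private key, then decrypt the message.
    recovered_key = recipient_key.decrypt(envelope, oaep)
    plaintext = Fernet(recovered_key).decrypt(ciphertext)
    print(plaintext)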
Using SSL to accept credit card payments is the way it's most often done on the Net and the basis for the "commerce systems" sold by Netscape, Microsoft and others. The problem is that while SSL transmits the credit card numbers safely from customer to merchant, it does not help with the rest of the transaction: checking the number for validity, checking that the customer is authorized to use this card, authorizing the transaction with the customer's bank and actually processing the transaction. High-end commerce systems validate orders as they come in, contacting a credit card authorization service's server via SSL or a proprietary protocol. Such systems may also manage refunds, back orders, transaction logging, shopping carts, online catalogs and inventory control. A fully functional credit card processing system is either a lot of custom programming or an expensive packaged solution. Another problem with SSL-based schemes is server security. Because credit card numbers are transmitted to the merchant's Web server, there's a fair chance that the merchant will choose to save them to a file or database. If someone succeeds in breaking into the merchant's server, the entire database of credit card numbers may be compromised.
Virtual Private Networks
Virtual private networks (VPNs) or extranets play an important role in securing the Internet, and are becoming attractive to many businesses and government agencies. A VPN allows safe business-to-business transactions using a "secure tunnel" through the public network. VPNs employ tunneling, encryption, authentication and access control. There are two categories of tunneling. One is end-to-end tunneling, in which a tunnel is established from a user's PC to another PC or server. PCs at both ends establish the tunnels and encrypt and decrypt data between the two computers. The other category is node-to-node tunneling, used to connect LANs at different sites. The data is encrypted and tunneled to the remote users. Three common tunneling protocols are the Point-to-Point Tunneling Protocol (PPTP) -- the most widely used -- IP Security (IPSec), and the Layer 2 Tunneling Protocol (L2TP).
See No Evil, Intercept No Evil?
Stronger encryption is clearly necessary for any serious security. However, the debate rages between security and privacy advocates. Some government and law enforcement agencies oppose the use of stronger and unbreakable encryption keys, citing national security and law enforcement reasons. They don't want terrorists, drug dealers and other criminals to have absolutely unbreakable encryption. On the other hand, privacy advocates are concerned about the security of individuals' private information from both government and online criminals. Both advocates have strong arguments, and it may be some time before technology and legislation strike a balance.
<urn:uuid:d04a0660-1988-4304-82ef-34ceb1b2611a>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/An-Internet-Security-Primer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00135-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926052
1,873
3.0625
3
When Air Traffic Control Became National Defense By Edward Cone | Posted 2002-04-08 In the wake of Sept. 11, a key piece of FAA technology has become an important weapon in the fight against airborne terror. After Sept. 11, a key piece of FAA technology was deployed at the Colorado Springs headquarters of the North American Aerospace Defense Command (NORAD). Inside the military agency's Cheyenne Mountain facilities now sits a duplicate version of Explorer, the master tracking system that displays all the commercial aircraft flying across the country at any time. Communications between the civilian and defense air watchdogs also have been enhanced by a direct phone line. And NORAD personnel are now on-site at FAA headquarters and regional air traffic centers to speed reaction to any emergencies, says Major Barry Venable of NORAD. But no project aimed at modernizing the FAA's methods of controlling airplane traffic likely would have made a difference to the four doomed flights of Sept. 11, 2001, aviation experts and administration officials say. The biggest problem that day was more psychological than technological: the assumption that these hijackers, like others before them, would force the planes' pilots to make unscheduled landings, not seize control of the aircraft and turn them into missiles. Although the FAA has a history of letting projects run well behind schedule, those systems, which allow pilots to change course in flight and give them better options when they take off and land, are designed to improve normal air traffic. They are designed to get regularly scheduled flights to their destinations more efficiently, not cope with hostile takeovers of aircraft. Not that the FAA is beyond blame in what happened last fall. Security lapses allowed hijackers to get on board planes with box-cutters and knives. And there are technologies that might have made a difference if placed in cockpits, such as location transmitters that can't be turned off. "Things like that have been bandied about for a long time," says Mary Schiavo, the former Department of Transportation Inspector General who has long criticized the FAA for being too hesitant to mandate new equipment that would improve safety. A transponder, for instance, makes a plane easier to track because it provides identifying information, in addition to amplifying the signal sent back in response to a radar pulse. Tracking a plane that doesn't want to be tracked also can be done the old-fashioned way, by bouncing a signal off the skin of the aircraft. But that is trickier and could involve NORAD, which has been criticized by online magazine Slate for its own slow response in September. Schiavo points to streaming video from the cockpit as a technological improvement that "would have been truly lifesaving" because authorities could have seen the cockpit being invaded and the pilots killed. But prior to September, pilots opposed the idea, figuring it was meant to check up on them, she says.
<urn:uuid:b7a99506-c96f-476b-9520-feb0318ddf84>
CC-MAIN-2017-04
http://www.baselinemag.com/project-management/When-Air-Traffic-Control-Became-National-Defense
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00521-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968085
591
2.703125
3
Cyber-threats and data breaches are growing in number and severity, botnets are enlisting new conscripts at a chaotic pace, cryptoransomware attacks are raking in millions for malicious hackers… and we are hard-pressed and ill-prepared to face the challenges that lie ahead. The widening cybersecurity talent gap is at the heart of this crisis. There is currently a shortage of 1 million skilled workers in the cybersecurity sector. According to (ISC)2, that number will rise to 1.5 million by 2020 – Cisco's Annual Security Report says we'll reach the 1.5 million threshold by 2019. A study led by ISACA shows that most organizations are having trouble finding cybersecurity talent to fill their IT security vacancies. An argument that helps in understanding the cybersecurity talent gap is offered by Ira Winkler, President of Secure Mentem, in a ComputerWorld op-ed. In the article, Winkler rightly argues that the shortage of cybersecurity talent is rooted in the fact that companies and agencies are looking in the wrong places. Winkler proposes that instead of perceiving security as a standalone discipline, it should be considered as a discipline within the computer field. Most companies require hard-to-obtain certifications for their security posts. In the U.S. alone there are currently around 50,000 jobs that require CISSP-certified professionals, but the actual number of people who can fill those posts is not even near that number. However, many prominent security professionals have entered the field without a cybersecurity degree or any security-specific training, because they had already acquired the needed foundation through their work in other disciplines such as programming or network administration. And the fact of the matter is that most security incidents and cyber-attacks do not take place through highly sophisticated methods, but rather result from badly implemented security policies within organizations or a general lack of awareness among employees, which leads to different forms of social engineering attacks such as phishing and the distribution of malware. Remedying this situation does not require too much domain-specific knowledge. Organizations and firms only need to look for cybersecurity talent among the more experienced members of their staff. Different initiatives and programs, sponsored by government agencies and the private sector, have been launched to help deal with the security talent shortage. Some of them involve using gaming concepts and competition to find cybersecurity talent among professionals in other IT and programming sectors, and to attract the young, tech-savvy masses into considering this as a career by informing them about the industry's dire need and the rewarding job opportunities that are available in the domain. Examples of cybersecurity competitions include the UK's Cyber Security Challenge, and CyberPatriot in the U.S. Another approach that is worth mentioning is the effort being made to raise awareness about cybersecurity issues at the average employee and executive levels. Human errors account for a huge number of security incidents, and organizations should try to improve security by turning their employees into their biggest security assets. One of the nice trends we're seeing in this area is again the use of gaming concepts, such as PwC's Game of Threats, which allows senior executives and board members to deal with real-world cybersecurity situations from a higher perspective in a game.
Data Guardian has also come up with a nice gaming concept, called Data Defender, which turns cybersecurity measures and practices into a game that rewards employees for their good behavior and penalizes them for policy breaches. Bug bounty programs might also help in both finding cybersecurity talent and preventing discovered security holes from being put to malicious use. Tech firms have been using this type of approach for years, and more recently the Pentagon announced its own bug bounty program, inviting white hat hackers to find security gaps in its networks and reap the rewards. And finally, the improvement of AI and advances in machine learning technologies might help somewhat in dealing with security threats and filling the cybersecurity talent gap. We're still not quite there, but we're getting close. Though I do admit that I'm reluctant to see humans giving up their jobs to robots. The cybersecurity talent shortage is real and serious, and we need to think about it and deal with it today. Tomorrow might be too late.
<urn:uuid:a598b74d-e9f9-4f9c-b448-f8ebef3ab2ad>
CC-MAIN-2017-04
https://bdtechtalks.com/2016/03/15/the-cybersecurity-talent-shortage-crisis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00245-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96115
868
2.734375
3
If University of Barcelona cosmologist Fergus Simpson is correct, your basic intelligent alien from another planet is heavier than a grizzly bear -- and hopefully, less inclined to eat us when we finally meet. We didn't notice the "or more" part at first. Yikes! We might have to look further up on our chart of animal weights to a moose, a Grevy's zebra, or a Bactrian camel. How did Simpson come up with this estimate? Is it science or speculation? According to Newsweek: The argument relies on a mathematical model that assumes organisms on other planets obey the same laws of conservation of energy that we see here on Earth—namely, that larger animals need more resources and expend more energy, and thus are less abundant. There are many small ants, for example, but far fewer whales or elephants. Makes sense, as far as it goes, though it doesn't take into account one important variable: a planet's surface gravity relative to Earth's. A smaller planet with a weak gravitational pull probably would tend to have larger forms of life than a Jupiter-sized planet with crazy-strong gravity. So if the aliens are huge and full of bad intent, our only hope may be our planet's gravitation. Hopefully, we won't have messed that up before the time comes. This story, "They might be giants: Scientist theorizes that intelligent extraterrestrials are huge" was originally published by Fritterati.
<urn:uuid:a5cc2935-3bef-462a-81e9-ea3cadf5b2a9>
CC-MAIN-2017-04
http://www.itnews.com/article/2906184/they-might-be-giants-scientist-theorizes-that-intelligent-extraterrestrials-are-huge.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945162
305
3
3
Targeted attacks are threats aimed directly at specific categories of victims. These are called Advanced Persistent Threats (APT). These threats are created specifically to victimize a certain individual or organization. The experts at welivesecurity indicate that APTs have special objectives: APTs can seriously impair an organization's efforts to function properly. APTs are usually planted as seeds which grow into threats that can endure for an extensive amount of time. Over time APTs can adapt, change, and spread across a network infrastructure. APTs can then exploit 0-day vulnerabilities, weaknesses that have yet to manifest themselves. APTs have a history of attacking various organizations, including banking, business, educational, government, and medical organizations. The attacks are never random, and any business without ample protection would be at risk for an attack. Threats such as these have layers and phases of attack cycles. Such methods include cyber, physical, and even deceptive techniques. According to the Identity Theft Resource Center there were 720 major data breaches in 2014. Sources indicate that these are well-known public attacks. These numbers only represent the incidents that were reported. An even greater number is estimated for unreported attacks which have been hidden from the public eye to protect the organization from negative publicity. In light of the recent attacks, awareness of IT security has increased: While many are embracing IT security as a part of their day-to-day responsibilities, 5.5% of users are still unprotected against these attacks. Easy steps such as two-factor authentication are making progress. Longer passwords with phrases are also becoming the norm. Password strength meters can now be seen on most up-to-date websites. These efforts strengthen our defenses against cyber-attacks, but only until cyber-criminals find another way to compromise your data. Always be up-to-date with the latest in cyber security to protect yourself from cyber-attacks.
<urn:uuid:138744e7-3389-43fb-bbe5-ca117050d496>
CC-MAIN-2017-04
https://www.apex.com/what-are-targeted-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961902
396
2.90625
3
The Chilean Government's Ministry of Energy and Chilean Economic Development Agency Corfo (Corporación de Fomento de la Producción) have big plans for meeting future power needs, and have selected Abengoa to construct the largest solar-thermal power plant on the continent. The project will turn the region's abundant sunlight into 110 MW of generating capacity, backed by 17.5 hours of thermal storage, making it the first power plant of its kind for Chile. Construction will begin in the mountainous Atacama Desert, which receives some of the highest concentrations of solar radiation on the planet. Abengoa has built several solar power plants around the world, most of which use solar-thermal tower technology. Like the tower they built in South Africa, this plant will use a series of mirrors called heliostats to direct sunlight onto a large tower. The heliostats are attached to motors that allow them to track the sun's movement on two separate axes in order to capture as much heat from the sunlight as possible, which is then aimed at the tower. The tower is filled with specialized molten salts that efficiently transfer their energy to a heat exchanger, which generates superheated steam. This steam then powers a high-powered turbine to generate the electricity, much like how a nuclear power plant uses the heat produced by nuclear fission to superheat steam for a turbine. The difference, of course, is that sunlight is much more environmentally friendly than decaying plutonium. The solar plant will also feature a thermal storage system that has never been used by Abengoa before, which will allow the plant to produce electricity 24 hours a day. Technological advances like these showcase the forward-thinking practices that have recently taken Latin America by storm, making it a closely watched region for prospective enterprises. By the year 2014, Latin America is expected to see a growth in Internet technology spending of over 10 percent, a larger growth than any other region predicted this year. Later this month, TMCnet will be holding its LatinComm Conference & Expo to showcase some of the opportunities that tech businesses can capitalize on in the multi-billion dollar Latin America market. Edited by Blaise McNamee
<urn:uuid:d6aac31c-6fb6-4dfc-acc7-641c896a933e>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2014/01/16/366997-abengoa-gets-green-light-construct-south-americas-largest.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940609
441
3.0625
3
The Power of Green By Alison Diana | Posted 2008-07-30 Saving the world starts with saving a dollar—and I.T. is doing its part. Some companies' green initiatives are driven by corporate consciousness and a sense of responsibility to the community. In its Statement of Ethics, Wal-Mart requires that associates comply with all relevant environmental laws, reduce waste and appropriately dispose of toxic or hazardous materials, and respect the environmental rights and interests of the communities in which they are located. The largest retailer in the world also mandates the use of ISO 14001, a series of environmental management standards developed and published by the International Organization for Standardization. Other local, national and global initiatives help organizations decrease their negative impact on the environment. In June, the Climate Group—a global nonprofit organization designed to build public-private partnerships to resolve climate-change problems—unveiled the U.S. portion of the "Together" campaign, which helped British consumers save 522,000 tons of CO2 and more than $200 million on household energy bills. And the private sector is not alone in implementing environmental programs. As part of a drive to conserve energy, the city of Las Vegas joined the campaign and is networking with other cities to develop innovative ways to promote sustainability and lessen negative impacts on the environment. In December 2007, it was named American City of the Year by the World Leadership Forum, primarily due to its leadership in sustainability, such as running 90 percent of city vehicles on alternative fuels, according to the international group. Las Vegas is also part of the Green Grid, a global consortium that advances energy efficiency in data centers and business computing ecosystems. "The city joined the consortium because we realized that this sustainability issue is not just one thing," Marcella says. "It's not just hardware. It's not just applications, and it's not just people." By 2015, federal agencies must reduce energy consumption by 30 percent, according to INPUT, a Reston, Va., research firm. This, combined with rising energy costs and the need for more technology, will drive the green IT market, the firm says. Requirements are spreading to local and state government, too. Las Vegas, for example, has a green provision in its purchase orders. "It has to be incorporated in everything we do," Marcella says. Some businesses are already seeing a dollar return on their green initiatives, while others look forward to reaping rewards. Rotech, a Winter Park, Fla., provider of home medical equipment, estimates that heating and cooling account for about 95 percent of its electric bill and anticipates healthy savings after it virtualizes another 15 racks in late summer. "At that time, I want to look at the power usage and compare it with 12 months ago and 24 months ago," says Marlin Clark, Rotech's director of information systems technology. "It will be interesting to see the difference." Some organizations enjoy both tangible and intangible benefits from their environmental programs. "Our green initiatives were designed to reduce power consumption and save on energy costs, but we have received positive press and recognition because of it," says The Planet's Lowenberg.
Whatever the driver for reducing IT’s environmental impact, the benefits could stretch far beyond corporate profits, reaching into the lives of a generation that is just now tuning into the Muppets and the plights of a certain green frog.
<urn:uuid:ccbe4310-4fc7-4aa8-b8a4-b7fdc36e707e>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/IT-Management/The-Power-of-Green/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947836
726
2.59375
3
This may seem like a simple question but for a lot of system administrators who "inherit" systems or are unfamiliar with operating systems that have been forced upon them, it can be very confusing, especially if you're coming from a proprietary UNIX® operating system such as Solaris™ or HP-UX to a Linux®-based distribution. For most of us old-school UNIX people, the reliable "uname" utility is what we are most familiar with. Execute it with the -a option and you get something like: SunOS sungod 5.10 Generic_137138-09 i86pc i386 i86pc Cryptic to most but for a seasoned Solaris administrator it means that the host "sungod" is running Solaris 10 on an x86 (non-SPARC) system and the current kernel patch level is 137138-09. If you run the same command on a Linux system, you might see something like: Linux greenlantern 2.6.27.47-0.3-default #1 SMP 2010-09-20 11:03:26 -0400 \ x86_64 x86_64 x86_64 GNU/Linux At most, you can determine that the host "greenlantern" is in fact a Linux system running a default kernel from the 2.6.27 series, and that it is a 64-bit system because of the "x86_64" in the output. The "uname" utility was first introduced as part of the UNIX Programmer's Workbench (PWB) in 1973. Not only is "uname" a utility, it is a system call – uname() conforms to System Vr4 and POSIX.1-2001. It extracts information from the running kernel. Linux distributions are built off of standard kernels but are packaged and bundled differently. Some distributions are Debian-based while others might be Red Hat-based. The collection of packages and how the packages were compiled and ultimately delivered are what make Linux distributions unique. Most UNIX and Linux operating systems have some form of a release file detailing the operating system version and release information. This file, usually in the /etc directory, is a simple text file. Some operating systems adhere to POSIX while others strive to comply with the Linux Standard Base (LSB). Of course there are more standards and this fact reminds me of Andrew Tanenbaum's famous statement, "The nice thing about standards is that there are so many of them to choose from." For those systems which comply with LSB, you can use the lsb_release(8) utility. For example, running the lsb_release command on my openSUSE system reveals the following: $ lsb_release -r -i -c -d Distributor ID: SUSE LINUX Description: openSUSE 11.1 (x86_64) Much more informative than the "uname" utility. It should be noted that the utility just parses various configuration files such as those in /etc. Specifically, on SUSE systems it examines the following files: $ ls -l /etc/SuSE-* -rw-r--r-- 1 root root 24 Dec 3 2008 /etc/SuSE-brand -rw-r--r-- 1 root root 38 Dec 4 2008 /etc/SuSE-release Here is a list of some operating systems, related commands, and their release files which will help you determine the specific version and release of your operating system: Operating System | Command or Configuration Files AIX uname -a FreeBSD uname -a HP-UX uname -a OpenSUSE and Novell SUSE /etc/SuSE-brand Red Hat /etc/redhat-release Finally, many system administrators are confused when they apply all of the available updates to their system via their local software repositories but are still running the same minor revision. For example, if you are running openSUSE 11.1 and you perform a "zypper update" to install available software updates, this will not bring your system up to openSUSE 11.2. To do this, you must specifically issue the distribution upgrade command. (zypper dist-upgrade).
This is because when you perform a normal update, it is only examining the repositories your current system has configured. For example, here is my list of repositories (zypper lr): $ zypper lr # | Alias | Name | Enabled | Refresh 1 | NVIDIA | NVIDIA | Yes | Yes 2 | NVIDIA-11.1 | NVIDIA-11.1 | Yes | No 3 | Packman Repository| Packman Repository | Yes | Yes 4 | adobe-linux-i386 | Adobe Systems Inc | Yes | No 5 | google | Google - i386 | Yes | No 6 | google-chrome | google-chrome | Yes | Yes 7 | google-testing | Google Testing - i386 | Yes | No 8 | openSUSE 11.1-0 | openSUSE 11.1-0 | No | No 9 | repo-debug | openSUSE-11.1-Debug | No | Yes 10 | repo-non-oss | openSUSE-11.1-Non-Oss | Yes | Yes 11 | repo-oss | openSUSE-11.1-Oss | Yes | Yes 12 | repo-source | openSUSE-11.1-Source | No | Yes 13 | repo-update | openSUSE-11.1-Update | Yes | Yes On the other hand, Red Hat distributions such as CentOS would be updated to the next minor revision (e.g., 5.4 to 5.5) because of the way the repositories are structured. For system administrators maintaining patch levels and an accurate inventory of their systems, it is imperative that they know how to determine the exact operating system version they are running. Hopefully, this post has provided some guidance on how to find this important information. Cross-posted from Security Blanket Technical Blog
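For administrators who script this kind of inventory, the checks above are easy to automate. The sketch below is a best-effort illustration, not part of the original post: it combines uname-style data from Python's platform module with the release files discussed here, and the exact files present will vary by distribution (newer systems ship /etc/os-release, which postdates this article).

# Best-effort OS identification sketch for inventory scripts.
import platform
from pathlib import Path

RELEASE_FILES = ["/etc/os-release", "/etc/SuSE-release", "/etc/redhat-release"]

def describe_host():
    info = {
        "kernel": f"{platform.system()} {platform.release()} ({platform.machine()})",
        "distribution": "unknown",
    }
    for name in RELEASE_FILES:                     # first match wins
        path = Path(name)
        if path.exists():
            info["distribution"] = path.read_text().splitlines()[0].strip()
            break
    return info

if __name__ == "__main__":
    for key, value in describe_host().items():
        print(f"{key}: {value}")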
<urn:uuid:9d9b3698-8937-4754-af8b-b1fa601dce6c>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/9657-Which-Linux-or-UNIX-Version-Am-I-Running.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.873088
1,307
2.6875
3
Agencies at every level of government have a wealth of information in legacy databases that needs to be accessed or shared. Whether the agency wants to share this information internally or externally, the problem is essentially the same: making the information available in a secure, accessible way that minimizes development time and keeps application maintenance costs to a minimum. As the world races toward universal acceptance of the World Wide Web, many are looking to the Web as the way to solve this problem. Admittedly, the Web offers some powerful advantages: Access is nearly ubiquitous; it is platform independent; existing infrastructures can be used, often with few or no changes; and the same technology can be used for internal as well as external applications. Despite these advantages, the protocol that makes the Web work (Hypertext Transfer Protocol or HTTP) leaves much to be desired when building robust client/server applications. HTTP is a rather simple protocol: * The client, typically called a browser, requests a file from a program called a Web server. Usually, the Web server is running on another computer so the request is done over a corporate network or via the Internet. The requested document usually includes embedded Hypertext Markup Language (HTML) "tags," which describe how the document should be displayed. Some tags may also include references (links) to other documents or images. * The Web server looks for the document and, assuming it finds it, sends it to the requesting client. * The client reads and formats the document in accordance with the embedded HTML tags. If the client later wants another document, the above cycle is repeated. The fundamental problem with using HTTP as the basis for robust, interactive applications is that it has no memory. HTTP doesn't provide a means for knowing whether the same user has made one request or a thousand requests. This approach works perfectly for the original intended purpose: to provide a simple way to organize research data and make it accessible. However, it does not work for most database-related work that requires a "session" -- an ongoing connection between the database program and a particular user. For example, suppose a user is looking at an alphabetized list of people in a mailing list and wants to scroll down to see the "next" group of names. In order to provide the "next" group, the application program needs to know which query produced the list of names and which names are currently displayed. HTTP isn't capable of this kind of interaction -- as soon as a document is delivered in response to a request, HTTP forgets everything related to the request and the requester. In more technical terms, this lack of session continuity is called a "stateless connection," meaning the server does not keep track of the "state" of the client. Due to the statelessness of HTTP, Web-server software can operate very quickly and, with very few exceptions, doesn't need to know anything about the content of the documents it provides: It simply finds the requested document, checks file permissions and, if allowed, returns it to the requester. Expanding the capability of Web-based applications are programmers and vendors who have devised ways to create what might be called "pseudosessions." These solutions all amount to different ways of keeping track of values between HTTP requests. 
For example, these techniques can let a user enter his name in a Web-based form, store the name so that the next time he requests a document (whether six seconds or six weeks later), the Web-server software appears to remember the user's name. The limited functionality of Web-server software can also be expanded by installing "partner programs," which work with the Web-server software and provide functionality not present in the software itself. Here's how it works: * The client makes a request. * The Web-server software sees the request and recognizes it as "belonging" to one of its partner programs. It sends the request to the appropriate partner program. * The partner program carries out the requested action. This could involve running a program, asking a database for some information, formatting a report, uploading or downloading a file, etc. * The partner program returns an HTML file to the Web-server software; this may be as simple as a message stating it completed the task, or it could be a fully formatted summary of information returned from a database. * The Web-server software sends the document to the requesting client. As far as the Web-server software is concerned, it received a request and returned a file. It doesn't know or care how the file was created or what occurred behind the scenes. * The client reads and formats as usual. A growing number of solutions and languages exist for creating these partner programs. However, many require sophisticated programming skills often beyond the experience of people who have cut their development teeth writing HTML (see "Cold Fusion Advanced Capabilities" on page 68). This is where Cold Fusion enters. Cold Fusion, developed by Allaire of Cambridge, Mass., consists of several pieces: * The Cold Fusion Server (CFS) is a program that works cooperatively with Web-server software. When a user requests a page with a ".cfm" extension (as opposed to the usual ".html" or ".htm" extensions for ordinary HTML pages), the Web server passes the request to CFS, which reads and interprets the Cold Fusion commands in the document. These commands may instruct CFS to query a database, upload a file, set the value of a variable, etc. After the commands have finished, CFS sends a standard HTML page to the Web-server software, which returns the page to the client browser via the network. * The Cold Fusion Markup Language (CFML) contains the Cold Fusion commands that are understood by CFS. Like HTML, CFML consists of simple tags added to text documents. However, unlike HTML, most CFML tags do not tell the client browser how to display a document. Instead, CFML tags give CFS instructions that are carried out before the page is returned to the client browser. CFML pages are stored in files called Cold Fusion "templates" and are identified by the ".cfm" extension. * Cold Fusion Studio, sold separately, is a development environment specifically designed to facilitate creation of Cold Fusion templates. COLD FUSION MARKUP LANGUAGE Since CFML is a markup language, it is quite easily learned by people with little to no programming experience. "There are very basic prerequisites for learning Cold Fusion," said Steve Drucker, president of Figleaf Software and co-author of The Cold Fusion Web Database Construction Kit. "You need to know HTML and have some knowledge of SQL. With that, the beginning course takes you through developing a full-blown order-entry application. 
Actually, anyone who sits with [Cold Fusion] for a couple of hours is amazed at what they can do with it." This ease stems from the fact that any complexity associated with connecting to and requesting information from a database is transparently handled by CFS. All the developer needs to know is the CFML tag that defines a query and the tag that displays the query results. Instead of training an HTML writer on an entirely new scripting language, he or she can be taught new tags that are used in the same way as the familiar HTML. COLD FUSION SERVER The CFS is Windows NT- or Sun Solaris-based server software that runs in association with a Web server. When CFS is installed, the associated Web-server software is automatically configured to pass all requests for files ending with ".cfm" on to CFS, which reads the CFML tags in the requested file, carries out the specified actions and returns an HTML document to the Web-server software. On testing, CFS software was easy to install. Configuration was largely automatic and a Web-based utility is available through which administrators can change Cold Fusion parameters, define sources of data, turn debugging off and on, and perform other common tasks. This administration screen can even be accessed remotely -- a convenience for the system administrator who receives a support call at home from an application group working in the evening or over the weekend. According to Drucker, CFS performance is good and scales well. This tends to be supported by the existence of some well-known, high-access Web sites that depend on Cold Fusion. CFS allows developers to easily create pseudosessions without having to delve into the details of how it is done. COLD FUSION STUDIO Although Cold Fusion Studio is sold separately from CFS, it is intended as an integrated development tool. It also ships with a single-user version of CFS, making it easy for developers to build Cold Fusion applications without the initial investment needed to purchase CFS. Studio nicely combines the editing capabilities of Allaire's HTML editor, Homesite, with facilities geared specifically to the Cold Fusion developer, including tool bars that give push button access to Cold Fusion tags. Although this type of functionality is common to most HTML editors, Studio fortunately avoids the trap other editors fall into: trying to make things too "easy." Many HTML editors try to shield developers from the underlying HTML by providing a graphical user interface (GUI). The problem with this approach is simple: With the exception of the most basic HTML pages, the GUI usually gets in the way. Most developers prefer to work with plain-text HTML files, so they have direct access to HTML tags and can more precisely place text and images. Studio is set up to let developers work this way. A particularly nice feature of Studio is its integrated GUI query builder; this helps the developer write and test database queries before cutting and pasting them into the actual application code. John Stanard, a Web application developer based in Northern Virginia, particularly likes this feature. He tested numerous HTML editing tools before settling on Studio. "As a Web developer, I've found Cold Fusion Studio to be one of the best tools I have used," commented Stanard. "Its integrated database connectivity, project management and version control features have reduced development time enormously."
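The partner-program cycle described earlier is easy to demonstrate outside of Cold Fusion. The sketch below uses Python and the standard library's WSGI server as a stand-in for CFS: the web server hands the request to the program, the program queries a database and returns ordinary HTML. The database file and table are invented for the example, and this is an analogy to the pattern, not CFML itself.

# Sketch of a "partner program": a web server hands the request to this
# application, which queries a database and returns a plain HTML page.
import sqlite3
from wsgiref.simple_server import make_server

def application(environ, start_response):
    conn = sqlite3.connect("mailing_list.db")               # hypothetical database file
    rows = conn.execute(
        "SELECT name FROM people ORDER BY name LIMIT 20"    # hypothetical table
    ).fetchall()
    conn.close()
    items = "".join(f"<li>{name}</li>" for (name,) in rows)
    body = f"<html><body><h1>Mailing list</h1><ul>{items}</ul></body></html>"
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("localhost", 8000, application).serve_forever()

As with CFS, the browser only ever sees the finished HTML; how it was produced stays behind the scenes.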
David Aden is a senior consultant for webworld studios -- a Northern Virginia-based Web application development consulting company.
<urn:uuid:c17bc4ee-2c71-486b-9bac-61380c2668a1>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Its-a-Cold-World-Wide-Web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00540-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928602
2,106
3.203125
3
FCW Time Machine: 1988 Kick-starting artificial intelligence The director of the Army's Artificial Intelligence Center, Lt. Col. Anthony Anconetani, was thrilled in June 1988 to be developing expert systems using computers based on Intel's powerful new 80386 processor. The center had at least four artificial intelligence projects under way when Federal Computer Week reporter Fred Reed talked to Anconetani. Here is how Reed described them. Document Designer was formerly known in-shop as Organize the World, or OT World. Document Designer takes a file that shows manpower authorization figures for an organization and turns it into a wiring diagram showing relationships among organizations and suborganizations. The program is mouse-driven, so any suborganization can be instantly expanded to show its components at various levels of organization. At each level, staffing is shown. To change the organization, people can use the mouse to drag units to a new place on the wiring diagram, whereupon all staffing levels affected by the change automatically adjust. Further rules can be incorporated so that, for example, a colonel should not work for another colonel. Violations are flagged. OT War takes as its input files showing force structure, as well as data on equipment and operational plans. It correlates these to show where units are, what kind of equipment they have and what their situation will be in the future. In time of war, the program will help determine the sequence for feeding units into combat by keeping track of such variables as readiness and status of equipment. It eventually will be able to catch impossibilities that plague military operations, such as trying to send the same unit to two places at once. A model of the Army's Automated Combat Control System (ACCS) down to battalion level will help the service decide the most efficient way to allocate money and effort in the development of ACCS. For example, completing one part of ACCS may depend on having first completed another one, which may in turn depend in complicated ways on yet other factors. Force Alignment Planner, a knowledge-based program, relates force structure to the pool of military personnel with an eye to maintaining career paths that will keep people in the Army. Among other things, the program will keep tabs on those jobs --such as military intelligence, signal corps and quartermaster corps -- in which there are more jobs than qualified people. To make matters worse, the scarcity of people is greater at higher ranks, where demand is consequently greater. The Force Alignment Planner should help make decisions about retraining people for unfilled jobs and in recruiting to fill vacancies. The Physical Disability Rating Adviser is an expert system that recommends the percentage of disability that should be assigned to patients. This program, according to Anconetani, has been validated with a group of 20 case histories of patients suffering from psychiatric disorders, and further validation of more than 200 case histories is in progress. The result, he said, is an improvement in consistency in awarding disability.
<urn:uuid:6d7c8f57-59e4-4ea3-afc4-8fdb1b3da6dc>
CC-MAIN-2017-04
https://fcw.com/Articles/2007/02/12/FCW-Time-Machine-1988.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00356-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950308
604
2.640625
3
- How much is it going to cost? - What are the minimum resources / capacity required to roll out a public cloud service? Both questions are very much related. But to get an idea of how much your cloud infrastructure is going to cost, you first need to fully understand what your resource requirements are and how much capacity (minimum resources) will be required to maintain an acceptable level of service and hopefully turn a profit. In a traditional dedicated or shared hosting environment, capacity planning is typically a fairly straightforward endeavor (a high allotment of bandwidth and a fairly static allotment of resources): a single server (or slice of a server) with a static amount of storage and RAM. If you run out of storage, or get too many visitors, well too bad. It is what it is. Some managed hosting providers offer more complex server deployment options, but generally, rather than one server, you're given a static stack of several; the concept of elasticity is not usually part of the equation. Wikipedia gives a pretty good overview of the concept of capacity planning, which it describes as the process of determining the production capacity needed by an organization to meet changing demands for its products. Although this definition is being applied to a traditional business context, I think it works very well when looking at public cloud infrastructure. Capacity is defined as the maximum amount of work that an organization is capable of completing in a given period of time with the following calculation: Capacity = (number of machines or workers) × (number of shifts) × (utilization) × (efficiency). A discrepancy between the capacity of an organization and the demands of its customers results in inefficiency, either in under-utilized resources or unfulfilled customers. The broad classes of capacity planning are lead strategy, lag strategy, and match strategy. - Lead strategy is adding capacity in anticipation of an increase in demand. Lead strategy is an aggressive strategy with the goal of luring customers away from the company's competitors. The possible disadvantage to this strategy is that it often results in excess inventory, which is costly and often wasteful. - Lag strategy refers to adding capacity only after the organization is running at full capacity or beyond due to an increase in demand (North Carolina State University, 2006). This is a more conservative strategy. It decreases the risk of waste, but it may result in the loss of possible customers. - Match strategy is adding capacity in small amounts in response to changing demand in the market. This is a more moderate strategy. Compounding cloud capacity planning is the idea of elasticity. Now not only are you planning for typical usage, you must also try to forecast sudden increases in demand across many customers using a shared multi-tenant infrastructure. In ECP we use the notion of capacity quotas, where new customers are given a maximum amount of server capacity, say 20 VMs or 1 TB of storage. Customers who require more then make a request to the cloud provider. The problem with this approach is it gives customers a limited amount of elasticity. You can stretch, but only so far. Another strategy we sometimes suggest is a flexible quota system (match strategy) where, after a period of time, you now trust the customer and automatically give them additional capacity, or monitor their usage patterns and offer it to them before it becomes a problem.
This is similar to how you seem to magically get more credit on your credit cards for being a good customer or get a call when you buy an unexpected big-ticket item. The use of a quota system is an extremely important aspect of any capacity / resource planning you will be doing when either launching or running your cloud service. A quota system gives you a predetermined level of deviation across a real or hypothetical pool of customers; without one, it is practically impossible to run a public cloud service adequately. Next you must think about the notion of overselling your infrastructure. Let's say your default customer quota is 20 virtual servers: what percentage of those customers are going to use 100% of their allotment? 50%, 30%, 10%? Again this differs tremendously depending on the nature of your customers' deployments and your comfort level. At the end of the day, to stay competitive you're going to need to oversell your capacity. Overselling provides you the capital to continue to grow your infrastructure, hopefully slightly faster than your customers' capacity requirements increase. The chances of 100% of your customers using 100% of their quota are probably going to be slim; the question you need to ask is what happens when 40% of your customers are using 60% of their quota? Does this mean 100% of the available capacity is being used? Cloud capacity planning also directly affects things like your SLAs and QoS. Regardless of your platform, it's never a good idea to use 100% of your available capacity, nor should you. So determining the optimal capacity and having a way to monitor it is going to be a crucial aspect of managing your cloud infrastructure. I believe to fully answer the capacity question you must first determine your ideal customer. Determine where your sweet spot is, who you're going after (the low end, high end, commodity or niche markets). This will greatly help you determine your customers' capacity requirements. I'm also realistic: there is no one-size-fits-all approach. For the most part, cloud computing is a best-guess game; there are no best practices, architectural guidelines or practical references for you to base your deployment on. What it comes down to is experience. The more of these we do the better we can plan. This is the value that companies such as Enomaly and the new crop of cloud computing consultants bring. What I find interesting is that as the cloud computing as a service model is adopted by hosting firms, these hosters are increasingly coming to us not only for our cloud infrastructure platform, but to help them navigate through a scary new world of cloud capacity planning.
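The overselling questions above reduce to a little arithmetic. The sketch below is purely illustrative: the slot count, customer count, quota and utilization figures are assumptions invented for the example, not data from any real cloud, but it shows how an oversold pool tracks against its physical ceiling.

# Toy capacity-planning sketch; all inputs are invented for illustration.
physical_vm_slots = 1000         # what the hardware can actually host
customers = 120
quota_per_customer = 20          # default quota of 20 virtual servers

def pool_utilization(active_share, quota_share):
    """Fraction of physical capacity consumed if active_share of customers
    each use quota_share of their quota."""
    demanded = customers * active_share * quota_per_customer * quota_share
    return demanded / physical_vm_slots

oversell_ratio = customers * quota_per_customer / physical_vm_slots
print(f"Capacity sold versus owned: {oversell_ratio:.1f}x")
# The scenario from the text: 40% of customers using 60% of their quota.
print(f"Utilization at 40% of customers / 60% of quota: {pool_utilization(0.40, 0.60):.0%}")
print(f"Utilization if every customer used every VM: {pool_utilization(1.0, 1.0):.0%}")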
<urn:uuid:40c39322-e52e-47e4-9e8e-7b3cf52ef284>
CC-MAIN-2017-04
http://www.elasticvapor.com/2009/09/public-cloud-infrastructure-capacity.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949433
1,197
2.53125
3
What You'll Learn - Core fundamentals of software and application development - Core programming, decision structures, error-handling, and object-oriented programming - Programming for desktop vs. programming for web applications - Integrate applications with databases and data stores for real-time queries Who Needs To Attend Business analysts, project managers, IT professionals, and business stakeholders who require a deeper understanding of programming basics and need to work more effectively with software developers and the development community.
<urn:uuid:171834bb-f3b4-4ba4-a97c-1ef9aeb5d4ed>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/116945/mta-software-development-fundamentals/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867339
97
2.578125
3
A load balancer is a device that distributes network or application traffic across a cluster of servers. Load balancing improves responsiveness and increases availability of applications. A load balancer sits between the client and the server farm accepting incoming network and application traffic and distributing the traffic across multiple backend servers using various methods. By balancing application requests across multiple servers, a load balancer reduces individual server load and prevents any one application server from becoming a single point of failure, thus improving overall application availability and responsiveness. Load balancing is the most straightforward method of scaling out an application server infrastructure. As application demand increases, new servers can be easily added to the resource pool, and the load balancer will immediately begin sending traffic to the new server. A core load balancing capability is failover: when one application server becomes unavailable, the load balancer directs all new application requests to other available servers in the pool. To handle more advanced application delivery requirements, an application delivery controller (ADC) is used to improve the performance, security and resiliency of applications delivered to the web. An ADC is not only a load balancer, but a platform for delivering networks, applications and mobile services in the fastest, safest and most consistent manner, regardless of where, when and how they are accessed. Load balancing uses various algorithms, called load balancing methods, to define the criteria that the ADC appliance uses to select the service to which to redirect each client request. Different load balancing algorithms use different criteria. Traffic volumes are increasing and applications are becoming more complex. Load balancers provide the bedrock for building flexible networks that meet evolving demands by improving performance and security for many types of traffic and services, including applications. NetScaler ADC is an industry-leading application delivery controller that delivers business applications to any device and any location with unmatched security, superior L4-7 load balancing, reliable GSLB, and 100 percent uptime.
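Two of the most common load balancing methods are easy to sketch. The Python below is a simplified illustration of round robin and least connections over an invented backend pool; a real load balancer or ADC layers health checks, weights and session persistence on top of rules like these.

# Simplified load-balancing methods: round robin and least connections.
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]       # invented backend pool

# Round robin: hand requests to servers in a fixed rotation.
rotation = itertools.cycle(servers)
def round_robin():
    return next(rotation)

# Least connections: send the request to the server handling the fewest.
active_connections = {server: 0 for server in servers}
def least_connections():
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1                   # this request is now assigned
    return server

for _ in range(4):
    print("round robin ->", round_robin(), "| least connections ->", least_connections())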
<urn:uuid:b71b8b4c-2374-4ca4-8980-dc9c8ac94092>
CC-MAIN-2017-04
https://www.citrix.com/glossary/load-balancing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906382
391
3
3
Voice over IP is short for Voice over Internet Protocol, and is better known as VoIP. Voice over IP refers to the transmission of voice traffic over internet-based networks instead of the traditional PSTN (Public Switched Telephone Network) telephone networks. The Internet Protocol (IP) was originally designed for data networking and following its success, the protocol has been adapted to voice networking by packetizing the information and transmitting it as IP data packets. VoIP is now available on many smartphones, personal computers and on internet access devices such as tablets. Voice over IP (VoIP) can facilitate tasks and deliver services that might be cumbersome or costly to implement when using the traditional PSTN: - More than one phone call can be transmitted on the same broadband phone line. This way, voice over IP can facilitate the addition of telephone lines to businesses without the need for additional physical lines. - Features that are usually charged extra by telecommunication companies, such as call forwarding, caller ID or automatic redialing, are simple with voice over IP technology. - Unified communications are made possible with voice over IP technology, as it allows integration with other services available on the internet such as video conversation, messaging, etc. These, and many other advantages of voice over IP, are making businesses adopt VoIP Phone Systems at a staggering pace.
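Packetizing voice, as mentioned above, just means slicing the audio stream into small timed chunks and sending each as its own IP datagram. The sketch below is a toy illustration of that idea: it cuts 20-millisecond frames from 8 kHz, 16-bit audio (roughly how RTP-style voice packets are sized), uses silence as stand-in audio, and sends to a placeholder test address rather than a real phone system.

# Toy voice-packetization sketch: 8 kHz, 16-bit mono audio cut into 20 ms frames.
import socket

SAMPLE_RATE = 8000                                    # samples per second
FRAME_MS = 20                                         # each packet carries 20 ms of audio
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000    # 160 samples
BYTES_PER_FRAME = SAMPLES_PER_FRAME * 2               # 16-bit samples -> 320 bytes

audio = bytes(BYTES_PER_FRAME * 50)                   # one second of silence as stand-in audio
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for seq, offset in enumerate(range(0, len(audio), BYTES_PER_FRAME)):
    frame = audio[offset:offset + BYTES_PER_FRAME]
    packet = seq.to_bytes(2, "big") + frame           # tiny 2-byte sequence header
    sock.sendto(packet, ("192.0.2.10", 5004))         # 192.0.2.10 is a documentation-only address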
<urn:uuid:23850c05-5539-4238-a627-5a1918a51619>
CC-MAIN-2017-04
http://www.3cx.com/pbx/voice-over-ip/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00395-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946756
268
3.6875
4
RETIRED CONTENT Please note that the content on this page is retired. This content is not maintained and may contain information or links that are out of date. The purpose of this Standard Operating Procedure is to ensure that all staff responsible for Incident Management are aware of the objectives, roles, and procedures involved in every phase of the process. This document should be used as a best-practice guide, and can either be adapted or used as an example to guide your organization's own process formalization. This template includes procedures for the following phases of Incident Management: - Incident Intake - Initial Triage - Incident Classification - Incident Escalation - Incident Investigation and Diagnosis - Incident Resolution - Critical Incident Procedures - Process Metrics and Reporting Use this template to develop standard operating procedures that will successfully manage the entire lifecycle of an incident. Use the blueprint: Establish a Right-Sized Incident Management Process, to guide you in formalizing your procedures and adapting the recommendations to best fit your organization.
<urn:uuid:325870da-f7ec-457f-b70c-70a3f0af848a>
CC-MAIN-2017-04
https://www.infotech.com/research/incident-management-standard-operating-procedure
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00395-ip-10-171-10-70.ec2.internal.warc.gz
en
0.881352
211
2.671875
3
Mad Dog 21/21: Zigbee And The Waggle Dance May 27, 2014 Hesh Wiener In 1973, Karl von Frisch received a Nobel Prize for his work on honeybee communications. Among other accomplishments, he decoded the waggle dance, the method by which a honeybee tells others where it found pollen. Twenty-five years later, the Zigbee Alliance began promoting a data communications scheme inspired by the waggle dance. In 2003, Zigbee became an IEEE standard, and in 2006 it was revised and improved. Today it is a core technology for local Internet-of-Things networks, controllers, devices, and sensors installed by builders ranging from giant data service providers down to do-it-yourselfers. Zigbee networks can share spectrum and controllers, such as tablet computers, with Wi-Fi networks. In smart home applications, for example, high data rate devices like security cameras will often use Wi-Fi to talk to a controller, while low data rate devices, such as smoke alarms, door and window sensors, and motion detectors, are likely to communicate using Zigbee. The local controller may talk to other devices such as PCs, smartphones, and tablets using Wi-Fi or a hardwired LAN. IoT developers, by using tablet computers that resemble low-end general purpose Android slates for controllers, take advantage of the inexpensive hardware and software. A Zigbee controller tablet can manage multiple types of networks the way a similar small computer does in a smartphone or general purpose tablet. Instead of NFC or Bluetooth or GPS radios, an IoT tablet might have transceivers for Zigbee or another IoT local network scheme. Like general purpose smart devices, IoT controllers have Wi-Fi radios and software that makes it easy to integrate the IoT with local equipment and also provide communication via WANs to remote server farms managed by home security companies or other providers. The Android platforms used to manage IoT devices let the local controllers run some ordinary apps. For instance, an end user might want to check a weather forecasting app before adjusting thermostats, lighting, or other environmental controls. The solution is to install a standard weather service app. The weather app runs right on the Zigbee controller. It is the very same app available to users of Android phones and tablets via Google Play or another app source or a variation of that app provided by the maker of the controller. However, even though IoT controllers share quite a bit of technology with run-of-the-mill Android tablets, the Internet of Things is in some ways quite different from the Internet of mobile clients. IoT really needs Zigbee or an alternative local networking scheme that has technical and economic characteristics similar to those offered by Zigbee. One of the most important characteristics of Zigbee is its very low power requirement. A Zigbee radio that fits inside a magnetic door or window closure sensor might pack a small battery that lasts three, four, even five years. Zigbee devices are also smart and dependable. A Zigbee smoke detector rarely needs a fresh battery and, when the battery starts to run low, it issues a distress call telling its controller to initiate a service call. A Zigbee light switch might not need wiring at all. Instead, the energy required to flip the switch on or off could generate enough power to fire up a radio and send a command as a short burst of data. There are corresponding smart devices for the appliance end of things.
For instance, there are light bulbs with Zigbee radios that can be adjusted by remote control; the adjustments include brightness and hue. Having these radios alive all the time doesn't lead to large power bills; they use milliwatts. With Zigbee, the low power means a short range. While one Wi-Fi hub can fill a typical home with plenty of signal, Zigbee might be able to reach only 50 to 100 feet, and possibly less. Consequently, the star network topology used with Wi-Fi hubs simply won't work in an IoT setting. Instead, Zigbee uses a mesh system. If a controller reaches out to talk to a device, such as a thermostat, and it cannot reach the target directly, other Zigbee devices within range will step up and act as repeaters. Similarly, a sensor trying to send a message to a controller that is out of reach asks its neighbors to repeat the message. They will do this until the information gets to its target and the receiving device gets an acknowledgement back to the originator of the message. By naming its scheme after the dance of the honeybee, the Zigbee Alliance elevated local networking to a higher philosophical level. The waggle dance is a social communication, the Twitter of the insect world. The way it works, when a bee returns to its hive excited by a significant find of suitable flowers, it will move into the middle of a group of worker bees and dance around in a figure eight pattern. During the middle portion of the path the bee will waggle from side to side. The orientation of the dance tells other bees about the direction to the source of pollen. The waggle pattern provides information about the distance from the hive to the flowers. Now it turns out that the waggle dance is only used if the returning bee has come a considerable distance, typically more than 100 yards. If the returning bee has found flowers closer to the hive it performs a different dance. In any event, in a lively hive every bee coming back with a waggle dance motivates several bees to search for the flowers it has found. These bees may in turn dance up and motivate even more workers; they are like the repeater radios in an electronic Zigbee network. Zigbee isn't the only local networking scheme used by IoT technologists, but it is the leading open standard, and its popularity seems to be growing. Its two main rivals, Z-Wave and Infineon, are proprietary. Their creators are happy to license the technology and they also do a pretty good job of policing their licensees so that end users can rely on interoperability within either of their schemes. The Zigbee Alliance is trying to keep things orderly, too, but as is the case with other open standards, Zigbee is somewhat more prone to suffer compatibility issues. Still, the openness has been a fabulous magnet for developers, and it is one reason Zigbee apparatus is the primary IoT offering for do-it-yourselfers at Lowe's. Smart home services from Comcast and Time Warner Cable also use Zigbee augmented by Wi-Fi for cameras and some other devices. By contrast, Verizon's smart home services use the proprietary Z-Wave local technology. Even if Verizon is not in the Zigbee camp, it has a pretty complete set of devices. It is too early to say whether Zigbee will evolve in ways that put it far ahead of other IoT local networking schemes, but at the moment it looks like Zigbee is pulling ahead. The IoT world will soon include its own wide area networking scheme, one that could handily beat mobile telephony and wired WAN alternatives.
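Before moving on to wide-area schemes, the hop-by-hop relaying just described can be mimicked with a simple breadth-first flood. The Python sketch below is a toy model of that behavior, not Zigbee's actual routing protocol, and the device layout is invented for the example.

# Toy mesh-relay sketch: flood a message hop by hop until it reaches the target.
from collections import deque

# Invented layout: which devices are within radio range of which.
neighbors = {
    "controller": ["hall_light"],
    "hall_light": ["controller", "door_sensor"],
    "door_sensor": ["hall_light", "thermostat"],
    "thermostat": ["door_sensor"],
}

def relay(source, target):
    """Return the chain of devices a message passes through, or None if unreachable."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for hop in neighbors[path[-1]]:
            if hop not in seen:
                seen.add(hop)
                queue.append(path + [hop])
    return None

print(relay("controller", "thermostat"))
# ['controller', 'hall_light', 'door_sensor', 'thermostat'] -- two devices act as repeaters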
On the wide area side, the first noteworthy player is Sigfox, based in France, which is piloting a service in the Silicon Valley area, one that spans the peninsula from San Francisco down to San Jose. Sigfox hopes to offer networking that costs $10 per year per hub and possibly, for customers with many hubs and low traffic, a lot less than that. Sigfox radios use low power and low data rates but have a far longer reach than local networking technologies like Zigbee. In the USA, one of the outfits that will piggyback on Sigfox is Whistle, which will offer a communicating pet collar and a support service to help track Spot and Puff.

IBM is well aware of Sigfox and at least one of its customers, an IoT company called Worldsensing. Four years ago, IBM gave Worldsensing a 2010 Smart Camp award for its FastPrk smart cities parking system. Since then, Worldsensing, using a Sigfox WAN, lit up what it says is the largest automated parking space management system in the world, in Moscow; it has been running for about a year and a half. Nevertheless, even though its PR folk are buzzing away, IBM has not yet developed a significant presence in the part of the Internet of Things where networked gadgets are routinely doing the waggle dance. One would think this would come naturally to the company, and perhaps it soon will. After all, Bee is IBM's middle name.
Definition: (1) The inputs for which a function or relation is defined. For instance, 0 is not in the domain of reciprocal (1/x). (2) The possible values of a variable. See also range, total function.

Cite this as: Paul E. Black, "domain", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/domain.html
Satellites come to the rescue when ground systems fail

Data now drives satellite communications in disaster response - By William Jackson - Oct 28, 2010

Satellites deliver the fallback system for emergency responders in areas where disasters have destroyed or damaged terrestrial infrastructure. In the past, satellite communications principally facilitated voice traffic, but that is rapidly changing. "We are definitely doing more data than voice," said Jack Deasy, civil programs director at satcom provider Inmarsat. "Data is what is driving the industry."

The first large-scale demonstration of that shift was during the response to the Haiti earthquake in January, which left much of the island nation's communications infrastructure in ruins. For the first two weeks, many response teams relied almost exclusively on mobile satellite terminals for communications using Inmarsat's Broadband Global Area Network (BGAN) service. The terminals have a throughput of 200 to 400 kilobits/sec, which was adequate for voice and more than adequate for e-mail, text messages, tweets and other data services that rescuers relied on to share information and tap expertise across the world.

The shift to data is a reflection of the increasingly mobile, connected lives people live, Deasy said. "As the world moves toward wireless connectivity, people want that capability everywhere," especially in disaster areas. That includes government users. "The government is often the early adopters, and they are big users, especially for mobility," he said. About 40 percent of Inmarsat's revenue is from government customers, and the United States is its largest customer.

The satellite industry has a 10- to 15-year lead time for fielding new systems, and Inmarsat bet in the 1990s, when it began designing the fourth-generation satellites that support BGAN, that IP data connections would become increasingly important. The BGAN satellites launched in 2005 and 2006, and the service became fully operational in 2008. Operating in the L Band spectrum, at 1.5 GHz, it enables voice and data communications through a laptop-sized terminal that a user can set up in minutes to establish a shared 500 kilobits/sec IP channel. Voice codecs use about 4 kilobits/sec, so there also is plenty of room for data in a channel. BGAN uses three satellites in geosynchronous orbit over the equator, each about the size of a double-decker bus with solar panels about 100 yards across.

Inmarsat is preparing to deliver more bandwidth for mobile IP in its fifth generation of satellites and services. It is spending $1.2 billion for an Earth station and a new fleet of satellites that Boeing is building. The new satellites will operate in the Ka Band, from 26.5 GHz to 40 GHz, and are supposed to be able to support throughput of 50 megabits/sec to a small terminal. The satellites are expected to launch in 2013 and 2014.

William Jackson is a Maryland-based freelance writer.
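As a footnote to the bandwidth figures quoted above, a quick back-of-the-envelope sketch in Python shows why a 500 kilobit/sec shared channel leaves so much room for data even with several voice calls up. The numbers are the article's nominal rates; real-world protocol overhead is ignored.

CHANNEL_KBPS = 500      # shared IP channel per BGAN terminal
VOICE_CODEC_KBPS = 4    # approximate rate of one voice call

def remaining_for_data(active_calls):
    # Bandwidth left for e-mail, texts and other data while calls are up.
    used = active_calls * VOICE_CODEC_KBPS
    if used > CHANNEL_KBPS:
        raise ValueError("more calls than the channel can carry")
    return CHANNEL_KBPS - used

print(CHANNEL_KBPS // VOICE_CODEC_KBPS)   # 125 simultaneous calls, in theory
print(remaining_for_data(10))             # 460 kbit/s still free with 10 calls up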
In emergencies or disaster situations, immediate on-site communication is key, but not always possible. When participating agencies' radios operate on different frequencies, cross-communication is impossible without the aid of an interoperability channel or device. The need for interoperability is a growing concern across the nation, and has state public safety agencies searching for solutions. In New Jersey, regionwide interoperability channels, or shared channels, in each radio band allow daily radio interoperability. The state also uses caches of radios placed throughout the state at strategic locations and programmed with regional and local interoperability channels. However, some agencies' radios could not use the interoperability channels because they did not have the signal capacity, so another answer was needed. The New Jersey Department of Law and Public Safety recently purchased 21 Incident Commanders Radio Interface (ICRI) interoperability units to enable communications interoperability across multiple agencies in the event of an emergency, and 21 more are coming soon. The ICRIs, manufactured by Communications-Applied Technology (C-AT), will allow agencies and jurisdictions with incompatible radios to communicate, despite their different frequency allocations. Do You Copy? New Jersey's ICRI bridges can establish a cohesive communications system in less than 5 minutes, and set up is simple, said Ray Hayling, New Jersey's chief public safety communications officer, who explained that this was an important factor in choosing the ICRI. "We didn't want anything that would be overly complicated, but would do the job," he said. Interconnect cables link up five different radios to the small, 3-pound ICRI box, and these radios automatically allow other radios using the same frequency to receive and transmit communications through the ICRI bridge. The audio from a transmitting radio is received by the ICRI and distributed to all other radios that have a similar or identical radio connected to the box, whether legacy, ultra high frequency (UHF), very high frequency (VHF), or 800 MHz, which creates communication across multiple frequencies. The transmitting radio only needs a similar radio attached to the box to send communications, but any other type of radio attached to the box can receive the communications and distribute them to like radios on the same frequency. "I can talk in an 800 MHz radio, and the box takes the audio and sends it back through the radios attached to the box, but on the UHF, VHF and other bands that it needs to," said Hayling. "The ICRI allows you to have interoperability across any frequency band." Additionally two talk groups are possible using the ICRI bridge. Flipping a toggle switch above the port for each radio will either place the radio in one of the two talk groups, or render the radio temporarily inactive. When creating the ICRI box, C-AT considered several attributes necessary in an emergency situation. "The ICRI had to be physically small, it had to be physically rugged, it had to be very simple to operate, and it had to run for a very long time on an internal power supply," said Seth Leyman, founder and president of C-AT. The ICRI box runs on eight "AA" batteries for an average of 30 continuous hours. To sustain power, C-AT designed the ICRI with minimal lights and energy-efficient screen. Although the box can use an alternate current or direct current power source, some emergency situations prevent the use of external power, Leyman said. 
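The bridging behavior is easy to picture as a small model: every radio plugged into a port is assigned to one of the two talk groups (or switched off), and audio heard on one port is repeated out of every other port in the same group. The Python sketch below is only an illustration of that description, not C-AT's firmware; the port names and group labels are invented.

class RadioBridge:
    def __init__(self):
        self.ports = {}  # port name -> talk group ("A" or "B"), or None (inactive)

    def connect(self, port, group):
        # Attach a radio to a port; the toggle selects group A, group B or inactive.
        self.ports[port] = group

    def transmit(self, source_port, audio):
        # Return the ports that repeat the audio heard on source_port.
        group = self.ports.get(source_port)
        if group is None:
            return []  # an inactive port repeats nothing
        return [p for p, g in self.ports.items()
                if p != source_port and g == group]

bridge = RadioBridge()
bridge.connect("uhf_fire", "A")
bridge.connect("vhf_ems", "A")
bridge.connect("p800_police", "A")
bridge.connect("uhf_public_works", "B")

print(bridge.transmit("uhf_fire", "mayday"))  # ['vhf_ems', 'p800_police']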
With an internal power supply, the ICRI box is a mobile unit that can be transported to where it's needed most. One of the major concerns using interconnect switches like the ICRI, Hayling said, is their ability to interfere with other interconnect switches in close proximity. To address this issue, New Jersey's ICRI boxes have a function called Voice ID that identifies the box causing the problem so those in charge of distributing the boxes can
For all the talk about analytics these days, there has been little mention of one of the most powerful techniques for analyzing data: location intelligence. It's been said that 80% of all transactions embed a location. A sale happens in a store; a call connects people in two places; a deposit happens in a branch; and so on. When we plot objects on a map, including business transactions and metrics, we can see critical patterns with a quick glance. And if we explore relationships among spatial objects imbued with business data, we can analyze data in novel ways that help us make smarter decisions more quickly. For instance, a location intelligence system might enable a retail analyst working on a marketing campaign to identify the number of high-income families with children who live within a 15-minute drive of a store. An insurance company can assess its risk exposure from policy holders who live in a flood plain or within the path of a projected hurricane. A sales manager can visually track the performance of sales territories by products, channels, and other dimensions. Geographic Information Systems. Location intelligence is not new. It originated with cartographers and mapmakers in the 19th and 20th century and went digital in the 1980s. Companies, such as Esri, MapInfo, and Intergraph, offer geographic information systems (GIS) which are designed to capture, store, manipulate, analyze, manage, and present all types of geographically referenced data. If this sounds similar to business intelligence, it is. Unfortunately, GIS have evolved independently from BI systems. Even though both groups analyze and visualize data to help business users make smarter decisions, there has been little cross-pollination between the groups and little, if any, data exchange between systems. This is unfortunate since GIS analysts need business data to provide context to spatial objects they define, and BI users benefit tremendously from spatial views of business data. Convergence of GIS and BI However, many people now recognize the value of converging GIS and BI systems. This is partly due to the rise in popularity of Google Maps, Google Earth, global positioning systems, and spatially-aware mobile applications that leverage location as a key enabling feature. These consumer applications are cultivating a new generation of users who expect spatial data to be a key component of any information delivery system. And commercial organizations are jumping on board, led by industries that have been early adopters of GIS, including utilities, public safety, oil and gas, transportation, insurance, government, and retail. The range of spatially-enabled BI applications are endless and powerful. "When you put location intelligence in front of someone who has never seen it before, it's like a bic light to a caveman," says Steve Trammel, head of corporate alliances and IT marketing at ESRI. Imagine this: an operations manager at an oil refinery will soon be able to walk around a facility and view alerts based on his proximity to under-performing processing units. His mobile device shows a map that depicts the operating performance of all processing units based on his current location. This enables him to view and troubleshoot problems first-hand rather than being tethered to a remote control room. (See figure 1.) Figure 1. Mobile Location Intelligence. A spatially-aware mobile BI application configured by Transpara for an oil refinery in Europe. 
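As a rough illustration of the kind of proximity question described above, the Python sketch below filters customer records by straight-line distance from a store and by income. A real GIS would compute a 15-minute drive-time polygon rather than a simple radius, and the coordinates, radius and records here are invented.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

store = (29.4241, -98.4936)  # example store location
customers = [
    {"id": 1, "lat": 29.4500, "lon": -98.5000, "income": 120000},
    {"id": 2, "lat": 29.9000, "lon": -98.9000, "income": 95000},
]

nearby_high_income = [
    c for c in customers
    if haversine_km(store[0], store[1], c["lat"], c["lon"]) <= 10
    and c["income"] > 100000
]
print([c["id"] for c in nearby_high_income])  # [1]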
Transpara is a mobile BI vendor that recently announced integration with Google Maps. GIS Features. Unlike BI systems, GIS specialize in storing and manipulating spatial data, which consists of points, lines, and polygons. A line is simply the intersection of two points, and a polygon is the intersection of three or more points. Each point or object can be imbued with various properties or rules that govern its behavior. For example, a road (i.e., a line) has a surface condition and a speed limit, and the only points that can be located in the middle of the road are traffic lights. In many ways, a GIS is like computer-aided design (CAD) software for spatial applications. Most spatial data is represented as a series of X/Y coordinates that can be plotted to a map. The most common coordinate system is latitude and longitude, which enables mapmakers to plot objects on geographical maps. But GIS developers can create maps of just about anything, from the inside of a submarine or office building to a geothermal well or cityscape. Spatial engines can then run complex calculations against coordinate data to determine relationships among spatial objects, such as the driving distance between two cities or the shadows that a proposed skyscraper cast on surrounding buildings. Approaches for Integrating GIS and BI There are two general options for integrating GIS and BI systems: 1) integrate business data within GIS systems and 2) integrate GIS functionality within BI systems. GIS administrators already do the former when creating maps but their applications are very specialized. Moreover, most companies only purchase a handful of GIS licenses, which are expensive, and the tools are too complex to use for general business users. The more promising approach, then, is to integrate GIS functionality into BI tools, which have a broader audience. There are several ways to do this, which vary greatly by level of GIS functionality supported. - BI Map Templates. Most BI tools come with several standard map images, such as a global view with country boundaries or a North American view with state boundaries. A report designer can place a map in a report, link it to a standard "geography" dimension in the data (e.g. "state" field), and assign a metric to govern the shading of boundaries. For example, a report might contain a color-coded map of the U.S. that shows sales by state. This is the most elementary form of GIS-BI integration since these out-of-the box map templates are not interactive. - GIS Mashups. GIS mashups are similar to BI mashups above but go a step further because they integrate with a full-featured GIS server, either on premise or via a Web service. Here, a BI tool embeds a special GIS connector that integrates with a mapping server and gives the report developer a point-and-click interface to integrate interactive maps with reports and dashboards. In this approach, the end-user gains additional functionality, such as the ability to interact with custom maps created by inhouse GIS specialists and "lasso" features on a map and use those selections to query or filter other objects in a report or dashboard. Some vendors, such as Information Builders and MicroStrategy built custom interfaces to GIS products, while other vendors, such as IBM Cognos and SAP BusinessObjects, embed third party software connectors (e.g., SpotOn and APOS respectively.) - GIS-enabled Databases. Although GIS function like object-relational databases, they store data in relational format. 
Thus, there is no reason that companies can't store spatial data in a data warehouse or data mart and make it available to all users and applications that need it. Many relational databases, such as Oracle, IBM DB2, Netezza, and Teradata, support spatial data types and SQL extensions for querying spatial data. Here both BI systems and GIS can access the same spatial data set, providing economies of scale, greater data consistency, and broader adoption of location intelligence functionality. However, you will still need a map server for spatial presentation.

As visual analysis in all shapes and forms begins to permeate the world of BI, it's important to begin thinking about how to augment your reports and dashboards with location intelligence. Here are a few recommendations to get you started:

- Identify BI applications where location intelligence could accelerate user consumption of information and enhance their understanding of underlying trends and patterns.
- Explore the GIS capabilities of your BI and data warehousing vendor to see if they can support the types of spatial applications you have in mind.
- Identify GIS applications that already exist in your organization and get to know the people who run them.
- Investigate Web-based mapping services from GIS vendors as well as Google and Bing, since this obviates the need for an in-house GIS.
- Start simply, by using existing geography fields in your data (e.g., state, county, and zip) to shade the respective boundaries in a baseline map based on aggregated metric data (see the sketch after this list).
- Combine spatial and business data in a single location, preferably your data warehouse, so you can deliver spatially enabled insights to all business users.
- Geocode business data, including customer records, metrics, and other objects, that you might want to display on a map.

Location intelligence is not new, but it should be a key element in any analytics strategy. Adding location intelligence to BI applications not only makes them visually rich, but surfaces patterns and trends not easily discerned in tables and charts.
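For the "start simply" recommendation, the sketch below shows the sort of aggregation a map template needs: roll a metric up by a geography field and turn the totals into shading buckets that a map component can colour. The sales figures and thresholds are invented for illustration.

from collections import defaultdict

sales = [
    {"state": "TX", "amount": 1200.0},
    {"state": "TX", "amount": 800.0},
    {"state": "CA", "amount": 3100.0},
    {"state": "NY", "amount": 450.0},
]

totals = defaultdict(float)
for row in sales:
    totals[row["state"]] += row["amount"]

def shade(value, low=1000.0, high=2500.0):
    # Map an aggregated metric onto a three-step colour scale.
    if value >= high:
        return "dark"
    if value >= low:
        return "medium"
    return "light"

for state, total in sorted(totals.items()):
    print(state, total, shade(total))
# CA 3100.0 dark / NY 450.0 light / TX 2000.0 medium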
If the server you are attempting to contact requires authentication, you will need to supply a username and password. If you receive a "Bad Credentials" error from the server, check that you have entered the correct username and password and that you are authorized to access the files and directories you have chosen.

Your authentication credentials will determine your access rights to the files on the server. If you receive "Permission Denied" or similar errors, it may indicate that the given username does not have access to a particular file or folder.

The username and password are always encrypted when communicating with the server, even if the "Encryption" checkbox is unchecked. Usernames and passwords must be less than 32 characters, or the limit imposed by the server operating system, whichever is less. Usernames and passwords should consist of only ASCII letters, numbers, and printable symbols. The use of other characters, such as extended Unicode characters, may work in some environments but is not assured and may compromise security.

If the server is running on a Windows system with Active Directory or LDAP enabled, you may specify an authentication domain after your username by using the following syntax:
This should only be done for Windows servers which use explicit domains for authentication.
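A short Python sketch of the character and length rules stated above, for anyone scripting credential checks before a transfer. The function name and the exact policy details are illustrative; they are not part of the product's documented interface.

import string

ALLOWED = set(string.ascii_letters + string.digits + string.punctuation)

def credential_ok(value, server_limit=32):
    # Enforce the documented rules: fewer than 32 characters (or the server's
    # lower limit) and only ASCII letters, digits and printable symbols.
    limit = min(32, server_limit)
    if not value or len(value) >= limit:
        return False
    return all(ch in ALLOWED for ch in value)

print(credential_ok("operator01"))   # True
print(credential_ok("p@sswörd"))     # False: extended character
print(credential_ok("x" * 40))       # False: too long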
The application is designed to use technology to share this fascinating part of NYC’s past. A new augmented reality mobile app has been released to combine the use of smartphones and tablets with the history of New York City in order to teach young people about the rich cultural history of Jewish immigration and other highly interesting topics from the start of the 20th century. Other subjects covered by this application include the labor and women’s movements in New York City. The purpose of this augmented reality game is to reveal the “secret history” that is held by many different buildings, neighborhoods, and other parts of NYC, at times when the device user is actually in the relevant location. The app is called Jewish Time Jump: New York. It works with AR technology and was created by ConverJent, which is a Jewish learning games nonprofit organization. The augmented reality app was created with a grant provided by the Jewish education group, the Covenant Foundation. This augmented reality app is a 2013 Games for Change Awards finalist in the category of being “Most Innovative”. The game is a new, high tech twist on the concept of a scavenger hunt. The players of the game must locate the required clues by heading to the various locations both inside and across from Washington Square Park. This is adjacent to the building that was once the home of the famous Triangle Shirtwaist Factory and that is today a part of New York University. As the players of the augmented reality game move from one location to the next, they receive information about events, archival photos, and even characters on their mobile device screens. The content presented is triggered by the GPS signal of the device. Players are also able to use their mobile devices to view historical documents as they play. These can include flyers and old Yiddish newspaper pages (which have been translated). The game is set around an important New York City labor history event. It provides an augmented reality experience surrounding the 1909 shirtwaist strike, which is also referred to as the Uprising of the 20,000.
Logitech has manufactured its one billionth computer mouse as the technology approaches its 40th birthday. Logitech's one billionth mouse has been produced at its main China factory.

The mouse now faces serious competition from touch and sensor technologies, with many laptop users relying on the touchpad built into their machines. Touch screens are also expected to make a dent in the mouse market. The sale of desktop PCs has also declined, and these are usually operated with a mouse.

The mouse has moved with the times down the years, though. Although the device's wire "tail" was the reason for its moniker, many users have sliced that off with the use of infra-red, laser and wireless devices.

Logitech's one billionth mouse came off the production line last month, and Logitech is now running a $1,000 reward competition to find it. The computer mouse turns 40 on 9 December - the 1968 date when the mouse was first put through its paces by US researchers at Stanford University.
As data centers grow in size, their owners have become acutely aware of the cost both in dollars and carbon emissions. Microsoft and other large data center providers have placed a lot of their monstrous facilities in the Pacific Northwest because they could get plenty of cheap, clean hydroelectric power that way. But they can’t place every data center in the Pacific Northwest. The more hops you create for people around the country and the world, the more latency you get. So it helps to place data centers in different locations. There's not much for hydroelectric in Texas, but there's plenty of wind. So Microsoft is teaming up with a Texas wind farm, committing to a 20-year clean energy purchase contract. The electricity from this project will be sent to the local grid that serves Microsoft’s San Antonio data center. The wind farm itself, however, will be some distance away: it will be some 70 miles northwest of Ft. Worth, Texas, near the town of Jacksboro. The farm is known as the Keechi project, by RES Americas, and will begin construction early in 2014. When it is completed in 2015, the farm will put out 110 megawatts of power. The Keechi wind farm will bring additional wind power capacity into the Texas electricity supply chain. Every major firm has its own plan to be carbon-neutral, and Microsoft's has been to achieve net zero emissions for its data centers, software development labs, offices, and employee business air travel in over 100 countries around the world. It's working on this through a variety of strategies, like an internal carbon fee, purchasing renewable energy, and improving data collection and reporting.
Table of Contents What is a Driver? A driver is a program that is able to control a device that is connected to your computer. These drivers are used by the operating system to enable it to communicate with the particular device the driver was made for. Devices that you connect to your computer are often very specialized which makes it so Windows can not communicate directly with the device without a program telling it how to. This program, or device driver, acts as a translator between the installed device and the programs that utilize the device. Why do I need to update? By default Windows contains generic drivers for many different types of hardware connected to your computer. Unfortunately, many of these drivers that are bundled with Windows tend to be generic and do not support all of the advanced features of the hardware being installed. Therefore you would want to download and install the driver created by the hardware manufacturer so that Windows understands how to use these special features. Also as time goes by, hardware manufacturers release new versions of their drivers to fix bugs, increase performance, increase stability on your computer, or add new features. When these drivers are released it is recommended that you upgrade your driver to take advantage of these new enhancements. When new drivers are released they tend to come in two types of updates. The first type is a program that you run that will automatically update the driver for you and then prompt you to reboot your computer. The second type is a set of driver files that you need to manually update the drivers with. This tutorial will focus on teaching you how to upgrade your driver using both methods. Finding Out the Manufacturer and Model of Your Device Before we begin updating your driver, we need to know the manufacturer and model number for the device. This is a pretty simple problem to overcome. Simply look at your device for a brand name, and that should be the manufacturer. For example I am looking at my modem and on the top it says "Binatone". Pretty simple huh! To find the “model” of your device look at the back/bottom of your device for a code (my modem's is ADSL 2000). If this does not work, try looking in the paperwork that came with your device and see if you can find it there. On the other hand, if you have an internal device that is not easily accessible, it may be difficult for you to find the make and model for it (for example a video card). For internal devices you should use the Device Manager to find out this information: Click on the Start button in the bottom right hand corner of your desktop as shown below: Click on the Control Panel menu option to open the Control Pane as shown below: Double-click on the System icon as shown below: Click on the Hardware tab at the top of the box (red arrow), then click on the box which says Device Manager (blue arrow) as shown below: A window will appear which contains a list of the devices on your computer like the image below. You will need to click on the plus (+) arrow next to the hardware category for the driver you want to update (red arrow). For my continuing example of updating a video driver I would click on the plus (+) arrow next to display adapters (where video cards reside). After clicking on the (+) sign, the category will open listing the devices that are installed on your computer that fall under this category of hardware. You should see your video card listed and you would make a note of the make and model of the card you wish to update. 
Stay in the current window, as the following steps will continue from here.

Determining the current version of your driver

Before you upgrade your driver, you want to determine whether or not you have the latest version. When developers create drivers they assign a version number to it. Each time the manufacturer releases a new update to this driver, they increase the version number. In this way you can determine if you have the latest version of the driver by comparing the version number of your currently installed driver to the version number of the driver currently available. So if their version number is higher than yours, you know that there is a newer version available for download.

To determine the current version of your driver you would do the following:

While in the Device Manager, as described above, you need to click on the (+) arrow next to the category of device you want to update (red arrow). Then right-click on the device which you would like to update. Again, in my example I would right-click on the Radeon 9500 Pro / 9700 which the blue arrow is pointing at. After right-clicking, a list of options will appear. Click Properties:

A new window will open, which will display the various properties of your device. Click on the Driver tab in the top of the window (red arrow). Then look at the details in the Driver Version line (blue arrow):

Write down this version number so you can reference it later.

Finding the latest driver

So, now that you know the name, model, and version number of your device, it is time to determine if there is a newer driver available for you to use. The easiest way to find an updated driver is to check the manufacturer's web site. This will ensure you have the latest and most up-to-date drivers available for your device.

Finding your manufacturer's web site should be pretty simple. Using the example above, I found the device manufacturer for my video card was ATI Radeon. Usually the manufacturer's web site is its name with standard internet tags around the end (www. and .com). If you are unable to find the website this way, try going to www.google.com and searching for the name there. Usually the first entry should be the official manufacturer's site:

When you find the address of the manufacturer, go to the site and have a look around. It would be impossible to give instructions for each manufacturer, but you should be looking for a drivers page. On some manufacturers' sites the Drivers link is prominent. On others you generally need to go into their support section to find the updated drivers. If that does not work, you can search for it on the site.

After taking a good look around the manufacturer's site, you should have found the driver section for your device. However, in the event that you are unable to find a driver section, there are a number of handy sites which collect all the drivers available into an alphabetical list by manufacturer name. My favorite is www.driverzone.com. It has an up-to-date list of available drivers, and is very easy for novices to navigate around.

This step is the only part of the tutorial where I cannot give you specific instructions; it varies too much from brand to brand. If you have found the drivers page, simply compare the version number of the driver they have available for download to the version number you retrieved earlier. If their version number is higher, then they have an updated driver for your machine. If it is the same version, then there is no newer driver available.
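If you prefer to compare the two version numbers programmatically rather than by eye, a small Python sketch like the one below does the job for ordinary dotted version strings. The example versions are made up; use the number you wrote down earlier and the one listed on the download page.

def parse_version(text):
    # Split a dotted version string into a tuple of integers for comparison.
    return tuple(int(part) for part in text.strip().split("."))

installed = "6.14.10.6925"   # what Device Manager reports
available = "6.14.10.7000"   # what the manufacturer's site offers

if parse_version(available) > parse_version(installed):
    print("A newer driver is available - download and install it.")
else:
    print("You already have the latest driver.")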
Downloading the driver update When you have found driver update for your device that is newer that the one you have installed, simply go back to the manufacturers site/or driverzone, and find the newer driver. Click on the download link to this file and you should be prompted with a download box. If you use Internet Explorer it will look like so: You should save your driver update download to the desktop. This is so that the file can be easily found later on. To do this, click save and setting the Save In pathname as Desktop (red arrow) and press the Save button. So now you should have the file placed neatly on your desktop for easy access. If the files extension is .zip, then the driver will need to be extracted first. This can be done very easily using BleepingComputer's own tutorial on the subject here: How to create and extract a ZIP File in Windows ME/XP/2003 How to create and extract a ZIP File in Windows 95/98/2000 You should extract these files to the desktop now. If on the other hand, the file is an executable (ends with .exe), then move on to the next section. Installing the Driver update To launch the driver update program you would look for the program that you downloaded or extracted. The setup file should look something like this: Double-click on the setup file and follow the on screen instructions to install the update. When the driver has finished installing, it will usually prompt you to reboot. Reboot your machine and you should now be using the updated drivers. If on the other hand, there is no setup.exe file or other executable to run, then you will need to manually update the driver through the Device Manger. Using the previous instructions open the Device Manger. Using the drop down plus (+), open the category of the device you want to update and select the device by clicking on it once to highlight it. Right-click on the device and click Properties. Now click on the Driver tab and then click on the Update Driver button: A wizard will begin in a new window. If the Wizard asks Can Windows connect to Windows Update to search for software? select the option labeled No, not this time and press the Next button. You will now see a screen similar to the one below. Click on the option labeled Install from a list of specific location (Advanced) (red arrow). Then click next (blue arrow). A screen will open similar to the one below. Select the option labeled Don't search. I will choose the driver to install. (red arrow) and press the Next button. A new screen may come up showing compatible hardware. Simply press the Have Disk button. Then click on the Browse button and navigate to the folder on your desktop where you extracted the driver files. Once you are navigated to that folder you will see something like below. You will see a list of .inf files that contain the information about the driver update found in that folder. Select the .inf file (red arrow) and press the Open button (blue arrow). Then press the OK button. You will now see a list of compatible hardware. Select the driver and press the Next button. Windows will copy the updated driver to your system. When it is done, press the Finish button. You will now be back at the properties page for your device. If you look at the version, you should see that the version number now corresponds to the new driver you just installed. You can now press the Close button and exit the Device Manager. After following the above instructions, you should be able to update your device drivers. 
This will be useful for meeting minimum requirements for applications/software, to fix bugs that out-of-date drivers may be causing, or to improve the performance of your hardware. As always, if you have any questions feel free to ask them in the computer help forums.

David Blyghton (D-Trojanator)
Bleeping Computer Advanced Microsoft Tutorial
Overview of physical security and environmental controls Security is normally an area that is usually very broad based owing to the fact that there are many ways through which it can be implemented and enacted. One of the ways through which it can be enacted is through the development of security policies. Another aspect of security is the physical security. This is where security is implemented physically and through actions. Environmental control and especially in our working environments is also another aspect that should be taken with much affirmation since our places of work should also be environmental friendly. HVAC: In most data centres, this is an abbreviation that one will not miss and it stands for Heating, Ventilating and Air Conditioning. This is a system that plays a very important role in keeping the environment at a constant temperature. This is a very complex system that calls for high level engineering and science and one can barely design it by one's self. It is also important that the HVAC system is properly integrated into the fire system so that in case of a fire, the cooling system does not circulate oxygen to feed the fire. In terms of the Heating, Ventilating and Air Conditioning perspective, one's data centre should be separate from the rest of the building. With overheating being a huge issue in a data centre, one need to ensure that such temperature changes to not affect the whole building but only the data centre section. There are also other systems known as closed-loop systems and positive pressurization. In the closed-loop, the air in one's building is in constant recirculation hence no air from outside is pulled in to cool the building. The positive pressurization means that when one open the door, air inside the building will rush out automatically especially in cases of a fire and one want to get rid of the smoke. Fire suppression: When working in an environment where there are many computers and power systems, it is evident and vivid that water must not be of any close proximity. This means that in such an environment, one should have very little fire suppression systems that rely on water. Fire detections is also very important since it provides a good basis of the probable cause hence making it easier for one to supress it. One should make sure that one has smoke, fire and heat detectors installed in one's data centre. When one is planning to take care of a fire with water, there are different methods that one can use. One is the dry pipe method where the pipe that has one's water is completely dry and in case of fire detection, the pipe fills up with water to the appropriate pressure and puts out the fire. The wet pipe method is one where one can immediately discharge the water in case of a fire alarm. There is also the preaction suppression method where the pipe where the pipe is filled with water and has the appropriate pressure but won't turn on until the temperature hits a certain amount making this system to go into effect. Fire suppression can also be done with the use of chemical that are environmental friendly. This means that there are many fire suppression options apart from water. EMI shielding: Electromagnetic interference is a common problem that occurs when we put many computers very close to each other. For instance, if one places a radio near a computer, one may realize that there is some electromagnetic interference radiating through the heat sinks, circuit boards and cables among other interfaces that are directly in the computer. 
If one open up a computer, one realize that there is a lot of metal shielding that may be on the case itself or either wrapped around the computer itself so as to prevent some of the electromagnetic interference from getting into one's environments. The metal shielding should not be removed at all costs since it prevents the radiated signals from getting into other components and devices that could be in one's environment. Hot and cold aisles: When talking about hot and cold aisles, we generally refer to the manner in which our data centres are engineered; that is in which rack and what directions we put our servers. For instance, if one look at one's data centre, one may see servers arranged in different racks and on with raised floors underneath. It is underneath the raised floors that we have cold air moving in and blowing up into openings in the floor. Through this, the cold air is pulled into the racks of the servers by the fans and pushes it through the system. There is also the back of the server where all the hot air from the server is coming out, moving to the top of the building and then pulled below by the air conditioning systems where it is cooled. When designing this for maximum optimization, we should have cold aisles where all the cool air is being pulled through and hot aisles where the hot air from the computer systems can be sent to the top of the building for recirculation. Environmental monitoring: After all environmental control systems have been set up such as cold and warm isles; it new becomes our responsibility to make sure that we establish whether our installation is having the actual effect on the temperature. So as to know if there is any effect occurring, we have to monitor the temperature over a period of time so as to make sure that whatever we are cooling is working properly and functional. For instance, one should ensure that if one increases the temperature, it will not result into an increase in the costs one incurs. In most cases, one only turn on and off the cooling systems without necessarily keeps track of any changes. In this case, it is important that one obtain a thermometer that one can constantly watch and monitor. In addition, one can use it to keep track of information such as humidity and daily temperature changes. With the help of such a thermometer, one should witness different temperature patterns for the different time intervals. One may also find out that different periods of the month have different temperature recordings which could depend on the level of CPU utilization. A higher CPU utilizations means more heat generated. With these logs available, one might later look into them and make some analysis on the working of one's cooling system for instance determine if there is proper amount of humidity in one's environment. Another aspect of environmental monitoring can be video monitoring. In this case, one might decide to have one's own closed-circuit television which is an in-house component one can use to capture videos and data from one's cameras. With such video devices, one can protect one's assets. This is a common feature in shopping malls and supermarkets. When setting up such cameras, one should take into account their location. One can decide to locate them inside one's building to monitor one's assets and also outside so as to monitor people in the parking lot. One should also consider the size of the area to monitor since there are cameras that offer a large field of view while others offer a small field of view. 
One should also consider the lighting of the place being monitored. If the area has poor lighting, one might want to install special cameras that can record even in low light, such as at night. One should also make sure that the video monitoring system is properly integrated with other security monitoring systems and devices, such as intrusion detection, so that information is captured properly.

Temperature and humidity controls: Temperature in a data centre can be quite a challenge: when systems get too hot they might crash, and when they are kept too cold, a lot of money is wasted on cooling. Most data centres are kept very cold, contrary to a Google recommendation of around 80 degrees Fahrenheit in the cold aisle, which works well for most systems. Humidity, on the other hand, refers to the amount of moisture in the air. Too much moisture can lead to corrosion of systems, and cooling systems help remove that moisture. If the humidity is too low, one might experience static discharge, which can be dangerous to computers and other sensitive electronic components.

Hardware locks: Hardware locks are among the most common physical security components. These are devices present on most doors. In most cases they use the familiar lock-and-key mechanism, where a key is required to open the lock; in other cases, a key may not be necessary.

Mantraps: Mantraps are another special security enforcement method. These systems are designed to detect illegal access to an area and automatically lock all the entrances so that the trespasser cannot leave the room, creating a trap.

Video Surveillance: Video surveillance is an aspect of physical security where surveillance cameras are installed in various places either inside or outside a building. With these cameras, all activities are captured and displayed on a monitoring screen for supervision. Video surveillance is considered very effective since it provides around-the-clock coverage, day and night. One disadvantage is that this form of security relies on the presence of power, so a power outage can lead to loss of the surveillance.

Fencing: Fencing is another form of physical security. This involves erecting a perimeter barrier around an organization or company. Through this, unauthorized entry by people and animals is limited. An individual seeking access to the fenced area can only use the authorized entry point, which in most cases is the gate.

Proximity readers: Proximity readers are devices that can establish how far an individual is from a restricted area. With such readers, an approaching individual is detected and his or her movements can be monitored. Once the individual gets to the restricted area, the readers can raise an alarm so as to draw the attention of security personnel.

Access list: An access list is another manner in which security is enforced inside organizations. In this case, special lists are compiled giving a clear outline of the people who should access a particular facility or section of the organization.
For instance, one can have an access list at the entry point of a server room so as to ensure that only the permitted database administrators can gain access to it.

Proper lighting: Proper lighting is another way of enhancing security. This mainly applies in open places such as streets where many people carry out their daily activities. With proper lighting, all activities can be monitored for security purposes.

Signs: Signs can also be used as a physical means of enforcing security. In most cases, the signs appear in the form of warnings. For instance, one can have a sign prohibiting access to a particular section of an organization. Such signs are normally very distinct and can be seen from a distance.

Guards: Guards are individuals employed to watch over a particular area. They are entrusted with the responsibility of making sure that there is proper security. Guards can be stationed at different places such as gates, door entrances and exits. Apart from monitoring activity, they also carry out inspections of people and vehicles entering and leaving the premises.

Barricades: Barricades are barriers that inhibit access to a particular area by people or vehicles. In other cases, they can be used to bring permanent closure to a particular entrance.

Biometrics: Biometrics is among the newer technologies used to enforce security. Biometric devices are made to recognize a fingerprint, an iris or a face so as to decide whether to allow an individual access to a particular area. They are usually installed at doorways and are loaded with the details of only those people allowed to access the building.

Protected distribution (cabling): Cabling is another aspect of security enhancement. In most cases the cables carry electric power, which makes them suitable for an electric fence. The cables are always live, and therefore an attempt to penetrate through them can lead to electrocution.

Alarms: Alarms are sound systems used to attract the attention of security personnel in case of a security breach. Some alarms are automated, while others are manual and only sound when activated.

Motion detection: Motion detectors are highly sensitive devices capable of detecting the slightest motion in a building. Such devices are usually installed in places where access is completely restricted, such as bank safes.

Deterrent: This is a security control measure where access to an area is made to appear heavily restricted so as to discourage attempts at entry in the first place.

Preventive: Preventive control is a method where all the necessary security measures are taken in advance, so as to avoid a situation where an individual can gain access to a building without the awareness of security personnel.

Detective: Detective control is where security personnel in an organization rely on security intelligence, carrying out investigations, monitoring and research to identify security events.

Compensating: Compensating control comes in where an organization decides to use a single security enforcement strategy that can cover for many other security enforcement practices. For example, one alarm system can be used for a fire, a security breach and other emergencies, so that many security issues are handled using the same device.

Technical: Technical control entails carrying out security analysis so as to have an effective security system.
This means that there has to be well calculated time intervals between security switches so as to ensure that security is still in force even with the absence of security personnel. Administrative: Administrative control is where a specific individual is allocated a specific security area to handle and manage. The different security sections in an organization are managed by different people hence bringing some order in the execution of security policies. Basically, a well dynamic and vibrant security system is crucial for top protection of every aspect in an organization. It is for this reason that security must be given an upper hand in terms of seniority and also funding. On the other hand, the environment in which many machines are operated should also be one that is ideal and provides all the conditions for efficient running of the machines.
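Returning to the environmental monitoring idea discussed earlier, the small Python sketch below shows the sort of analysis one can run over a temperature and humidity log: compute a daily average and flag readings outside an acceptable band. The readings and thresholds are invented; acceptable ranges should come from one's own equipment specifications.

readings = [
    # (hour, temperature_f, relative_humidity_pct)
    (0, 68.5, 45), (6, 69.0, 47), (12, 75.5, 52), (18, 72.0, 49),
]

TEMP_BAND = (64.0, 80.0)      # assumed acceptable cold-aisle temperature range
HUMIDITY_BAND = (40.0, 60.0)  # assumed acceptable relative humidity range

avg_temp = sum(t for _, t, _ in readings) / len(readings)
print("daily average temperature: %.1f F" % avg_temp)

for hour, temp, humidity in readings:
    if not (TEMP_BAND[0] <= temp <= TEMP_BAND[1]):
        print("hour %d: temperature %.1f F outside band" % (hour, temp))
    if not (HUMIDITY_BAND[0] <= humidity <= HUMIDITY_BAND[1]):
        print("hour %d: humidity %d%% outside band" % (hour, humidity))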
America’s electric cars are better for the environment, but they share a dirty little secret. The Chevy Volt, Nissan Leaf and Tesla Roadster all use a super greenhouse gas known as HFC 134a as the refrigerant for their air conditioners. The liquid coolant is so potent that when it leaks into the atmosphere, it traps 1,400 times more heat than carbon dioxide over a 100-year time horizon. For automakers and advocates of green transportation, it poses an uncomfortable truth: Vehicles touted as a solution to climate change carry a hairspray-sized canister loaded with a chemical that significantly contributes to warming of the earth’s climate. As much as half of current HFC emissions, a small but fast-growing source of global warming pollution, come from leaks out of the air conditioners in cars. Already a number of Chevrolet, Buick, GMC and Cadillac gas-powered cars use an alternative climate-friendlier coolant called HFO 1234yf, as carmakers confront growing pressure from environmentalists and as regulations are developed by governments. Climate experts say it’s clear that all electric automakers should get on board soon. “It makes sense for electric vehicles to use (alternatives), and to reduce their overall global warming potential,” said Don Anair, deputy director of the clean vehicles program at the Union of Concerned Scientists, a science advocacy group. But among 16 EV models on America’s roads, only two — Chevy’s newest model of its all-electric Spark and the leasable Honda Fit — have ditched the super greenhouse gas HFC 134a for the climate-safe alternative so far. Many automakers of both electric cars and conventional ones have expressed reluctance to commit to the switch, citing the cost and limited supply of new alternatives. The European Union has banned HFC 134a for any newly redesigned or re-engineered vehicles this year, and for all vehicles in 2017 — though the industry elsewhere is not rapidly adopting the EU’s lead. The United States has no such mandate yet. The U.S. Environmental Protection Agency is considering one. How countries got to this point is a classic case of unintended environmental consequences. Under the Montreal Protocol, a 1987 treaty that zeroed out substances harmful to the earth’s ozone layer, HFC 134 was chosen by nations as the best alternative at the time to replace ozone-depleting CFCs. The result today is that most of the billion or so cars on the world’s roads use the HFC refrigerant — but while the ozone layer has rebounded, HFC 134a is exacerbating the global warming problem. So far, HFC 134a and other types of hydrofluorocarbons have contributed to less than 1 percent of total global warming, according to a study published in the Atmospheric Chemistry and Physics scientific journal. But the use of these damaging gases is climbing generally, as developing world economies use and produce more HFC-spewing cars, air conditioners and refrigerators, and as they make more foam insulation that uses the chemical during manufacture. Emissions from HFCs are growing at a rate of 10 to 15 percent per year, the study says. Left unchecked, HFCs alone could add up to 0.5 degree Celsius of the global average temperature rise by the end of the century, about a quarter of the 2 degree Celsius rise that nations are struggling to stay within through international agreements. 
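To put the warming factor in perspective, a back-of-the-envelope calculation converts a leaked refrigerant charge into its carbon dioxide equivalent. The 1,400-times figure and the roughly CO2-like impact of the replacement come from the reporting here; the half-kilogram charge size is an assumption for illustration only.

GWP_HFC_134A = 1400                      # times CO2 over a 100-year horizon
GWP_HFO_1234YF = 0.003 * GWP_HFC_134A    # "0.3 percent the climate impact"

charge_kg = 0.5                          # assumed refrigerant charge that leaks out

print(charge_kg * GWP_HFC_134A)          # about 700 kg of CO2-equivalent
print(charge_kg * GWP_HFO_1234YF)        # about 2 kg of CO2-equivalent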
Electric vehicles are touted for producing zero tailpipe emissions and being a critical force in reducing fossil fuel use and curbing climate-changing pollution — which could make their use of the super greenhouse gas HFC 134a all the more hypocritical. The Chevy Volt, Nissan Leaf and Tesla Roadster represent about two-thirds of the roughly 180,000 EVs sold in recent years in the United States. Kevin Kelly, a spokesman for General Motors, wouldn’t say if or when the hybrid Chevy Volt, the biggest-selling U.S. EV, might switch to HFO 1234yf. The auto giant in 2010 said it would be the first U.S. carmaker to voluntarily phase out HFC 134a from many of its passenger cars. So far, only its Chevy Spark EV, which had sold 703 units as of February, and its conventional Cadillac XTS luxury sedan use the new refrigerant. Kelly declined to disclose which other GM models have or will soon follow suit. Spokespeople for Nissan Motors in the United States were not immediately able to provide more information. Tesla Motors did not respond to repeated requests for comments. David Doniger, a policy director at the Natural Resources Defense Council who has worked on ozone issues since the 1980s, noted the hypocrisy, but said he is “more concerned about getting the overall transition to occur as quickly as we can.” All car manufacturers “have the opportunity to switch refrigerants, and they should do it as quickly as they can,” he said. “From an environmental point of view, if you want to get the changeover happening at a large scale, I wouldn’t focus first on electric cars — I’d just be focusing on volume.” Nissan Motors, for instance, sold 1.2 million gas cars in the U.S. in 2013 and just 23,000 all-electric Leafs. What has made vehicle air conditioners a primary concern of advocates is that the refrigerants often leak into the air slowly over years. Cars that are dumped or crushed in the junkyard usually leak, too, since there’s little regulatory or economic incentive for mechanics to collect and destroy old coolant. The leading alternative to HFC 134a developed so far is HFO 1234yf, a hydrofluoroolefin compound, sometimes described simply as “YF,” which traps about as much heat in the atmosphere as carbon dioxide does. YF has 0.3 percent the climate impact of HFC 134a. It is available in nearly a dozen car models, and about half a million vehicles worldwide run it through their air conditioning systems, according to Honeywell, the New Jersey-based industrial conglomerate and marketer of YF. Honeywell expects more than 2 million vehicles to use the refrigerant by the end of this year. Through a joint venture with Delaware-based DuPont Co., Honeywell operates a manufacturing facility in China. Last December, the company announced plans to build a $300 million YF facility in Geismar, La., that is expected to come online in 2016. But while YF works well, it needs more energy to do its job — 10 percent or more energy by some estimates. In an electric vehicle, cooling and heating already use up a significant part of the battery’s juice, shortening the car’s driving range and making a more energy-intensive refrigerant less attractive, according to Stephen Andersen, who directed ozone protection and climate programs in the EPA for more than two decades. Andersen also cited concerns about YF’s availability and being able to get it serviced in auto shops. Using the new refrigerant “would be one more albatross for the electric car,” said Andersen, who is now U.S. 
director of research at the Institute for Governance and Sustainable Development. Electric vehicle drivers already face a dearth of options for recharging their batteries on the road even at daily commuting distances — a factor in low sales. The all-electric Chevy Spark EV — which debuted in California and Oregon last year — and the Honda Fit EV, available for lease in the United States, are bucking those concerns, however. Both use YF. YF has hardly won universal acceptance. In Europe, Germany’s largest automakers are refusing to use YF, creating a standoff that is slowing down the switch, Bloomberg News reported last year. The European Union, meanwhile, has prohibited HFC 134a from newly designed cars. By 2017, it will be banned from all new cars there. Yet parent company Daimler AG said it would recall its new Mercedes-Benz cars that contain YF, after the product failed internal safety tests, and use HFC 134a instead — a violation of the EU’s new regulations. According to the company, in some head-on collision test scenarios, YF burst into flames. Volkswagen AG said it would re-evaluate HFO 1234yf and put off plans to use it “until further notice.” HFO 1234yf passed industry and EPA evaluations, and General Motors is standing by the alternative refrigerant despite Daimler’s tests. Last month, EU scientists said they found the chemical does not pose any serious safety risks, and the European Commission recently launched a legal proceeding against Daimler for its refusal to get rid of HFC 134a in Mercedes-Benz cars. German automakers are proposing to use a different alternative instead — a carbon-dioxide-based refrigerant called R744 that has a global warming potential similar to HFO 1234yf. Daimler, Volkswagen and Audi, BMW and Porsche said last year that they would steadily roll out their new technology across their respective fleets. Before HFC 134a became a key climate concern, it was the best-available clean alternative and the one most easily adopted worldwide. In the late 1970s and ’80s, scientists observed that its predecessor, CFC 12, was depleting the Earth’s ozone layer, exposing the planet to more of the sun’s harmful radiation, particularly above the South Pole. In 1987, participants in the Montreal Protocol agreed to start phasing out chlorofluorocarbons entirely, and by 1995, the United States had banned production and import of CFCs and other ozone depleting substances. CFC 12 was also a powerful global warming agent, able to trap nearly 11,000 times more heat in the atmosphere than carbon dioxide. HFC 134a is considered ozone-safe and has one-eighth the global warming potential of CFC 12. “It was considered a victory for both the climate and ozone,” said Andersen, the former EPA director who has also played an instrumental role in implementing the Montreal Protocol. Doniger, who also participated in the Montreal Protocol negotiations, recalled that the chemical industry had resisted calls for developing brand-new refrigerants. Hydrofluorocarbons, far less damaging than CFCs as a greenhouse gas, emerged as a practical choice and won acceptance, Doniger said, “even though we knew that this second generation of refrigerants were not perfect.” The changes brought by the Montreal Protocol have succeeded, Doniger explained, because the treaty was designed to force practical rules that would evolve gradually. 
For example, CFCs, the original threat to Earth’s ozone layer, are still allowed for use as a propellant in medical inhalers for asthma, though new alternatives promise to end even that exception. HFCs, still a small contributor to climate change, are growing enough to become a new concern. YF is now a viable alternative. “We need to keep going,” Doniger said. The Montreal Protocol is an ongoing process, with more than a hundred countries meeting twice a year to hash out improvements by proposing how nations must phase out greenhouse gas usage as industry discovers smarter alternatives. Reducing and eliminating HFCs, especially in air conditioning, has been debated for years, with Micronesian island nations and North America pushing proposals. China has agreed to cooperate and follow the lead of the United States. India, Doniger said, has emerged as a key opponent for now. A major effort by U.S. climate change activists is underway to phase out HFC 134a and other super-potent greenhouse gases in the United States. President Barack Obama’s new slate of aggressive fuel efficiency standards gives automakers incentives to make the switch. Companies that use cleaner refrigerants, such as HFO 1234yf, or install systems that significantly stop leaks can earn credits toward the federal requirements that force automakers to increase fuel efficiency standards. Last month, the U.S. EPA laid out a schedule for proposing tougher regulations under the federal Clean Air Act to directly require the replacement of HFC 134a and other HFCs with newer, lower-impact alternatives. Doniger said he expects the agency will issue a proposal in the next six months. The rules could “accelerate the industry-wide transition away from HFC 134a,” he said, by forcing automakers to use alternatives on a rapid schedule beginning next year. If efforts succeed, they could substantially eliminate the global warming emissions caused by HFCs and “significantly improve the chances of staying below the 2-degree Celsius global warming guardrail,” according to the study in Atmospheric Chemistry and Physics. That subject will be on the table for Montreal Protocol meetings later this year. “We could eliminate one of the six (main) greenhouse gases,” said Durwood Zaelke, president of the Institute for Governance and Sustainable Development. “The consensus has gotten stronger and stronger that this needs to be done,” Zaelke said. ©2014 InsideClimate News
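As a rough sense of scale for the refrigerant figures cited in this piece, the short sketch below converts a leak into CO2-equivalent terms. The global warming potentials follow the numbers quoted above, and the 0.5 kg charge size is an assumed example value, not a figure from the article.

```python
# Illustrative CO2-equivalent comparison for a vehicle A/C refrigerant leak.
# GWP values follow the figures cited in the article (HFC-134a ~1,400x CO2,
# HFO-1234yf ~0.3% of HFC-134a's impact); the 0.5 kg charge is an assumption.

GWP = {
    "HFC-134a": 1400,            # 100-year global warming potential vs. CO2
    "HFO-1234yf": 1400 * 0.003,  # roughly 0.3% of HFC-134a's climate impact
}

def leak_co2_equivalent_kg(refrigerant: str, leaked_kg: float) -> float:
    """Return the CO2-equivalent mass (kg) of a refrigerant leak."""
    return GWP[refrigerant] * leaked_kg

if __name__ == "__main__":
    charge_kg = 0.5  # assumed refrigerant charge lost over a vehicle's life
    for name in GWP:
        print(f"{name}: {leak_co2_equivalent_kg(name, charge_kg):,.1f} kg CO2e")
```

Under these assumptions, the same half-kilogram leak works out to roughly 700 kg of CO2-equivalent with HFC-134a versus about 2 kg with the newer refrigerant, which is the gap the advocates quoted here are pointing at.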
<urn:uuid:ef325001-e9dd-4817-8f70-183b4678d0c3>
CC-MAIN-2017-04
http://www.govtech.com/transportation/Many-Green-Cars-Carry-Greenhouse-Gas.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948289
2,753
2.984375
3
Many of the TOP500 machines rely on InfiniBand to power their computations, but because the technology is still relatively new, refinements to how it works are still being developed. An undergraduate student at Northeastern University has devised a new way for supercomputers that use InfiniBand to capture data during a computation and retain it, guarding against lost progress if the system fails. According to Gene Cooperman, the student’s professor, InfiniBand is “behind some of the world’s largest computers, and yet the number of people who understand the internals of the technology is very small, largely because it is relatively new.” He says that no one has been able to restart an InfiniBand process midstream, and the student’s work might allow scientists to conduct their work more efficiently. The student, junior Greg Kerr, has been selected to present an hour-long talk on his findings at Recon in Montreal, Canada, on the first day of the event. Cooperman feels this is an honor since it will give attendees the rest of the conference to discuss his student’s work.
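The student's InfiniBand-specific mechanism is not detailed here, so the following sketch only illustrates the general checkpoint-and-restart idea it builds on: periodically persisting a computation's state so progress survives a failure. The file name and state layout are assumptions made for the example.

```python
# Generic checkpoint/restart sketch (not the InfiniBand-specific technique):
# a long-running job periodically saves its state so it can resume after a crash.
import os
import pickle

CHECKPOINT = "work.ckpt"  # assumed checkpoint file name

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "total": 0}

def save_state(state):
    """Write the checkpoint atomically so a crash never leaves it corrupted."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
for i in range(state["i"], 1_000_000):
    state["total"] += i          # stand-in for the real computation step
    state["i"] = i + 1
    if i % 10_000 == 0:
        save_state(state)        # periodic checkpoint
save_state(state)
print(state["total"])
```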
<urn:uuid:bb5bb7ab-fb0d-4582-b84f-3536998af3ba>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/06/28/student_fashions_technique_to_capture_infiniband_processes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00275-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96325
243
3.296875
3
NOAA's National Marine Protected Areas Center has created a first-ever online inventory of the nation's marine protected areas (MPAs). This unique, comprehensive inventory catalogs and classifies marine protected areas within U.S. waters, and was developed with extensive input from state and federal MPA programs, as well as from other publicly available data. It provides baseline information that will contribute to the development of the National System of MPAs. "This is a milestone in the development of a national system of marine protected areas," says John H. Dunnigan, NOAA assistant administrator of the National Ocean Service. "Not only will the MPA Inventory be a key resource for nominating eligible sites to the national system, but it will also serve as a valuable tool for MPA managers and stakeholders, enabling them to make more informed decisions about current and future management of our nation's marine resources." The MPA Inventory, posted on http://www.MPA.gov, contains a range of information on each protected area established or managed by federal, state, or territorial agencies or programs. For each site, it includes the following information: site name, region, level of government, level of protection, permanence, constancy, scale of protection, conservation focus, primary conservation focus, fishing restrictions, and area. Both tabular and GIS spatial data can be downloaded, as well as mapping products and analysis reports created using the MPA Inventory data.
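For readers who download the tabular data, a short script along the following lines could summarize sites by level of protection. The file name and column header below are assumptions based on the fields listed above, not the inventory's published schema.

```python
# Hypothetical example of summarizing the downloadable MPA Inventory table.
# "mpa_inventory.csv" and the "Level of Protection" column name are assumed
# from the fields described in the article, not the actual published layout.
import csv
from collections import Counter

counts = Counter()
with open("mpa_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["Level of Protection"]] += 1

for level, n in counts.most_common():
    print(f"{level}: {n} sites")
```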
<urn:uuid:0d844f44-f3eb-4f78-a8f6-c5ab90bd388c>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Online-Inventory-of-Marine-Protected-Areas.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00240-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921004
292
3.0625
3
This guide has been designed to help storage managers with their storage software and hardware decisions. It contains information on managing, implementing and maintaining storage technologies to help IT professionals with their storage software and hardware purchases. This guide to buying storage hardware and software covers hard disks, tape drives, disk storage, and virtual storage. Table of contents: A hard disk stores and provides access to large amounts of data. The data is stored on an electromagnetically charged surface and recorded in concentric circles. The data is organised into tracks across a set of stacked platters. Laptops remotely wiped and tracked at Camden Borough Council: automated alerts let the IT manager know when a user’s hard drive is becoming full. When free space drops below 10% the user can be contacted about an assessment and maintenance. How to manage Virtual Hard Disk data with encapsulation: if you think encapsulating Virtual Hard Disk data is the best way to manage Hyper-V storage, pass-through disks might be an easier option for you. Find out how pass-through disks can back up and restore huge VHD files. If hard disk drive areal density is limited, how much further can a spinning disk go? A hard disk is limited by the number of edges and transitions between states, so we take a look at how much further a spinning disk can go. How to justify the cost of a solid-state drive (SSD): we put solid-state drives up against hard disk drives and work out the best use cases for SSDs. Learn why applications designed to impact revenue are a perfect fit. A tape drive is designed to store computer data on a magnetic tape. This is typically used for backup and archiving. A tape drive works on the same basis as a tape recorder, in that both record data on a length of flexible magnetic material. This data can be read and erased from the tape drive. Data is recorded onto and played back from the tape in two different ways – either through a helical scan, where the drive’s heads touch the tape, or through linear tape technology, where the head never touches the tape. Personal data of 34,000 customers misplaced by Morgan Stanley: Morgan Stanley’s compact disks went missing, with the details of 34,000 customers on them. The password-protected but unencrypted disks disappeared whilst in transit. Bid of £2.6 billion for Hitachi Global Storage Technologies from Western Digital: hard drive vendor Western Digital offered a bid of £2.6 billion to purchase Hitachi Global Storage Technologies. The bid brought an end to the Japanese HDD vendor's previous preparations for an IPO. Is mainframe tape backup outdated nowadays? According to this systems integrator, the UK will start to abandon its outdated 1980s-style backup technologies soon. Why firms are avoiding encryption on backup tapes and databases: companies are ignoring database and tape encryption due to cost and complexity, according to the results of this survey. Zurich receives data breach fine: after Zurich Insurance UK outsourced some of its customer data to Zurich Insurance Company South Africa Ltd, the company had to admit to the loss of 46,000 records during a routine tape transfer. The unencrypted back-up tape was lost in August 2008, and as a result Zurich Insurance Plc was forced to pay a record fine. A technical guide to your tape backup: if you don’t believe tape is dead, here’s a guide to how best to use the technology for backup. Disk storage refers to data that is recorded on a surface layer. 
The data is stored and recorded by electronic, optical and magnetic methods. As it is recorded, the data lies across one or, sometimes, several round rotating platters. Why folder and file encryption isn’t as safe as full disk encryption: how to ensure that a lost corporate laptop doesn’t cause a data breach. Full disk encryption vs. file and folder encryption – which one is safest and easiest to use? European storage budgets remain low: storage budgets continue to shrink, with most of the spend going on disk systems, according to SearchStorage.co.UK’s Purchasing Intentions survey. Head to head: Tape archive vs disk archive – at EMC World the vendor’s message was “tape sucks”; however, several other vendors still claim tape is necessary. Find out which one is best. The cost of disks cut by Compellent Data Progression: Radiology company case study – read how Compellent’s Data Progression software, which moves data down to cheaper tier 3 SATA disks, saved this radiology firm cash to spend elsewhere. Virtual storage refers to memory that is extended beyond main storage. It is extended through to what is called secondary storage and is managed by system software. The way it is managed means programmes still treat this designated storage as if it were main storage. VMware vSphere and HP 3PAR Utility Storage combine to make HP 3PAR Storage Systems: VMware and 3PAR have combined forces to offer server virtualisation and virtualised storage. The tier 1 HP 3PAR Storage System is designed for virtual data centre and cloud computing environments. How data centres have expanded to accommodate a demand for more data storage: due to an explosion in data, data centres have had to expand to cope with more and more data storage. We take a look at how data centres have coped with this rapid expansion. Job overlap: What exactly does a WAN manager do in terms of storage? There are several areas where a storage manager overlaps with a WAN manager, so find out where job responsibilities overlap for data deduplication, disaster recovery (DR), remote site acceleration and optimisation. • Is free storage software worth the risk? • Storage management solutions for managing increasing data demands • Using storage efficiency to achieve end-to-end data reduction • Has backup become too complicated? • Storage performance monitoring in a sprawling, virtual world • Unified storage systems showdown: NetApp FAS vs. EMC VNX • Integrated stacks haven’t yet won over IT shops • Thunderbolt storage devices not seen as an SMB staple
<urn:uuid:677fe61b-be12-42ed-a920-4f10f06c7af3>
CC-MAIN-2017-04
http://www.computerweekly.com/guides/A-guide-to-buying-storage-hardware-and-software
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930148
1,284
2.671875
3
Sad Fact of History: Some people might say that US-VISIT represents a great opportunity to stretch the capabilities of computer technology. But the reason this opportunity has arisen at all is because more than 2,500 people were murdered in the 9/11 terrorist attacks. It is a sad fact of history that most of the great advances in science and technology have been achieved only because governments are prepared to spend huge sums to build new weapons and defense systems. An early successful example of the government harnessing computer technology for national defense is a 1960s-era museum piece called SAGE (Semi Automatic Ground Environment) developed by IBM based on research by the Massachusetts Institute of Technology. It used radar and computer power to allow air defense controllers to track intruding aircraft. It was a precursor of the current civil air traffic control systems used worldwide. It was also nowhere near as complex as the US-VISIT system. Even so, it is entirely possible that, with massive investments of money, time and human resources, the government will actually deploy a US-VISIT system that reliably performs what it was designed to do. But that doesn't mean we will be one iota safer from attack by determined terrorists. If we are very lucky, the nation will be somewhat less blind to terrorist threats than we were before 2001. We will have also paid a heavy price beyond the as-yet-uncounted billions to build the system. The government will also have an unprecedented capability to track the movements of all of us, citizens and foreigners, the innocent and the criminal. There was a distant time in this country when we had a right that wasn't written into the Constitution or the Bill of Rights. That was the right of invisibility. If you obeyed the law, paid your taxes, worked and lived quietly at home, you could expect that the government would pay no attention to your comings and goings. The successful deployment of US-VISIT will mark the final erosion of the invisibility and anonymity that used to be one of the blessings of living in a free society. It's difficult to decide who deserves more blame for this erosion – the computer system or the terrorist fanatics who seek to destroy our society. eWEEK.com Enterprise Applications Center Editor John Pallatto is a veteran journalist in the field of enterprise software and Internet technology. The atomic bomb, nuclear submarines and the nascent U.S. antimissile system are just a few examples of the lengths to which the government is prepared to go in the name of national security.
<urn:uuid:5c7033cc-8fab-4df7-a325-2be4f9f2fefe>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Security/What-Price-Security-USVISIT/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965846
501
2.546875
3
In a newly created proof-of-concept hack, German researchers have been able to show that the mechanisms used for Internet software distribution can be turned into virus vectors without the original code being modified. Felix Grobert, Ahmad-Reza Sadeghi and Marcel Winandy, researchers from Ruhr University Bochum, developed an on-the-fly mechanism that makes it possible to inject code into a download and remain undetected, The Register reported. The hack requires two components: Cyanid, which catches, modifies and filters HTTP downloads, and a binder known as Calcium, which is used to infect binaries. It also depends on the ability to redirect traffic in order to succeed. “Our algorithm deploys virus infection routines and network redirection attacks, without requiring to modify the application itself,” the group wrote in a research paper. “This allows to even infect executables with an embedded signature when the signature is not automatically verified before execution.” Linking To Legitimate Software To Stay Hidden The attack works by using the Calcium binder to link the original application and the malicious code. Once an infected application is launched, the binder starts working and creates its own file for the additional embedded executables, where it reconstructs and launches them without the user noticing. Because the original application is left intact, the malware can be attached to an executable with an embedded signature and still succeed in certain scenarios. The researchers suggest that organizations attempting to mitigate the results of such an attack should tighten the delivery mechanisms they use to protect against traffic hijackers, according to the Register. Current antivirus software could also be modified to identify the presence of binders, and trusted virtualization architectures would be useful as well, since the secure, verifiable boot process they use would help to keep critical applications isolated. As malware and cyberattacks grow increasingly harmful, companies are making larger investments into services that will help improve enterprise security. A recent ABI Research study estimates that the market for data loss prevention solutions will grow to $1.7 billion by the end of this year, Business Wire reports. Part of the increase in cybersecurity services is due to the number of people affected by cyberattacks last year, when more than 800 million records were compromised as a result of data breaches. For organizations looking to increase enterprise security and improve data loss prevention, strong authentication is a reliable way to protect privileged information. This security technique requires users to enter multiple forms of identification before accessing sensitive data, ensuring malicious actors cannot obtain information they are not authorized to have.
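A practical mitigation for this kind of on-the-fly tampering is to verify a download against a digest published over a separate, trusted channel before running it. The sketch below shows that check; the file name and expected digest are placeholders, not values from the research.

```python
# Verify a downloaded installer against a known-good SHA-256 digest before
# running it. The file name and expected digest below are placeholders.
import hashlib
import sys

EXPECTED_SHA256 = "0" * 64  # placeholder: digest published by the vendor

def sha256_of(path: str) -> str:
    """Hash the file in 1 MB chunks so large downloads stay memory-friendly."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of("installer.exe")
    if digest != EXPECTED_SHA256:
        sys.exit("Digest mismatch: the download may have been tampered with.")
    print("Digest OK: the file matches the published hash.")
```

This only helps if the reference digest itself arrives over a channel the attacker cannot rewrite, which is why the researchers point at hardening the delivery mechanism rather than the binary alone.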
<urn:uuid:b8841a53-be6c-4ad0-a03e-151422333010>
CC-MAIN-2017-04
https://www.entrust.com/new-malware-pairs-legitimate-software-remain-undetected/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927799
521
3.0625
3
Power-related issues are a growing predicament on a global scale. The global population and industrial development are growing more rapidly than existing power infrastructure can handle, having a detrimental effect on efficiencies worldwide. The ever-increasing global power issues all stem from an international power grid that is, in a word, archaic. Back in the 19th century electricity was turned from a scientific curiosity into an essential tool for modern life. During that period, names like Nikola Tesla, Thomas Edison and Alexander Graham Bell were leading the way in electrical engineering. And worldwide population continued to grow exponentially, accelerating the use of electricity at a rate no one anticipated. Then, in the 1950s–1970s the first uninterruptible power supply (UPS) and surge protectors were created. But aging infrastructure in tandem with a rise in electricity consumption has resulted in a grid that has not evolved to properly support the population and global infrastructure. Until very recently there was a large gap between the growth of electricity use worldwide and power protection technology. The Global Issue: Population and Industrial Development Exceeding Existing Infrastructure The global statistics are in, and the findings are eye-opening, as Figure 1 outlines. Over the next two decades, demand for electricity is forecasted to grow by 40 percent in the U.S. alone. Increased demand is most dramatic in Asia, averaging 4.7 percent per year until 2030. And while Africa accounts for over one-sixth of the world’s population, the continent generates only 4 percent of global electricity. India, meanwhile, loses 28 percent of the electricity it transmits. In South America, demand for electricity is projected to double over the next few years, outstripping generation capacity and the aging infrastructure, thus causing increasing power disturbances. Figure 1: The global issue: population and industrial development is growing more rapidly than the existing power infrastructure can handle. This exponential growth means increased stress on the grid, which in turn means more strain on individual electronic items, reducing lifespan, lowering reliability and affecting everyday life. Electronics Power Protection Landscape Today’s electronics are pervasive, and the majority of equipment is deployed with insufficient protection, resulting in damage from power grid disturbances, which are surprisingly frequent and destructive. All electronic equipment has two things in common: it needs power to operate, and it is significantly affected by power interruptions. Digital electronics are much more susceptible to glitches, and the evolution of electronics equipment has opened the possibility for more power-related issues. According to the Electric Power Research Institute (EPRI), the consequences of these daily loss-generating disturbances have been called “the most important concern affecting most industrial and commercial customers,” as they cost hundreds of billions of dollars annually to businesses in the United States alone. Original equipment manufacturers (OEMs), enterprise companies, data centers and even consumers, until now, have relied on either UPS or surge protection to shield equipment from grid fluctuations, thinking that these devices adequately protect electronics from damage. UPSs offer protection and effectiveness from a technical standpoint by isolating electronics from the grid and powering them by battery. 
The downside, however, is that UPSs are expensive for most applications and too large to integrate into electronics. Thus, users either choose not to protect their equipment at all or turn to inexpensive surge protection or power strips that only shield electronics from less than one percent of damaging power disturbances. One of the greatest limitations of power strips is their inability to handle high voltage surges, making them practically useless in terms of electronics protection. In addition to risks from an already unstable power grid, digital electronics are microprocessor-based, leaving them susceptible to power fluctuations. Evolution of Electronics Although the grid globally remains unreliable, electronic equipment that defines modern life has become highly sophisticated, using a substantial amount of energy—with each new generation of devices more hungry than the last. In addition to its energy consumption, the majority of this equipment is being deployed with insufficient power protection and suffers from an extreme amount of power grid activity that is costing the industry an estimate of tens of billions of dollars annually in lost data, materials and productivity. Grid Disturbances & Fluctuations Increasing demand for electricity is putting enormous pressure on a grid not equipped to support such heavy usage. In a CNN article regarding the rise in U.S. electricity blackouts, experts on the nation's electricity system point to a frighteningly steep increase in non-disaster-related outages affecting at least 50,000 consumers. Research performed at the University of Minnesota indicates that over the past two decades, blackouts have increased 124 percent, yet blackouts are merely one of the ways in which the power grid affects connectivity. Electronics are affected by an infinite number of uncontrollable variables ranging from voltage surges and spikes to voltage sags, power outages, overvoltages and brownouts. Even a one-second outage can damage equipment and disrupt operations to the point where labor becomes impaired as systems are reset and brought back online. In addition to power outages, even a minor voltage fluctuation or other disruption of the electrical signal can wreak havoc. Research indicates surges are not as severely damaging compared with the frequent and potentially destructive disturbances emitted from power grids. Far greater damage can be the result of voltage sags, brownouts, overvoltage conditions and power outages, which may have grave consequences as they relate to reliability and overall lifespan. Electrical disturbances of all types occur frequently, as Figure 2 highlights, and although we may not see an immediate effect such as a blackout, disturbances on the grid can still have lasting implications on our devices. Figure 2: The inherently chaotic power grid is caustic to connected electronics and affected by an infinite number of uncontrollable variables. The Consortium for Electric Infrastructure to Support a Digital Society commissioned a study in 2009 to obtain a definitive estimate of the direct costs of power disturbances to U.S. businesses. The study sought to quantify the cost of brief outages—for example, outages of one second or a couple of minutes long—unlike previous studies that have confined their analysis to lengthier outages of one hour or longer even though shorter outages are more common and can cause data loss and damage to industrial equipment. 
The study revealed the following: - The average cost of a one-second outage among industrial and digital-economy firms is $1,477, versus an average cost of $2,107 for a three-minute outage and $7,795 for a one-hour outage. - Digital-economy establishments report that 49 percent of the outages they experience last less than three minutes. - Add all that up and the U.S. economy is losing between $104 billion and $164 billion each year to outages. Additionally, a study by EPRI from 2005 suggests that the cost to the North American industry of production stoppages caused by voltage sags now exceeds $250 billion per year. Wide Range of Industries Feel the Impact The grid is relied on by a wide variety of industries ranging from consumer devices including laptops and televisions to sophisticated medical equipment. All of these industries use power to function and are affected differently by an unreliable power source. Consumer electronics, like all electronics, are vulnerable to power-related glitches such as equipment lock ups and resets, service calls for unknown stoppages and modem problems. According to a report by the Consumer Electronics Association (CEA) and Business Monitor International (BMI), the average U.S. household has 24 consumer electronics products, contributing to the growth in the consumer electronics devices market, which is expected to increase from $253.5 billion in 2011 to $322.9 billion by 2015. The latest projected figures from GfK Digital World, produced in partnership with Consumer Electronics Association (CEA), reveal global spending on consumer technology devices will surpass $1 trillion in 2012 for the first time. This is an increase of 5 percent over 2011’s figure of $993 billion. Consumer electronics devices range from TVs and personal devices to laptops, smartphones and audio equipment, and the industry as a whole can be segmented into entertainment, productivity and communications categories. In addition, consumer electronics accounts for 15 percent of global residential electricity consumption. This continued massive growth of digital electronics creates the possibility for additional power-related issues. Data Centers on the Rise Beyond consumer devices, more than ever, companies are moving IT infrastructure to data centers. Emerson, a networking provider who recently commissioned a study on the global data center phenomenon, revealed that there are over 509,000 data centers of varying sizes across the globe. These data centers combined accommodate the 1.2 trillion gigabytes of data created every day. In addition, despite stalled growth during the recession, IDC estimates approximately $22 billion will be spent on new data center development worldwide this year alone. Although downtime may be extremely low at data centers, damaging power disturbances are a very common, costly occurrence. To make matters worse, according to a data center report in 2010, problems with UPS equipment and configuration are the most frequently cited cause of data center outages. As a result, there is a growing movement to discover and implement a solution to add additional protection for these electronic assets without incurring massive cost, size and service requirements. Invaluable Medical Technology The need for reliable power goes beyond consumer electronics and data centers; another example is the global medical technology market. 
The medical device industry is large, intensely competitive and highly innovative, with annual worldwide sales in 2009 exceeding $220 billion according to Zacks Equity Research. A study performed five years ago by the U.S. International Trade Commission discovered that the United States, EU and Japan together account for approximately 90 percent of the global production and consumption of medical devices. The study also discovered that the U.S. medical device industry is the most competitive in the world, having been recognized for its ability to continually design, develop and place medical devices in U.S. and foreign markets. When improving the reliability of technology and services for the medical field, it’s critical for manufacturers and electronic equipment designers to remember the complexity of today’s digitally advanced world, which is largely affected by the power grid. If medical equipment malfunctions, it can have an immediate impact on patients, doctors and nurses. Given the sheer importance and monetary value, it’s necessary to understand the significance of protecting this equipment from power disturbances. The current solutions in place are expensive to acquire, costly to maintain and increasingly difficult and expensive to dispose of when replaced. Owing to these limitations, electronic equipment serving the medical industry is either protected at too great a cost or not protected at all. A Solution: A New Approach in Power Protection Technology New technology developed by Innovolt provides electronics power protection technology designed to guard against damage from 99.5 percent of power interruptions and is accessible to and effective for all electronics, regardless of size. This technology manages the impact of power disturbances and effectively increases the lifespan, reliability and efficiency of electronics equipment. According to Innovolt, companies that have deployed its technology have seen a decrease in service calls on protected equipment. Innovolt has developed an intelligent electronics protection platform that in comparison with traditional surge protection and filtering technologies is a cost-effective, viable and proven long-term option for electronics protection. Similar to UPS systems, the technology provides immunity from grid and line disturbances, yet with a greater success rate, increased functional form-factor and more affordable design. Fortune Global 500 OEMs including Ricoh and Toshiba as well as other companies including Konica Minolta, ECi OMD and Katun have been quick to adopt Innovolt’s technology platform, a move that signifies the critical need for electronics equipment protection. Electronic disturbances can occur at any time, without warning. With risks such as decreased profitability, productivity and customer satisfaction, businesses and consumers cannot afford to risk leaving their electronics unprotected. As we continue investing in new technology, we must understand the severity and implications of exposing our costly investments before it is too late. Innovolt’s ultimate future goal for electronics protection is to improve performance, reliability and longevity of equipment and reduce the number of service calls. About the Author Jeff Spence joined Innovolt as President and COO in 2010 after more than 15 years in executive and corporate development roles growing worldwide companies in the energy, finance, telecommunications and technology sectors across five continents and dozens of countries. 
In addition to his leadership roles with Innovolt, Spence continues to consult to the industry regarding sales, corporate finance, technology, business incubation and international business development. He is an active speaker across multiple industries and disciplines, having appeared at high-profile conferences including Comdex, Networld+Interop, The Homeland Security Summit, and the International Autobody Congress & Exposition (NACE). In addition, Spence has counseled policy groups including the United Nations, the European Union, and a host of other government and business groups on subjects ranging from economic development, entrepreneurialism, sales and marketing to government intervention and monetary policy.
References: The Cost of Power Disturbances to Industrial & Digital Economy Companies, June 29, 2001; CNN Tech: U.S. electricity blackouts skyrocketing, October 15, 2010, http://www.cnn.com/2010/TECH/innovation/08/09/smart.grid/index.html
Photo courtesy of PCgeek86
<urn:uuid:1939fecf-dbff-4b92-8172-8c41e5309414>
CC-MAIN-2017-04
http://www.datacenterjournal.com/a-word-to-the-wise-know-your-power/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00570-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940734
2,731
3.03125
3
Network encryption is a security best practice as it protects the privacy and confidentiality of network traffic as it travels from source to destination. While this can be beneficial, security professionals understand that network encryption can also be used for malicious purposes. Cyber-criminals and hackers can use encrypted channels to hide reconnaissance activities, malware distribution, and command-and-control traffic alongside benign SSL/TLS sessions. By encrypting their malicious actions, hackers are able to circumvent traditional network security tools used for packet filtering, traffic inspection, and advanced threat detection/prevention that can only examine unencrypted network packets. The dilemma is also exacerbated by the fact that advanced persistent threats (APTs) are increasingly using non-standard ports - beyond HTTPS/web on tcp port 443 - to infiltrate organizations and confiscate proprietary data. CISOs must also realize that this threat will only increase as organizations encrypt more and more of their overall network traffic. Download this survey to learn: - How organizations are vulnerable to cyber-attacks through encrypted channels; - What potential threats lie within encrypted traffic; - Challenges associated with the inspection of encrypted network traffic.
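As a small illustration of spotting encrypted sessions outside tcp/443, the heuristic below checks whether a connection's opening bytes look like a TLS handshake record. It is a simplified sketch of one signal an inspection tool might use, not a substitute for full SSL/TLS inspection.

```python
# Crude heuristic: does this payload start like a TLS handshake record?
# A TLS record begins with content type 0x16 (handshake) followed by a
# 0x03,0x0X protocol version. Real traffic inspection needs far more context.

def looks_like_tls_client_hello(first_bytes: bytes) -> bool:
    return (
        len(first_bytes) >= 3
        and first_bytes[0] == 0x16                     # handshake record type
        and first_bytes[1] == 0x03                     # SSL3/TLS major version
        and first_bytes[2] in (0x00, 0x01, 0x02, 0x03, 0x04)
    )

# Example: flag a flow on a non-standard port whose first bytes look like TLS.
flow = {"dst_port": 8443, "payload": bytes.fromhex("160301")}
if flow["dst_port"] != 443 and looks_like_tls_client_hello(flow["payload"]):
    print("Possible TLS session on a non-standard port:", flow["dst_port"])
```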
<urn:uuid:afb80ea5-1c49-41f0-92d2-2c3f22c12266>
CC-MAIN-2017-04
http://www.bankinfosecurity.com/whitepapers/network-encryption-its-impact-on-enterprise-security-w-1716
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924443
226
2.890625
3
In a new study published in the journal Neuron, scientists from The Scripps Research Institute (TSRI) are the first to sequence the complete genomes of individual neurons and to produce live mice carrying neuronal genomes in all of their cells. Use of the technique revealed surprising insights into these cells' genomes--including the findings that each neuron contained an average of more than 100 mutations and that these neurons accumulated more mutations in genes they used frequently. "Neuronal genomes have remained a mystery for a long time," said TSRI Associate Professor Kristin Baldwin, senior author of the new study and member of the Dorris Neuroscience Center at TSRI. "The findings in this study, and the extensive validation of genome sequencing-based mutation discovery that this method permits, open the door to additional studies of brain mutations in aging and disease, which may help us understand or treat cognitive decline in aging, neurodegeneration and neurodevelopmental diseases such as autism." Our individual genomes are inherited from our parents and make us unique in our behavior, appearance and susceptibility to disease. While new mutations in genomes of individual cells are known to cause cancer, only recently have researchers begun to appreciate how different the genomes within normal cells of the body may be. Several lines of research have suggested cells in the brain may be particularly unique--and prone to accumulating new mutations of various sorts, including "jumping" genes called transposons. Many of these mutations may not be harmful--but collecting too many mutations, or having them build up in genes needed for a cell's function, might lead to loss of neurons or incorrect brain wiring, which are suspected causes of diseases such as Alzheimer's and autism. "We need to know more about mutations in the brain and how they might impact cell function," said TSRI Research Associate Jennifer Hazen, co-first author of the new study with Gregory Faust of the University of Virginia School of Medicine. However, studying mutations in single neurons has presented a challenge: A single cell doesn't contain enough genetic material for analysis, yet these mutations only exist in single cells. Unfortunately, current single-cell analysis approaches introduce new DNA errors and also destroy the only copy of the cell's DNA in the process, making it impossible to go back and check to see if the mutations were really there. Scientists can't generate copies of neurons because, unlike other cell types, neurons don't divide in cell culture. "There has been no easy way to get more copies of a neuron," explained TSRI Research Assistant William Ferguson, a co-author of the paper. The new study helps solve this problem. The team took a mouse neuron's nucleus, which houses its DNA, and inserted it into an egg cell, which then divided and copied the mutations. The cloned cells then developed into thousands, or even millions, of stem cells with enough DNA for genomic analysis. The researchers repeated the process to create several lines of cloned neurons. "We worked to get the egg itself to copy the genomes of brain cells using cloning," said Baldwin. "We're tricking the neuron into thinking it's not a neuron," added Hazen. "This gives us a renewable source of copies of these genomes." To confirm that the cloned cells were indeed neurons, rather than other brain cells, the researchers tagged the cells with bright fluorescent markers. 
"When you see the marker, it's a sigh of relief--it worked," said TSRI Research Assistant Alberto Rios Rodriguez, a co-author of the study. Genomic analysis of the cloned cells provided further evidence that the neuron's unique mutations were indeed being passed along. For the first time, the team was even able to make cloned stem cell lines from neurons of mice older than eight weeks. This allowed the researchers to see mutations that build up over time. Even more strikingly, several of these stem cell lines could be grown into fertile adult mice which were clones of a single mouse neuron and carried the neuronal mutations in every cell on top of the rest of the DNA from the original mouse. Sergey Kupriyanov, director of the Mouse Genetics Core at TSRI and co-author of the study, called the project "technically challenging." The researchers discovered that not every mutated neuron could be developed into a stem cell line, although more research is needed to explain why. The stem cell lines that did develop, however, provided some surprising insights into the brain. The researchers found that neurons accumulate more mutations in the genes they use, which contrasts with other cell types that seem to protect their commonly used genes. "Even more surprisingly," said Baldwin, "we found that every neuron we looked at was unique--carrying more than 100 DNA changes or mutations that were not present in other cells." The researchers aren't sure why this diversity is so common--there's no evidence that neurons rearrange their DNA like blood cells do--but Baldwin said that if this phenomenon holds true in humans, our brains could hold 100 billion unique genomes. Next, the researchers plan to use their technique to study neuronal genomes of very old mice and those with neurologic diseases. They hope this work will lead to new insights and therapeutic strategies for treating brain aging and neurologic diseases caused by neuronal mutations. Schiapparelli L.M., Dorris Neuroscience Center | McClatchy D.B., Dorris Neuroscience Center | Liu H.-H., Dorris Neuroscience Center | Liu H.-H., Scripps Research Institute | And 2 more authors. Journal of Proteome Research | Year: 2014 Mass spectrometric strategies to identify protein subpopulations involved in specific biological functions rely on covalently tagging biotin to proteins using various chemical modification methods. The biotin tag is primarily used for enrichment of the targeted subpopulation for subsequent mass spectrometry (MS) analysis. A limitation of these strategies is that MS analysis does not easily discriminate unlabeled contaminants from the labeled protein subpopulation under study. To solve this problem, we developed a flexible method that only relies on direct MS detection of biotin-tagged proteins called "Direct Detection of Biotin-containing Tags" (DiDBiT). Compared with conventional targeted proteomic strategies, DiDBiT improves direct detection of biotinylated proteins ∼200 fold. We show that DiDBiT is applicable to several protein labeling protocols in cell culture and in vivo using cell permeable NHS-biotin and incorporation of the noncanonical amino acid, azidohomoalanine (AHA), into newly synthesized proteins, followed by click chemistry tagging with biotin. We demonstrate that DiDBiT improves the direct detection of biotin-tagged newly synthesized peptides more than 20-fold compared to conventional methods. 
With the increased sensitivity afforded by DiDBiT, we demonstrate the MS detection of newly synthesized proteins labeled in vivo in the rodent nervous system with unprecedented temporal resolution as short as 3 h. © 2014 American Chemical Society.
<urn:uuid:91ef8c90-4a6d-4979-a7b5-6ca8e2442a9d>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/dorris-neuroscience-center-2736965/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946612
1,417
3.625
4
Pushdo is a dropper. Its primary functions are to drop other malware (such as Zeus or SpyEye) onto infected computers, or to deliver spam campaigns through a Cutwail module. In both cases, ‘success’ can only be achieved via active communication with its command-and-control (C&C) servers. Such communication is a botnet’s weakest link. It can be detected. The C&C servers can be located and either blocked locally by security software, or taken down internationally by security firms and law enforcement agencies. If the C&C servers can be taken out, the botnet is neutralized. But now Damballa, Dell SecureWorks and Georgia Tech have discovered a new Pushdo variant that employs the latest evasion technique to protect its C&C servers. It uses domain fluxing via a technique known as a domain generation algorithm (DGA) to massively stack the odds against discovery. “It becomes a signal versus noise issue,” Adrian Culley, technical consultant at Damballa, told Infosecurity. Although not unique (Zeus peer-to-peer uses a similar approach), this is only the third botnet that Damballa has found using the DGA technique, but Culley expects it to become increasingly popular. “By dynamically generating a list of domain names based on an algorithm and only making one live at a time, blocking on ‘seen’ C&C domain names becomes nearly impossible,” Damballa’s Jeremy Demar explains in a blog post about the new Pushdo variant. The concept is relatively simple. The Pushdo bot owner has pre-registered thousands of domains – but only one is ‘live’ at any time. The bot malware contains an algorithm that randomly generates one of these domains and attempts to communicate with it. If successful, the bot downloads its latest instructions. If unsuccessful (because the URL is blocked by software or simply isn’t live at the time), the bot moves on to the next dynamically generated URL and tries again. The Pushdo algorithm generates 1380 domains per day. Only one of these will be live, and the odds that rule-based detection methods will catch the repeated communication are minimal. “Picking up on this level of communication is searching for the proverbial correct digital needle in an electronic haystack itself made of digital needles,” explained Culley. Discovery of such malware becomes counter-intuitive – it relies on detecting failures rather than successes. “A key detection attribute for advanced malware that employs DGAs to find live C&C servers rests in its failure,” explained Demar; “in particular, its daily production of unsuccessful DNS resolutions for nonexistent domain names (NXDomains).” It was finding such a cluster on March 2, 2013, that led to the discovery of the new Pushdo variant. Working with Dell SecureWorks and Georgia Tech, Damballa proceeded to analyze the new variant and sink-hole a few of the domains. The analysis led to discovery of the algorithm itself – so in theory the good guys know the location of all the possible C&C servers used by this particular variant/DGA. The sink-holing, by monitoring attempts to connect, suggests that more than 1 million computers are infected with the new variant. “India and Iran appear to be the most infected population,” noted Demar. 
“In the US, an average of approximately 23,000 unique hosts (from residential and business networks) were trying to connect to the Pushdo DGA domain names,” he said, adding, “Several US government, government contractors and military networks appear to be infected.” Damballa will make its sink-hole data available “for remediation purposes to parties able to demonstrate proof of remediation efforts.”
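Pushdo's actual algorithm is not reproduced here; instead, the sketch below illustrates the detection angle Demar describes, flagging hosts that produce unusual bursts of failed (NXDomain) lookups. The log format and threshold are assumptions for the example, not Damballa's method.

```python
# Toy NXDomain-cluster detector: hosts that fail many DNS lookups in a short
# window are flagged as possible DGA-infected machines. The log format and
# threshold below are illustrative assumptions only.
from collections import Counter

NXDOMAIN_THRESHOLD = 50  # assumed: failed lookups per host per hour

# (client_ip, queried_domain, rcode) tuples; "NXDOMAIN" marks a failed lookup
dns_log = [
    ("10.0.0.5", "qx7f3kzp.com", "NXDOMAIN"),
    ("10.0.0.5", "m2n8wqr1.net", "NXDOMAIN"),
    ("10.0.0.9", "example.org", "NOERROR"),
]

failures = Counter(ip for ip, _domain, rcode in dns_log if rcode == "NXDOMAIN")
suspects = [ip for ip, n in failures.items() if n >= NXDOMAIN_THRESHOLD]
print("Possible DGA infections:", suspects or "none in this sample")
```

The signal-versus-noise problem Culley describes shows up directly here: the threshold has to be tuned so that ordinary typos and misconfigured clients do not drown out the one host walking through hundreds of algorithmically generated names.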
<urn:uuid:c8a739a3-1eb7-4e4e-bbf2-194563da925c>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/enhanced-and-advanced-pushdo-botnet-is-back/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00322-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935384
798
2.640625
3
Anybody who will be coding publicly accessible SQL-based web applications needs to be aware of the threat from SQL Injection attacks. SQL injection attacks are attempts made by a malicious user to gain access to the SQL back-end database and can occur from, for example, a PHP front-end. One way for the attack to work is to input unexpected data. If it is formatted in a way that would be translated into a valid command, the attacker can interact with your database in an unintended way. This can either allow conditions to be met when they shouldn't be (such as a successful login) or allow the database to be modified (such as dropping a table). The Wikipedia entry for SQL Injection is quite good and recommended reading to understand the variety of attacks that could compromise your data. Shadowserver has a well-written article explaining how SQL Injection, Redirects, and Drive-By Downloads work with a graphic to explain it. You can use a tool called Pixy to check over your PHP code for SQL Injection and cross-site scripting vulnerabilities by downloading the tool or pasting your code into an online version.
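The article talks about PHP front-ends, but the underlying contrast is the same in any language. The Python/sqlite3 sketch below shows the vulnerable string-building pattern next to the parameterized query that defeats it; the table and sample data are invented for illustration.

```python
# SQL injection in miniature: string concatenation vs. parameterized queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # classic malicious input

# VULNERABLE: the input is pasted straight into the SQL text, so the quote
# breaks out of the string literal and the OR clause matches every row.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print("unsafe query returns:", conn.execute(query).fetchall())

# SAFE: a placeholder keeps the input as data, never as SQL syntax.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returns:", rows)  # no rows match
```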
<urn:uuid:21010b62-d1fa-4d2a-81d1-3b414d0a0295>
CC-MAIN-2017-04
https://www.404techsupport.com/2008/10/sql-injection-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00166-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914065
233
2.78125
3
These questions are derived from the Self Test Software Practice Test for CompTIA’s RFID+ exam. Objective: Design Selection SubObjective: Summarize how hardware selection affects performance Single Answer, Multiple Choice You are an RFID specialist in a DVD library. The owner of the DVD library wants to deploy an RFID solution to prevent the theft of DVDs from the library. You plan to put tags on the DVDs to detect theft. The tag on a DVD will be turned off only when the DVD is purchased by a customer. Which type of RFID tags should you use in this scenario? - SAW tags - active tags - EAS tags - passive tags C. EAS tags You should use electronic article surveillance (EAS) tags in this scenario. The tag on a DVD should be turned off when the DVD is purchased by a customer. The tag will remain turned on until it is purchased. EAS tags are simple electronic tags that can be turned on or off. EAS tags have a storage capacity of 1 bit and can be used to store two values: 0 and 1. When a customer purchases a DVD, the tag is turned off at the payment counter. When a person steals a DVD and tries to pass through the exit area carrying a DVD with a tag that is not turned off, an alarm is triggered. This prevents theft of DVDs from the library. You should not use surface acoustic wave (SAW) tags in this scenario. In this scenario, you need a type of tag that will enable you to detect the theft of a DVD. You need to use a tag that can be turned on or off depending upon whether an item has been purchased or not. EAS tags, not SAW tags, will serve the purpose. SAW tags use low-power RF waves in the 2.45 GHz frequency range. SAW devices are widely used in cell phones, color televisions, and so on. You should not use active tags in this scenario. Active tags have a large storage capacity and enhanced information processing power. Therefore, active tags can be used to track high-value goods that need to be monitored over long ranges. Active tags are also used in RFID solutions that require large storage and advanced functionalities, such as installing temperature sensors on aircraft parts to monitor the carriage of perishable goods. In this scenario, you need a tag type that can be turned on or off depending on whether an item has been purchased. You should not use passive tags in this scenario because you need a tag type that can be turned on or off depending on whether an item has been purchased. EAS tags will serve the purpose. RFID Essentials, Chapter 3: Tags, Information Storage and Processing Capacity p. 67-71. RFID Journal, Boeing, FedEx Test Active UHF Tags, http://www.rfidjournal.com/article/articleview/2351/1/1/ RFID Journal, Frequently Asked Questions, RFID Tags, http://www.rfidjournal.com/faq/18/68
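To make the one-bit EAS behaviour described above concrete, here is a minimal model of a checkout and an exit gate; the class and function names are invented purely for illustration and do not correspond to any real RFID API.

```python
# Minimal model of a 1-bit EAS tag: the checkout deactivates the bit, and the
# exit gate alarms on any tag that is still set. Names are purely illustrative.

class EASTag:
    def __init__(self):
        self.armed = True          # the single stored bit: 1 = not yet sold

class ExitGate:
    def scan(self, tag: EASTag) -> bool:
        return tag.armed           # True means the alarm should sound

def checkout(tag: EASTag) -> None:
    tag.armed = False              # the tag is turned off when the DVD is sold

purchased, stolen = EASTag(), EASTag()
checkout(purchased)
gate = ExitGate()
print("purchased DVD alarms:", gate.scan(purchased))  # False
print("unsold DVD alarms:", gate.scan(stolen))        # True
```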
<urn:uuid:b67624ef-b00f-4e0e-925a-0f43461d3368>
CC-MAIN-2017-04
http://certmag.com/design-selection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz
en
0.8589
642
2.59375
3
The Education Department is looking for advice on how the private sector and nonprofits might use government-gathered data to make higher education cheaper, more accessible and a better value for the cost. The department wants feedback from developers on how they could use APIs, also known as application programming interfaces, to build websites, mobile applications and other products that help the public learn more about higher education and financial aid. An API is a system for streaming information directly from one digital place, such as an Education Department database, to another place, such as a website that helps students compare universities based on affordability, reputation and other criteria. “Students and families need reliable, timely information in an open and accessible format to identify, afford and complete a degree or program that is affordable and will help them reach their educational and career goals,” the request for information (RFI) said. The RFI was spawned, in part, by a project President Obama announced in August 2013 to make college more affordable, especially for low-income or first-generation students. That project includes the government independently assessing each college’s value and tying financial aid funding to college performance. The RFI also ties in with an Obama administration mandate to make significantly more government data available in machine-readable formats so it can be scooped up by outside developers. An earlier mandate, the Digital Government Strategy released in 2012, required every agency to offer at least two APIs. The Internal Revenue Service launched a tool in February that allows parents and students to automatically transfer much of their tax information from IRS servers to the Free Application for Federal Student Aid (FAFSA).
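To make the API idea concrete, here is a minimal Python sketch of how a third-party site might consume such a service. The endpoint, query parameters, and response fields are invented for illustration; they do not correspond to an actual Education Department API.

```python
import requests

# Illustrative only: this endpoint and its parameters are invented.
BASE_URL = "https://api.example.gov/colleges"

def compare_by_cost(state, max_results=5):
    # A front-end site could call an API like this and render the results
    # for students comparing schools on affordability.
    response = requests.get(
        BASE_URL,
        params={"state": state, "sort": "net_price", "limit": max_results},
        timeout=10,
    )
    response.raise_for_status()
    for school in response.json().get("results", []):
        print(school.get("name"), school.get("net_price"))

# compare_by_cost("VA")
```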
<urn:uuid:c0118a05-fa94-4b99-8793-5633b14e52f9>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2014/04/how-government-data-could-make-college-cheaper/82639/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954729
320
2.703125
3
From New York to California, state and local governments are taking steps to educate citizens on the plethora of dangers in the online world. This is the third annual National Cyber Security Month, which was started by the National Cyber Security Alliance (NCSA) with the mission to "stay safe online." Many dangers await online, from children being exposed to inappropriate content or being solicited over instant messaging, to identity thieves using rootkits, worms or malware. People must be extremely careful when entering personal information on even familiar Web sites, because they could be phishing sites.
But the question remains: is one month enough to educate the public? Some have questioned the government's actions, calling this nothing more than a reason to spend more tax money. But with any cause, it is only through consistency and persistence that anything can be accomplished. And although federal and state governments are involved, much of the effort to educate people is coming from grassroots endeavors. Take, for example, The Guardian Angels, working in the schools of New York State to educate both parents and children about Internet safety. The group is partnering with the New York State Office of Cyber Security & Critical Infrastructure to create a "Strike Force on Cyber Safety." Change needs to be a collaborative effort, not just a governmental one.
But the government is doing its part. Different plans have been enacted, steps taken, departments created. News coverage of the various initiatives has been wide, bringing to light the diversity of efforts. Many attorneys general have released consumer alerts about phishing and other Internet scams, such as Arkansas Attorney General Mike Beebe and Illinois Attorney General Lisa Madigan. New York Governor George Pataki signed a proclamation earlier this month recognizing October as Cyber Security Awareness Month. In Illinois, Governor Rod R. Blagojevich created an Internet Crimes Unit dedicated solely to combating online crime such as identity theft. Many different organizations and groups, including NASCIO and the Cyber Security Industry Alliance, have spoken up in favor of promoting cyber security. Forty-two attorneys general signed a declaration in support of the goals and ideas being promoted during Cyber Security Awareness Month.
Colleen Pedroza, state information security officer in the California State Office of Technology Review, Oversight, and Security, explained that her office has released Internet safety video clips, as well as pamphlets and a newsletter on Internet safety. Much of this information will be passed on to various California counties. "We are excited about this month being National Cyber Security Month," Pedroza said. California also held a Cyber Security Summit which focused on keeping children safe online. Governor Schwarzenegger, in his address at the summit, called for the building of "stronger partnerships between governments and between the private and public sectors, between law enforcement and everyone, to fight cyber crime."
Online safety is more than just watching who children are chatting with, or keeping Social Security Numbers close to the vest. It is about becoming aware of online actions, trends and habits. It is about common sense, and it is a collaborative effort. Taking a month to increase awareness will only be successful if people remember all the tips and guidelines and, more importantly, practice them.
<urn:uuid:dab61a66-ddcb-47fb-94b2-66b6d8432b86>
CC-MAIN-2017-04
http://www.govtech.com/security/A-Different-Type-of-October-Scare.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955266
636
2.796875
3
As part of my project to learn more about coffee I’ve assembled a simple bulleted list of what I consider to be the basics. Now I never have to wonder about the differences between dark roast vs. light roast (dark is sweeter with less caffeine) or the difference between a latte and a cappuccino (espresso-to-milk balance). This should take less than three minutes to finish, and when you’re done you’ll be a Level 1 coffee geek, which means you’ll know more than the average coffee snob, but without the attitude. Coffee started in Africa (Ethiopia) in the 9th century and spread to the Muslim-controlled regions, e.g. Egypt, Yemen. From there it came to Italy first through Mediterranean trade routes, and from Italy it made it to Europe and the rest of the world. Coffee plants are fairly large evergreens–the size of small trees. The coffee bean is actually the seed from the berries of a coffee plant (see below). Think of a purple grape with a seed in the middle. The berries themselves usually have two seeds (coffee beans) in each of them, but sometimes they only have one. These are called peaberries. Coffee berries take seven to nine months to ripen, and move from green (unripe) to yellow, and then red (image above). They turn black when they are dried. There are two main types of coffee plant (and therefore coffee)–Coffea Canephora and Coffea Arabica. Arabica Coffee (which comes from the Coffea Arabica plant) is what “fine” coffee is usually made from. When you hear about “Colombian” coffee, it’s Arabica - Coffea Canephora creates what’s called Robusta Coffee, which is more bitter and less flavorful than Arabica Coffee Robusta coffee is much cheaper to produce, and as a result is often used like something of a filler for industrial/commercial brands Robusta coffee is more resistant to disease Robusta coffee has around 40-50% more caffeine than Arabica High-quality Robusta is often used in espresso blends Coffee used to be produced under the shade of large trees. This is the “natural” way of growing coffee and is better for the environment because it supports more wildlife and a more diverse ecosystem. This method, however, has been replaced by farming in direct sunlight to increase yield (the berries grow faster). “Organic” and “natural” brands tend to market the fact that their offerings are grown the old, “green” way, i.e. in the shade of larger trees. This is often referred to as “shade-grown”. Brazil is the world’s leading producer of coffee, not Colombia. Colombia is third, behind Vietnam. Vietnam mostly produces Robusta. Fair Trade coffee (also called “free” trade) is coffee where the company pre-negotiates a “fair” price for the coffee with farmers pre-harvest. It’s designed to give more control over the farming and selling of the product to the farmers themselves, and to prevent them from being taken advantage of by big business. The basic steps for going from plant to cup are: Picking the berries (usually by hand) Sort by ripeness and color Remove the flesh of the berry (think grapes) Ferment the seeds (called coffee beans) to remove the layer of plant mucous (mucilage) on them Dry them. This used to be done by laying them on concrete in the sun and raking them, but now it’s mostly done by hot-air blowers Roasting dries the bean out and and makes it much larger. Roasting starts when the core of the bean reaches around 200C. Roasting causes caramelization as the starches in the bean become sugars, which gives the dark brown color. 
Caffeine is lost during roasting, but an essential oil is created, called caffeol, that is largely responsible for a coffee’s flavor The longer they roast, the darker coffee beans get. Coffees are called light, medium light, medium, medium dark, dark, or very dark based on how dark they look to the human eye, although machines are used to get really precise measurements. The darker the roast, the smoother and sweeter the coffee tends to be because the beans are more caramelized (sugary) and less fibrous and “planty”. It’s helpful to imagine two types of oils within coffee. The first are the natural oils that are part of the plant (and therefore the beans). These give the coffee their distinct flavors based on the type of plant, the soil, and the conditions it was grown in. These oils disappear as you roast more. The second type of oils are from caramelization, and they produce the “roasted” taste. They appear more as you approach darker and darker roasts. As a general rule, the more you roast a bean the more the original flavor characteristics of the bean are cooked out, and the more the “roasting flavor” starts to appear. This can be seen in roasting by two visual indicators: 1) the color of the bean turns from light to dark, and 2) as the beans start to move into the darker roasts you will start to see oiliness appear on the surface of the bean, making it shiny. French Roast is an extreme roast, where you get extreme flavor but not from the coffee itself, but rather from the roasting process. Lighter roasts have more caffeine and more bitterness due to the presence of not-yet-destroyed oils that don’t exist in darker roasts. Since heavy roasting eliminates whatever natural flavors that used to exist within the coffee, and adds its own “roasted flavor”, coffee connoisseurs generally prefer lighter roasts so that the taste of the coffee bean itself can be appreciated rather than the taste of roasting (which can be very similar despite the coffee used). Decaffeination is done with coffee beans are still green, and is accomplished via either soaking in hot water or by steaming and then using a solvent that dissolves the oils within the beans that have caffeine in them. Just like pepper, coffee is best when it’s recently ground. Coffee enthusiasts buy whole beans and grind them as close as possible to preparation time. A Barista is someone who works at a specialty coffee shop and serves espresso-based beverages. The word is Italian, and as one might guess from the name it means someone who works behind a bar. In the past it included not just those who served coffee, but also alcohol-based drinks. While Starbucks employees are called baristas, the term among coffee aficionados and most Europeans is applied only to those who have attained a high level of skill with coffee blends, espresso, quality, coffee varieties, roast degree, espresso equipment and maintenance, latte art, etc. So some Starbucks employees undoubtedly know their stuff, but just because one has the title of “barista” in America doesn’t mean they do. There are basically two types of coffee grinders: burr and blade. Burr grinders are superior and more costly because they produce more uniformly shaped grinds, and you can control the coarseness (size) of the grind. Blade grinders are cheaper and produce randomly shaped fragments that tend to lead to poor flavor. Of the burr grinders there are also two types: wheel and conical. Conical are the best of those two, and thus are the most expensive. 
That’s what coffee is, so now let’s talk about how to get it ready for consumption. There is great similarity between the concept of roasting coffee beans and cooking fine steaks. Those who truly know about and appreciate fine steaks generally shun the idea of cooking them beyond medium rare, with most preferring rare. The reason is that as you cook the steak more, just as with coffee beans, you remove the qualities that made it unique and of high quality in the first place. In short, if you’re going to cook/roast something to that extreme there’s no point in starting with something expensive since it’s all going to end up tasting the same anyway. The coarseness of a coffee grind is a factor in preparation and flavor because it determines how much water gets exposed to coffee. Basically, if it’s a coarse grind then you might not get all of the coffee’s essence out of a given grind, while water will thoroughly penetrate and extract a fine ground. Automatic coffee makers use a fairly fine ground, but not as fine as an espresso machine. French Presses use a more course ground. Brewing coffee is broken down into a few types, but they all involve exposure of the beans to water at high temperature in order to extract the flavor of the bean. Here are the main two methods: “Steeping” is the most common way to brew. With this method you simply expose coffee grounds to hot water, let them mingle for a bit, and then let the resulting liquid out through a filter. French Presses are the most common tools used for steeping. Standard coffee makers use this system as well, but the coffee interacts with the water for less time, resulting in a weaker flavor than a French Press. Also, coffee makers often can use paper or metal screens as the filter, and french presses usually use metal filters. Metal filters are considered superior to those made of paper because they don’t change the flavor. Steeping is usually done with medium to coarse grounds. Also, with a French Press you control the steep time, which affects flavor. Espresso (there’s no “x” in it) is also created using hot water mixing with coffee, but the grounds are much finer for espresso and the water is introduced to the coffee using pressure. Both the ground fineness and the pressure result in more water-to-coffee contact, which means more of the coffee’s contents are extracted (and at a higher rate), resulting in more caffeine and a stronger flavor. The pressure used also creates a characteristic foam on top. Espressos are the base of many popular coffee beverages, such as lattes, cappuccino, macchiato, and mochas. My favorite way to make coffee: (based on world-class Barista technique - Get a fresh batch of your favorite coffee in its whole bean form, i.e. not ground yet - Grind it yourself using a high-quality conical burr grinder - Combine 70g of grounds of your coffee per liter of filtered water into a quality French Press - After combining the water and coffee, use a chopstick to briefly whisk them together. Nothing extreme, a few good swipes. Just make sure full uniform contact is made. - Brew for four (4) minutes with no lid on the press. You’ll notice a “bloom” forming. - At the four minute mark, skim the bloom off with a spoon, trying to get as much as possible, but not stressing it too much - Put the lid on and pour. Another top method, which is becoming my favorite, is to use an Aeropress. The Aeropress handles things a bit differently than the French Press. 
It’s idea is to: - Use cooler water (this produces sweeter / less bitter coffee) - It exposes the coffee to the water for less time - It uses pressure to speed up the extraction So the idea is that you expose the coffee to water for less time, but you use cooler water and pressure to get the perfect extraction. The result—supposedly, and I’ll vouch for it here—is that it produces the smoothest and richest cup of coffee that you’ll ever taste. My favorite methods are these, in order: - French press - Pour over Go to any coffee shop and you’ll notice an assortment of coffee beverages. Latte-frap-capo-chino-whatever. Here’s what they really are: Most of the popular coffee beverages you see at a place like Starbucks are modified espressos. The major ones include: An “Americano” is a watered-down espresso, a name supposedly started in WWII. A Cappuccino is a Latte with less steamed milk that a latte, usually served in a porcelain cup. Attaining the perfect balance between the espresso, the milk, and the foam is an art form. A Latte is 1/3 espresso and 2/3 steamed milk A Cafe Mocha is a Latte with chocolate added, usually via syrup A Macchiato is another espresso beverage with two main variations. “Macchiato” means “stained”, and a espresso macchiato is an espresso “stained” with a little bit of steamed milk. A latte macchiato is a steamed milk with a “stain” of espresso. This primer is a work in progress, so if you have any corrections or ideas for what should be included, please let me know. :: [ Coffee | wikipedia.org ] [ Coffee Guide | coffeeguide.com ] [ List of Coffee Beverages | wikipedia.org ] [ French Presses | wikipedia.org ] [ Coffee Recipes | cooksrecipes.com ] [ Coffee Geeks | coffeegeeks.com ]
<urn:uuid:e3774fed-c621-4a3b-a986-1735f0de180e>
CC-MAIN-2017-04
https://danielmiessler.com/study/coffee/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00552-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948261
2,917
2.765625
3
“Attacks which attempt to exploit vulnerabilities of routers are extremely lucrative for attackers,” says Tim Berghoff, G DATA Security Evangelist. “If attackers succeed in exploiting security holes, they are capable of performing manipulations such as changing the DNS settings. This would put criminals in a position to intercept personal data such as credit card details or login data for online platforms and services. Likewise, there is the risk of premium telephone numbers being dialled without the owner’s knowledge or consent, resulting in high costs for the line owner.” Experts estimate that the newly discovered attack is just the tip of the iceberg – more attacks on routers and IoT devices can be expected in the future. G DATA security expert Tim Berghoff explains the background to the attack in the G DATA SecurityBlog: https://blog.gdatasoftware.com/2016/11/why-hacking-routers-is-worthwhile
<urn:uuid:82afe76a-86c9-40e8-86e0-658938072d17>
CC-MAIN-2017-04
https://www.gdatasoftware.com/news/2016/12/29360-attacks-on-vulnerabilities-in-routers-are-extremely-lucrative?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=gdata-news
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00368-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939746
196
2.515625
3
Temperature monitoring more important than ever in healthcare Monday, Mar 18th 2013 As more condition-sensitive medication and tools - supplies which can break down when stored under non-ideal environmental conditions - are utilized in healthcare, the need for temperature monitoring becomes even more paramount. According to an earlier report from Pharmaceutical Commerce, approximately 70 percent of all global pharmaceutical products will need to be stored under specific cool conditions by 2014. This trend is primarily being driven by the increased use of biological materials to make pharmaceutical products and as goods and services are shipped farther away from their point of origin. However, the research predicted that soon just about all medicine and healthcare instruments will be stored in locations where temperature monitors are present. "It's important to note that not all biologics require cold-chain handling, nor are all small-molecule products free of that requirement," said Nick Basta, editor in chief of Pharmaceutical Commerce and a co-author of the report. "But when you look at how national and international regulations are evolving, you see that even room-temperature products will soon require additional monitoring steps that add complexity to the transportation process." How healthcare providers can best use temperature monitoring As this trend becomes more prevalent, pharmaceutical companies and others who store medicine need to be more proactive in their storage approaches. In a recent article for Healthcare Packaging, Justin Bates, director of healthcare strategy for temperature-sensitive products at UPS, outlined a few key steps organizations can take to better ensure the safety and security of their supplies. One of the first steps stressed by Bates is that supplies often will have to be outside of specific controlled environments at certain moments. After all, pills and equipment will likely need to be shipped in from elsewhere. To combat the threat that these gaps may pose, organizations may want to consider installing a temperature sensor in a variety of locations. For example, a hospital that receives many medicine shipments in a given day may want to put a temperature monitor in the loading dock. In addition, organizations should implement a quality disaster recovery plan and make sure that all of their decisions are supported by data. Although a quality temperature monitoring system should ensure that a worst case scenario never comes to fruition, healthcare providers should still take the necessary steps to make sure that a contingency plan is in place just in case. When crafting a doomsday scenario - or any other consideration - involving sensitive and critical supplies, actionable insights should be leveraged so that the organization can know for certain it is taking the right course of action.
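As a rough sketch of what a threshold check behind such a monitor might look like, the Python snippet below compares readings from a hypothetical loading-dock sensor against a storage band and records every reading so later decisions are backed by data. The 2–8 °C range and the location names are illustrative assumptions, not values taken from the article.

```python
# Illustrative threshold check for a hypothetical loading-dock sensor.
# The 2-8 C band is used here only as an example storage range.

SAFE_RANGE_C = (2.0, 8.0)

def check_reading(location, temp_c, log):
    low, high = SAFE_RANGE_C
    in_range = low <= temp_c <= high
    # Keeping every reading supports the "decisions backed by data" point:
    # excursions can be reviewed later as part of a contingency plan.
    log.append((location, temp_c, in_range))
    if not in_range:
        print(f"ALERT: {location} reads {temp_c:.1f} C, outside {low}-{high} C")
    return in_range

history = []
check_reading("loading dock", 4.5, history)
check_reading("loading dock", 11.2, history)   # triggers an alert
```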
<urn:uuid:794d3f15-b08c-4c22-ba70-0f8c5b0e6154>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/temperature-monitoring-more-important-than-ever-in-healthcare-404451
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00213-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950312
509
2.53125
3
In my next few posts, we’re going to discuss NAT and PAT. No, they’re not brother and sister – they’re not even cousins. They are Network Address Translation (NAT) and Port Address Translation (PAT). It’s common today to use private addressing within an Autonomous System (an “AS” is a collection of routers and subnets under a common administrative domain). Per RFC 1918 (Address Allocation for Private Internets), the private networks are:
- 10.0.0.0/8 – One class “A” network
- 172.16.0.0/12 – A block of sixteen class “B” networks
- 192.168.0.0/16 – A block of 256 class “C” networks
One problem is that per RFC 1918, advertising the address spaces listed above to the public Internet is not allowed. What this means is that if you send a packet with a “private” source address to the Internet, the destination will not be able to reply to you (because the routers on the Internet backbone won’t know where you are). The solution to this problem is NAT, specified by RFC 1631 (The IP Network Address Translator). The first type of NAT we’ll discuss is referred to as “static NAT”. In this method, you build the translation table by hand. For example, let’s say that we want to translate addresses on the 10.1.2.0/24 subnet (private address space) to addresses on the 200.1.1.0/24 network (public). We could translate the first address like this:
- Router(config)#ip nat inside source static 10.1.2.1 200.1.1.1
The translation tells the router that if a packet with the specified source address (10.1.2.1) hits the inside interface and is bound for the outside interface, it should translate the source address statically to the second address (200.1.1.1). You can have multiple translation lines, as many as you need, so let’s add some more:
- Router(config)#ip nat inside source static 10.1.2.2 200.1.1.2
- Router(config)#ip nat inside source static 10.1.2.3 200.1.1.3
- Router(config)#ip nat inside source static 10.1.2.4 200.1.1.4
The next thing to do is to tell the router which interface (or subinterface) is the “inside” and which is the “outside”. For our example, let’s assume that the FastEthernet0/0 interface connects to our LAN, and the Serial0/0 interface leads to our Internet Service Provider (ISP):
- Router(config)#interface fa0/0
- Router(config-if)#ip nat inside
- Router(config-if)#int s0/0
- Router(config-if)#ip nat outside
Notice that although we only specified the translation of the source address as the packet transits from the inside to the outside interface, the router will automatically translate the destination addresses of packets traversing the router from the outside to the inside interface. You can have multiple “inside” and/or “outside” interfaces (or subinterfaces). The beauty of it is that the translation is invisible to all devices other than the one device performing the translation. You can view the translation table with the command show ip nat translations, and see which interfaces are the “inside” and “outside” (along with other info) with show ip nat statistics. When you display the translation table (sh ip nat trans), you’ll notice that it specifies “inside local” and “inside global” addresses. The “inside” refers to where the addressed device physically resides (inboard of the “inside” interface, that is, on our side of the router). The “local” or “global” refers to the vantage point from which the address is being viewed. That is, “local” means “as seen from the inside”, and “global” means “as seen from the outside”.
In other words, the “inside local” address is our host’s untranslated (actual) address, and the “inside global” address is the translated address (as seen by those outboard of the “outside” interface). Next time, we’ll examine a variation referred to as “dynamic NAT”. Author: Al Friebe
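For readers who think more easily in code than in CLI output, here is a small Python model of the static table the commands above build. It is purely conceptual – a dictionary mapping inside local to inside global addresses – and is not how IOS implements NAT internally.

```python
# Conceptual model of the static translation table configured above.

STATIC_NAT = {
    "10.1.2.1": "200.1.1.1",
    "10.1.2.2": "200.1.1.2",
    "10.1.2.3": "200.1.1.3",
    "10.1.2.4": "200.1.1.4",
}
REVERSE_NAT = {g: l for l, g in STATIC_NAT.items()}

def outbound(packet):
    # Inside -> outside: rewrite the source address (inside local -> inside global).
    packet["src"] = STATIC_NAT.get(packet["src"], packet["src"])
    return packet

def inbound(packet):
    # Outside -> inside: the destination address is translated back automatically.
    packet["dst"] = REVERSE_NAT.get(packet["dst"], packet["dst"])
    return packet

print(outbound({"src": "10.1.2.1", "dst": "8.8.8.8"}))
print(inbound({"src": "8.8.8.8", "dst": "200.1.1.1"}))
```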
<urn:uuid:e3b60de3-b371-4813-8fb2-e625854e7eba>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/07/20/nat-and-pat-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00423-ip-10-171-10-70.ec2.internal.warc.gz
en
0.862631
1,016
3.703125
4
What email address or phone number would you like to use to sign in to Docs.com? If you already have an account that you use with Office or other Microsoft services, enter it here. Or sign in with: Signing in allows you to download and like content, which the author will be aware of. Embed code for: Introduction Select a size The Impact Background Music has on Memory and Recall Information While performing a task a common theme is people will play music in the background, or will have some form of noise in the background. These sources of music or background noise can be displayed to improve one’s ability to recall information. The information asked to recall, could be words, or images as seen in the studies that are going to be described. Many studies have found results into whether music is helpful or detrimental. Along with other concepts, such as; what kind of music impacts memory, and if the person themselves is the cause of why the music or noise in the background is a benefit or a distraction. The present study will relate to these factors, and will show results to whether music has an impact on memory and recalling skills. Listening to background music, or noise is a commonly done while studying, writing, reading, etc. Music or background noise can cause a positive and negative impact on tasks related to memory, depends on a number of factors. One factor would be how well the participants’ eyes can adjust to reading a paragraph after being interrupted with some sort of background noise. Cauchard, Cane, & Weger (2012) conducted a study describing how reading is an everyday activity, and that music or background noise does not impact what one would see on the screen, but impacts how long it takes for one to process what they read, and how long the task took. However, this study used an audio story to interrupt the paragraph the participant was reading, and usually an audio story would not be playing the background when a college student is performing a task. In another study conducted by Kang and Williamson (2014), explains that music can aid second language learning. Learning another language is a complex, and difficult task. This study used CDs and melodies to help the participant memorize words within the language. The CDs and melodies were chosen for the participants, and were instrumental music that seemed to help memory. The study supports that different types of music, and different type of backgrounds noise, and interruptions (audio story) can determine how music improves memory or not. A conflict that rises is this study shows a great complexity, and participants are under pressure to learn a new language, while the present study being conducted is to only measure how well one can remember a set of words, and images when music is playing in the background. However, knowing that the type of music chosen helped those participants to learning a new language finds that music can help build better memory skills. When the concept of how the skill of memory works there is another skill is intertwined. This skill would be to encode. How well one is able to encode information in one sitting, and then be able to retrieve that information when asked too. Whether or not the skill of encoding is improved or impaired can depend on what kind of setting one finds themselves in. In a study done by Ferreri, Bigand and Bugaiska (2015) they measured how well the participants could recall a word they read with music, environmental noise, or silence in the background. 
They found that the sound of music helped to encode context better than silence and especially better than environmental sounds. This study helps support the fact that music can improve one’s ability to encode the information being read. The present study can be the most compared to this one, since we are testing how well one can read and process what they were shown while music is playing in the background. The study conducted by Ferreri, Bigand, et al (2015) also provides information for the present study that is not being tested which is the process how well one encodes in silence, and environmental noise. Since found that music helps in a memory related task than in silence, and normal everyday sounds, studies found results stating how well a participant can recall information can depend on the type of music. If the participants, choose the music they listen to while doing the task it can affect how well the remember the content of material. The present study is using a pop song that was previously chosen. What if one does not like that specific genre? Carr and Rickard (2015) conducted an experiment where they measured how easily it was for the participants to remember images they were shown while listening to music. The music consisted of two tracks that specific participant liked, two tracks chosen from another participant, and a radio interview. The conclusion of the study was that when participants listened to the music they choose while doing the task they were about to recall the images better than the music chosen by the experimenters’. Having evidence that discusses how what type of music is in the background describes who that person, and if the music is what that specific person enjoys then they would show a positive affect when asked to recall certain material from a task. When categorizing people in determining who they are, and what music or what type of environment they work better in can be put into two types of groups: introverts and extraverts. A participant being an extravert or introvert can impact how well they are able to have successfully recalled the information with the presence of music or silence. The present study is not concerned whether if one is an introvert or extravert, but having this knowledge is beneficial because if one participant shows negative results many factors including the fact that person may not enjoy that type of music. Furnham and Bradley (1997), Cassidy and Macdonald (2007), and Furnham and Strbac (2002), all conducted studies using the concept of that depending whether one was an introvert or extravert weighed heavily on how well their memory recall was. The tasks included were a reading comprehension test, a memory prose test, and a mental arithmetic task. The background noise used was a collection of office sounds, radio music, and silence. The results stayed in the same realm for each of the three studies. Introverts preformed lower, but not by a significant amount when music and noise were playing. Extraverts performed better when it was nosier in the background. These tasks are excessive what the present study will measure, but knowing that music can possibly be an improvement in memory or a distraction in certain tasks when performed by certain people builds the argument how music impacts memory, and recall. Every individual is different, especially in college when one is finding themselves. 
Their cognitive ability does depend on who they are, and if a participant does not show improved memory recall with music, the reason may lie in the kind of person they are. Studies conducted before, using the same idea as the present study, give evidence that music can improve memory but can also negatively affect memory performance. This means the present study will bring in concepts of how well participants adapt to their surroundings, along with how well a memory task with words and images is performed while music is playing in the background.
<urn:uuid:18a55d65-3df8-4859-8bd9-781408e6cfb3>
CC-MAIN-2017-04
https://docs.com/kyra-heiler/2007/introduction
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00333-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957359
1,671
3.765625
4
In recent years, in order to reduce the cost of fiber access optical fiber network end-users, some well-known domestic and foreign universities and research institutions, manufacturers racing to develop cheap, connecting a simple, reliable plastic optical fiber and communication systems. This article briefly describes the progress of the plastic optical fiber structure, materials, manufacturing methods, performance and communication systems. Plastic optical fiber structure: the short-distance communication using plastic optical fiber, its profile of the refractive index distribution can be divided into two types: step-index plastic optical fiber and a graded-index plastic optical fiber. Step index plastic optical fiber due to its mode dispersion interaction between people shot reflection of light occurs repeatedly, the emitted waveform relative to the incident waveform broadening, its transmission bandwidth of tens to hundreds of MHz • km. Gradient refractive index gradient plastic optical fiber to optimize distribution suppression mode dispersion, from reducing the manpower of the material dispersion, and thus obtain a bandwidth of up to hundreds MHZ • km to several GHz • km gradient plastic optical fiber. The plastic optical fiber materials: When the plastic optical fiber material is selected, the main consideration is the material itself is a translucent, refractive index, etc.. The core material in addition should be good transparency, uniform optical refractive index appropriate, attention should be paid to the mechanical, chemical stability, thermal stability, processing and cost factors. Currently, often selected for the plastic optical fiber core materials are: poly (methyl methacrylate) (PMMA), polyphenylene propylene (PS), polycarbonate (PC), fluorinated poly methyl acrylate (FPMMA) and perfluoro resin and so on. Often selected as the plastic optical fiber cladding materials are: poly (methyl methacrylate), fluorine plastic, silicon resin or the like. The manufacture of plastic optical fiber: the quartz glass optical fiber manufacturing method is completely different: extrusion method and interfacial gel method, the communication method of manufacturing a plastic optical fiber. The extrusion method is mainly used in the manufacture of the step-type plastic optical fiber. The process steps are as follows: First, as a core of poly (methyl methacrylate) the monomers methacrylamide methylphenidate through after purified by distillation under reduced pressure, together with a polymerization initiator agent and a chain transfer agent is fed to the polymerization vessel together, Then the container was placed in an electric oven heating, the placement of a certain time, so that the monomer is completely polymerized, and finally, will be filled with a fully polymerized poly (methyl methacrylate), the container was heated to the drawing temperature, and dry nitrogen pressurized molten polymer from the upper end of the container, the bottom of the container mouth is extrusion of a plastic optical fiber cores at the same time so that the extruded core coated with a layer of low refractive index polymer is made of bands jump-type plastic optical fiber. Gradient type of plastic optical fiber manufacturing method of the interfacial gel method. 
The interfacial gel method, the process steps are as follows: the first high refractive index dopant in the core monomer to prepare the the core mixed solution, followed by the initiator and a chain transfer agent into the core to control the rate of polymerization, the polymer molecular size the mixed solution, and then the solution was put into a selected as the cladding material within the hollow tube of poly methyl methacrylate (PMMA), and finally the mixed solution of PMMA with a core tube placed in an oven at a certain temperature and time conditions of polymerization. In the polymerization process, the PMMA tube gradually a mixed solution of swelling, the gel phase is formed in the inner wall of the PMMA tube. In the gel phase molecular movement speed slows down, the polymerization reaction is accelerated due to the “gel effect”, the thickness of the polymer is gradually thickened, the polymerization was terminated on PMMA tube center, thereby obtaining a refractive index along radial gradient distribution fiber preform law, the last before plastic optical fiber preform is fed into the furnace heat drawn into graded-index plastic optical fiber.
<urn:uuid:56ab6925-e9f8-45f7-a97e-5cb3460b4d15>
CC-MAIN-2017-04
http://www.fs.com/blog/principle-of-plastic-optical-fiber-transmission-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00177-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900845
868
2.859375
3
Chapter 14 Notes Asymmetric Encryption Algorithms: • Digital Signature Algorithm (DSA) • Diffie-Hellman (DH) • Elliptic Curve Cryptography (ECC) • The sender creates a hash to identify the message • The sender then encrypts the hash with his private key and appends it to the message • The recipient decrypts the hash with the sender’s public key and re-computes that value to verify the integrity of the message. • Certificate Authority (CA): Trusted third party that signs the public keys in the PKI system. • Certificate: Issued by the CA to bind a user or device to a public key. Components of PKI: • CA to provide management of keys • PKI users and/or devices • Storage and protocols • Supporting organizational framework and user authentication through Local Registration Authorities (LRA). • Supporting legal framework • Single Root CA: centrally administered, single point of failure, and difficult to scale • Hierarchical CA: delegation and distribution of trust, certification paths • Cross-Certified CAs: horizontal trust relationship PKI Keys: Users are given two pairs of keys. One is used for encryption and the other is used for signing. Two certificates validate each of the two public keys from the two key pairs. Usage of PKI Keys: • Signing keys may be used less and therefore can have a longer lifetime • In a key recovery scheme, the option exists for only the encryption private key to be backed up • Different key lengths and algorithms can be used for different key pairs in order to fulfill legal requirements. RA Offloading: In order to secure the CA, as well as to reduce CA overload, many key management tasks can be offloaded to RAs. RAs can handle: • Enrollment and authentication of users • Key generation for users who do not have generation capabilities • Distribution of certificates after enrollment X.509v3 Usage and Applications: X.509v3 is an IETF industry standard for basic PKI including certificate and certificate revocation list (CRL) formats. It is widely used in many applications including: • SSL web authentication • S/MIME encrypted email • IPSec VPNs • Client certificates • PKCS #7: Defines the syntax of cryptographic protected messages, specifically, it is widely used in S/MIME email. • PKCS #10: Defines certification request syntax. Simple Certificate Enrollment Protocol (SCEP): • Client creates certificate request according to PKCS #10 • The request is enveloped in PKCS #7 and sent to the officiating RA or CA • When received by the RA or CA, it is either automatically or manually accepted or rejected. Identity Management: In PKI, identity management is gained through the CA acting as a trusted third party and the X.509 standard which describes how to store an authentication key. The CA certificate contains the following: • The CA’s identify • The CA’s public key • The signature encrypted with the CA’s private key • Parameters including serial numbers, algorithms used, and validation fields PKI Unique Authentication Characteristics: • Authentication begins with each party obtaining the CA’s certificate as well as their own certificates. • True non-repudiation is provided through public/private key pairs. Caveats of Using PKI: • A user’s certificate is compromised (private key is stolen): A CRL must be kept, and users must be informed of CRL parameters • The CA’s root certificate is compromised: An Authority Revocation List is needed and the entire PKI system must be updated. 
• The CA administrator must follow strict rules for the certification enrollment process and must use additional out of band authentication procedures.
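A minimal sketch of the hash-sign-verify flow described in these notes, using the third-party Python "cryptography" package and RSA with PSS padding. It covers only the key pair and signature steps; certificate issuance, CAs, and CRLs are not modelled here, and the message content is invented.

```python
# Minimal sign/verify sketch with the third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"contract: transfer 100 units to account 42"

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: hash the message and sign the digest with the private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Recipient: verify with the sender's public key; verify() raises
# InvalidSignature if the message or signature was altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```

In a full PKI the recipient would obtain the sender's public key from a certificate signed by a trusted CA rather than directly, which is what binds the key to an identity.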
<urn:uuid:9a1fe117-ce1d-45dd-b366-2f7bd3f7141c>
CC-MAIN-2017-04
http://networking-forum.com/viewtopic.php?f=71&t=26150&view=next
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00479-ip-10-171-10-70.ec2.internal.warc.gz
en
0.87104
816
3.328125
3
In a Drive-by-Download attack, the web application is tampered (i.e. injected with HTML code) that instructs a visitor’s browser to download malware located in an attacker’s controlled server. Most often, tampering is not visually apparent to visitors, thus innocent victims are unaware of the background download operation. If any warning appears it is usually dismissed since victims believe it to be part of the original application. The malware is usually Trojan horse software that takes control of the victim’s machine, making it part of a larger botnet. One prevalent type of cyber crime is the operation and expansion of botnets. These are computers owned by innocent individuals which were infected by Trojan horse software that controls them on behalf of the net owner (usually a technical person working on behalf of not-so-technical delinquents). In order to maintain a viable and profitable botnet, attackers need to constantly infect more computers with their control agent. One less efficient method of doing that is by compromising each target machine individually. Another, more lucrative method is to have a well known widely accessed application to distribute the control agent to innocent victims. Attackers can compromise the target application and have the malicious code hosted on it. This tends to be quite difficult as upload exploits are not common and as many application servers host antimalware software that would detect the control agent’s code. The alternative method chosen by attackers is that of drive-by-download. In this type of attack cyber criminals rely on a relatively small and much more common vulnerability of HTML injection (sometimes referred to as persistent XSS) vulnerability. The attacker abuses the injection vulnerability to add some HTML code to the target application. That HTML code, when rendered by a victim’s browser would download the actual malware into the victim’s machine. Common HTML constructs used for this purpose are script elements as well as iframe elements that have their src attributes pointing to the actual server holding the malware. Sometimes, an attacker would use a misleading popup window combined with a button on it to have the hapless victim explicitly invoke the download operation. One of the most common methods employed so far by hackers to launch drive-by-download attacks is the use of SQL injection. Sites vulnerable to SQL injection and in particular those that employ MS SQL Server as their backend are susceptible not only to confidentiality breaches but also unauthorized modifications. Attackers would craft a SQL injection attack that actually injects HTML code into database rows and columns that are later used in the construction of the applications HTML pages. For example, in a forum application where user posts as well as user details are kept in a database an attacker can infect the forum with malicious HTML code. All posting records as well as the names of the users who made the posts are in jeopardy. Many sites were hit during 2008 using this same method combined with some preliminary exploratory work using Google searches. In several waves of mass SQL Injection attacks millions of legitimate Web sites were compromised, among them some high profile ones (e.g. sites owned by CA and Microsoft). In these incidents, the attackers injected HTML code which downloads different binaries according to the victim’s browser’s version. These binaries then exploit different weaknesses of the specific browser in order to take over the victim’s PC. 
Third-party components used in Websites may also act as a conduit of drive-by-download attacks. A Website may reference a widget without knowing that the specific widget contains, either intentionally or not, malicious script. Another example is that of advertisements which contain some malicious code. Once the victim’s browser fetches the advertisement, it unknowingly also fetches the corresponding attacker’s code, as was the case for Major League Baseball’s website in early 2009. Hiding such defective code within advertisements has become common enough practice to earn the nickname "malvertisements". Drive-by-download attacks should be prevented by a combination of two methods. - Applications should be protected against tampering, detecting infection attempts in the first place. This can be achieved by combining secure software development practices together with real-time measures such as web application firewalls. - Protect application users against infection if for some reason the application has been infected. This is achieved using a real-time detection mechanism with frequently updatable signature database to detect victim infection vectors as they flow out of an infected server.
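As a toy illustration of the injected constructs described above (script or iframe elements whose src points at an attacker-controlled server) and of the server-side detection idea, the Python sketch below parses a page and flags sources hosted away from the site's own domain. The sample HTML and host names are invented, and a production scanner would be far more sophisticated.

```python
# Toy scanner: flag script/iframe sources that point off-site.
from html.parser import HTMLParser
from urllib.parse import urlparse

class InjectedSourceFinder(HTMLParser):
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        if host and host != self.own_host:
            self.suspicious.append((tag, src))

page = """
<html><body>
  <p>Forum post by user123</p>
  <iframe src="http://malware.example.net/agent" width="0" height="0"></iframe>
</body></html>
"""

finder = InjectedSourceFinder(own_host="forum.example.com")
finder.feed(page)
print(finder.suspicious)   # [('iframe', 'http://malware.example.net/agent')]
```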
<urn:uuid:8d850024-85b9-426d-a23f-7a14291f1548>
CC-MAIN-2017-04
https://www.imperva.com/Resources/Glossary?term=drive_by_downloads
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00479-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940013
896
2.90625
3
If you’re one of the 175 million Pandora users, then you have surely experienced the excitement of having the Internet’s most popular radio station introduce you to a brand-new artist or song. While it may seem like magic, there is a perfectly logical explanation behind Pandora’s ability to seemingly read your mind and know your taste in music. The true magic behind Pandora lies hidden in the numbers and data collected from music analysis, personalization, and the music delivery methods it uses. The Music Genome Project – A Mind Reader for Music Starting with the analysis of the raw data, musicologists undergo a lengthy process analyzing the distinct characteristics of each piece of music. Pandora’s Music Genome Project looks at more than 450 attributes in order to create a musicological “DNA” for each track, including melody, harmony, instrumentation, rhythm, vocals and lyrics, to name a few. It states on Pandora’s website, “the Music Genome Project’s database is built using a methodology that includes the use of precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high. Pandora does not use machine-listening or other forms of automated data extraction.” In 2012, Pandora’s library had over one million tracks by more than 100,000 artists. When you consider that this categorization is done manually, the scale of the project becomes almost overwhelming. The Music Genome Project is the largest musical categorization process of its kind. However, what makes Pandora unique and popular is the ability to personalize its music delivery. A user creates a station from a “seed” such as an artist, track, or genre. The Music Genome process then begins finding new songs of the same “DNA” and further personalizes itself as a user starts giving music a “thumbs up” or “thumb down.” In 2012, users created over 1.6 billion unique stations, each personalized by one of the 175 million registered members. The “thumbs ups” and “thumbs down” feedback is invaluable. Beyond the benefit of personalized stations, Pandora is able to take that feedback and use it to enrich the Music Genome Project, allowing Pandora to curate better stations based on its listeners. Delivering Music Everywhere You Go Pandora is the largest Internet based radio station, capturing more than a 70 percent market share in Internet radio listening. In January 2013, Pandora owned an eight percent share of the total U.S. radio market, delivering 1.39 billion hours of music. In 2012, Pandora users listened to 13 billion hours of music. That’s the equivalent of 1.5 million years of straight music listening. Of the staggering 13 billion listening hours, 75 percent of the music delivered by Pandora was through mobile and other connected devices. Pandora just recently announced that it has over 1,000 partner integrations – 760 of them being consumer electronic devices such as phones, TVs, Blu-ray players, etc. Pandora is also available in 85 new car models and 175 different aftermarket car radio devices. In order to maintain the high performance in delivery for each user, Pandora relies heavily on a caching system to help deliver its most popular tracks. Aaron Porter, Pandora’s Director of System Administration, explained that the growing popularity of Pandora presented challenges of scalability and reliability with this caching tier. At first, Pandora loaded its servers with RAM to ensure a quick and quality experience for the end user. 
Scalability, however, became extremely difficult with this approach. Pandora turned to Fusion-io and its ioDrive platform, allowing it to use flash memory as a caching tier. “The ioDrives perform as well as our RAM caches, but offer 10 times the capacity per server,” said Aaron. “Our total frequently-accessed music cache now holds 10 times the songs it used to, which both enhances existing user experience and gives us plenty of headroom for future growth.” With the increase in capacity and performance delivered by the flash-based servers, Pandora was able to decrease its overall server footprint by 40 percent, allowing it to slow down its scale-out plans and receive an almost instant ROI from the flash. You can learn more about Pandora’s experience with flash memory in this case study by Fusion-io.
Scaling Users and Scaling Performance
It’s easy to see how impressive Pandora’s technology is when it comes to serving up its music library. But even more astounding is seeing how the company is capable of handling database demands as it continues to add music to its library, refine its personalization algorithms, and grow its user base. Despite these increasing demands, Fusion-io’s flash-based memory tier has helped slow Pandora’s hardware scale-out. It will be interesting to see how Pandora’s continued innovations inside the datacenter, delivering higher performance and reduced energy consumption, will allow the company to enhance its magical customer experience.
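Pandora's actual Genome attributes and matching logic are proprietary, so the following Python sketch only illustrates the general idea behind attribute-based matching: each track is reduced to a vector of scored attributes and candidates are ranked by similarity to a seed. The attribute names and values are invented.

```python
# Illustration only: attribute names, values, and the scoring method below
# are invented; they are not Pandora's actual Music Genome data or algorithm.
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

seed = {"tempo": 0.7, "distorted_guitar": 0.9, "minor_key": 0.2, "female_vocal": 0.0}
candidates = {
    "song_a": {"tempo": 0.8, "distorted_guitar": 0.8, "minor_key": 0.1, "female_vocal": 0.0},
    "song_b": {"tempo": 0.3, "distorted_guitar": 0.0, "minor_key": 0.9, "female_vocal": 1.0},
}

ranked = sorted(candidates, key=lambda name: cosine(seed, candidates[name]), reverse=True)
print(ranked)   # song_a is the closer match to the seed
```

Thumbs-up and thumbs-down feedback could then be folded back in by nudging the seed vector toward liked tracks and away from disliked ones, which is the spirit, if not the detail, of the personalization the article describes.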
<urn:uuid:02681d8a-48e0-4e9e-a9ab-a14b7734dbeb>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/02/18/scaled_out_music_scaled_down_infrastructure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933867
1,055
2.984375
3
It's been a bit since we checked in with the Mars Curiosity Rover, but luckily NASA's Jet Propulsion Laboratory is able to give us a weekly update on what's going on out there. In this week's report, we find out that the rover has been scooping up soil samples, and then determining whether the sample collected is good enough for further analysis. It's fascinating seeing the different steps (a system of go/no-go determines whether to go on to the next step) the Rover takes to make sure the sample is good. Still no discoveries of Martians, Marvin or otherwise. Fingers still crossed. Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. Watch some more cool videos: Science Monday #1: Why it's dark at night BBC gives Doctor Who fans an Amy/Rory postscript The best remote-control car chase ever Science Monday: Origins of Quantum Mechanics in under 5 minutes Motion-copy robot can mimic painting brush strokes
<urn:uuid:d684fe95-9236-44f8-a272-727e4faf3b89>
CC-MAIN-2017-04
http://www.itworld.com/article/2718951/cloud-computing/this-week-on-mars--soil-scoopage-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.888056
240
2.625
3
Kaspersky Lab Experts Identify Mysterious Language in the Duqu Trojan; Thanks Programming Community for its Support of the Analysis Kaspersky Lab recently appealed to the programming community for assistance in solving one of the biggest mysteries of the Duqu Trojan, which was identifying an unknown code block located inside a section of the malicious program’s Payload DLL. The unknown code section, titled the “Duqu Framework” was a portion of the Payload DLL that was responsible for interacting with its Command & Control (C&C) servers after the Trojan infected a victim’s machine. After receiving an incredible amount of helpful feedback from the programming community, Kaspersky Lab experts have stated with a high degree of certainty that the Duqu Framework consists of “C” source code compiled with Microsoft Visual Studio 2008 and special options for optimizing code size and inline expansion. The code was also written with a customized extension for combining object-oriented programming with C, generally referred to as “OO C.” This kind of in-house programming is highly sophisticated and more commonly found in complex ‘civil’ software projects, rather than contemporary malware. While there is no easy explanation why OO C was used instead of C++ for the Duqu Framework, there are two reasonable causes that support its use: - More control over the code: When C++ was published, many old school programmers preferred to stay away from it because of distrust in memory allocation and other obscure language features which cause indirect execution of code. OO C would provide a more reliable framework with less opportunity for unexpected behavior. - Extreme portability: About 10-12 years ago C++ was not entirely standardized and it was possible to have C++ code that was not interoperable with every compiler. Using C provides programmers with extreme portability since it’s capable of targeting every existing platform at any time without facing the limitations associated with C++. “These two reasons indicate that the code was written by a team of experienced ‘old-school’ developers who wanted to create a customized framework to support a highly flexible and adaptable attack platform. The code could have been reused from previous cyber-operations and customized to integrate into the Duqu Trojan,” said Igor Soumenkov, malware expert. “However, one thing is certain: these techniques are normally seen by elite software developers and almost never in today’s general malware.” Kaspersky Lab would like to thank everyone who participated in the quest to help indentify this unknown code. To read the full version of the analysis, written by Igor Soumenkov, please visit Securelist. The analysis includes the technical details of the framework, methods of identification and the knowledgeable comments Kaspersky Lab received that helped solve this piece of the Duqu puzzle.
Using Cryptography: Methods and techniques
When the concept of cryptography comes into play, we should immediately think of the diverse methods and techniques that can be used. It is important to be familiar with most of these techniques, since they differ from one another in their security levels and areas of implementation, so basic knowledge of each is worth having.
WEP vs. WPA/WPA2 and preshared key
Encryption of a wireless network is essential because wireless technology uses radio waves, and it is easy for anyone listening on the right frequency to see what is happening inside the network. Data sent back and forth over the air must therefore be encrypted properly, so that only people who hold the right key can decrypt the traffic and make sense of the data streams.
WEP and WPA are the two encryption technologies historically used to protect wireless networks. WEP stands for Wired Equivalent Privacy and offers two key strengths, a 64-bit key or a 128-bit key, depending on where in the world it is deployed. WEP has been found to have serious vulnerabilities, which has led to its use being strongly discouraged. Once WEP was identified as a weak encryption method, WPA (Wi-Fi Protected Access) was developed. WPA kept the RC4 cipher that WEP used but added the Temporal Key Integrity Protocol (TKIP), so that every packet that goes through the network gets a unique encryption key.
AES stands for Advanced Encryption Standard and is a component of the WPA2 certification. It is among the most modern symmetric-key ciphers in use and replaced the RC4 component used in WPA. AES is a 128-bit block symmetric cipher, and key sizes from 128 bits up to 256 bits can be used on both sides of the symmetric exchange.
DES is a symmetric encryption cipher standing for Data Encryption Standard; it was developed by IBM in the 1970s and published as a US federal standard in 1977. DES is a 64-bit block cipher that uses a 56-bit key, which by modern standards is far too small to be secure. 3DES, or Triple Data Encryption Standard, takes the same idea as DES but performs the encryption three times, potentially with three different keys. This makes brute-force attacks harder and means it takes far longer to recover the original key.
RC4 is a symmetric stream cipher whose name stands for Rivest Cipher 4. RC4 was part of the ill-fated WEP standard, which is no longer used because of its numerous vulnerabilities. RC4 has what is termed a biased output: for example, if the third byte of the original state is zero and the second byte is not equal to two, then the second output byte is always zero. Such biases make the cipher less secure than we would like, which is why RC4 is rarely used today.
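To make the stream-cipher idea concrete, here is a short, self-contained sketch of RC4 in Python. It is for illustration only (RC4 should not be used to protect real data), and the function names are ours rather than part of any standard library.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Generate `length` bytes of RC4 keystream from `key` (educational use only)."""
    # Key-scheduling algorithm (KSA): permute the state array S using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-random generation algorithm (PRGA): emit one keystream byte per step.
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)


def rc4_crypt(key: bytes, data: bytes) -> bytes:
    """RC4 encryption and decryption are the same operation: XOR with the keystream."""
    ks = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


if __name__ == "__main__":
    ct = rc4_crypt(b"demo key", b"attack at dawn")
    assert rc4_crypt(b"demo key", ct) == b"attack at dawn"
    print(ct.hex())
```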
The one-time pad is a cipher created in the early 1900s, when early teletype machines were becoming popular, as a way to encrypt the information sent over them; it was the first automated on-line encryption system. The system works from a pad of key material and has a very simple encryption and decryption process. Although uncomplicated, it is genuinely secure: used correctly, it is one of the few provably unbreakable systems. For the one-time pad to be effective, some important rules have to be followed. First, the key must be the same size as the plaintext being encrypted: the number of characters in the key and in the message must match. Second, the key must be truly random, used only once, and then destroyed; a key must never be reused for another message. Third, there should only ever be two copies of the key, one held by the sender and one by the receiver. Together these rules mean that anyone intercepting the message in the middle has no way to decrypt it.
NTLM, which stands for NT LAN Manager, was developed to make the older LAN Manager (LM) scheme more secure and was used in early versions of Windows NT. The password is stored in Unicode, which allows a wide range of characters; it can be up to 127 characters long and is kept as a 128-bit MD4 hash, which is stronger than the DES-based scheme used in the original LAN Manager configuration. NTLM was still not completely secure, so NTLMv2 was introduced with Windows NT Service Pack 4. It added a new password response, an MD4 password hash similar to NTLM version 1, and a hash of the user name and server name combined, putting more information into the exchange. It also added a variable-length challenge with a specific time span, random data, and domain name information, all of which make the authentication conversation harder to attack.
Blowfish is a symmetric encryption algorithm: a 64-bit block cipher whose key length can range from 32 bits up to 448 bits. Blowfish is considered a secure cipher, there is no known practical way of breaking its full 16 rounds, and there are no patents associated with it.
PGP (Pretty Good Privacy) and GPG (GNU Privacy Guard) are common asymmetric encryption tools used all over the world. PGP is commercial software built on an open standard, and implementations are available for Windows, Linux and UNIX.
Twofish is the successor of Blowfish; it uses a larger block size of 128 bits and key sizes of up to 256 bits. It, too, is unpatented, making it available to everyone who wants to use it.
CHAP, the Challenge-Handshake Authentication Protocol, is a more secure authentication method than PAP. Rather than sending the password itself, it works as a three-way handshake: the client contacts the server it is trying to authenticate to, and the server responds with a challenge message prompting the client to prove its identity. A simplified sketch of such an exchange is shown below.
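The following sketch illustrates the challenge-response idea in Python. It follows the CHAP pattern of hashing an identifier, the shared secret and a random challenge, but it is a simplified illustration rather than a wire-accurate implementation of the protocol; the function name and the secret are placeholders.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP-style response: hash of (identifier || shared secret || challenge),
    # so the secret itself never crosses the wire.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# --- Server side: issue a fresh, random challenge for this session ---
secret = b"correct horse battery staple"   # shared secret known to both ends
identifier = 1
challenge = os.urandom(16)

# --- Client side: prove knowledge of the secret without sending it ---
response = chap_response(identifier, secret, challenge)

# --- Server side: recompute the expected response and compare ---
expected = chap_response(identifier, secret, challenge)
print("authenticated" if response == expected else "rejected")
```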
In the next step of the handshake, the client responds with a hash computed from the password and the challenge; this response is sent to the server, which compares it with the value it computes from its own stored copy of the secret. Even after that the challenge is not over, because the server will continue to issue challenges periodically during the connection. Since the password is hashed on the client machine and the challenges happen automatically, the user never sees any of these messages.
When we log in to a server or a network, we need a way to authenticate ourselves, so a series of protocols behind the scenes ensures that our names, passwords or other credentials are received properly and that we gain the correct access. The Password Authentication Protocol (PAP) was one early method of doing this. PAP is a very simple authentication protocol, and everything communicated through it travels in the clear: passwords are sent across the network as plain text, a practice that is strongly discouraged today.
Use of algorithms/protocols with transport encryption
SSL stands for Secure Sockets Layer and was among the very first encryption mechanisms built into our browsers. It was developed by Netscape and built into the early browsers in use around 1996. TLS, which stands for Transport Layer Security, was derived from SSL; it is an improved, more public version that is not specific to Netscape and is now a worldwide standard, documented in RFC 2246.
The IPsec encryption mechanism can be used when we need to encrypt traffic outside the browser or a terminal session, in other words any type of IP data. It was designed to work at the network layer (Layer 3 of the OSI model), operating directly on IP packets. IPsec provides confidentiality and integrity for the communication between devices or hosts on both sides, and it can sign every packet so that when a packet is received we can verify it is the same packet that was sent. IPsec is a highly standardized transport mechanism; you will find it on routers and firewalls, and it is documented in RFC 4301 through RFC 4309.
SSH stands for Secure Shell and is a very common way of communicating with a server over an encrypted channel from a terminal or command line, typically over tcp/22. Secure Shell can be used for more than typing at a screen: it supports remote administration and file transfer using the SSH File Transfer Protocol (SFTP) and the Secure Copy Protocol (SCP). SSH is a good choice when we need access to a machine but are not working through a browser and are not relying on HTTPS, TLS or SSL to encrypt the communication.
Combining the TLS and SSL encryption mechanisms with the HTTP protocol that runs between a browser and a web server gives us HTTPS, that is, HTTP secured. A minimal client-side example of opening such a connection is sketched below.
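As a minimal illustration, the sketch below uses Python's standard ssl and socket modules to open a TLS-protected connection to a web server on tcp/443 and print the negotiated protocol version and cipher suite. The host name is just an example; any HTTPS-enabled site will do.

```python
import socket
import ssl

host = "www.example.com"                   # any HTTPS-enabled host
context = ssl.create_default_context()     # verifies the server certificate by default

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("TLS version: ", tls_sock.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher())    # (name, protocol, secret bits)
        # Everything sent from here on is encrypted end to end to the web server.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```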
HTTPS uses the SSL/TLS mechanisms to encrypt all of the communication between the browser and the web server. By default it runs on port tcp/443, and support is built into every modern browser, so information can be sent back and forth with the assurance that nobody looking at the traffic can read what it contains.
Cipher suites
Strong vs. weak ciphers
Cipher suites are broadly categorized by their strength, that is, by how easy they are to break or decrypt. Strong ciphers are encryption mechanisms and algorithms for which no practical attack is known, while weak ciphers have known vulnerabilities that make them easy to break and therefore unsafe to use.
Key stretching is the technique of strengthening a key by performing additional computation on a short, low-entropy key to make it effectively longer; it is basically a way of making a weak key stronger. PBKDF2 is a widely used key-stretching function, but because its workload is purely computational it can be accelerated with GPUs and custom hardware, so on its own it is not as resistant to brute force as newer alternatives. Bcrypt is generally considered more resistant, because it demands more of the attacker's hardware and is harder to accelerate; its drawback is that the length of the output key is not easily configurable.
Being conversant with these cryptographic algorithms and techniques is what allows us to secure data and information properly. With the material above, you should have a good overview of the methods used in this field and be able to put them to work for the benefit of the organization. A short key-stretching example follows.
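Here is a brief key-stretching sketch using PBKDF2 from Python's standard hashlib module. The iteration count and key length are illustrative choices, not recommendations from the text above.

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)        # a unique, random salt stored alongside the derived key
iterations = 600_000         # the work factor: raise it as attacker hardware gets faster

# Stretch the password into a 256-bit key.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print("salt:", salt.hex())
print("key: ", key.hex())

# Verification repeats the derivation with the stored salt and compares in constant time.
candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print("match:", hmac.compare_digest(candidate, key))
```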
Klez.H is Capable of Revealing Confidential Information
19 Apr 2002
Kaspersky Lab reports the beginning of a large-scale epidemic, first exposed on April 17, attributed to the Internet worm Klez.H. This dangerous virus currently accounts for over 70% of all infections from malicious programs, and that number continues to rise. The epidemic has already affected practically all countries.
Klez.H poses a special threat: the worm scans the disks of an infected computer and, depending on a set of conditions, attaches a file from the infected computer's disk storage to each infected email it distributes. It looks for files with the following extensions: .txt .htm .html .wab .asp .doc .rtf .xls .jpg .cpp .c .pas .mpg .mpeg .bak .mp3 .pdf
The result can be the leakage of important confidential information, with consequences that cannot be foretold. In a similar fashion, near the end of 2001, the Internet worm SirCam made public classified documents from a score of government institutions in different countries around the world.
"In contrast to earlier versions, Klez.H does not have the ability to destroy stored data. Instead Klez.H maintains its threat from its ability to, unsanctioned, mail out files from the infected computer," commented Eugene Kaspersky, Kaspersky Lab Head of Anti-Virus Research. "Under these conditions Klez.H poses a greater threat to corporate clients for which an information leak can have unpredictable consequences."
The speed at which Klez.H has spread demonstrates that the majority of users have ignored the advice to install the Internet Explorer security patch that protects a computer from any version of Klez as well as from future modifications of it. In addition, users do not regularly update their anti-virus program databases. The consequence of this lax behavior is that Klez.H has a good chance of achieving a large-scale epidemic, just like another infamous version of this worm, Klez.E, which has held first place in the list of most widespread viruses for several months.
Considering the high danger of infection from Klez.H, Kaspersky Lab once again strongly recommends users update their Kaspersky Anti-Virus database. For more thorough protection users should install the Internet Explorer security patch found at: http://www.microsoft.com/windows/ie/download/critical/Q290108/default.asp.
Kaspersky Lab is providing a free utility that will detect and remove all of the most widely spread versions of Klez, including Klez.H. You can download this utility at the following address: http://www.kaspersky.com/removaltools
More detailed information covering Klez.H can be found in the Kaspersky Lab Virus Encyclopedia at: http://www.viruslist.com/eng/viruslist.html?id=4292
Internet Addiction Might Become a Diagnosis
In recent months, the inability of some people to disconnect from technology for even a short period of time has gotten a lot of attention. Last November, The New York Times ran an article on a Korean boot camp to cure kids of their computer addiction. South Korea, a country where 90 percent of homes are connected to the Web, feels that it has a responsibility to deal with the effects of this, and held the first international symposium on Internet addiction in September. "Korea has been most aggressive in embracing the Internet," Koh Young-sam, head of the government-run Internet Addiction Counseling Center, told the Times. "Now we have to lead in dealing with its consequences."
U.S. psychiatrists appear to be taking Internet addiction more seriously as well, proposing that this "compulsive-impulsive" disorder be added to the next release of the Diagnostic and Statistical Manual of Mental Disorders, the DSM-V, in 2011. "Internet addiction appears to be a common disorder that merits inclusion in the DSM-V," wrote Dr. Jerald Block, a psychiatrist at Oregon Health and Science University, in the March issue of the American Journal of Psychiatry. Block argued that Internet addiction shares four components with other compulsive-impulsive disorders: excessive use, often associated with a loss of sense of time or neglect of basic drives; withdrawal, including feelings of anger, tension or depression when away from the computer; tolerance, including the need for better computer equipment, more software and more hours of use; and negative repercussions, including arguments, lying, poor achievement and social isolation.
Block notes that South Korea already considers this a serious public health issue, as does China, which as of March 13 had surpassed the U.S. in its number of Internet users. In 2007, China began restricting computer game use, discouraging more than 3 hours of play each day. A study suggests that the U.S. isn't very far behind: according to the Solutions Research Group, 68 percent of Americans feel anxious when they're not connected in one way or another, and this "disconnect anxiety" causes feelings of disorientation and nervousness when a person is deprived of Internet or wireless access for a period of time.
When someone talks about Cloud Computing, what exactly are they talking about? As more functionality moves to the Internet cloud, every provider and user is developing their own definition, and industry experts and researchers are struggling to formulate a standard set of terms to describe all the different functions.
The graphic developed by Lamia Youseff (University of California, Santa Barbara) and Maria Butrico and Dilma Da Silva (IBM T.J. Watson Research Center) depicts the cloud as five layers, with three constituents in the cloud infrastructure layer. The figure represents the inter-dependency between the different layers in the cloud. They define the five levels as follows:
A. Cloud Application Layer - The most visible layer to the end users of the cloud. Normally, users access the services provided by this layer through web portals, and are sometimes required to pay fees to use them.
B. Cloud Software Environment Layer - The second layer in the proposed cloud ontology (also dubbed the software platform layer). The users of this layer are cloud application developers, implementing their applications for, and deploying them on, the cloud.
C. Cloud Software Infrastructure Layer - Provides fundamental resources to the higher-level layers. Cloud services offered in this layer can be categorized into computational resources, data storage, and communications.
D. Software Kernel - Provides the basic software management for the physical servers that compose the cloud. Software kernels at this level can be implemented as an OS kernel, hypervisor, virtual machine monitor and/or clustering middleware.
E. Hardware and Firmware - The bottom layer of the cloud stack: the actual physical hardware and switches that form the backbone of the cloud. Users of this layer are normally big enterprises with huge IT requirements in need of subleasing Hardware as a Service (HaaS).
Are these really five different levels? Is there much difference between Infrastructure and Hardware? Or are the two merging as the technology continues to evolve?
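One way to make the ontology concrete is to express it as a small data structure. The sketch below is purely illustrative; the layer names follow the figure, while the "serves" and "examples" fields are our own shorthand rather than part of the authors' definitions.

```python
from dataclasses import dataclass

@dataclass
class CloudLayer:
    name: str
    serves: str        # who consumes this layer
    examples: str      # representative offerings (illustrative only)

STACK = [
    CloudLayer("Cloud Application (SaaS)", "end users", "web portals, hosted applications"),
    CloudLayer("Cloud Software Environment (platform)", "application developers",
               "managed runtimes, deployment platforms"),
    CloudLayer("Cloud Software Infrastructure", "system architects",
               "computational resources, data storage, communications"),
    CloudLayer("Software Kernel", "cloud operators",
               "OS kernel, hypervisor, VM monitor, clustering middleware"),
    CloudLayer("Hardware and Firmware (HaaS)", "large enterprises subleasing capacity",
               "physical servers and switches"),
]

for depth, layer in enumerate(STACK, start=1):
    print(f"{depth}. {layer.name} -> serves {layer.serves} ({layer.examples})")
```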
Full Disk Encryption EvolvesThe Opal standard paves the way for hardware-based encryption. Earlier this month, the Naval Hospital in Pensacola, Fla., began notifying thousands of individuals that personally identifiable information about them had been lost when a laptop disappeared. In August, the National Guard announced that a laptop containing personal information on 131,000 members had been stolen. We could go on--rarely does a month go by without an organization revealing the loss or theft of a laptop brimming with sensitive data. Full disk encryption, or FDE, is the preferred mechanism to address this threat because, as the name implies, the technology lets IT encrypt the entire hard drive so that sensitive data is protected, no matter where it resides. But unfortunately, FDE adoption comes at a price: complex and costly deployments, additional licensing fees, and one more application for IT to support. Now, adoption of a new standard for hardware-based FDE, called Opal, aims to alleviate some of that pain. The Need For FDE No organization can plead ignorance of encryption options. Microsoft Windows, Mac OS X, and Linux all have built-in support for file-system-level encryption. But while encrypting a file system, or providing an encrypted folder on an employee's laptop, is better than nothing, it still leaves too much to chance. Did the employee put all sensitive data into that target folder? Was anything left in caches or temporary directories? And perhaps most critical, without FDE, if a device is stolen or lost, how do you definitively know that all of the sensitive information it contained was encrypted? Short answer: You don't. Vendors including Check Point Software (via its PointSec acquisition), Guardian Edge, McAfee (via its Safeboot acquisition), and PGP offer software-based FDE suites that can help you avoid all these problems. With software-based FDE products, the data on the drive can only be accessed when the operating system is booted and the encryption keys unlocked. But the technology isn't perfect--software-based FDE also has drawbacks. First, a number of software FDE products don't support Linux or Mac OS X. Second, depending on the age and processing power of the laptop, the encryption process can slow down a machine. Finally, encryption keys are stored in the computer's memory, which makes them vulnerable to a class of so-called "cold boot" attacks, in which encryption keys are recovered in RAM. In January 2009, the Trusted Computing Group released the final specification of the Opal Security Subsystem Class, a standard for applying hardware-based encryption. Moving hard-drive encryption into hardware has a number of advantages. For starters, it works with any OS. It also moves the computational overhead of the encryption process to dedicated processors, alleviating any computing load on the system's CPU. In addition, the encryption/decryption keys are stored in the hard-drive controller and never sit in the system's memory, making "cold boot" attacks ineffective. Hardware-based FDE also simplifies the key escrow dilemma--that is, the need to manage encryption keys. Simply put, the keys used by the hard drive can be unlocked only by a passphrase entered during the pre-boot sequence. The passphrase is sent to the hard drive controller before the OS boots, so the keys never leave the hard drive's hardware. Also, multiple passphrases can be configured to unlock those keys. 
Note that software-based FDE products do allow you to choose the encryption algorithm and variable key strengths, while most Opal drives are limited to AES-128. We see this as being an issue only for organizations that require specific algorithms or larger key sizes.
HOW SOFTWARE AND HARDWARE APPROACHES COMPARE
Software-based FDE. Pros: widely deployed; flexible encryption options; strong management options. Cons: may not support all systems; potential performance impact; susceptible to cold boot attacks.
Hardware-based FDE (Opal). Pros: OS agnostic; great performance; immune to cold boot attacks. Cons: requires a new laptop; most drives only support 128-bit AES; limited management options.
IBM Takes Watson to Africa for Project Lucy
Michel Bezy, associate director for Carnegie Mellon University in Rwanda, said, "Africa is facing a double challenge: the lack of accessible data to support its economic development, and the lack of advanced skills in data analysis. IBM's work to share Watson with Africa will help to address both challenges."
The new pan-African CEDD will help here by leveraging the latest Watson cognitive technologies to provide its research partners with access to high-frequency data. This will enable scientists and analysts to more accurately calculate social and economic conditions and identify previously unseen correlations across multiple domains. Through the Project Lucy initiative, partners will be able to tap into IBM's expertise in cognitive computing across its 12 global laboratories and new Watson business unit. Two of the first focus areas of the new center are health care and education.
Regarding health care, IBM estimates that Sub-Saharan Africa is home to approximately 25 percent of the world's disease burden; yet the most common form of health care outside of cities is delivered by community health workers. CEDD will collect encyclopedic knowledge about traditional and non-traditional diseases in Africa. With access to Watson's cognitive intelligence, doctors, nurses and field workers will get help in diagnosing illnesses and identifying the best treatment for each patient, IBM said.
On the education front, IBM said half of African children will reach their adolescent years unable to read, write or perform basic numeric tasks. The key to improving these statistics is a thorough understanding of student performance, teacher expertise, attendance levels, class sizes, linguistic abilities and learning materials. While previous information systems have only provided a limited view of point problems, using Watson technologies CEDD aims to create new holistic approaches for analyzing data to identify previously unrecorded correlations. For example, Watson could identify the link between a contaminated water borehole, an epidemic of cholera and subsequent low levels of school attendance in the region. Watson could also help to uncover other causes of low school attendance in a particular region, such as a lack of sanitary supplies or cultural traditions placing childcare responsibility on older siblings.
This week IBM is also announcing other investments in the African innovation ecosystem with the opening of new IBM Innovation Centers in Lagos, Nigeria; Casablanca, Morocco; and Johannesburg, South Africa. These new centers aim to spur local growth and fuel an ecosystem of development and entrepreneurship around big data analytics and cloud computing in the region. In recognition of its role in driving data-driven growth and opportunity, this week Frost & Sullivan named IBM an Innovation Leader in Big Data and Analytics in Sub-Saharan Africa.
This focus on Africa is not a new one for IBM, which has been making long-term, strategic investments in the future and economic expansion of this rapidly growing region. IBM has operated in Africa since the 1930s, and today has a direct presence in more than 20 African countries and hundreds of clients such as Santam, RAWBANK in the DRC, Fidelity Bank and Surfline Communications in Ghana, Bharti Airtel across 17 African countries, and Morocco's Ministry of Economy and Finance. Over the next few years IBM plans to continue strengthening this network with new facilities, offerings and partnerships.
IBM recently organized an initiative asking people from across Africa to submit images which best illustrate Africa’s grand challenges and opportunities and help illustrate the mission of IBM’s new Africa Research Lab. "The World is Our Lab – Africa" project has generated more than 1,200 images from across 25 African countries helping to tell the other side of the continent’s story. To visit the project Website, go to: http://www.theworldisourlabafrica.com/ For example, according to IBM, women in sub-Saharan Africa account for 22 percent of all cases of cervical cancer worldwide mainly due to a lack of services and knowledge. Watson could provide new insights into the evolution of cervical cancer in Africa and suggest new approaches for its prevention, diagnosis and treatment.
DNS and DHCP - On the Server or The Firewall?
A few years ago, one of the major "truths" about our business changed. It had long been the wisdom that DHCP and DNS should be served from the Windows Server, specifically from the (primary) domain controller. The primary reason for this is that we ("we" being Windows engineers) find it very convenient to manage DHCP from the same place where we manage DNS. But DHCP does not have to be served from the same place as DNS.
Because they're inter-connected, we're going to talk about routers/firewalls, DNS, and DHCP. We'll cover the first two fairly quickly because they are not really up for debate. DHCP is another topic.
Routers and Firewalls
In many documents, Microsoft simply refers to "the router" to describe whatever device you point to in order to move data off the local area network. Strictly speaking this is a gateway. And for 95% of all the networks we work with, that gateway is a firewall. See the diagram.
There are basically three kinds of firewalls you'll come across: 1) Old, junky firewalls that are not very configurable; 2) Super powerful firewalls that can absolutely do whatever you want; and 3) Plug and Play firewalls that can be configured by automated scripts to do what you need them to do.
Most of the arguments in favor of putting DHCP on the server make reference to the first kind of firewall. This has literally become a straw man argument. These firewalls are almost non-existent today. This is particularly true when you consider how little we actually ask the firewall DHCP service to do. The high-end firewalls can, by definition, do what we need them to do. The only question is whether we choose to use that function.
The third kind of firewall - Universal Plug and Play or UPnP - has evolved only recently. UPnP is defined and promoted by the UPnP Forum, an industry collaboration that helps manufacturers develop devices that can discover each other and configure themselves automatically. See http://www.upnp.org/. UPnP has been around for most of a decade. It was published as an international standard in 2008 and has been refined considerably since then. Most consultants have not really paid attention to the evolution of UPnP. You can review the specifications at http://upnp.org/sdcps-and-certification/standards/sdcps/. You will want to look at the specific notes under Internet Gateway:1 and Internet Gateway:2.
One of the cool things that UPnP can do is to understand DNS and become a DNS forwarder. This includes the DNS portion of Active Directory. Don't get too far ahead of me here, but imagine if the server could automatically configure the UPnP firewall.
DNS Belongs on The Server
This discussion will be short and to the point: put DNS on the server. Now, really, you could make the firewall a backup DNS controller and have it get info from the server. But since you're all on the same network, it makes sense to just go to the server. DNS is critical for directory services. This is particularly true when the server is hosting a variety of functions, such as a Small Business Server. "//Companyweb" is not an entry you'll find in a lot of DNS servers. But if you don't have it in your in-house DNS server, you'll need to add it to a hosts file on each machine.
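A quick way to sanity-check this split between internal and public DNS is a short script. The sketch below assumes the third-party dnspython package (resolver.resolve() is the dnspython 2.x call; older releases use query()); the server address and the companyweb host name are placeholders for your own environment.

```python
import dns.resolver  # third-party package: pip install dnspython

INTERNAL_DNS = "192.168.1.2"            # placeholder: the domain controller's static IP
PUBLIC_DNS = ["8.8.8.8", "8.8.4.4"]     # Google Public DNS

def lookup(name, nameservers):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = nameservers
    resolver.lifetime = 3.0
    try:
        return [rr.to_text() for rr in resolver.resolve(name, "A")]
    except Exception as exc:            # NXDOMAIN, timeout, etc.
        return [f"lookup failed: {exc}"]

# Internal-only names (like companyweb) should resolve via the server...
print(lookup("companyweb.corp.example.local", [INTERNAL_DNS]))
# ...but public resolvers will not know them, which is exactly the point.
print(lookup("companyweb.corp.example.local", PUBLIC_DNS))
# External names should resolve either way, so workstations keep working
# if the server is down and they fall back to their secondary DNS.
print(lookup("www.microsoft.com", PUBLIC_DNS))
```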
Microsoft pretty much requires the following:
1) Primary DNS is on the server
2) All workstations point to the server for DNS
We like to add the following two items:
3) The server forwards requests to Google Public DNS (8.8.8.8 and 8.8.4.4) and NOT the ISP
4) Local workstations use Google Public DNS as their secondary
Number 3 is because ISPs are horrible at keeping clients informed when they change DNS addresses. In addition, this setup means you don't have to change anything if you change ISPs or your ISP changes your IP address. Number 4 increases the probability that workstations will be able to reach the Internet even if the server is unavailable. The only glitch is when the server is half-up, reachable by ICMP, but the DNS service is not responding. This is a very rare occurrence.
DHCP: Server or Firewall?
Without getting into details on some private conversations with people at Microsoft, let me just say that putting DHCP on the server was resulting in many, many calls to tech support. One goal for moving it was simply to create a more stable environment, thus resulting in fewer calls.
A big clue about where the industry standard is going: by default, all versions of SBS 2011 and Windows Server Essentials 2012 do not enable the DHCP function on the server. The official recommendation is that DHCP is on the router (firewall). In fact, these operating systems automatically configure the router with DHCP.
Since SBS 2008, the server has always been able to set up UPnP routers (firewalls). But since the protocol was very new in 2008, I think there was not much noise about it. See http://blogs.technet.com/b/sbs/archive/2011/09/22/running-dhcp-server-on-sbs-2011-essentials-with-a-static-ip.aspx. And see http://social.technet.microsoft.com/wiki/contents/articles/923.aspx.
Many firewalls also serve up wireless access, and that subnet needs to have its own DHCP. Putting both DHCP scopes on the same device (the firewall) allows that device to manage traffic between the wired and wireless subnets very efficiently.
If you have a plug and play firewall, these Windows Servers will configure the firewall to turn on DHCP, set up the appropriate IP range, and exclude the server's static IP. Note that DHCP will be set up with Dynamic DNS enabled, so both the firewall and the server will exchange information about the devices on the network.
On a Standard SBS Server, you have many options that need to be configured. Depending on which options you enable, the UPnP configuration will open only the necessary ports, including:
SMTP - TCP 25
HTTP - TCP 80
HTTPS - TCP 443
SharePoint via RWW - TCP 987
VPN - TCP 1723
RDP - TCP 3389
AND it will forward each of these ports to the Windows Server. So the ports are not just fully open, but can only be used to access the server. After the server is finished configuring the firewall with UPnP, the Windows Console collects and displays information about your firewall so you can verify it. To see this information, simply view the Internet connection properties.
NOTE: Even if your firewall does not have UPnP enabled, I believe you should put DHCP on the firewall. I only mention the UPnP information to make the point that this is the emerging default. So people with lots of money to spend on research think it's a good idea. :-)
What About VPN?
When I present this information in public, I'm always asked about VPN. Don't you have to enable DHCP on the server in order to use the server for RRAS or VPN?
The VPN service (RRAS) hands out IP addresses to anyone who dials in. If you know what you're doing, and have a reason, you can also hard code IP addresses for machines that dial in. But basically, the VPN service has its own little DHCP-like service that hands out addresses and a few scope options (DNS, gateway) to the machine calling in.
If you are suspicious about this, enable the RRAS role but not the DHCP role. You will still be able to connect. In fact, I'll bet you have some machines out there that are already configured that way and you just didn't know it.
Advantages of DHCP on the Server?
The primary advantage I hear about having DHCP on the server is that we find it very convenient to manage DHCP from the same place where we manage DNS. (See the first paragraph above.) Okay, that's fine. But think about it. You normally configure DHCP once and never again. You might make a little change here or there if you migrate a Primary Domain Controller. But having DHCP on the firewall makes that migration a lot easier. As you muck around with DNS settings between the old and new servers, have both servers up at once, and reboot the old and new servers for various reasons, DHCP will simply hum along on the firewall - and no one in the office will know the difference.
Technically, configuring both DHCP and DNS on the same machine might be a bit easier. But since you have to open a new screen for configuring the DHCP role or a new screen for configuring the firewall, it's all the same to me. I believe DHCP is more stable on the firewall, and that it makes the network more stable. The server is infinitely more likely to be rebooted than the firewall.
Implementing this policy is really just a matter of making everyone on the support team aware. You might write up a brief memo that says "It is our policy to serve DHCP from the firewall unless there is a specific reason to do otherwise." And then give a brief description of the preferred configuration, similar to what I posted above.
I would really like to hear alternatives to this policy. This is a topic that is very entrenched in our assumptions about networks. It's not a "religious" debate like HP vs. Dell, but really just a different view of how IP addressing will be served up going forward.
- - - - -
About this Series
SOP Friday is a series dedicated to helping small computer consulting firms develop the right processes and procedures to create a successful and profitable consulting business. Find out more about the series, and view the complete "table of contents" for SOP Friday at SmallBizThoughts.com.
- - - - -
Next week's topic: Labeling Equipment
DUMFRIES, SCOTLAND--(Marketwired - January 11, 2017) - 20 years since Diana, Princess of Wales walked through a minefield in Angola, The HALO Trust says that mines and unexploded ordnance are still harming civilians and hindering development in Angola and in 63 other countries and territories around the world. In September 2016 eight people from the same family were killed near Kuito, a town visited by Diana, when a child brought an anti-tank mine into his home. Another child was killed and a further two suffered amputations after encountering an unexploded mortar in Huambo City, only 5 km from where the princess visited. Diana's 1997 visit to Angola raised global awareness of the plight of landmine victims and the indiscriminate nature of the weapons. States came together later that year to sign the Mine Ban Treaty in Ottawa. Despite the Treaty's huge success in stopping landmine production and transfer, HALO says that the Treaty's proposed 2025 deadline for a mine free world will not be met without a substantial increase in funding for mine clearance. Staff from HALO, the world's largest and oldest mine clearance charity, were in the process of clearing the minefield Diana walked through on 15 January 1997. Since then HALO has destroyed more than 92,000 landmines, 800 minefields and 162,000 shells, missiles and bombs in Angola. The minefield where Diana walked is now a thriving community with housing, a carpentry workshop, a small college and a school. But there is still much to be done. Most of the cities in Angola have been cleared but rural areas remain heavily mined and over 40% of the population lives in the countryside. There are 630 minefields remaining in the eight provinces in which HALO works, and perhaps more than 1,000 minefields remaining across the country. A sharp decline in international assistance has forced HALO to reduce its local demining teams from 1,200 personnel to just 250 in the last few years. Today, fleets of armoured vehicles and specialist equipment are inactive due to lack of funds. Hundreds of trained Angolan de-miners are now unemployed. Meanwhile, estimates for the total number of casualties from landmines and explosive items in Angola vary considerably, from 23,000 to 80,000. The size of the country and length of its conflict have hindered efforts to keep reliable records. The slow progress of Angola clearance contrasts with that of Mozambique, which was finally declared free of mines in 2015 after 22 years of work by HALO and other operators. James Cowan, CEO of HALO, said: 'The world cannot turn its back on Angola now that Mozambique has shown us what can be done with the right commitment and determination. All people deserve to be free of the debris of war: its removal is the first step towards regrowth, development and peace. Yet 20 years after Diana's visit to Angola, children are still being killed and maimed by mines. 2017 is the year to re-focus, re-energise and finish the job. Together the world can achieve the Ottawa Treaty's vision of a mine-free world.' There are 64 states and territories affected by mines and other items of unexploded ordnance such as cluster munitions and improvised explosive devices (IEDs). Cambodia, Sri Lanka, Angola and Afghanistan are among the most severely mine-afflicted states in the world, while Syria, Yemen and Iraq face hugely insecure futures due to the prolific use of IEDs in contemporary conflicts. 
The Landmine Monitor recently reported that 6,461 people were known to be wounded or killed by landmines and other explosive remnants of war in 2015 -- a 75% increase from 2014 and the highest reported casualty total since 2006's figure of 6,573. For interviews with James Cowan and HALO personnel who accompanied Diana, images and infographics please call Louise Vaughan on 0044 7984 203075 or Paul McCann on 0044 7967 853217 or email email@example.com or firstname.lastname@example.org or visit https://www.halotrust.org/media-centre/ For US enquiries after 14:00 ET, please contact Amy Currin on 001.415.609.0696 Notes to editors: - The HALO Trust is grateful to the US Department of State PM/WRA and the Swiss Development Cooperation for their ongoing assistance in mine clearance in Angola. - The HALO Trust was founded in 1988 and its mission is to lead the effort to protect lives and restore livelihoods threatened by landmines and the debris of war. Image Available: http://www.marketwire.com/library/MwGo/2017/1/9/11G126790/Images/imageforhalotrust-2c6fa87360779d438ec7e3c701dde82e.jpg Image Available: http://www.marketwire.com/library/MwGo/2017/1/9/11G126790/Images/imageforhalotrust2-f01dee3fa7e7ae6e863e04b56b5acd2d.jpg Image Available: http://www.marketwire.com/library/MwGo/2017/1/9/11G126790/Images/imageforhalotrust3-6ca20567005b89b48af3abfd0fe037b8.jpg
In the previous discussion on QoS, the use of DiffServ Per-Hop Behaviors to mark packets was identified and discussed in detail. Today's post will identify the mechanisms used to implement QoS. The five main categories of tools used to implement QoS are as follows:
- Classification and Marking
- Congestion Management
- Congestion Avoidance
- Policing and Shaping
- Link Efficiency
Classification and Marking
Classification and Marking, although often lumped together as one, are two distinct QoS mechanisms. In general terms they identify and split traffic into different classes and mark the traffic according to desired behaviors.
Classification tools sort packets into different traffic types, to which different policies can be applied. Classification can be done at every node in the network or be implemented at the edge of the network when the packet enters it. Classification of packets can happen without marking the packets. Classification inspects one or more fields in the packet to identify the type of traffic the packet is carrying. After the identification process, the traffic is handed to the treatment application, such as marking, remarking, queuing, policing, shaping or a combination of these.
Marking writes a field within the packet, frame, cell or label to preserve the classification decision that was reached during the classification process. Marking is also known as coloring the packet: each packet is marked as a member of a network class so all devices throughout the rest of the network can quickly recognize its class. The marking process sets bits in the DSCP or IP Precedence field of each IP packet according to the class that the packet is in. Packets that are marked as high priority, such as voice packets, will generally never be dropped by congestion avoidance mechanisms. On the other hand, packets marked as low priority will be dropped when congestion occurs.
Congestion management queuing algorithms use the marking on each packet to determine which queue to place packets in. Each queue is given different treatment based on the class of packets in the queue. Congestion management tools are implemented on all output interfaces in a QoS-enabled network. Cisco IOS uses the following congestion management queuing methods:
- FIFO (First In First Out), Priority Queuing (PQ), Custom Queuing (CQ)
- Weighted Fair Queuing (WFQ)
- Class-Based Weighted Fair Queuing (CBWFQ)
- Low Latency Queuing (LLQ)
Congestion Avoidance monitors network traffic loads in an effort to anticipate and avoid congestion, and it is achieved through packet dropping. Typically, congestion avoidance is implemented on output interfaces where high-speed links intersect with low-speed links. Congestion Avoidance in Cisco products uses Weighted Random Early Detection (WRED), which avoids congestion by dropping low-priority packets and allowing high-priority packets to continue on their path.
Policing and Shaping
Policing and Shaping mechanisms are used to condition traffic before transmitting or when receiving traffic. Policers and shapers can work in tandem; they are not mutually exclusive. Policing controls bursts and conforms traffic to ensure each traffic type gets the prescribed bandwidth. In some cases policing can help service providers maintain service level agreements (SLAs) by throttling traffic in excess of the agreed SLA, that is, by dropping low-priority traffic. A toy illustration of the difference between policing and shaping follows.
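The sketch below is a simple token-bucket model in Python. It is not how Cisco IOS implements either feature, just an illustration of the behavioral difference: the policer discards traffic that exceeds the contract, while the shaper buffers it and releases it at the contracted rate.

```python
def police(packets, rate_bps, burst_bytes):
    """Token-bucket policer: conforming packets pass, excess packets are dropped."""
    tokens = burst_bytes
    last_t = 0.0
    passed, dropped = [], []
    for t, size in packets:                      # (arrival time in seconds, size in bytes)
        tokens = min(burst_bytes, tokens + (t - last_t) * rate_bps / 8)
        last_t = t
        if size <= tokens:
            tokens -= size
            passed.append((t, size))
        else:
            dropped.append((t, size))            # policing: excess is discarded (or remarked)
    return passed, dropped


def shape(packets, rate_bps):
    """Traffic shaper: excess packets are buffered and released at the contracted rate."""
    finish = 0.0
    out = []
    for t, size in packets:
        finish = max(finish, t) + size * 8 / rate_bps   # wait for the "link" to drain
        out.append((finish, size))               # shaping: nothing is dropped, only delayed
    return out


if __name__ == "__main__":
    # Five 1500-byte packets arriving back-to-back at t=0 against a 64 kbps contract.
    burst = [(0.0, 1500)] * 5
    print("policed:", police(burst, rate_bps=64_000, burst_bytes=3000))
    print("shaped: ", shape(burst, rate_bps=64_000))
```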
Policing is implemented with Class-Based Policing and Committed Access Rate (CAR). Shaping helps smooth out speed mismatches in the network and limits transmission rates. These mechanisms are typically used to limit the flow from high-speed links to low-speed links, to prevent the low-speed links from becoming overrun. Cisco QoS uses Generic Traffic Shaping (GTS) and Frame Relay Traffic Shaping (FRTS) to implement shaping.
Although not exclusively QoS tools, link efficiency tools are categorized as QoS tools because they are often used in conjunction with QoS. Both of the link efficiency tools were created outside the realm of QoS and were used as independent Cisco IOS tools.
Header compression is used to reduce the IP overhead of a Real-Time Transport Protocol (RTP) voice packet, which reduces the overall size of the IP packet. Large packets normally do not use header compression because the IP header is not significant compared to the payload of the packet. A short voice packet's IP header, however, can more than triple the overall size of the packet, which can increase the delay of transmitting the packet to its destination. An RTP packet has 40 bytes of IP overhead, broken out as follows:
- IP header = 20 bytes
- UDP header = 8 bytes
- RTP header = 12 bytes
- Total = 40 bytes
When compressed, the IP/UDP/RTP header is reduced to 2 or 4 bytes, depending on whether the cyclic redundancy check (CRC) is transmitted.
Link Fragmentation and Interleaving (LFI) is used to reduce delay and jitter on slower-speed links by breaking larger packets, such as FTP file transfers, into smaller packets and interleaving them with the small voice packets. LFI reduces serialization delay by fragmenting large packets such as file transfers on slow WAN links (768 kbps or less). If these large packets were allowed to continue unimpeded, the voice packets would exceed their delay and jitter tolerances, resulting in bad quality voice.
The next few entries of this QoS blog will explore the Cisco IOS Modular QoS Command Line Interface (MQC) and how to configure the QoS mechanisms explained in this blog.
References:
- End-To-End QoS Network Design, by Tim Szigeti and Christina Hattingh
- DiffServ - The Scalable End-To-End QoS Model
- Integrated Services Architecture
- Definition of the Differentiated Services Field
- An Architecture for Differentiated Services
- Requirements for IP Version 4 Routers
- An Expedited Forwarding PHB (Per-Hop Behavior)
Author: Paul Stryer
Sometimes the best design idea is to borrow someone else’s. The National Oceanic Atmospheric Administration did just that to illustrate Great Lakes currents. The visualization shows current flow and speed by drawing white lines across the blue background of the Great Lakes at six different speeds—on the day of this writing, the strong currents flowing across Lake Superior show clearly why the freighter Edmund Fitzgerald wanted to make Whitefish Bay by the Sault Ste. Marie Locks leading into Lake Huron before it sank in 1975. The visualization can also be switched from surface currents to depth-averaged currents, which will show how pollution would be moved. Developers at NOAA adapted code borrowed from the Wind Map visualization introduced in late March 2012 that depicts wind flow across the U.S. It’s also interactive—it lets you zoom into an area and scroll over it to find wind speed and direction. (“It’s beautiful to look at,” wrote Nathan Yau at FlowingData.com.) The Wind Map was developed by Martin Wattenberg and Fernanda Viegas, who created IBM’s Many Eyes visualization project and are now co-leaders of Google’s “Big Picture” data visualization project. Collaboration and Code Sharing In fact, there were two stages of borrowing to get to the Great Lakes map. An oceanographer named Rich Signell saw the Wind Map and got permission to use the code for a map of coastal currents in the U.S. Signell then told a colleague. “He said ‘hey, look what I did’”, says David J. Schwab, an oceanographer at NOAA’s Great Lakes Environmental and Research Laboratory in Ann Arbor, Mich. “And we looked at the Great Lakes and decided there was enough interest in Great Lakes currents, and we went ahead with it,” Schwab says. (His lab has not yet received formal permission to use the code, so it is piggy-backing on Signell’s permission to use it). Signell had written a script in Python to pull data on coastal currents from NOAA’s databases. In the Great Lakes lab, a research scientist, Gregory A. Lang, tweaked the Python script to work with its databases on Great Lakes currents (among other things, it changes wind speed to current speed). Lang also made it dynamic, because the Great Lakes data updates every six hours. Lang said it took him about three weeks, working occasionally on it, to make the modifications he needed. He tweaked the code to change the graphic’s legends, to plot depth average current versus surface current, and to do monthly averages. The hardest part of creating the visualization was learning Python, a scripting language Lang said he didn’t know. The visualization has required no maintenance since it was posted in early July, before the annual Port Huron to Mackinac (July 14) and Chicago to Mackinac (July 21) sailing races. Schwab says it received about 5,000 views a day when it was first posted. Lang has received emails from, among others, Tom Skilling, the weatherman at Chicago’s WGN, who wrote “How cool is this! It’s fascinating!” Other prospective users are less sanguine. A charter fisherman told the Great Lakes Echo that “for what we do, day to day fishing, I don’t see an application for it,” adding that he intends to continue using a current probe that he tosses over the side of his boat. But another charter captain posted in the comments on the story “I think it will help” explain movements of fish. Schwab says the lab had worked on other ways to represent the Great Lakes currents, using vector maps and pseudo colors. Vectors represent how forces move things. 
“But vector fields are notoriously hard to visualize,” he says. “That’s why we were impressed by this technology.” Schwab says the Great Lakes lab had come close to visualizing how currents circulated in the Great Lakes, but the Wind Map folks “did it in a very elegant way, and a way you can display on the Web very easily. “ Schwab, who studied computer science in the 1960s, called the current map “the kind of thing that you dreamed about” back then. “Now the day has come. It’s kind of cool.”
Most arguments for the imminent demise of the spinning disk cite the exponential increases in solid-state device density described by Moore's law, which states that device density in integrated circuits doubles every two years. They then argue that while flash may cost 20-70 times as much as capacity-oriented disks on a $/GB basis, Moore's law will drive the price of SSDs down so fast that they'll soon be just 2-5 times the cost of spinning disks, and at that cost no one will buy disk drives.
The main problem with that argument is that it ignores the fact that hard drives have followed a similar path to higher and higher densities, known as Kryder's law after former Seagate CTO Mark Kryder. Kryder's law states that disk bit density doubles annually. Even if flash SSD costs fall by a factor of three every two years while the cost of hard drive space continues to fall only by a factor of two, flash won't hit the 5x cost target until 2018 or so.
Then there are the issues that may limit how small NAND flash geometries can be reduced. Chips in general have reached geometries small enough that, in addition to pushing the limitations of lithography, manufacturers also have to consider quantum effects as they reduce the size of transistors and traces. Flash, in no small part because of the high voltages needed for block erasure, is also susceptible to stress-induced leakage current that can cause cells to lose their charge--and your data--as nearby cells are erased or even written. Smaller cell geometries mean cells are closer together and therefore more vulnerable to this effect.
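The projection in that argument is easy to reproduce. The sketch below uses an assumed starting ratio of 20x (the low end of the range cited) and an assumed starting year of 2012; under those assumptions the ratio only approaches the 5x threshold around 2018-2020, which is the article's point.

```python
# Rough reconstruction of the cost projection, under assumed starting conditions.
ratio = 20.0   # assumed: flash starts at 20x the $/GB of capacity disk (low end of the cited range)
year = 2012    # assumed starting year for the projection
print(f"{year}: flash = {ratio:.1f}x the cost of disk per GB")
while ratio > 5.0:
    ratio *= 2.0 / 3.0    # every two years: disk $/GB halves, flash $/GB falls to a third
    year += 2
    print(f"{year}: flash = {ratio:.1f}x the cost of disk per GB")
```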
<urn:uuid:67a91aad-0b1e-4615-9c82-59ebc4ce267e>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/hard-drives-have-future/44218497
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962312
332
2.71875
3
2.4.4 What is the RSA Secret Key Challenge? RSA Laboratories started the RSA Secret Key Challenge in January 1997. The goal of the challenges is to quantify the security offered by secret-key ciphers (see Question 2.1.2) with keys of various sizes. The information obtained from these contests is anticipated to be of value to researchers and developers alike as they estimate the strength of an algorithm or application against exhaustive key search.

Initially, thirteen challenges were issued, of which four had been solved as of January 2000. There were twelve RC5 challenges and one DES challenge, with key sizes ranging from 40 bits to 128 bits. The 56-bit DES challenge and the 40-, 48-, and 56-bit RC5 challenges have all been solved. The 56-bit RC5 key was found in October 1997 after 250 days of exhaustive key search on 10,000 idle computers. The project was part of the Bovine RC5 Effort headed by a group called distributed.net and led by Adam L. Beberg, Jeff Lawson, and David McNett.

In January 1998, RSA Laboratories launched DES Challenge II, a series of DES challenges to be released twice per year. It was expected that the time needed to solve each successive challenge would decrease substantially. Indeed, in February 1998, distributed.net solved RSA's DES Challenge II, using an estimated 50,000 processors to search 85% of the possible keys, in 41 days. In July 1998, the DES Cracker supercomputer designed by the Electronic Frontier Foundation (EFF) was able to crack RSA's DES Challenge II-2 in 56 hours. The same computer, assisted by 100,000 distributed.net PCs on the Internet, was able to crack DES Challenge III in only 22 hours; see http://www.eff.org/descracker.html. For more information about the challenges, send email to email@example.com or visit the web site at http://www.emc.com/emc-plus/rsa-labs/historical/cryptographic-challenges.htm.
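The scale of exhaustive key search is worth making concrete. The Python sketch below uses an assumed aggregate search rate, not the actual rates achieved by distributed.net or the EFF machine, to show how the expected effort explodes with key size.

```python
# Rough arithmetic on exhaustive key search. The keys-per-second figure is
# an assumption for illustration, not the rate of distributed.net or Deep Crack.

def expected_search_days(key_bits, keys_per_second, fraction_searched=0.5):
    """Expected days to find a key after searching the given fraction of the keyspace."""
    keyspace = 2 ** key_bits
    seconds = keyspace * fraction_searched / keys_per_second
    return seconds / 86400

if __name__ == "__main__":
    rate = 30e9  # assumed: 30 billion keys/second across many machines
    for bits in (40, 48, 56, 64, 128):
        days = expected_search_days(bits, rate)
        print(f"{bits}-bit key: ~{days:.3g} days on average")
```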
<urn:uuid:cedcac6b-76a4-41e1-bc50-731450a321a8>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-the-rsa-secret-key-challenge.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951334
441
3.46875
3
MASPT comes with lifetime access to course material and exercises on mobile application security. Enroll now and get access to all of our material and labs!

Before we dive into security and penetration testing, we will introduce you to the Android environment. There are a few key concepts you should be familiar with before we get started.

Prior to diving into Android application security, we need a means to examine, build, debug and run applications. For these purposes, we'll install the Android Studio IDE (Integrated Development Environment). Understanding how Android Studio compiles the code and resources into a working Android application will help you better understand how all the pieces fit together. This will also provide insight into the protections employed to guarantee the authenticity of applications and the circumstances under which they can be rendered meaningless.

In this section, we'll discuss the process of reversing Android applications. This is an important skill for anyone who wants to audit the security of third-party applications where the source code is unavailable.

Rooting is a process by which one obtains "root," or system-level, access to an Android device. In this module you will learn why it can be important for our security tests and also what the implications of rooting a device are.

In order to perform a thorough pentest of an Android application you must know and master all of its components. In this module you will study the fundamental concepts and topics you may encounter during your security testing tasks.

Mobile devices are unique in how they use networks, being almost exclusively wireless and often bouncing between cellular and Wi-Fi networks. To lower cellular data traffic, some cellular carriers provide Wi-Fi hotspots for their customers. Bad guys know this and will often set up fake Wi-Fi networks, tricking devices into connecting. In this module you will learn how to configure your environment in order to inspect and analyze network traffic.

How securely data is stored on mobile devices has become a hot topic lately. In fact, Insecure Data Storage is the second most common vulnerability, according to the OWASP Mobile Top Ten.

If you are familiar with clickjacking in web applications, you're already familiar with the basic concepts of tapjacking. In a tapjacking attack, a malicious application is launched and positions itself atop a victim application. In this module you will see some examples of tapjacking, and also how to develop an application that is protected against it.

Dynamic code analysis is the process by which code is reviewed for vulnerabilities by actually executing some or all of it. This execution could occur in a normal environment, a virtualized environment or a debugger. This type of inspection also lets you directly observe network requests, interactions with other applications and the results of any error conditions encountered.

Static code analysis is a process for programmatically examining application code on disk, rather than while it is running. There are numerous scientifically rigorous approaches to validating that code is free of errors. In this module you will learn how to perform security tests on Android applications using different static code analysis tools; a simple sketch of the idea appears after the iOS overview below.

To understand the iOS ecosystem, we need to realize that the iOS operating system is based on Darwin, which was originally written by Apple in C, C++ and Objective-C. Darwin is also at the heart of OS X, and thus OS X and iOS share a common foundation.

Jailbreaking is the process of actively circumventing or removing such restrictions and other security controls put in place by the operating system. This allows users to install unapproved apps (apps not signed by a certificate issued by Apple) and leverage more APIs, which are otherwise not accessible in normal scenarios.

Before we proceed, it is important to understand a few fundamental concepts unique to the Apple ecosystem, and more precisely related to the iOS app development process. Apple provides simulators for different hardware and iOS versions. In this module you will learn how the iOS build process works and what the differences are between running an application on a device and on the simulator.

There is an incentive for an attacker to examine and understand how software works, so that they can then look for further weak spots or patch and manipulate those binaries to their advantage. In this module you will see the most widely used techniques and tools to successfully reverse iOS applications.

In order to perform a thorough pentest of iOS applications you must know and master all of their components. In this module you will study how applications are composed and what each component is used for.

In this module you will start running your security tests against iOS applications. Depending on the target of your tests, you will learn different techniques and use multiple tools to reach your goal.

In this module you will learn how to configure your environment in order to inspect and analyze network traffic.

iOS 6 and later versions have built-in support for a powerful device-management capability with fine-grained controls that allow an organization to manage corporate Apple devices and the data stored on them. In this module you will see which options organizations have to get clear visibility into all active devices, ensure that the devices are in compliance, ensure that the software running on them is up to date, and much more.

There is a certain class of applications that has a significant amount of client-side logic built into it. Typical examples include word-processing software, image editors, games and utilities. In such cases, there is an incentive for attackers to examine and understand how the software works, so that they can look for further weak spots in the application or bypass restrictions that are applied locally.
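To give a flavor of the static-analysis approach referenced above, here is a minimal Python sketch that walks decompiled Android sources on disk and flags patterns that often indicate hard-coded secrets, world-readable storage flags or cleartext HTTP. It is an illustration only; the regex patterns and directory name are assumptions, and a real audit would rely on purpose-built tooling such as the QARK project mentioned in the instructor bios below.

```python
# Naive illustration of static analysis on decompiled Android sources:
# scan .java/.smali/.xml files on disk for patterns that often indicate
# hard-coded secrets or insecure storage flags. Not a substitute for a
# dedicated analyzer; patterns and paths are assumptions for the example.
import os
import re

SUSPICIOUS = {
    "hard-coded secret": re.compile(r'(password|secret|api[_-]?key)\s*=\s*"[^"]+"', re.I),
    "world-readable file": re.compile(r"MODE_WORLD_READABLE|MODE_WORLD_WRITEABLE"),
    "cleartext HTTP": re.compile(r'"http://[^"]+"'),
}

def scan_tree(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".java", ".smali", ".xml")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for label, pattern in SUSPICIOUS.items():
                for match in pattern.finditer(text):
                    findings.append((path, label, match.group(0)[:60]))
    return findings

if __name__ == "__main__":
    for path, label, snippet in scan_tree("decompiled_app/"):  # assumed directory
        print(f"{label}: {path}: {snippet}")
```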
During your tests you will have to: install, run and test each application; find security issues; and develop a proof-of-concept (PoC) exploit for each issue found.

Lab 2 | Locating Secrets | Android
Lab 3 | Bypass Security Controls | Android
Lab 8 | Insecure External Storage | Android
Lab 10 | FileBrowser and FileBrowserExploit | Android
Lab 12 | Leak Result | Android
Lab 13 | Vulnerable Receiver | Android
Lab 14 | Silly Service | Android
Lab 16 | Starting Lab | iOS

Tony is the Director of Security Engineering at Tinder and has 20 years of IT experience, including network engineering/security, systems administration, consulting and application security. He is recognized in the Android Security Acknowledgements and numerous responsible disclosure programs, such as those of Microsoft, Yahoo, WordPress and Uber. He is also the creator of and a core contributor to QARK. Speaker/Presenter: DefCon, Wall of Sheep, Black Hat London, Black Hat USA, BSides Las Vegas, DeepSec, Hack-in-The-Box, AppSec California and AppSec USA.

Tushar is a security enthusiast and currently works as a Senior Information Security Engineer at LinkedIn. He specializes in application security, with a strong focus on vulnerability research and assessment of mobile applications. Previously, Tushar worked as a security consultant at Foundstone Professional Services (McAfee) and as a senior developer at ACI Worldwide.

Francesco Stillavato is a Senior IT Security researcher and instructor at eLearnSecurity with 6 years of experience in different aspects of information security. His experience spans web application secure coding to secure network design. He has contributed to the Joomla project as a developer and has conducted a number of assessments as a freelancer. Publications: Francesco is the author of the Penetration Testing Professional and Penetration Testing Student courses and of Hera Lab scenarios. Education: Francesco Stillavato holds a Master's Degree in Information Security from Università di Pisa.

Enroll now and get access to all of our material and labs!

Any web browser is supported (for IE, version 8+ is required). If you run Kali Linux/Backtrack as a virtual machine you will need at least 2GB of RAM. A minimum internet speed of 512 Kbit/s is recommended for video streaming. For some of the iOS-related exercises you will need an iOS device (iOS 6+) and a Mac running OS X Mavericks. No physical devices are required for the Android section.

As soon as you enroll in one of our courses you are given access to private forums (subject to the plan selected) where you will find instructors and community managers available to help you 24/7. Support for billing, technical and exam-related questions is also provided by email.

All major credit cards, PayPal and bank transfers are supported. Installment plans are available.

Minor updates such as bug fixes or additional labs are provided for free. Major releases (e.g. an upgrade from 2.0 to 3.0) require an upgrade fee. We reserve the right to issue minor or major updates when we see the need. We only process refunds/chargebacks for fraudulent transactions.

Subscriptions let you split the enrollment fees over 3 or 4 months. You will receive new content with every billing cycle. If we don't receive payment within 14 days of the due date, the account will be frozen until payment clears. You can cancel your subscription at any time; however, you will lose access to the material you purchased in the meantime. There are no hidden fees.
If you are from a country where VAT is required (most EU countries), you have to add VAT to our ticket price. We are legally obligated to collect VAT on your purchases.
<urn:uuid:b867c221-0484-4fcd-ba3c-7209d7248edc>
CC-MAIN-2017-04
https://www.elearnsecurity.com/course/mobile_application_security_and_penetration_testing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920843
2,009
2.6875
3
"Planet X" sounds like something from a comic book or intergalactic space opera. It's actually a hypothetical planet scientists have used to explain a variety of astronomical events. Unfortunately, it doesn't exist. NASA's sky-surveying spaceship has turned up all sorts of findings—but a giant gas planet out past Neptune isn't one of them, the agency announced Friday. Since the early 20th century, scientists have used Planet X as the possible answer to unexplained events in our solar system. Mass extinctions, some said, may have been caused when this unseen planet ran through a mass of comets and redirected them toward Earth. Others have used Planet X to explain irregular comet orbits, as well as the orbits of Uranus and Neptune. But NASA said Thursday it's found nothing larger than Saturn out to the distance of 930 billion miles. (Pluto—the original, then discounted, Planet X—is on average only 3.7 billion miles from the sun.) NASA reached its conclusions from scanning hundreds of millions of objects spotted by WISE—a spacecraft that uses infrared light to survey the sky. The ship has found thousands of stars—including many unknown ones close to Earth, "hiding in plain sight"—as well as millions of other observations like galaxies and asteroids. The ship was recently renamed NEOWISE and has moved into asteroid-hunter mode, helping us better track nearby flying rocks and looking for a candidate on which to land astronauts in the next decade.
<urn:uuid:ce3bc35e-1821-49e6-891e-0bb190e577cb>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2014/03/nasa-sorry-plutos-replacement-isnt-real/80188/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00076-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953655
305
3.546875
4
"G" to Work"> Getting "G" to Work The 802.11g task force had to overcome significant engineering challenges when it was establishing the standard. The biggest challenge was to ensure high-speed performance while providing backward compatibility with the large installed base of 802.11b networking products, which share the same radio spectrum. To solve this problem, the IEEE is relying on a mechanism that was part of the original 802.11 specification: CTS/RTS (clear to send/request to send). Think of it as a wireless handshaking mechanism, much as in the old RS-232 serial days. In protected mode (or mixed mode, as its also called), the access point uses CTS/RTS to give clients access to the airwaves. Other changes also have to take place to ensure interoperability between "b" and "g" clients. The slot time, the time between packets, is increased from 9 microseconds (used by 802.11g clients operating in a pure "g" environment) to 20 microseconds (the slot time used by 802.11b clients). This means that 802.11g clients operating in a mixed-mode environment (with its associated overhead) will have poorer throughput than those operating in a "g"-only modeeven if the 802.11b clients present arent sending any traffic. to view compatibility test results. Indeed, the major issues with 802.11g products concern interoperability, both with legacy 802.11b clients and among different makes of equipment. To deal with legacy 802.11b products, the 802.11g draft specifies a protected mode. 802.11g, as previously noted, uses OFDM, whereas 802.11b uses DSSS (direct sequence spread spectrum). Unfortunately, radios using these different transmission methods dont "hear" one another.
<urn:uuid:2b1a96b9-2778-40b8-b21a-b6df4071e735>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Early-80211g-Entries-Deliver-the-Speed-But-Require-Some-Maintenance/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946272
380
2.8125
3
How Michigan set the pace for state public safety networking - By Patrick Marshall - Jul 03, 2014 First of two parts. Whether it was airliners bringing down the Twin Towers or Hurricane Katrina slamming into New Orleans, disasters over the past 15 years have demonstrated both the importance and vulnerability of public safety communications. Emergency personnel responding to the collapse of the Twin Towers were severely hampered by overloaded radio channels and incompatible communications equipment. And according to Eddie Compass, the New Orleans police superintendent during the chaos that followed Hurricane Katrina, his department had no communications at all for days, a lack he described as nearly as catastrophic as running out of ammunition. In contrast, when a blackout struck on August 14, 2003, disrupting power in the Northeastern and Midwestern United States, public safety personnel in Michigan barely noticed any impact on communications. "All of our [transmission] sites have redundant power, with generator power as well as commercial power," said Bradley Stoddard, director of Michigan's Public Safety Communications System (MPSCS). "Many of our sites in Southeast Michigan lost commercial power and kicked over to generator power. End users on the network had no idea. There was no loss of communications whatsoever." Michigan, in fact, has long been a leader in developing public safety communications systems. Stoddard attributes that success to the state’s ability to keep an eye on economies of scale and adhere to standards that paved the way for expansion of shared public safety networking across the state. “It really started in 1928,” Stoddard said. “The city of Detroit had the first public safety radio communications in the United States and, I would venture, probably the world.” In 1928, of course, state of the art meant one-way radio communications from the police station to the patrol car. Nevertheless, according to Stoddard, Michigan’s state police force was so impressed by what it saw in Detroit that it pushed for similar capabilities. Michigan again led the way in the 1940's, being one of the first states to install two-way radio communications in patrol cars. "At that time," noted Stoddard, "mobile radios were very large, as were the base stations." While radio communications equipment gradually became smaller, lighter and better performing, the system set up in the 1940s remained fundamentally unchanged until the mid-1980s, when state police, noting the increasing mobility of criminals, wanted troopers to have the ability to communicate statewide, instead of just within jurisdictions. And state officials didn't want a system that just connected state police to each other, Stoddard said. They envisioned a network that local police as well as other agencies at state and local levels could join. Shared communications services "The governor's office saw that the state police had a radio system, the Department of Natural Resources had a radio system and the Department of Transportation had a radio system," Stoddard said. "It became an issue of economies of scale. Why does everyone need to have their own radio system? Why don't we build one new system that provides statewide capability and then collapse those systems and bring the state agencies together?" The challenge was that across jurisdictions agencies were using different, and in many cases not interoperable, equipment. 
It wasn't until 1989 that a coalition of federal agencies and public-safety professional associations established Project 25, a set of standards for digital radio equipment that made a statewide system feasible. The Project 25 suite of standards involves digital land mobile radio (LMR) services for local, state and federal public safety agencies. In such systems, radios can communicate in analog mode with legacy radios and in either digital or analog mode with other P25 radios. "By the mid-1990s, the RFPs went out for the system," Stoddard said. As a result, Michigan state agencies were ahead of the game when the events of September 11, 2001, occurred, thanks to the state's network of microwave radio transmission stations. That piqued interest in the legislature in determining whether local public safety agencies could leverage the same statewide radio system that state agencies had access to, according to Stoddard. Between 2002 and 2014, as agencies and local jurisdictions replaced equipment with Project 25-compliant systems, the statewide digital voice IP system grew to cover 57,000 square miles of Michigan using 244 microwave transmission towers across the entire state. "In 2002 we had 152 agencies, both state and local, utilizing the system and roughly about 11,000 radios," Stoddard said. "Today we have 1,460 agencies representing local, state, federal, tribal and private, and roughly 67,000 radios. So just in a dozen years we have seen monumental growth." Next: Tech decisions driving Michigan's public safety expansion
<urn:uuid:203539f1-cb0a-4bf4-8522-510b5d13d18e>
CC-MAIN-2017-04
https://gcn.com/articles/2014/07/03/michigan-public-safety-network.aspx?admgarea=TC_STATELOCAL
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966319
985
2.59375
3
As used in this subpart, the following terms have the following meanings:
Access means the ability or the means necessary to read, write, modify, or communicate data/information or otherwise use any system resource. (This definition applies to "access" as used in this subpart, not as used in subparts D or E of this part.)
Administrative safeguards are administrative actions, and policies and procedures, to manage the selection, development, implementation, and maintenance of security measures to protect electronic protected health information and to manage the conduct of the covered entity's or business associate's workforce in relation to the protection of that information.
Authentication means the corroboration that a person is the one claimed.
Availability means the property that data or information is accessible and useable upon demand by an authorized person.
Confidentiality means the property that data or information is not made available or disclosed to unauthorized persons or processes.
Encryption means the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key.
Facility means the physical premises and the interior and exterior of a building(s).
Information system means an interconnected set of information resources under the same direct management control that shares common functionality. A system normally includes hardware, software, information, data, applications, communications, and people.
Integrity means the property that data or information have not been altered or destroyed in an unauthorized manner.
Malicious software means software, for example, a virus, designed to damage or disrupt a system.
Physical safeguards are physical measures, policies, and procedures to protect a covered entity's or business associate's electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.
Technical safeguards means the technology and the policy and procedures for its use that protect electronic protected health information and control access to it.
User means a person or entity with authorized access.
Workstation means an electronic computing device, for example, a laptop or desktop computer, or any other device that performs similar functions, and electronic media stored in its immediate environment.
Need help with your HIPAA Compliance Initiative? Talk to our Expert.
<urn:uuid:fb734a36-e4f7-4b48-a8cd-41dae1f5720c>
CC-MAIN-2017-04
http://www.hipaasurvivalguide.com/hipaa-regulations/164-304.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903229
441
2.625
3
As companies grow more reliant on IT systems, they become more vulnerable to the heavy costs and harmful impacts of system failures. It is therefore essential to measure, track, and improve the amount of time a system is functioning properly. By using key IT metrics to measure availability, companies can evaluate how well their systems currently resist downtime, identify areas that require attention, and improve overall system efficiency. Availability is the proportion of time a system is working at full functionality during the time it is required to do so. The key metrics involved in measuring availability are Mean Time Between Failures (MTBF), sometimes referred to as Mean Time to Failure (MTTF), and Mean Time to Repair (MTTR).
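The article stops short of giving the formula, but the standard steady-state relationship is availability = MTBF / (MTBF + MTTR). A minimal sketch with illustrative numbers:

```python
# Availability from MTBF and MTTR, using the standard steady-state formula
# availability = MTBF / (MTBF + MTTR). Input figures are illustrative.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_per_year_hours(avail):
    return (1 - avail) * 365 * 24

if __name__ == "__main__":
    a = availability(mtbf_hours=720, mttr_hours=2)   # e.g. one two-hour outage a month
    print(f"availability: {a:.4%}")                  # ~99.72%
    print(f"expected downtime/year: {downtime_per_year_hours(a):.1f} hours")
```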
<urn:uuid:fd22feee-ef9c-4328-a6d3-e125645191e8>
CC-MAIN-2017-04
https://www.infotech.com/research/key-metrics-for-measuring-system-availability
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945552
144
2.71875
3