Montserrat is one of the many islands sprinkled among the Caribbean West Indies. In just 39 square miles, it boasts mountains, rain forests, beaches and groves of bananas, mangoes and coconuts. The air temperature rarely dips below 78 degrees and neither does the water.
In short, Montserrat is paradise. Or it would be but for the Soufriere Hills volcano, which erupted for the first time in July 1995 and hasn’t stopped since.
Soufriere Hills has rendered nearly two-thirds of the island—an area now called the Exclusion Zone—uninhabitable. Since 1995, the island’s population has fallen from 11,000 to 4,000. The volcano has buried Plymouth, the former capital. It killed 20 people in one violent belch in 1997. It has suffocated the economy, once driven by tourism and rock stars like Sting, the Stones and Paul McCartney, who partied and recorded music there at Air Studios, the recording facility once owned by the Beatles’ former producer George Martin but now buried by the volcano.
This dichotomy—Eden on one side of the island, the fires of Hell on the other—makes Montserrat a perfect laboratory for risk analysis. Just as much of Montserrat is buried in ash, it’s also buried in probabilities. Scientists know, for example, that there’s only a 3 percent chance that Soufriere Hills will stop erupting in the next six months. They also know there’s a 10 percent chance of injury from the volcano at the border of the Exclusion Zone, and they can draw an imaginary line across the island where the threat from the volcano equals the threat from hurricanes and earthquakes.
"Thirty years ago, you needed the biggest computer in the world to do the statistical risk analysis," says Willy Aspinall, who helped develop these figures in the shadow of Soufriere Hills. "Now all you need is a laptop and a spreadsheet." He says the risk calculations get better and more textured all the time. He uses Monte Carlo risk analysis simulation software and spreadsheets to quantify the risk levels that help decision-makers minimize the volcano’s threat to people’s lives.
If this type of risk analysis is good enough for Aspinall, it ought to be good enough for CIOs, especially now that they’re working in an economic environment looming as ominously over their businesses as Soufriere Hills looms over Montserrat. For the most part, though, CIOs have not adopted statistical analysis tools to analyze and mitigate risk for software project management.
This is why they should.
Experts will tell you that statistical risk analysis is as essential to real portfolio management as a processor is to a computer. Without it, portfolio management is simply a way to organize the view of projects that will almost certainly fail. CIOs who are serious about portfolio management need to be serious about statistical risk management. (For more on portfolio management, see "Portfolio Management: How to Do It Right" at www.cio.com/printlinks.)
"If you don’t succeed with risk management, you won’t succeed with project portfolio management," says Raytheon CIO Rebecca Rhoads, who credits risk management with lowering her project failure rate and helping Raytheon IT achieve its cost-performance targets. Rhoads is ahead of the curve, but despite her engineering background, she has yet to apply the kind of sophisticated statistical analysis that Aspinall uses for his volcano.
Robert Sanchez, senior vice president and CIO of Ryder, credits risk analysis with bringing order to his company’s decision-making process for projects. He would welcome statistical analysis, but he’s not there yet. "Have we really embraced it completely and understood it in all of its detail?" Sanchez asks rhetorically. "No, we haven’t. But we will."
CIOs should become familiar with two statistical tools. They are the colorfully named workhorses of risk analysis: Monte Carlo simulation and decision tree analysis. Probabilities figure heavily into both, which means that risk has to be quantified. CIOs must draw their own line between the Exclusion Zone, where it’s too risky to venture, and the beaches, rain forests and coconut groves, where the living is easy and the threats are manageable.
The Trap of Common Sense
Even a simple task like choosing to drive to work requires a risk assessment, although not a computational one; you can do shorthand probability in your head. Though the cost of being wrong is high, the risk is relatively low (a 5 percent probability of being seriously hurt in a car accident) and easily mitigated by wearing a seat belt.
This sort of informal risk analysis can sometimes be useful. Steve Snodgrass, CIO of construction materials supplier Granite Rock, has the misfortune of managing IT for a company that literally straddles the San Andreas Fault. Snodgrass doesn’t need statistics to tell him that it would be a bad idea to do nothing to mitigate the possibility that a quake will take out his critical applications. So he outsources his applications’ backup far from the fault line.
However, CIOs often use this kind of commonsense reasoning as a way to avoid doing real risk analysis, say Tom DeMarco and Timothy Lister, authors of Waltzing with Bears: Managing Risk on Software Projects, a primer on statistical risk analysis for IT. "It’s been very frustrating to see a best practice like statistical analysis shunned in IT," says Lister. "It seems there’s this enormously strong cultural pull in IT to avoid looking at the downside."
In lieu of choosing projects based on acceptable risk, Ryder’s Sanchez says, IT often uses what he calls the moral argument, in which the greatest risk lies in not doing the project. Therefore, the risk is mitigated by doing the project. This reasoning was particularly valid during the boom years when there was a palpable fear of getting left behind technologically. But it was never called risk analysis. "I came into IT and was never really comfortable with the moral argument," says Sanchez, whose background is in engineering and finance. "I was looking at it thinking, We analyze the risk of building a new office, but we don’t on an ERP system that costs the same amount."
How to Create a Risk Analysis Process
As the director of foreign exchange at Merck, Art Misyan uses statistical risk analysis for evaluating the impact of foreign currency volatility. Like Sanchez, he’s puzzled by IT’s laissez-faire attitude toward risk analysis. "Risk gives you the ability to look at a whole range of outcomes, but IT looks at only two possible outcomes," he says. "Either you hit deadlines or budgets, or you don’t."
IT needs to think in probabilities, Misyan says, not ones and zeros. The best way to start is for the CIO to formalize the risk process. "First you have to set up a process to determine and track risks," he says. The good news is that much of the risk process is built into project management methodologies CIOs have been adopting anyway, so it should be familiar. Here are the basics for developing a risk analysis process.
Gather experts to determine project risks. These brainstorming sessions should be free and creative. "You want the pessimist in the group, the dark cloud," says Anne Rogers, director of information safeguards at Waste Management, who teaches risk analysis. "You want the person that will ask, What if a truck ran into the building?"
When you don’t ask the off-the-wall question, you run the risk of smacking into it. "Motorola gambled on developing Iridium satellite phones and charging $7 a minute," recalls DeMarco. "No one seemed to wonder what would happen if cell phones came along offering similar service for 10 cents a minute and free nights and weekends."
Assign researchers to uncover known risks. "We came up with 20 or 30 risks we knew we’d face by research," says Sandy Lazar, director of key systems for the District of Columbia, who is overseeing a five-year, $71.5 million administrative systems modernization program (see "Get a Grip on Risk" at www.cio.com/printlinks). "If you read up, you realize ERP has failed over and over for the same reasons for 15 years now." In fact, there are five typical risks to software projects that every CIO should include in a risk analysis (see "The Five Universal Risks to Software Projects," Page 62).
Divide risks into two categories—local and global. The risk of staff turnover during a project is a local risk. War is a global risk. Often, those new to risk analysis focus only on the local risks, but they need to consider the global risks and their impact.
Create a template for each risk. The template should include a unique risk number, a risk owner, potential costs (in dollars and other terms), a probability of occurrence (a low-medium-high scale will do at this point), any potential red flags or signs that the risk is materializing, mitigation strategies and a postmortem for noting if the risk factor actually happened. (A good example of such a template can be found in Waltzing with Bears. See "Risk Control Form" at www.cio.com/printlinks.)
One important footnote for developing this process: Value consistency over accuracy. If you do things in a consistent manner and the numbers are off, at least they’ll be off in a consistent—and therefore fixable—way. "The process," says Raytheon’s Rhoads, "is so much more important than the math rigor. Mature, consistent processes—you need that first."
How to Use Monte Carlo Simulations
Once you have a repository of project risks, you can get statistical. The most commonly used tool for this is the Monte Carlo simulation. This technique was developed in the 1940s for the Manhattan Project. It’s used today for everything from deciding where to dig for oil to optimizing the process of compacting trash at a waste treatment facility. It’s a deceptively simple but powerful tool for risk analysis. All Monte Carlo really does is roll the dice (hence the name).
Here’s the theory: Roll a die 100 times, and record the results. Each face will come up approximately one-sixth of the time—but not exactly. That’s because of randomness. Roll the die 1,000 times, and the distribution becomes closer to one-sixth. Roll it a million times, and it gets much closer still.
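You can watch that convergence happen yourself. This short Python sketch (an illustration added here, not part of the original article) rolls a simulated die 100, 1,000 and 1,000,000 times and prints each face's share of the results:

```python
# Simulate die rolls and watch each face's share approach 1/6 as the
# number of rolls grows.
import random
from collections import Counter

for n in (100, 1_000, 1_000_000):
    counts = Counter(random.randint(1, 6) for _ in range(n))
    print(n, {face: round(counts[face] / n, 3) for face in range(1, 7)})
```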
The die represents risks—albeit evenly distributed, predictable risks—where each side has about a one-sixth probability of occurrence or a five-sixths probability of not occurring. What if each die were a project risk and each side represented a possible outcome of that risk? Say one die was for the risk of project delays due to staff turnover. One side would represent the possibility that the project is six months late because of 20 percent turnover. Another side could represent a two-year delay due to 80 percent turnover. The die could also be unevenly weighted so that certain outcomes are more or less likely. There would, of course, be dice for other risks—sloppy development, budget cuts or any other factor unearthed during preliminary research.
Monte Carlo simulators "roll" all those risks together and record the combined outcomes. The more you roll the dice, the more precise the resulting distribution of possible outcomes becomes. What you end up with resembles an anthill (see "The Shape of Risk," Page 64), where the highest point on the curve is the most likely outcome and the lowest ends are possible but less likely.
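Here is a minimal sketch of that idea in Python. The three risks and their weights are invented for illustration; a real model would use the outcomes and probabilities unearthed during your research:

```python
# Hypothetical Monte Carlo run: each project risk is a weighted "die" whose
# faces are months of schedule delay. One trial samples every risk and sums
# the delays; many trials build the anthill-shaped distribution.
import random
from collections import Counter

risks = {
    "staff turnover":     {0: 0.60, 6: 0.30, 24: 0.10},  # delay -> probability
    "sloppy development": {0: 0.50, 3: 0.35, 9: 0.15},
    "budget cuts":        {0: 0.70, 4: 0.20, 12: 0.10},
}

def sample(outcomes):
    return random.choices(list(outcomes), weights=list(outcomes.values()))[0]

trials = 100_000
totals = Counter(sum(sample(o) for o in risks.values()) for _ in range(trials))
for delay in sorted(totals):
    print(f"{delay:3d} months late: {totals[delay] / trials:6.1%}")
```

The most frequent totals form the peak of the curve; rare combinations of bad luck form the long tail.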
Once you determine a project’s risk profile, you can build in extra resources (like money and time) to mitigate the risks on the highest points of the curve. If the distribution says there’s a 50 percent probability the project will run six months late, you might decide to build three extra months into the schedule to mitigate that risk.
Monte Carlo simulators also let you run "sensitivity analyses"—rolling only one die while keeping the others fixed on a particular outcome to see what happens when just one risk changes. A health-care company (that requested anonymity) using a Monte Carlo simulator from Glomark ran a sensitivity analysis for a pending software project. Each die was rolled, one at a time, 500 times while the other dice were kept fixed on their most likely outcomes. The exercise showed that three of the nine risks represented 87 percent of the potential impact on the project—allowing the company to focus its energy there.
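The sketch below repeats that exercise on the invented risks from the previous example: every risk but one is pinned to its most likely outcome, and the remaining die is rolled 500 times to see how much the total swings.

```python
# One-at-a-time sensitivity pass (illustrative figures only).
import random

risks = {
    "staff turnover":     {0: 0.60, 6: 0.30, 24: 0.10},
    "sloppy development": {0: 0.50, 3: 0.35, 9: 0.15},
    "budget cuts":        {0: 0.70, 4: 0.20, 12: 0.10},
}

def sample(outcomes):
    return random.choices(list(outcomes), weights=list(outcomes.values()))[0]

def most_likely(outcomes):
    return max(outcomes, key=outcomes.get)

for name, outcomes in risks.items():
    fixed = sum(most_likely(o) for other, o in risks.items() if other != name)
    rolled = [fixed + sample(outcomes) for _ in range(500)]
    print(f"{name:20s} swing: {max(rolled) - min(rolled):3d} months")
```

Risks with the widest swings are the ones worth the most mitigation effort.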
You can (and should) repeat Monte Carlo simulations for all the projects in your portfolio, ranking them from riskiest to safest. This will help you generate an "efficient frontier"—a line that shows the combination of projects that provide the highest benefit at a predetermined level of risk—something like the line across Montserrat. An efficient frontier helps you avoid unnecessary risk. It will help stop you from choosing one project portfolio that has the same risk but lower benefits than another.
Admittedly, this description glosses over some of Monte Carlo’s dirty work. Someone has to determine which dots to put on the dice and how to weight the individual dots. That’s your job. Canvass your experts, mine historical data, and do whatever else you can to come up with possible outcomes from each risk, and then estimate the probability of that result occurring. In other words, the risks themselves are a range of outcomes contributing to a further range of possible outcomes for any given project, or even combinations of projects.
Secured green power for any location
Delta’s new RenE (renewable hybrid) solution is a modular power supply concept that uses renewable energy as the sole source, or renewable energy together with other energy sources such as mains power or diesel generators. In the first phase, energy is produced with solar panels.
The stand-alone solar power system consists of a solar panel, a solar charger with a maximum power point tracker to get the most power from the sun, and batteries to provide energy during the night.
In addition to the stand-alone system, a hybrid solution is available. In this solution, the energy source options include any combination of solar power, wind turbines, AC utility, diesel generators and fuel cells. The power system controller manages all the devices in an optimal manner. The hybrid solution provides all the benefits of renewable energy while securing uninterrupted power for telecommunication services.
In addition to a smaller carbon footprint and lower operating costs, renewable energy provides other advantages. For instance, alternative energy sources ensure reliable telecom services even in areas where AC utility power is unreliable or not available at all.
Since its prediction in a 1975 BusinessWeek article, the paperless office has proven to be the stuff of fantasy. In the last 10 years, office paper consumption has barely dipped in North America and has increased globally, according to industry groups. Even with such technological advances as the desktop workstation, the tablet and Internet, the paperless office seems to be moving farther out of reach. Computer scientists and ergonomic specialists have found that the obstacles are not technological. We have all of the machinery to completely remove paper products from the workspace. It is the human factor that has stopped us from obtaining a truly digital organization.
The task is no longer the elimination of paper from an office workspace. It is about creating a psychological process, individually and culturally, that utilizes data to its highest potential. Just moving data from point A to point B is not enough. The information needs to be accessible anywhere and over multiple platforms.
To see how office processes are evolving, start with the Xerox Corporation, circa 1970. The company assembled a team of information and physical scientists and created the Palo Alto Research Center (PARC) to design innovations in data technology. One of the first significant developments to come out of PARC was the laser printer. This substantially changed the office workplace by allowing users the ability to render digital copies of their work.
As computers gained power, so did the technology for transmitting data. By 1973, PARC offices included personal computing workstations and an Ethernet file-sharing system. This was a significant cultural shift. At this time, there was no environmental concern that required paper reduction, and tablet technology was the stuff of science fiction. PARC’s goal was to create the office of the future. PARC would go on to introduce the graphical user interface (GUI), with widespread applications across the technology industry.
In trying to create the office of the future, PARC changed the way the world communicates. It produced a data management environment in which paper was just one of the information storage and transmittal devices.
In his book Ethical and Social Issues in the Information Age, computer scientist J.M. Kizza defined virtualization as a process through which something can be created that is there in effect and performance, but not in reality. It is in this new reality that the goal of a paperless office has gone awry. The removal of paper from a company is too simplistic a goal. The new office requires processes that integrate the workers into the system, effectively making them part of the data analysis mechanism.
Forbes lays out five relatively straightforward technological processes that can be used to get closer to a paperless office.
1. Create internal document-sharing.
2. Use paperless statements and electronic bill pay.
3. Utilize electronic file-sharing and storage.
4. Deliver meeting handouts electronically.
5. Use scanning and faxing instead of snail mail.
These are all practical and useful suggestions for creating a paperless office, but they take on a more significant meaning when looked at as social and ergonomic functions. Each of these steps creates a digital, global relationship between workers. Using electronic data-sharing means there is a nearly instantaneous exchange of information between co-workers anywhere in the world. This creates a de facto virtual office, whether the workers recognize it or not. It can also be unsettling for workers who must continually protect their identities in a virtual world where identity theft is commonplace.
The paperless office has not become practical, because society has not evolved its processes enough to step into this virtual new world. With pun fully intended, take a page from the digital publishing industry. A graph of sales versus time for the last 10 years resembles the letter U. The left side of the graph shows a high point in sales, as the technophiles purchase books in digital media, and schools begin offering textbooks in this format. There is a rapid decline in sales that correspond to the introduction of commercial e-readers, such as the Kindle and Nook. One of the major criticisms of this format is that it did not feel right in the hand. It did not feel like a book. Over the next five years, consumers became accustomed to the design of the e-readers, which is reflected by the consistent sales growth of digital books. It is this kind of cultural shift that proponents of paperless environments are hoping for.
Technological changes can have profound cultural effects — the launch of Google Glass is an example. During the launch, Google co-founder Sergey Brin said smartphones are emasculating, because texting or Internet searching on a phone requires the user to maintain a head-down position -- a subservient posture. This is one example of technology creating a cultural shift. It is not unforeseeable for an office worker to be equipped with a work version of Google Glass, possibly replacing the traditional workstation. A person could type in mid-air and, with a flick of the wrist, send a document anywhere in the world.
Tablets may be the next great hope for shifting the culture to a digital one. Gavin Whatrup, group IT director at the marketing services company Creston, says the paperless office will not be realized until paper can be replaced with something that has the same dimensions and general feel. With tablets having the same approximate reading dimensions and becoming increasingly slimmer, they may finally be able to replace a sheet of paper in the office.
The question is not what happened to the paperless office, as much as whether we want to enter this new virtual world. Like splitting the atom, the search for the paperless office means altering the world as we know it. Wearing glasses with digital technology and carrying around a tablet would give the office a science fiction feel that would profoundly change the way we work on a daily basis.
Harold Clinton works as a business adviser at a large marketing firm in Phoenix.
Why Congress Should Act to Ensure Net Neutrality
TITLE II: YEARS OF UNCERTAINTY, LITIGATION AND CONSUMER HARM
The Internet is the fastest deploying technology in world history. It's a 21st Century engine of innovation that provides an open platform for entrepreneurs, visionaries, and kids in their garages to follow their dreams. And it didn't happen by accident.
The Internet works for Americans because government wisely chose to let the web grow and thrive without burdensome regulation that can increase consumer bills, choke progress and smother innovation. But instead of continuing this path of tremendous success, the FCC recently approved a massive regulatory regime that piles on thousands of new rules, all in the name of preserving net neutrality. These rules, called Title II, aren't necessary for net neutrality and they certainly aren't going to increase competition or make the Internet faster, better or more innovative.
The FCC's unnecessary action is legally questionable and will result in years of litigation and marketplace uncertainty. The good news is that Congress can act to deliver the permanent net neutrality protections that consumers are demanding.
Learn More: Watch The Video
What is net neutrality and who supports it? Watch the video to learn more about what net neutrality means, why Internet Service Providers support it and why it is time for Congress to step in and pass bipartisan legislation that will protect consumers and keep the Internet free from burdensome regulations that will slow investment, innovation and new services.
Want to make your voice heard? Take action at unitedforanopeninternet.com.
TELL CONGRESS TO MAKE NET NEUTRALITY PERMANENT
With Title II, the FCC has imposed heavy new Internet regulation that goes far beyond widely supported net neutrality protections. Title II will increase consumer costs, slow investment and innovation and cause years of uncertainty. But Congress can step in. Bipartisan legislation can protect consumers while promoting the investment needed to continue expanding and improving America’s broadband networks. Let's choose a future that embraces progress, not expensive regulations.
WHAT PEOPLE ARE SAYING ABOUT THE NET NEUTRALITY CONGRESSIONAL SOLUTION
"The FCC's new rules weaken - or reverse - decades of minimal regulation, during which the Internet flourished. As often as not, economic regulation has adverse, unintended side effects. That was true of the railroads, and it may be true of the Internet."
BE SKEPTICAL OF 'NET NEUTRALITY' | By Robert J. Samuelson at The Washington Post
George Gilder, economist and author of Telecosm said: "We've had 15 years of marvelous success, just stunning success on the Internet. . . Our seven top technology companies are all related to the Internet. The US has four times the investment in fixed broadband than Europe, with its government intervention, and twice the investment in wireless. Most of Internet traffic in the world flows through the US. What on earth is wrong that the FCC thinks it has to reduce it to a public utility?"
INTERNET PIONEERS DECRY TITLE II RULES | By LightReading
"It would be better if Congress finally did its job and agreed on a legislated plan that avoids more bureaucratic wrangling."
SETTLE THE NET-NEUTRALITY DEBATE WITH LEGISLATION | Editorial by The Washington Post
"It was a problem that wasn't broken, didn't need fixing. . . This is another process for government officials, elected officials, to create unneeded controversy so that they can get both sides of the argument to donate a heck of a lot of money to keep themselves in power, and continue to drive the regulation economy."
SCOTT MCNEALY, CO-FOUNDER OF SUN MICROSYSTEMS AND CHAIRMAN OF WAYIN, ON REGULATION OF THE INTERNET | CNBC
Net Neutrality Timeline: Where We Are & How We Got Here
March 14, 2002
FCC Chairman Michael Powell classifies broadband Internet access as a Title I interstate information service.
June 5, 2003
Law professor Tim Wu coins the term “net neutrality” in his paper “Network Neutrality, Broadband Discrimination.”
February 8, 2004
FCC Chairman Powell introduces “Four Internet Freedoms,” Freedom to (1) access content; (2) run applications; (3) attach devices; (4) obtain service plan information.
March 3, 2005
The FCC negotiates an agreement with Madison River Communication where Madison River agrees to “refrain from blocking” phone calls.
June 27, 2005
In FCC vs. Brand X, the Supreme Court upholds FCC’s authority to define the classification of broadband as an information service under Title I.
September 23, 2005
FCC reclassifies Internet access across the phone network, including DSL, as a Title I information service.
August 1, 2008
Comcast vs. BitTorrent decision by FCC Chairman Martin – FCC hands Comcast a cease-and-desist order.
April 6, 2010
U.S. Court of Appeals for the DC Circuit dismisses the FCC's cease and desist order against Comcast.
December 21, 2010
FCC Open Internet Order makes net neutrality rules official FCC regulation for the first time.
January 14, 2014
In Verizon vs. FCC, DC Circuit rules that as a Title I information service, FCC has no authority to adopt net neutrality regulations.
November 10, 2014
President Obama calls on the FCC to reclassify broadband as Title II.
February 26, 2015
FCC votes 3-2 to classify the Internet as a public utility under Title II of the Communications Act.
EPA site taps a deep well of water data
- By Kevin McCaney
- Oct 22, 2013
The Environmental Protection Agency has maintained public databases on the condition of rivers, lakes and streams for decades. But until about a year ago, anyone who wanted to get at that data faced a labyrinthine process, either devising search queries to try to navigate the databases or resorting to a Freedom of Information Act request.
Project at a glance
Name of project: How’s My Waterway?
Office: EPA Office of Wetlands, Oceans and Watersheds/Office of Water
Time to implementation: Less than a year
Before: Hard-to-read technical reports on the condition of the nation’s waterways, buried inside hard-to-navigate databases, sometimes only retrievable via FOIA requests.
After: Plain-language reports available in seconds via PC and mobile platforms, searchable by ZIP code, place name and, in the case of mobile devices, geolocation.
Even if people got to the reports they were looking for, they might have trouble deciphering them, since the reports were highly technical, written by scientists for scientists.
EPA changed the equation in October 2012 with the launch of How’s My Waterway?, a platform-independent website and mobile application that works with PCs, tablets and smartphones, offering plain-English reports on whether a body of water has been assessed, if it’s polluted, and if it is, what’s causing the pollution and what’s being done to clean it up.
Concerned about a local stream where the dog likes to swim? Enter the ZIP code into the site’s Choose a Location search window and it will return a list of rivers, streams and lakes in the area. Click on the stream’s name for the report. Want to know about a lake that’s right in front of you? Type the lake’s name into the app on a smartphone, or use the site’s Use My Location option, and, if the phone’s HTML 5 geolocation feature is authorized, the site will find the device, identify the lake and return the results.
For those who want to take a deeper dive into the scientific breakdown, each result also includes links to the technical reports, as well as links to related sites concerning topics such as beaches, drinking water and fishing.
Project leader Doug Norton, a watershed scientist in EPA’s national Office of Water, noted in a blog post that he regularly uses the agency’s technical databases, but that, “even I had trouble answering the simple question: ‘How’s My Waterway?’,” because the data in those systems wasn’t intended to provide quick answers. “Chances are, most people would be baffled by EPA’s complex databases and scientific information,” he wrote.
Norton and a multidisciplinary team created the site in less than a year as part of the agency’s Water Data Project public outreach effort. Among the tasks they faced were making sure of the regulatory accuracy of their information, translating it into plain language and building a single site that would work across PC and mobile platforms and browsers.
The site launched officially on Oct. 18, 2012, as part of EPA’s celebration of the 40th anniversary of the Clean Water Act. Within a month, it was often getting 1,000 users a day, EPA said, among them public safety crews, travel agencies, educators and environmental groups, along with everyday people.
The team includes Margarete Heber and Patty Scott of EPA’s Policy, Communications and Resource Management Staff, Alice Mayio of the agency’s Monitoring Branch, Laura Johnson of the Coastal Regulatory Branch, Tracy Kerchkof of the Project Management Office, Julie Reichert and Tatyana DiMascio working with the Watershed Branch, and Brad Cooper and Steve Andrews, contractors with software development company INDUS.
The company that brought Jeopardy cyber-champ Watson to the world has come up with some even brainier technology. IBM has unveiled prototypes of microprocessors that are being dubbed “cognitive computing chips.” According to IBM, the chips are made up of neural circuits and are designed to mimic the brain’s ability to perceive sensory input, understand it, and take action based on that understanding.
To emulate human-level thinking, Watson used clever software on conventional hardware, which could be described as sort of a brute force approach. In contrast, these new chips are designed to behave fundamentally like our own brains, being able process sensory input in a vastly parallel fashion, create correlations, learn from experience, and adapt its processing dynamically.
Although the work can be categorized as artificial intelligence, it’s actually more general than traditional AI, which tends to focus on individual capabilities, like pattern recognition. In this latest effort, IBM is attempting to integrate all aspects of thinking, including perception and action, as well as cognition. The goal is to build a truly intelligent machine that is able to perform human-level analytics in real time, and with the power and size efficiencies of a biological brain.
The work is being done with funding from the Defense Advanced Research Projects Agency (DARPA) and in conjunction with a number of US universities. The project, which the agency has titled Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, kicked off in 2008, when DARPA anted up $4.9 million to bootstrap the work. In the second phase of the project, they chipped in $16.1 million more to IBM and its university partners. Now DARPA is prepared to kick in an additional $21 million for the third phase.
The defense agency’s interest in such technology is understandable, given the DoD’s increasing reliance on drone aircraft and other types of unmanned vehicles, not to mention its need to analyze the tremendous amounts of intelligence data. In short, the defense department would like nothing better than to replace its legions of soldiers and analysts with computer chips.
IBM, of course, is aiming at a much larger market. The idea is that system could be hooked up to vast sensor networks, monitoring the environment, cars, homes, even people. Such a product would span every industry and provide the underpinnings of IBM’s so-called Smarter Planet, although in this case it’s more like Smarter Planet 2.0.
The hardware design is certainly futuristic. The cognitive computing prototype chips are made up of a “neurosynaptic core,” which encompasses computational circuits (the neurons), memory (the synapses), and communication lines (the axons). Although this is accomplished with standard digital circuitry, the architecture is unique. From the IBM press release:
IBM’s overarching cognitive computing architecture is an on-chip network of light-weight cores, creating a single integrated system of hardware and software. This architecture represents a critical shift away from traditional von Neumann computing to a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.
Getting compute, memory and communication integrated together is central to the architecture’s brainy behavior. According to Dharmendra Modha, IBM’s lead on the project, the tight integration is key to getting the circuitry to behave like biological neurons and synapses, and do so within a very organic power budget. The power consumption of the human brain is estimated to be between 10 and 100 watts.
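To make the neuron-and-synapse vocabulary concrete, here is a textbook leaky integrate-and-fire neuron in a few lines of Python. It is a generic teaching model, not IBM's circuit design: the unit accumulates weighted input, constantly leaks charge, and emits a spike only when its potential crosses a threshold, which is what makes such architectures event-driven.

```python
# Textbook leaky integrate-and-fire neuron (illustrative, not IBM's design).
def lif(inputs, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # integrate new input, leak old
        if potential >= threshold:              # fire a spike...
            spikes.append(t)
            potential = 0.0                     # ...and reset
    return spikes

print(lif([0.3, 0.4, 0.5, 0.0, 0.2, 0.9, 0.1]))  # -> [2, 5]
```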
To date, IBM has developed two chip prototypes, both of which have been implemented on 45 nm SOI CMOS at the company’s fab in Fishkill, New York. Each design contains 256 neurons, one with 256K programmable synapses and the other with 64K learning synapses. The IBM researchers claim to have used them to demonstrate simple applications like navigation, machine vision, pattern recognition, associative memory and classification.
IBM has not revealed a timeline for any commercial products. The company’s goal is to eventually construct a system with ten billion neurons and a hundred trillion synapses, while consuming a single kilowatt of power. Using future nanoelectronics, the researchers estimate such a machine will take up less than two liters of volume.
The chip prototypes will be described in more detail at the IEEE Custom Integrated Circuits Conference on September 20 in San Jose, California.
Encryption is an area of information management that causes problems: does the data need to be encrypted at rest or when in motion? Does the classification of the data mean that there are different encryption requirements?
It’s probably worthwhile having a look at the history of encryption and encipherment.
The desire to protect information from casual viewing has been around for over 2,000 years. In 405 BC, General Lysander received a message that had been written inside a belt, and the only way to read it was to wind it around a pole of a certain size.
Julius Caesar invented a cipher that was (with the limitations of education in Roman society) very hard to crack but it was Mary Queen of Scots who pushed encryption up to another level, by using symbols, not just for letters but entire words.
This meant that simple frequency analysis became harder without knowing the key. Although encryption has long been used to assist secret communication, nowadays it is commonly used in protecting information within IT systems.
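Caesar's scheme, mentioned above, survives today mainly as a teaching example. A minimal Python version (our illustration, not from the original post) makes clear both how it works and why frequency analysis defeats it:

```python
# Caesar cipher: shift each letter a fixed distance around the alphabet.
def caesar(text, shift=3):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("attack at dawn")         # -> 'dwwdfn dw gdzq'
print(secret, "->", caesar(secret, -3))   # decrypting is the reverse shift
```

Because each plaintext letter always maps to the same ciphertext letter, letter frequencies pass through unchanged, which is exactly the weakness Mary Queen of Scots' symbol substitutions tried to paper over.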
Today, one of the greatest causes of concern when it comes to data is who can get access to the information that lies within.
Whether it is data at rest, such as information held on a computer disk or storage device, or data in transit, such as information being transferred via networks, the internet and wireless devices – the question is: would it be possible for a nefarious party to remove the disk, or intercept the connection, and access the data?
If the data isn’t encrypted then most definitely it can be accessed from a drive, as can be judged from the number of freely downloadable tools available to assist.
When talking about encryption there are a number of "usual suspect" questions, for example: Does it do full disk encryption? How do I recover the data in the event that the person who knows the password leaves the company?
Full disk encryption is usually reserved for end users and their laptops. It is easier to encrypt the whole drive than to specify certain data paths. The limitations are that in order to boot the machine the drive has to be unlocked, so if the user is overseas and forgets their password you’d better hope that the helpdesk is available 24/7.
In terms of recovering the data, if the encryption keys are lost, retrieving it will depend on how the solution was implemented. If Hardware Security Modules (HSM’s) are used, this will usually require a quorum of administrators to be present before the keys are released. The different models might require this to be in the form of physical keys, or smartcards.
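To illustrate the quorum idea (and only to illustrate it; production HSMs use vetted implementations, not toy code), here is a minimal Shamir secret-sharing sketch in Python 3.8+, where any three of five administrators can reconstruct a key but two alone learn nothing:

```python
# Toy k-of-n Shamir secret sharing over a prime field (illustration only).
import random

PRIME = 2**127 - 1  # a prime comfortably larger than the secret

def split(secret, n=5, k=3):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789)
print(recover(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```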
Encryption is enabling the vision of being able to access data anytime and from anywhere but at the same time the proliferation of mobile devices and use of the cloud has also introduced new security challenges – so when it comes to data protection, any security strategy should look to encompass encryption and key management.
Si Kellow, CSO, Proact.
http://www.wired.com/news/technology/0,1282,50779,00.html

By Mark K. Anderson
2:00 a.m. March 18, 2002 PST

The idea of carrying phone conversations with light may date back to 1880, but its implementation took an entire century. It was only in the 1980s that fiber optic channels were first integrated into commercial phone networks. In the intervening years, much has stayed the same -- scientists and engineers still want to cram more and more zeroes and ones into those familiar hair-thin wires of glass. But major advancements appear to be on the way.

Next week, Anaheim, California, will host about 25,000 researchers as they gather for the nation's leading fiber optics conference, the Optical Fiber Communications Conference and Exhibit. One of the more intriguing ideas to be presented involves using chaotic behavior across fiber networks as a method of encryption. According to Jia-ming Liu, professor of electrical engineering at UCLA, the emerging field of chaotic communications offers new crypto applications in both optical and wireless systems.

"I didn't invent this concept," he said. "But the entire field of chaotic communication is pretty new."

His system, he said, starts with a laser that sends part of its beam into photo detectors, which produce an electrical signal that feeds back to help power the laser. The resulting circuit behaves erratically -- something like the feedback you hear at a concert when the performer wanders too close to his stack of amps.

Liu has found that if he picks his lasers carefully, he can set up two such nonlinear (chaotic) circuits whose feedback behavior is the same. Thus, if you have a message that needs to get from Albuquerque to Boston without being snooped on, you place a laser in each city. After the two lasers have been synchronized over an open channel, you add your message signal on top of the sending chaotic laser. And once the signal reaches Boston, you use the Boston laser to subtract off the chaos -- and to get the original message.

"Any eavesdropper who tried to tap your message would just receive noise -- akin to listening to static instead of the radio," he said.

On Thursday, Liu will report that his team has transmitted messages using this chaotic crypto method at the benchmark speed of 2.5 Gbps -- also called the OC-48 level. In fact, this speed is comparable to the rate that much non-encrypted, long-distance telephone and Internet traffic travels at today.

"Today, most of the 'long-haul' traffic is either 2.5 or 10 gigabits per second," said Ivan Kaminow, former senior science advisor at the Optical Society of America. "A lot of research is now exploring 40 gigabits per second."

Indeed, one paper to be presented by a group from Agere Systems will be reporting a record-setting fiber optic transmission rate of 3.2 Tbps (terabits per second).

Of course, fiber isn't the only potential bottleneck in the system. Bishwaroop Ganguly of MIT is working on his PhD, examining the interaction between the optical and the old-fashioned electronic components in a network. On Wednesday, he'll be presenting work that offers a new, more integrated model for conducting network traffic with both optical and electronic signals. Such a system, he said, could enable a best-of-both-worlds Internet -- in which the Net itself would intelligently switch between using electronic switching systems for brief packets of data, such as Web pages, while optical switches would handle the bigger chunks such as MP3s or movie downloads.
"We're looking at more of a symbiotic relationship between electronic sub-systems and optical sub-systems, where the electronics handle what they're good at -- which is small transactions," he said. "So consider a Web page. You wouldn't want to set up an optical connection for each JPEG. But if you're transferring files from one workstation to another, it would be nice if that could go all optically and bypass the electronic routers." Still, with all the applied and basic science being presented in Anaheim, one basic question is still very much up in the air: Will users see fiber optic lines coming into their home anytime soon? Ganguly said his system is being designed mostly for businesses. But he also said that it's a truism of the Internet that as bandwidth increases for individual users, new applications always emerge to fill it. "The point is there are existing applications," he said. "But there's another paradigm here too: Build it and they will come." - ISN is currently hosted by Attrition.org To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail.
It helps to understand more about the history of hacking, when you need to defend yourself against cyber criminals. So here is your Executive Summary:
Early hacking started when guys like Kevin Mitnick became ‘digital delinquents’ and broke into the phone company networks. That was to a large degree to see how far they could get with social engineering, and it got them way further than expected. Actual financial damage to hundreds of thousands of businesses started only in the nineties, but has moved at rocket speed these last 20 years.
The first generation were the teenagers in dark, damp cellars writing viruses to gain notoriety, and to show the world they were able to do it. Relatively harmless, no more than a pain in the neck to a large extent. We call them sneaker-net viruses as it usually took a person to walk over from one PC to another with a floppy disk to transfer the virus.
These early day ‘sneaker-net’ viruses were followed by a much more malicious type of super-fast spreading worms (we are talking a few minutes) like Sasser and NetSky that started to cause multi-million dollar losses. These were still more or less created to get notoriety, and teenagers showing off their “elite skills”.
With the third generation, the motive moved from recognition to remuneration. These guys were in it for easy money. This is where botnets came in: thousands of infected PCs owned and controlled by the cybercriminal that used the botnet to send spam, attack websites, steal identities and carry out other nefarious activities. The malware used was more advanced than the code of the ‘pioneers’ but was still easy to find and easy to disinfect.
The fourth generation is where cybercrime goes professional. The malware starts to hide itself, and the criminals get better organized. They are mostly in eastern European countries, and use more mature coders, which results in much higher quality malware, reflected by the first rootkit flavors showing up. They go for larger targets where more money can be stolen. This is also the time where traditional mafias muscle into the game, and rackets like extortion of online bookmakers start to show their ugly face.
The main event that created the fifth and current generation is that an active underground economy has formed, where stolen goods and illegal services are bought and sold in a ‘professional’ manner, if there is such a thing as honor among thieves. Cybercrime now specializes in different markets (you can call them criminal segments) that, taken all together, form the full criminal supply-chain. Note that because of this, cybercrime develops at a much faster rate. All the tools are for sale now, and relatively inexperienced criminals can get to work quickly. Some examples of this specialization are malware authors who sell their code, botnet herders who rent out networks of infected machines, brokers who traffic in stolen credentials and card data, and money mules who launder the proceeds.
The problem with this is that it both increases the malware quality, speeds up the criminal ‘supply chain’ and at the same time spreads the risk among these thieves, meaning it gets harder to catch the culprits. We are in this for the long haul, and we need to step up our game, just like the miscreants have done the last 10 years!
You can read about all this in much more detail in the book Cyberheist by Stu Sjouwerman.
The discovery of the Heartbleed implementation bug affecting certain versions of OpenSSL has, rightfully, made global headlines. While this vulnerability doesn’t affect the certificates issued by trusted certification authorities (CA), the discovery has set end-users into a bit of “password panic.”
The crux of the issue is that service providers, website operators, software developers, etc., need to inform end-users about the status of their credentials. End-users are wondering, “Do I need to change my password?”
In many cases, they do not as that specific Web server was not susceptible. In other cases, they do as the Web server has now been fixed.
Password Changes Ineffective Until Fix in Place
While changing passwords is smart, it won’t do the end-user much good until the fix is in place. This introduces another scenario where organizations and end-users alike would benefit from transparency and clear, open communication. In other words, what is the status of their Web server?
What is the Heartbleed Bug?
Imagine an insect invasion in a house that goes undetected for a long time. When it’s finally discovered, it turns out insects have overrun the entire building. That house is the Web, and the insect is a bug called Heartbleed.
According to a website that charted its emergence, “The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library.”
Heartbleed attacks the heartbeat extension (RFC 6520) implemented in OpenSSL. Heartbleed allows an attacker to read the memory of a system over the Internet and compromise the private keys, names, passwords and content. An attack is not logged and would not be detectable. The attack can be from client to server or server to client.
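The mechanics are easy to model. In the vulnerable code, a heartbeat request carries both a payload and a claimed payload length, and the server echoes back the claimed number of bytes without checking it against the payload's real size. The deliberately simplified Python sketch below (a conceptual model, not OpenSSL's code) shows how that over-read leaks whatever happens to sit nearby in memory:

```python
# Conceptual model of the Heartbleed over-read (not actual OpenSSL code).
server_memory = bytearray(b"HELLO...secret_key=hunter2...user=alice...")

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    server_memory[:len(payload)] = payload
    # BUG: trusts claimed_len instead of len(payload)
    return bytes(server_memory[:claimed_len])

print(heartbeat(b"bird", claimed_len=40))  # echoes "bird" plus leaked bytes
```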
Heartbleed is Not a Flaw in SSL/TLS Protocol
Heartbleed is not a flaw with the SSL/TLS protocol specification, nor is it a flaw with the certificate authority (CA) or certificate management system. Heartbleed is an implementation bug.
The bug impacts OpenSSL versions 1.0.1 through 1.0.1f. The fix is in OpenSSL version 1.0.1g. The 0.9.8 and 1.0.0 version lines are not impacted. OpenSSL 1.0.1 was introduced in March 2012, so the vulnerability is 2 years old.
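Given those ranges, a quick helper (our sketch, not an official tool) can flag whether a reported OpenSSL version string falls in the vulnerable window:

```python
# Flag OpenSSL version strings in the vulnerable 1.0.1 - 1.0.1f window.
def vulnerable(version: str) -> bool:
    if not version.startswith("1.0.1"):
        return False   # the 0.9.8, 1.0.0 and 1.0.2+ lines are unaffected
    suffix = version[len("1.0.1"):]
    return suffix == "" or (len(suffix) == 1 and suffix <= "f")

for v in ("0.9.8y", "1.0.1", "1.0.1f", "1.0.1g", "1.0.2"):
    print(v, vulnerable(v))   # True only for 1.0.1 through 1.0.1f
```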
An open-source standard, OpenSSL is one of the most popular Internet traffic encryption options deployed. Online services use it to protect customers and themselves.
“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software,” says the Heartbleed.com website.
In short, much of the Internet is an open book — and has been for some time now. According to Netcraft, the number of otherwise trusted sites affected by the bug sits at around half a million. Online checking tools allow you to test any website to see if it’s been affected.
The report implies that the bug has resulted in massive breaches of enterprise security, including the exposure of encryption keys and user credentials.
Throughout this debacle, one scary truth is emerging: almost everybody, directly or indirectly, will be impacted.
“On the scale of 1 to 10, this is an 11,” security expert and blogger Bruce Schneier said. | <urn:uuid:15bd8d02-c2c3-4593-b69d-64ade472616f> | CC-MAIN-2017-04 | https://www.entrust.com/heartbleed-openssl-end-users-need-change-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00418-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918912 | 723 | 2.75 | 3 |
Limited time and resources make it essential to get to the root of IT service availability issues as quickly and effectively as possible. Pareto analysis is a method of root cause analysis that aims to resolve problems at the source rather than fix each symptom as it occurs, thus saving time and money in the long run. Use Pareto analysis to improve the quality of IT service availability and get right to the source of problems.
What is Pareto Analysis?
Pareto's Principle is based on the theory of 20th century economist Vilfredo Pareto and focuses on the vital 20% of issues that cause 80% of a specific problem. For an IT department, Pareto analysis takes a problem and breaks it down into manageable pieces to help pinpoint where resolution efforts should be focused in order to solve the issue in the most efficient manner. Pareto analysis works particularly well for problems related to service, process, and quality.
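In code, the method is little more than sorting causes by frequency and cutting at the cumulative 80 percent mark. The sketch below uses made-up help-desk data to show the shape of the analysis:

```python
# Minimal Pareto analysis over invented incident data: sort causes by
# count and keep the "vital few" that explain ~80% of the problem.
causes = {"password resets": 41, "VPN drops": 22, "printer jams": 9,
          "disk full": 7, "license errors": 5, "other": 16}

total = sum(causes.values())
running = 0.0
for cause, count in sorted(causes.items(), key=lambda kv: -kv[1]):
    running += count / total
    print(f"{cause:16s} {count:3d}  cumulative {running:5.1%}")
    if running >= 0.80:
        print("-- focus resolution effort on the causes above --")
        break
```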
Predictive Modeling Changes the Future

By Faisal Hoque | Posted 2010-08-10
Creating different business scenario models allows executives a window onto possible outcomes, with each scenario representing an alternative for accomplishing the firm’s goals.
Modeling is not a new concept. In fact, everyone does it without thinking. Recall the invention of the spreadsheet? Before the personal computer revolution, Wall Street analysts performed complex spreadsheet calculations by hand using only a simple calculator. This process was completely inflexible, prone to mistakes, and thoroughly mind-numbing. To make changes to a model (whether to vary inputs or correct mistakes), analysts had to rework the entire thing, a process that – needless to say – was inefficient.
In 1978, Harvard Business School student Dan Bricklin recognized an opportunity to automate this tedious process using software and the rapidly maturing PC. He introduced the VisiCalc spreadsheet to the market, and almost overnight transformed how financial analysts worked. The obvious advantage to Bricklin’s innovation was efficiency. Complex models that once took hours to update could all of a sudden be modified with a few keystrokes.
Not surprisingly, spreadsheets like VisiCalc became the de facto standard for financial modeling, and frustrated business school students and financial analysts quickly put the new technology to use. The demand for spreadsheets was so overwhelming, in fact, that they are frequently credited with creating the initial boom market for business PCs.
But the real revolution that the spreadsheet kicked off wasn’t just about efficiency and automation. By unburdening analysts from the heavy lifting of manual calculations, spreadsheets lowered the marginal costs of evaluating new scenarios from thousands of dollars to near zero. This in turn encouraged experimentation and creativity, and the same employee who once spent days perfecting a single model could suddenly produce several alternatives in a single afternoon.
Spreadsheets kicked off an industry-wide movement towards experimentation that revolutionized how analysts – and the financial services industry – worked. By allowing workers to easily create and analyze the impact of multiple scenarios, spreadsheets and predictive modeling encouraged a culture of rapid prototyping and innovation, or impact analysis, that is as applicable today for converging business and technology as it is for the financial world.
Effective scenarios and modeling must be accompanied by impact analyses, which enable decision-makers to alter factors, create multiple output scenarios, evaluate the end-to-end impact of each scenario, and eventually select and implement the optimal solution. This stands in direct opposition to conventional, linear problem solving techniques, where decision-makers analyze sub-problems at each logical step along the way, and then assume that the overall impact of their choices is the best one.
As with modeling in general, impact analysis can be used to address a broad range of activities. For example, it is often used in supply chain planning for advanced, data-driven calculations that optimize a particular function (such as inventory costs) given unique inputs and constraints (such as market demand, logistical restrictions and manufacturing capabilities).
Impact analysis can address much simpler problems, as well. On Dell Computer’s build-to-order website, potential buyers test multiple PC configurations until they find a good match between the features they want and the cost they can afford to pay.
In both of these cases individuals vary inputs, rules translate those inputs into outputs, and decision-makers compare the impact of multiple scenarios to choose the solution that fits their needs.
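To make that loop concrete, here is a toy sketch of scenario comparison; the scenarios and the cost rule below are invented purely for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy impact analysis: vary inputs, let a rule map inputs to outputs,
    # then compare the scenarios. All figures here are invented.
    my @scenarios = (
        { name => "lean stock", safety_stock => 100, reorder_days => 2  },
        { name => "balanced",   safety_stock => 250, reorder_days => 5  },
        { name => "deep stock", safety_stock => 500, reorder_days => 10 },
    );

    for my $s (@scenarios) {
        # Rule: holding cost per unit plus a crude stock-out risk penalty.
        $s->{cost} = $s->{safety_stock} * 0.40 + 1000 / $s->{reorder_days};
    }

    # Evaluate the end-to-end impact of each scenario and pick the best.
    my ($best) = sort { $a->{cost} <=> $b->{cost} } @scenarios;
    printf "%-10s => %.2f\n", $_->{name}, $_->{cost} for @scenarios;
    print "optimal: $best->{name}\n";

Evaluating a fourth or fifth alternative is one more hash entry, which is precisely the near-zero marginal cost that made spreadsheets revolutionary.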
In order for impact analysis to work, the scenario being modeled should conform to three guidelines:
Easily identified inputs, rules and outputs - Impact analysis requires defined sets of inputs that are linked to outputs using predefined rules. These inputs and outputs are often quantitative (as in our supply chain optimization problem), but they can also be qualitative (such as our PC configuration options). To produce good results, these criteria – and the rules that link them – must accurately reflect the real world problem.
Multiple configuration options and decision factors - Problems that contain only a few inputs and outputs aren't suited to impact analysis because the effect of altering inputs is often obvious. When the outputs are less intuitive, impact analysis helps decision-makers identify good solutions.
Low-design, high-implementation cost - Scenarios that are inexpensive to design but difficult to implement are ideally suited to impact analysis. It's unrealistic to contract a builder to construct five houses so that you can choose the one you like the most. It's entirely possible, however, to commission an architect to draft five blueprints. A person can compare the plans, choose a favorite, and give it to the contractor. This is where the synergy between modeling and impact analysis really comes into play. Predictive modeling is a powerful tool for lowering design costs, and thus a crucial driver for impact analysis.
The Internet of Things and Humans: Shifting the Focus to People
Last week in part 1 of our Internet of Things (IoT) series, we talked about building management automation (BMA) and how adding BMA to your services offering could make you and your clients more “green” by reducing power usage with the help of IoT technology. This week we’ll focus on a more basic element to IoT—people. How these devices affect and interact with the humans that use them is the second element of IoT and is referred to as the Internet of Things and Humans (IoTH).
The difference between IoT and IoTH is focus. IoT focuses on devices (e.g. all of your things talking to each other), while IoTH focuses on how your devices interact with you. In other words, an automated building would be IoT. An automated building that responds to the people in it would be IoTH.
A Nest thermostat, for instance, learns by repeating what it has been told, after you’ve told it enough times. If you turn the temperature up before you leave for work and turn it down again when you get home, the device will eventually internalize that schedule. Do the same thing every day for a week and Nest will start moving the temperature up and down around the same time you’d usually do it yourself. Pretty cool, I know.
IoTH takes this a step further by putting you right in the middle. Continuing with the Nest example, consider what happens when you have the day off or come home early. If you connect your motion-sensor lights or security system to the thermostat, your unexpected presence can trigger a building response instead of the thermostat simply continuing with its default programming.
We can apply that same idea to your BMA offering. While your clients’ office lights already know when someone is in the room, you could also trigger a thermostat response by monitoring the right object identifiers (OIDs). For example, you can create automation that turns down the lights, changes the thermostat setting and locks all the doors if it’s after 5PM and no motion has been detected for 15 minutes. Or, if more than 5 people have entered the building in the past 15 minutes, the system can unlock the usual doors, queue up lighting and readjust the thermostat.
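A rough sketch of such a rule might look like the following; the sensor routines are stubs standing in for whatever values your monitoring stack actually exposes through those OIDs.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Stubs standing in for real OID polls and building controls.
    sub minutes_since_last_motion { return 20 }  # pretend motion-sensor value
    sub people_entered_last       { return 0  }  # pretend door-counter value
    sub dim_lights     { print "lights dimmed\n" }
    sub lock_doors     { print "doors locked\n" }
    sub unlock_doors   { print "doors unlocked\n" }
    sub queue_lighting { print "lighting queued\n" }
    sub set_thermostat { printf "thermostat set to %d F\n", shift }

    my $hour = (localtime)[2];   # 24-hour clock

    if ($hour >= 17 && minutes_since_last_motion() >= 15) {
        dim_lights();
        set_thermostat(62);      # setback temperature
        lock_doors();
    }
    elsif (people_entered_last(15) > 5) {
        unlock_doors();
        queue_lighting();
        set_thermostat(70);      # comfort temperature
    }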
By implementing an IoTH plan, your client’s building can be ready to interact with everyone who works in it. An IoT automated building is capable of learning habits, but an IoTH automated building is capable of adjusting to one-off situations. Allowing clients to operate on an IoTH basis brings your BMA service offering much more than cost savings; it creates an environment that’s focused on people and their comfort and productivity.
Next week, in the final part of our IoT series, we’ll explain how adding these types of offerings to your IT services will allow you to differentiate yourself from competitors and achieve “stickiness.” | <urn:uuid:9cd9f7ca-ff45-47b5-abdf-a9dcf3332583> | CC-MAIN-2017-04 | http://www.labtechsoftware.com/blog/internet-things-humans-shifting-focus-people/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949525 | 618 | 2.640625 | 3 |
Learning How To Learn Hadoop
Learning how to program and develop for the Hadoop platform can lead to exciting opportunities when it comes to big data.
Hadoop is a paradigm-shifting technology that lets you do things you could not do before, such as:
- Compile and analyze vast stores of data that your business has collected
- Analyze customer click and buying patterns
- Build deeper relationships with external customers, providing them with valuable features like recommendations, fraud detection, and social graph analysis
- And more.
But like the problems it solves, Hadoop can be quite complex and challenging. Access this exclusive resource to join Global Knowledge instructor and Technology Consultant Rich Morrow as he leads you through some of the hurdles and pitfalls students encounter on the Hadoop learning path.
Read on to learn how you can learn Hadoop today! | <urn:uuid:628e05f4-f5e1-4fbe-97be-5e4644698217> | CC-MAIN-2017-04 | http://www.bitpipe.com/detail/RES/1358541041_636.html?asrc=RSS_BP_KABPCRM | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00474-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922102 | 185 | 2.921875 | 3 |
The authors of the research paper outlining the attack, which is capable of undermining even the 128-bit version of the Wired Equivalent Privacy (WEP) encryption algorithm, discovered several ways to uncover patterns in packets of information passing over wireless LANs.
The patterns can be used to discover the WEP encryption key and the number used to scramble the data being transmitted.
The researchers say that using the longer key - 128 bits compared to the current WEP standard of 40 bits - does not make it significantly harder for attackers to unlock the process.
For organisations and institutions, the discovery has different implications.
At Hong Kong's City University - which has one of the territory's largest wireless LAN installations with 200 access points and more than 1,000 users - the fallibility of WEP encryption has not yet surfaced because the network is too old.
Raymond Poon, associate director of computing services at the university, said: "Our wireless LAN was implemented a long time ago, so our access points do not support any type of encryption."
But most users still rely on the wireless LAN for Web access, and the university depends on Web-based security mechanisms such as the Secure Sockets Layer (SSL) protocol to secure data.
Finding an encryption code that has not yet been hacked continues to be a dilemma, said university officials.
Poon said that while there have been about two or three cases where security was compromised on the central network, he could not confirm the total number of hacking incidents on separate application areas.
"Even with WEP, the hacker world has come up with programs to unscramble the codes and decipher all the packets," Poon said. "Unless there's a better design for WEP algorithms, we'll have to wait for something more mature to evolve that will have everything enabled."
Security experts said that although wireless LAN encryption is based on a pre-shared secret key, anyone with the same key can eavesdrop. Yet it does not necessarily mean that all deployments of wireless LANs will be affected by the WEP security loophole.
At Hong Kong's Chek Lap Kok airport, the Cathay Pacific lounges, which are equipped with a wireless LAN, are unlikely to be exposed to the risk because WEP is not deployed there.
Allan Dyer, chief consultant at the network security firm Yui Kee Computing, said: "They could ask users to pick up a unique secret key when they entered the lounge, but that would be rather unimaginable."
But although wireless LAN users at the airport are no more vulnerable than they were before the flaw was identified, they should still take necessary precautions, warned Dyer. "Their data was open before, it still is. If they are actually transferring confidential information, they should use another encryption layer between their mobile device and their trusted network, such as a Virtual Private Network (VPN), Secure Shell (SSH) or SSL."
The Wireless Ethernet Compatibility Alliance (WECA) maintains that enterprise users should continue to use WEP because only skilled cryptanalysts will be able to attack its weaknesses. The industry group said that enterprises could also use existing tools for additional security, such as VPNs, IPsec and Radius authentication servers.
Chinese tea classification
Tea’s Wonderful History
According to Lu Yu, author of the Tea Classic in the Tang dynasty, Chinese tea has enjoyed a history of more than 4,000 years.
Tea was used as an offering in the Western Zhou dynasty, as a vegetable in the Spring and Autumn period, and as medicine in the Warring States period. Later, in the Western Han dynasty, it became a major commodity. During the 300 years between the Three Kingdoms period and the Northern and Southern Dynasties (especially the latter), Buddhism flourished, and Buddhists drank tea to stay awake during za-zen meditation, so tea trees spread along the valleys around temples. That is why people say tea and Buddhism accompanied each other in their development. By the Tang dynasty, tea had become popular among ordinary people. In the Ming
dynasty, tea trade began to play an important role in
the government economy, the "Tea and Horse Bureau" was
set up to supervise the tea trade.
In the 6th century a Buddhist monk introduced tea to Japan, and in the 16th century a Portuguese missionary introduced it to Europe. Tea thus became an international beverage.
Now, in China, the tea family consists not only of traditional tea but also of tea beverages, tea foods, tea medicines and other tea products.
Below, Tea Classification gives you a silhouette of Chinese tea; Tea Wares exhibits various artistic tea wares; Tea Culture explains Chinese people's attitudes toward tea; and last you will get some useful hints on how to select tea, along with the best ten Chinese teas.
Although there are hundreds of varieties of Chinese
tea, they can be mainly classified into five
categories, that is, green tea, black tea, brick tea,
scented tea, and Oolong tea.
With its natural fragrance, green tea, as the oldest
kind of tea, is widely welcomed by different people.
It is baked immediately after picking. According to
the different ways of processing, it can be divided into many kinds. Among the various green teas, Longjing (Dragon
Well) Tea around the West Lake in Hangzhou,
Huangshan Maofeng Tea from Mt. Huangshan, Yinzhen
(Silver Needle) Tea from Mt. Junshan and Yunwu (Cloud
and Mist) Tea from Mt. Lushan are most famous.
Black tea is much more favored by foreigners.
Different from green tea, black tea is a kind of
fermented tea. After the fermentation, its color
changes from green to black. The most famous black
teas in China are "Qi Hong" (originated in Anhui), "Dian Hong" (originated in Yunnan), and "Ying Hong"
(originated in Guangdong).
Oolong tea, with an excellent combination of the
freshness of green tea and the fragrance of black tea,
has become popular with more and more people. It has a
good function in helping body building and dieting.
Fujian, Guangdong and Taiwan are the major producing
areas of this kind of tea. Oolong tea grows on cliffs, and the hard picking process makes it among the most precious of teas.
Scented tea, which is very popular in Northern China,
in fact is a mixture of green tea with flower petals
of rose, jasmine, orchid and plum through an elaborate
process. Among this type, jasmine tea is common.
Brick tea, usually pressed into brick shape, is mainly
produced in Hunan, Hubei, Sichuan, Yunnan and Guangxi
Zhuang Autonomous Region. Brick tea is made from black
tea or green tea and is pressed into blocks. This kind
of tea is popular with minority people in border
regions. The most famous one is "Pu'er Tea," made in Yunnan.
There are other kinds of tea. Among them white tea is
special and is not very familiar to most people. Just
as its name suggests, this kind of tea is as white as
silver. It is mainly produced in Zhenghe and Fuding in
Fujian Province, but popular in Southeast Asia. Famous
varieties include "Silver Needle" and "White Peony".
In China, people think different teas prefer different
tea wares. Green tea prefers glass tea ware, scented tea prefers porcelain ware, and Oolong tea performs best in purple clay tea ware.
Over their long history, tea wares have not only improved tea quality but also given rise to a tea art. Skilled artisans endow them with artistic beauty.
Tea wares consist of mainly teapots, cups, tea bowls
and trays etc. Tea wares had been used for a long time
in China. The unglazed earthenware, used in Yunnan and
Sichuan provinces for baking tea today, reminds us of the earliest utensils used in ancient China. Tea drinking became more popular in the Tang dynasty, when metal tea wares served the nobility while commoners used porcelain ware and earthenware. In the
Song dynasty, tea bowls shaped like upturned bells became
common. They were glazed in black, dark-brown, gray,
gray/white and white colors. Gray/white porcelain tea
wares predominated in the Yuan dynasty and white
glazed tea wares became popular in the Ming dynasty.
Teapots made of porcelain and purple clay were very
much in vogue during the middle of the Ming dynasty.
Gilded multicolored porcelain produced in Guangzhou,
Guangdong Province and the bodiless lacquer wares of
Fujian Province emerged in the Qing dynasty. Among
various kinds of tea wares, porcelain wares made in
Jingdezhen, Jiangxi Province and purple clay wares
made in Yixing, Jiangsu Province occupied the top positions.
Nowadays, tea wares made of gold, silver, copper,
purple clay, porcelain, glass, lacquer and other
materials are available.
Just as coffee did in the West, tea became a part of daily
life in China. You can see teahouses scattered on
streets like cafes in the west. It has such a close
relationship with Chinese that in recent years, a new
branch of culture related to tea is rising up in
China, which has a pleasant name of "Tea Culture". It
includes articles, poems and pictures about tea, the art of making and drinking tea, and customs of tea drinking.
In the Tang dynasty, Lu Yu, known as the "Tea Sage," wrote the Tea Classic, which described in detail
the process of planting, harvesting, preparing, and
making tea. Famous poets such as Li Bai, Du Fu and Bai Juyi created a large number of poems about
tea. Tang Bohu and Wen Zhengming even drew many
pictures about tea.
Chinese people are very particular about tea, with high
requirements about tea quality, water and tea wares.
Normally, the finest tea is grown at altitudes of
3,000 to 7,000 feet (about 910 to 2,130 m). People often use
spring water, rain and snow water to make tea, among
them the spring water and the rainwater in autumn are
considered to be the best, and rainwater in the rainy season is also good. Chinese drinkers emphasize water quality and taste: fine water must be pure, sweet, cool, clean and flowing.
For tea wares, Chinese drinkers prefer pottery to other materials. The purple clay wares made in Yixing, Jiangsu Province, and the porcelain made in Jingdezhen, Jiangxi Province, are the best choices.
In China, there are many customs about tea. A host will fill a teacup only seven-tenths full, and it is said that the other three-tenths will be filled with friendship and affection. Moreover, the cup should be emptied in three gulps. Tea plays an important role
in Chinese emotional life.
Tea is always offered immediately to a guest in
a Chinese home. Serving a cup of tea is more than a
matter of mere politeness; it is a symbol of
togetherness, a sharing of something enjoyable and a
way of showing respect to visitors. To not take at
least a sip might be considered rude in some areas. In earlier times, if the host held up his teacup and said "please have tea," the guest would take his congé: the phrase was a polite suggestion that it was time to leave.
How to Select Excellent Tea
Selecting tea is a subject of knowledge.
Aside from the variety, tea is classified into grades.
Generally, appraisement of tea is based on five
principles, namely, shape of the leaf, color of the
liquid, aroma, taste and appearance of the infused leaves.
Speaking of the shape of the leaf, there are flat,
needle-like, flower-like, and so on. The judgment is
usually made according to the artistic tastes of the appraiser.
The evenness and transparency of the leaf will decide
the color of the liquid. Excellent liquid should not
contain rough burnt red leaves or red stems.
Aroma is the most important factor in judging the
quality of a kind of tea. Putting 3 grams of leaves into 100 milliliters of boiled water, people can judge the
quality of the tea by the smell from the liquid.
The judgment should be completed through the taste of
the liquid and the appearance of the infused leaves.
Best Ten Chinese teas
Longjing (Dragon Well): Produced at Longjing village
near the West Lake, Hangzhou, Zhejiang.
Biluochun: Produced at Wu County, Jiangsu.
Huangshanmaofeng: Produced at Mt. Huangshan in Anhui.
Junshan Silver Needle: Produced at Qingluo Island on Dongting Lake, Hunan.
Qimen Black Tea: Produced at Qimen County in Anhui.
Liuan Guapian: Produced at Liuan County in Anhui.
Xinyang Maojian: Produced at Xinyang, Henan.
Duyun Maojian: Produced at Duyun Mountain, Guizhou.
Wuyi Rock Tea: Produced at Wuyi Mountain, Fujian.
Tieguanyin: Produced at Anxi County, Fujian.
Tea is among the world’s oldest and most revered
beverages. It is today’s most popular beverage in the
world, next to water. Tea drinking has long been an
important aspect of Chinese culture. A Chinese saying
identifies the seven basic daily necessities as fuel,
rice, oil, salt, soy sauce, vinegar, and tea.
According to Chinese legend, tea was invented
accidentally by the Chinese Emperor Shen Nong in 2737
B.C. Emperor Shen Nong was a scholar and herbalist, as
well as a creative scientist and patron of the arts.
Among other things, the emperor believed that drinking
boiled water contributed to good health. By his
decree, his subjects and servants had to boil their
water before drinking it as a hygiene precaution. On
one summer day while he was visiting a distant region,
he and his entourage stopped to rest. The servants
began to boil water for the skilled ruler and his
subjects to drink. Dried leaves from a nearby camellia
bush fell into the boiling water. The pleasing aroma of the new brew interested the emperor, so he drank the infusion and discovered that it was very refreshing and had a delightful flavor. He declared that tea gives vigor to the body. That was when tea was discovered, though at first it was considered a medicinal beverage. It was around 300 A.D. when tea became a daily drink.
It was not until the Tang and Song Dynasties that tea
showed some significance in Chinese tradition. During
the mid-Tang Dynasty (780 A.D.), a scholar named Lu Yu
published the first definitive book, Cha Ching or The
Tea Classic, on tea after he spent over twenty years
studying the subject. This documentation included his
knowledge of planting, processing, tasting, and
brewing tea. His research helped to elevate tea
drinking to a high status throughout China. This was
when the art of tea drinking was born.
Later, a Song Dynasty emperor helped the spread of tea
consumption further by indulging in this wonderful
custom. He enjoyed tea drinking so much that he bestowed tea as a gift only on those who were worthy. During this same time, tea inspired many books, poems, songs, and paintings. This not only popularized tea, it also elevated tea's value, which drew more tea-growers to the trade.
Between the Yuan and Qing Dynasties, the technology of
tea production continuously advanced to become more
simplified and to improve the methods of enhancing tea
flavor. During this period, tea houses and other
tea-drinking establishments were opening up all over
China. By 900 A.D., tea drinking spread from China to
Japan, where the Japanese Tea Ceremony, or Chanoyu, was
created. In Japan, tea was elevated to an art form
which requires years of dedicated studying. Unlike the
Japanese people, the Chinese people tend to view tea
drinking as a form of enjoyment: to have after a meal
or to serve when guests visit.
Tea was introduced to Europe in the 1600s; it was
introduced to England in 1669. At that time, the drink
was enjoyed only by the aristocracy because a pound of
tea cost an average British laborer the equivalent of
nine months in wages. The British began to import tea
in larger quantities to satisfy the rapidly expanding
market. Tea became Britain’s most important item of
trade from China. All classes were able to drink tea
as the tea trade increased and became less of a
luxury. Now, tea is low in price and readily available.
The word “tea” was derived from ancient Chinese
dialects. Such words as "Tchai," "Cha," and "Tay"
were used to describe the tea leaf as well as the
beverage. The tea plant’s scientific name is Camellia
sinensis (which is from the family Theaceae of the order Theales), and it is indigenous to China and
parts of India. The tea plant is an evergreen shrub
that develops fragrant white, five-petaled flowers,
and it is related to the magnolia. Tea is made from
young leaves and leaf buds from the tea tree. Two main
varieties are cultivated: C. sinensis sinensis, a
Chinese plant with small leaves, and C. sinensis
assamica, an Indian plant with large leaves. Hybrids
of these two varieties are also cultivated. What we
call “herbal tea” is technically not tea because it
does not come from the tea plant but consists of a mixture of flowers, fruit, herbs or spices from other plants.
Today, there are more than 1,500 types of teas to
choose from because over 25 countries cultivate tea as
a plantation crop. China is one of the main producers
of tea, and tea remains China’s national drink.
By L. K. Yee
Tea Types For All Tastes
Canadians drink over 7 billion cups of tea each year.
From the hills of Sri Lanka and India to the mountains
and valleys of Kenya, tea is grown in some of the
world's most exotic places.
Three basic types of tea are produced and enjoyed
worldwide: black, green and oolong teas. They all come
from the Camellia Sinensis bush which in the wild can
grow 90 feet and higher. In the past, in some
countries, monkeys were trained to pick the tea leaves
and toss them to the ground. Today the Camellia
Sinensis bush is grown as an important plantation crop
and is kept to a height of three feet for easy harvesting.
From these three types over 3000 varieties of tea are
available and depending on the time of day and
personal preferences, there is a blend to suit everyone.
The tea types include:
Black Tea: Most commonly used in North American tea
bags, black tea is made from leaves that have been
fully oxidized, producing a hearty, deep, rich flavour in an amber-coloured brew. It is the oxidation process,
oxygen coming into contact with the enzymes in the tea
leaf, that distinguishes black teas from green. The
oxidation process is also known as fermentation.
Green Tea: Most popular in Asia, green tea is not
oxidized. It is withered, immediately steamed or
heated to prevent oxidation and then rolled and dried.
It is characterized by a delicate taste, light green
colour and is very refreshing.
Oolong Tea: The name oolong literally translates as
"Black Dragon" and is very popular in China. Oolong
refers to partly oxidized leaves, combining the taste
and colour qualities of black and green tea. Oolong
teas are consumed without milk or sugar and are
extremely flavourful and highly aromatic.
Flavoured Teas: These are real teas (Camellia
Sinensis), blended with fruit, spices or herbs. Fruit
flavoured tea such as apple or blackcurrant, is real
tea blended with fruit peel or treated with the
natural oil or essence. Spiced and scented teas using
cinnamon, nutmeg, jasmine or mint, are also real teas
blended with spices, flowers or other plants.
Herbal/Tisanes: Herbal infusions or tisanes such as
Camomile, peppermint or nettle, do not contain any
real tea leaf. The term "herbal tea" is somewhat of a
misnomer, since these products are not really tea at
all. Herbal beverages or infusions can be derived from
a single ingredient or a blend of flowers, herbs,
spices, fruits, berries and other plants. | <urn:uuid:c48c3012-5b51-4007-a0cf-34d6701cfde1> | CC-MAIN-2017-04 | http://www.easterntea.com/tea/teatype.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00014-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939313 | 4,043 | 2.84375 | 3 |
Firewalls are often the first security mechanism that is installed on any network. For industrial control networks in municipal water systems, nuclear power plants and other critical infrastructure, firewalls simply aren’t good enough to keep attack payloads away. Industrial plants need unidirectional gateways to provide the ultimate security for critical control systems.
Every week it seems we hear about some advanced persistent threat (APT) that infiltrated a corporate network and slinked off with financial data or intellectual property. With so many similar stories in the news – and many more that we never hear about publicly – it makes you wonder about the ability of hackers to get into industrial control networks.
What would happen if an attacker could get to the point of being able to manipulate the industrial controls of a nuclear power plant, or a municipal water system, or a sprawling petrochemical plant? It was bad enough that Target and other merchants had tens of millions of cardholder records stolen, but at least nobody died from those incidents. But if an attacker could jack up the temperature gauges of a petrochemical hydrocracker unit, there could be massive casualties from the resulting explosions and fires.
In 2013 Trend Micro reported on an experiment in which the company deployed a dozen honeypots around the world designed to look like the ICS (industrial control system) networks of municipal water utilities. Between March and June, the honeypots attracted 74 intentional attacks, including at least 10 in which the attackers were able to take over the control system.
This experiment proved that attackers have both the intention and the ability to penetrate critical infrastructure systems that, in theory, should be less vulnerable than Internet-facing corporate networks. We may be living with a false sense of security in thinking that ICS networks inherently possess security through obscurity.
In the industrial world, there were no connections between the control systems and the outside world until about two decades ago. That was when plant operators discovered there is a wealth of information in the control systems that could help them better manage their plants. For example, production units have to be taken offline every so often for maintenance. By collecting data from the control systems to understand how hard the equipment has been used, the managers might be able to optimize the schedules for maintenance. Running the equipment a few extra days between maintenance cycles could save millions of dollars a year.
When companies connected their control networks to their corporate networks for the purpose of gathering this data, they introduced the security problems that plague the corporate networks today. Everything from viruses to APTs can jump across networks and get into the control networks that used to be thought of as invulnerable.
Even firewalls are insufficient to keep the bad stuff out. As anyone who manages firewalls on a corporate network knows, malicious payloads sometimes slip through undetected, and this could be disastrous for an industrial control network. That’s why many ICS networks are protected with a different kind of security device called a unidirectional security gateway.
Andrew Ginter, vice president of Industrial Security with Waterfall Security Solutions, explained how his company’s unidirectional gateway technology works and where it fits in the scheme of protecting industrial control networks.
According to Ginter, industrial plants separate their control networks from their corporate networks with a DMZ. Instead of a traditional firewall, a unidirectional gateway sits at the DMZ to allow data to flow from the control network to the corporate network on the outside, but nothing can flow back the other way. In fact, it’s physically impossible for data to flow two ways, and here’s why.
A firewall is a box with network in, and network out. If you take your screwdriver and open up the box to see what’s inside you see CPU and memory. A firewall is software. The heart of the unidirectional gateway is hardware. There are two boxes, not one. One box is copper in and fiber out and the other one is fiber in and copper out. There is a very short fiber connecting the boxes. In the transmit box there is a fiber optic transmitter and in the receive box there is a receiver.
Standard fiber optic chipsets have both in the same chip. If you open up a Waterfall box, it only has a transmitter in the transmit box and a receiver in the receive box. You can send from the transmit box to the receive box, but you can't send anything back. There is physically no laser in the receive box to send any signal back to the transmitter. And even if you somehow managed to generate a signal back, there is no receiver in the transmit box to pick it up. It can't even tell if the other end is powered on. It has no way to physically receive any signal.
This technology lets you move information out of your control system networks without any risk of an attack or virus or remote control attack because nothing can get back in. This works because 99% of the data transfer needs are out of control systems, which are designed to run safely indefinitely without outside input.
The data coming out of a control network comes from sensors, gauges, thermostats and the like on the industrial equipment. The data from these devices is consolidated into a historian server. It's a database optimized around a single schema to keep track of hundreds of thousands of different points of time-stamped data, so that for any measurement point you can go back, for example, 10 seconds, 10 days, or even several years and see what the value was. This database tends to be the point of integration with SAP and other business systems.
Waterfall replicates the historian server on the outside of the control network. Software queries the original historian database, asks it for the data, and sends the data out over the one-way channel. On the other side it inserts the data into the replica database and keeps those two databases synchronized to within about a second of each other. Now anyone who wants access to the data no longer reaches into the control system to ask the real system for data. Instead they reach into the copy and ask the copy for data. The copy has all of the data back to the beginning of time, and it has the latest data that is less than a second old. This satisfies the need for corporate to gather and use control device data without having any ability to send data back into those devices, even inadvertently.
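In outline, the sender-side replication could look something like the sketch below. The schema, credentials, and one-way device path are invented for illustration; Waterfall's actual software is, of course, proprietary.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use IO::Handle;

    # Sender-side loop: poll the real historian for fresh rows and push
    # them over the one-way channel. Nothing can ever be read back.
    my $src = DBI->connect("dbi:Pg:dbname=historian;host=10.0.0.5",
                           "reader", "secret", { RaiseError => 1 });

    open my $oneway, ">", "/dev/oneway_tx"   # stand-in for the TX device
        or die "cannot open one-way channel: $!";
    $oneway->autoflush(1);

    my $last = "1970-01-01 00:00:00";
    while (1) {
        my $rows = $src->selectall_arrayref(
            "SELECT tag, ts, value FROM points WHERE ts > ? ORDER BY ts",
            undef, $last);
        for my $r (@$rows) {
            print {$oneway} join(",", @$r), "\n";   # transmit only
            $last = $r->[1];
        }
        sleep 1;   # keeps the replica within about a second of the source
    }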
Last October Waterfall Security Solutions introduced new technology called FLIP, and it has been well received by industrial customers. FLIP is a unidirectional gateway that allows you to send information into a control system in a very controlled manner. Normally FLIP allows information to flow out of the network, but when a plant needs to communicate inbound for a few seconds – say, to send control recipes into a batch processing system so a chemical plant knows how to treat materials for the day – FLIP can be temporarily reoriented inbound.
The FLIP technology doesn’t forward packets, which prevents attack communications from slipping through. Only legitimate communications in a payload can get sent into the control network. The plant can transmit the necessary instructions and then flip the gateway back to outbound transmission only.
Unidirectional gateway technology provides much stronger security than firewalls. The technology is already in use at every nuclear power plant in the United States, and there is a strong need for it in many other industrial situations.
Linda Musthaler (LMusthaler@essential-iws.com) is a Principal Analyst with Essential Solutions Corp. (http://www.essential-iws.com), which researches the practical value of information technology and how it can make individual workers and entire organizations more productive. Essential Solutions offers consulting services to computer industry and corporate clients to help define and fulfill the potential of IT.
Upcoming Web Security Technology
Security systems and technology have historically been designed around protection, detection, and remediation: create a perimeter around your assets and prevent threats from penetrating that perimeter; detect any threats that have penetrated the perimeter and infected assets; and then exterminate all evidence of the threats in any of the assets. This is an age-old approach to security, analogous to building a castle wall to prevent intruders from entering and to detect and extinguish enemies that infiltrate the perimeter. Some lessons can be learned from age-old defense mechanisms, in that pickets were strategically placed miles from the castle walls in concentric circles to detect the presence of an enemy well in advance of it reaching the actual fortress.
New technologies are emerging in the security industries that align with the age-old practice of picketing: technologies that detect the enemy in the early stages of an attack and prevent them from ever reaching the security perimeter. These technologies focus on the detection of new malicious identities associated with a threat actor prior to the deployment of the attack.
A malicious website begins its life like any other website. It must first be registered, a website created, and then deployed like any other website. It may differ from other benign websites in a variety of ways.
- The website may look like a common brand (e.g., the deliberately misspelled ibmdownlaods.com).
- The website has geo-located attributes.
- The registrant of the website has both geo-located features and historical context (other websites registered under same identity).
- The cost associated with registration.
- The trust associated with the registrant.
Domain kinetics involve website gesturing. (See Figure 1) For example, what was the time between the registration of the website and a parking page? What was the time between registering the website, domain parking, and an actual website being created? Is the website recognized by standard search engines, and how long does each search engine take to identify the website?
By comparing existing information with historical registration information and historical known malicious domains combined with website gesturing data, we start to get a profile of malicious activity in the earliest stages of a threat. We start to build a graph that represents domain trust, identity trust, and registrant trust. A continual observation and correlation of previous malicious websites and associated identities with newly created domains and identities allow kinetic analytics to discretely identify malicious intent.
Global Real-time Visibility
Early detection of website registration is the beginning of detecting a threat actor prior to deployment of the threat. The next stage of detection begins when the malicious website is created and the threat actor hunts its prey. This is done by redirecting users to the website through ads, blogs, messaging, or email. The first observations of access to these websites are key in detecting malicious intent. These observations can take place on the endpoint, in the enterprise, and globally. Imagine monitoring DNS activity on an endpoint over time. In a few months, a quiescent state is achieved, as we are all creatures of habit. In the first few days, all domains will appear as new, but as time goes on, the number of newly visited domains decreases exponentially. (See Figure 2)
Once the quiescent state is achieved, we can reasonably assume that any new domain is a potential high-value event and should be analyzed as malicious or benign. A newly observed domain on an endpoint can be compared to all newly observed domains for the enterprise (the real-time aggregation of endpoint data). This comparison assists in raising the value of the event, or in whitelisting the domain as recently observed by other endpoints. (See Figure 3) Simply put, the rarer the domain in use, the higher the probability that it may be a threat. An additional assertion can be made by comparing the domain to enterprise relevancy: is this domain unique among enterprises? The more unique the domain as we move to a global view, the more suspicious and focused the analytics become. We can also compare the globally unique domain to the original registration data for malicious activity. This process of primary (endpoint), secondary (enterprise), and tertiary (global) analytics, applied recursively to the registration data set, allows near-real-time assessment of new threats as they emerge, providing threat intelligence that is minutes old instead of days old. The same technique can be applied to other entities, such as tasks or the elevation of privileges.
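A toy endpoint-level sketch of that newly-observed-domain scoring follows; the data structures and weights are invented for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Domains this endpoint has resolved before, and an enterprise-wide
    # tally of how many endpoints have seen each domain (both invented).
    my %seen_on_endpoint;
    my %seen_in_enterprise = ( "example.com" => 412 );

    sub score_domain {
        my ($domain) = @_;
        return 0 if $seen_on_endpoint{$domain};   # routine, seen before
        $seen_on_endpoint{$domain} = time;
        # New to this endpoint: the rarer it is across the enterprise,
        # the more suspicious -- escalate globally unique domains.
        return $seen_in_enterprise{$domain} ? 1 : 2;
    }

    for my $d (qw(example.com example.com ibmdownlaods.com)) {
        printf "%-18s score %d\n", $d, score_domain($d);
    }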
SSFRULES - Securing Cisco Networks with Snort® Rule Writing Best Practices
Learn to analyze exploit packet captures and put the rule-writing theories you learn to work by implementing rule-language features for triggering alerts on the offending network traffic.
This course focuses exclusively on the Snort® rules language and rule writing.
Starting from rule syntax and structure to advanced rule-option usage, you will
analyze exploit packet captures and put the rule writing theories learned to work
by implementing rule-language features for triggering alerts on the offending network traffic.
This course also provides instruction and lab exercises on how to detect certain
types of attacks (such as buffer overflows) while utilizing various rule-writing
techniques. You will test your rule-writing skills in two challenges: a theoretical
challenge that tests knowledge of rule syntax and usage, and a practical challenge
in which we present an exploit for you to analyze and research so you can defend
your installations against the attack.
This course combines lecture materials and hands-on labs throughout to make sure
that you are able to thoroughly understand and implement open source rules. | <urn:uuid:a6d37175-e19d-4087-ab8f-ec9e197c1567> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120064/ssfrules-securing-cisco-networks-with-snort-rule-writing-best-practices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00280-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924675 | 229 | 2.65625 | 3 |
Author: Mark G. Sobell
Publisher: Prentice Hall
An increasing number of advanced users are drawn to open source solutions as an alternative to Microsoft Windows. The popular free Linux distribution Fedora and its big brother, Red Hat Enterprise Linux (RHEL), are introduced and described in this massive book, which will help you run your own network services, and more.
About the author
Mark G. Sobell is the author of three best-selling UNIX and Linux books. He has more than twenty years of experience working with UNIX and Linux and is president of Sobell Associates, a consulting firm that designs and builds custom software applications, designs and writes documentation, and provides UNIX and Linux training and support.
Inside the book
In over 1300 pages, this book covers most of the things you will ever need to know about Fedora and Red Hat Enterprise Linux – whether you are a beginner or an experienced user.
It starts with an interesting and practical preface and a jumpstart section that provides a quick reference to the most often used daemon setup sections.
The first chapter offers a brief history of Linux and basic information about the platform, while the second describes how to install your Red Hat/Fedora system. The following chapters take you deeper and deeper into Linux, introducing and explaining important concepts such as its file system, utilities, shells, networking, etc.
Given its size, the book is organized surprisingly well. It includes official documentation (manual, info and help pages) with examples and how-to’s in every section.
Having read many general books on Linux, I have to note that information on how to go about configuring Cacti can't be found in many of them, so this is definitely a plus for this book. I enjoyed the chapters on network services (BIND, Apache, NFS) and on programming, which explain the material beautifully with the help of examples, useful bash scripts and an introduction to Perl programming. I also took great pleasure in reading the unusual introduction to MySQL syntax.
On the other hand, the sections on PAM and SELinux are short and incomplete, and the VIM section describes only basic usage.
One can tell that a lot of time, work and thought went into creating this book. Consequently, it is more thorough, organized and useful than most of the other Linux books I have had the chance to read.
Despite the aforementioned shortcomings, I would greatly recommend Mark G. Sobell’s “Fedora and Red Hat Enterprise Linux” to beginners and system administrators, as it greatly helped me to prepare for my RHCE certification. | <urn:uuid:5ab45dfb-5294-4382-8547-eeb8238fb643> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/09/04/a-practical-guide-to-fedora-and-red-hat-enterprise-linux-6th-edition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00216-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932645 | 540 | 2.53125 | 3 |
Space junkies are preparing for an astronomical event that will occur on Tuesday, June 5 - the once-in-a-lifetime Venus transit. The shadow of Venus moving across the surface of the Sun will make it appear as if a hole was punched into the circle of the star - it's also not going to happen again until December 2117, so unless we can extend our lifespans for another 105 years, this will be the only chance to see it. This video explains more.
This NASA site has more details on the event, along with its "don't stare at the Sun" advice (always good) and schedule of the best times to see it (head to the South Pacific for the best view, but most of the USA should be able to see it at around sunset). Of course, with my luck, it will be cloudy next Tuesday, but let's hope for good weather at around sunset on June 5.
We've seen quadrocopters do some pretty awesome indoor maneuvers, from swarming to playing a piano symphony. But what we really want to see is a full-fledged airplane whipping around the inside of a building.
MIT's Robust Robotics Group is taking steps toward making this a reality with an autonomous UAV that can fly around in a tight parking lot. To accomplish this feat, the MIT scientists developed a short-winged, laser-equipped brainiac UAV that can understand where it is and how to avoid obstacles all on its own.
MIT professor Mark Drela developed the UAV with a short 2-meter wingspan so that it could maneuver quickly in enclosed spaces. More importantly, the small airframe packs the same computational power as a netbook, with an Intel Atom processor inside.
It needs all this processing power to run a state-estimation algorithm in conjunction with a set of lasers, accelerometers, and gyroscopes. With these combined technologies, the UAV is able to figure out its own orientation (i.e., pitch, roll, and yaw) and velocity, as well as 15 other in-flight factors, without a GPS signal. At the same time, the UAV constantly runs an algorithm that it uses to avoid obstacles it comes across on the fly.
So far, the MIT scientists have run a preliminary test of the system aided by a preloaded map. The UAV successfully flew a total of three miles, at 22 miles per hour, through the parking garage under MIT's Seuss-y Stata Center.
The MIT researchers' next step will be to build an algorithm that allows their UAV to build a map of its surroundings on the fly.
This story, "MIT makes a drone aircraft that can fly indoors" was originally published by PCWorld. | <urn:uuid:5ba4fe92-6bf3-4c0d-a47e-3ba2a5e9a600> | CC-MAIN-2017-04 | http://www.itworld.com/article/2725096/consumerization/mit-makes-a-drone-aircraft-that-can-fly-indoors.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937437 | 426 | 2.828125 | 3 |
Current estimates suggest that 40-45% of marriages are at risk for future divorce. Separation and divorce often have negative effects on the mental and physical health of both spouses, which can impact several areas of their lives including their careers.
Psychologist Dr. John Gottman has conducted research on the predictors of divorce for several decades, and has uncovered a way to predict whether or not a couple will divorce with over 90% accuracy, based simply on observing a 15-minute conversation between the couple. This unique Info-Tech research note includes:
- A description of the communication patterns that most strongly predict marital stability vs. divorce.
- Specific examples of Dr. Gottman's research findings.
- Recommendations for how to improve marital communication patterns.
A satisfying marriage can go a long way towards creating happiness in other areas of one's life. Gain an understanding of the communication patterns that are associated with marital stability by learning about Dr. Gottman's research. | <urn:uuid:34a65066-a950-409b-9a04-9e237ea9ed59> | CC-MAIN-2017-04 | https://www.infotech.com/research/leisure-note-how-communication-patterns-predict-divorce | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948899 | 196 | 2.5625 | 3 |
Homebrew NMS: Keep a Database of Network Assets
As you build your own NMS, you'll eventually want to tie together all the data you gather. Perl's DBD::Pg is one way to do that.
Over time, we find it necessary to gather more and more information. An NMS solution may be able to store data in its internal database, but sometimes we need to combine many data sources in a single place. For example, it's extremely useful to grab Layer 2 discovery data and place it into a "host database," which will contain much more information than your discovery application provides. A few top-of-the-head examples are: owner information, serial number, related ticket numbers from your trouble ticketing system, physical location, and much, much more.
We'll get into the details of one proposed database layout in a future article, which will provide more concrete examples of using externally gathered data to enable successful executions of IT processes.
The overall concept of DBD::Pg is best expressed in steps:
- Define database name, host name of the database server, and your username and password
- Connect to the database
- Execute queries: insert new data, retrieve existing data, or delete data
So let's see this in action. The following example connects to a PostgreSQL database and executes a simple "SELECT" query. Here's the connection part:
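Something along these lines (the database name, host, and credentials are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder connection settings -- adjust for your environment.
    my $dbname = "hostdb";
    my $dbhost = "db.example.com";
    my $dbuser = "nmsuser";
    my $dbpass = "secret";

    # Craft the data source name (DSN) argument for DBI->connect().
    my $dsn = "dbi:Pg:dbname=$dbname;host=$dbhost";

    # $dbh is the handle returned by the database connection.
    my $dbh = DBI->connect($dsn, $dbuser, $dbpass)
        or die "connect failed: $DBI::errstr\n";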
The above code sets some useful variables and then crafts the arguments required by the connect() method. The $dbh variable is the handle returned for the database connection; if it is undefined after the connect() call, the connection did not succeed.
At this point we simply need to use the handle returned by DBI->connect() to execute queries. First, a database statement must be "prepared." If, for example, you needed to execute the same query over and over, you would execute() the query in a loop, but with different variables each time. A performance enhancement, prepare(), with a question mark placeholder, allows you to avoid sending the entire query over and over. Instead, it will send only the new arguments for every subsequent execute() call. We aren't using substitution in the following example, but you need to be aware of the real purpose of prepare().
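A query of our host database might then look like this; the hosts table and its columns anticipate the mythical schema described below.

    # Prepare the statement; no placeholder substitution in this example.
    my $sth = $dbh->prepare(
        "SELECT switch, switchport, mac_addr, lastseen
           FROM hosts
          WHERE mac_addr = '00:11:22:33:44:55'"
    ) or die "prepare failed: " . $dbh->errstr;

    $sth->execute() or die "execute failed: " . $sth->errstr;

    # fetchrow_array returns one row of results as a Perl list.
    my @row = $sth->fetchrow_array;
    print "result: @row\n" if @row;

    # Sanity check: we expect exactly one match for a MAC address.
    my @extra = $sth->fetchrow_array;
    warn "more than one row returned!\n" if @extra;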
The above code executes the SELECT query as commanded and prints the results. It is quite straightforward after reading the DBD::Pg documentation, but there are a few things to point out. The $errstr variable is always available and should be printed if any call to a DBD function fails; this means that you need to check the return value of every function call. The fetchrow_array method returns an array containing one row of the data returned from the database. It may be wise to check that only one result was returned!
Since @row is an array, it can be accessed by referencing individual indexes, like so: $row[0]. You will know the database schema beforehand, so accessing individual fields inside @row should be easy. If you're going to be using the data frequently during processing, it is recommended that you assign each field in the array a useful variable name.
For the sake of the example, we'll assume that we have new knowledge about a host in our database. The mythical database keeps track of switch, switchport, mac_addr, and "lastseen" (in that order). An easy way to update this information, correcting the switchport information, follows.
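Something like this, with the new port value being purely illustrative:

    # Correct the switchport recorded for this host; $row[2] is the
    # mac_addr column from the SELECT above.
    my $upd = $dbh->prepare(
        "UPDATE hosts SET switchport = ?, lastseen = now() WHERE mac_addr = ?"
    ) or die "prepare failed: " . $dbh->errstr;

    $upd->execute("Gi1/0/24", $row[2])
        or die "execute failed: " . $upd->errstr;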
When processing data returned from the DB gets more complex, assigning the @row elements meaningful names saves a lot of time and frustration. See how strange it gets referring to $row[number]? Referring back to the order of database elements gets quite tedious.
Inserting new data into the database is actually easier; just create some INSERT queries based on whatever data you have available. The difficult part is keeping track of what data types and fields you need to insert into a database row. I find it best to include the output of '\d' in PostgreSQL right in a comment in my source code, and also to name variables based on database fields.
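For instance (the schema comment and the values are illustrative):

    # Output of psql's "\d hosts", kept handy as a comment:
    #   switch     | text
    #   switchport | text
    #   mac_addr   | macaddr
    #   lastseen   | timestamp with time zone
    my ($switch, $switchport, $mac_addr) =
        ("sw-core-1", "Gi1/0/12", "00:aa:bb:cc:dd:ee");

    my $ins = $dbh->prepare(
        "INSERT INTO hosts (switch, switchport, mac_addr, lastseen)
         VALUES (?, ?, ?, now())"
    ) or die "prepare failed: " . $dbh->errstr;

    $ins->execute($switch, $switchport, $mac_addr)
        or die "execute failed: " . $ins->errstr;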
We'll be using the above example to show how easy it is to store, and more importantly, correlate data from multiple sources in a single authoritative database. Before zooming out to the overall IT picture, we'll continue focusing on network-based information in the next article: managing and verifying discovery data. | <urn:uuid:3db2d987-448a-4614-acd6-50c7ac30f342> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/netos/article.php/3699296/Homebrew-NMS-Keep-a-Database-of-Network-Assets.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90601 | 919 | 2.671875 | 3 |
Who gets in?
A major component of IT security is determining who is allowed into your infrastructure, both physically and logically, and what they may do once they have gained access. Access control determines who gets how much access. To gain that control, organizations must lock down their systems, including hosts, networks, applications, data stores, and data flows.
Communication security protects the pathways across which voice and data traverse. The goals of communication security include preventing eavesdropping to protect confidentiality, assuring integrity, and maintaining availability of the connection itself. All communication channels—whether between devices on the same network, across a VPN, over a remote connection, or wirelessly over radio waves—must be protected. A significant portion of communication security requires appropriate encryption. Encryption is used to protect the data itself while in storage and transit, as well as to provide a digital means of authentication. Without proper security, communication is subject to interception, manipulation, or denial of service. Communication security also includes planning for protection as new technologies and data flow patterns are incorporated into the workplace.
Cryptography is the science of obfuscation, and it is used to protect data while in transit or in storage. Data encryption includes three common sub-divisions: symmetric ciphers, asymmetric ciphers, and hashing. Symmetric cryptography is used for bulk data encryption, protecting information while in transit or in storage. Asymmetric cryptography is used to prove the identity of endpoints (e.g., digital signatures) or to provide secure symmetric key exchange (e.g., digital envelopes). Hashing is used to detect alterations or verify integrity of communications and stored data.
Intrusion Detection Systems (IDS) are designed to notify administrators of suspect activities in the computing environment. Intrusion Prevention Systems (IPS) detect suspect activities and alter the environment in an attempt to thwart those activities. New Intrusion Detection and Prevention (IDP) solutions can perform deep packet inspection on cloud traffic. These tools supplement the security provided by firewalls, proxies, malicious code scanners, and other typical security mechanisms. IDS/IPS/IDP may be able to detect violations based on pattern matching, anomaly detection, and behavior analysis. However, these tools require expertise for proper deployment, configuration, and tuning.
Logging and Monitoring
Logging and monitoring, in addition to auditing, are essential parts of keeping track of all of the events that occur within an organization's infrastructure. Every piece of equipment that can record a log file should be configured to do so, especially firewalls, proxies, DNS servers, DHCP servers, routers, and switches. Likewise, logging should be enabled on every OS and application that supports it. The more extensive the logging, monitoring, and auditing, the more evidence will be collected about both benign and malicious situations. Other important issues related to event tracking include historical log archival, securing logs, time synchronization, monitoring performance, vector tracking, maintaining accuracy, and complying with rules of evidence and chain of custody.
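At the application level, forwarding events to a central collector can be a one-time configuration; here is a sketch in Python, where the collector name "loghost" is a placeholder for your own syslog server:

import logging
import logging.handlers

# Send application events to a central syslog collector over UDP port 514.
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("loghost", 514)))
logger.info("user alice logged in from 10.0.0.5")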
Penetration testing is used to stress test a mature environment for issues that cannot be discovered by automated tools or typical administrators. Penetration testers are skilled in the methods and tools of criminal attacks and the art of reconnaissance, and they are masters of systems, protocols, and other aspects of IT from the perspective of malicious hackers. Testers craft exploits, modify code, decompile executables and applications, debug scripts, uncover covert channels, and more. These are essential skills for the members of a penetration testing team. A complete understanding of the benefits and mechanisms of black box security testing will enable an organization to benefit fully from hiring an ethical hacking consultant or developing an in-house testing team.
Remote access is convenient, can reduce costs, and can make work tasks more flexible, but it also increases risk for an organization. Once remote connectivity of any type is enabled for valid user access to a private network, the benefits of physical security are greatly reduced. As soon as authorized outsiders can establish valid connections to internal resources, hackers from across the globe gain the ability to attempt intrusions into those same remote access channels. Remote access includes traditional PSTN modems, VPN connections over the Internet, wireless connections, and more. Remote access often benefits from the implementation of authentication, authorization, and accounting (AAA) servers exclusively for remote users. Adding filters and rigorous oversight, such as auditing and IDS/IPS/IDP solutions, is essential. Secure remote connectivity is possible, but it is more challenging and involved than most organizations realize when first launching telecommuting or remote access projects.
With the proliferation of keyloggers, Trojans and other malware, it becomes progressively more difficult to ensure that data being used is safe.
In fact, it may not be possible to state that data in use is ever truly secure, given that any company is also dependent on the end user and how trustworthy he or she is. So perhaps the first precaution that can be taken is to ensure that those who have access to the data actually need to access it.
It is also important to consider, if a person does have access to data, where they can access it from and how. If data is highly sensitive, then really it should never leave the secure location where it is stored, whether that is on-premise or in the cloud, no matter who might be asking or how convenient it might be.
This issue becomes more of a concern when employees are being encouraged to work from home or are tempted to do work from an unsecured machine.
So the first step is identifying the required privacy of data (data discovery and classification is a useful task in itself) and who is allowed access to that data. Then the appropriate access rights can be set up and procedures created on how that data is to be accessed.
Once the policy is in place, technical solutions can be used to help enforce those policies. To that end, it is important that data protection is part of the workflow and that, where possible, the user is largely unaware of it. It should simply be part of what they do.
Full disk encryption (FDE) is a good first step and is increasingly 'invisible' to the end user. Whilst this may be considered protection for data at rest, it should be noted that FDE encrypts swap space, which is arguably data in use. Furthermore, it has to include all media. For instance, data copied to a USB stick must be just as encrypted as the hard disk of the desktop or laptop.
Another technical solution that provides protection for company data is the use of a virtual OS on USB sticks. This allows employees to plug this USB into any machine, have their familiar environment and still be private. This allows employees to use home machines that may also be used by their family and yet maintain complete separation from that platform.
Increasingly, Data Leak Prevention (DLP) is being used to stop a careless action from publicising sensitive data, but this is a whole subject in itself.
Good gateway protection implementing defence in depth continues to be good policy for the company infrastructure. Locking down the user’s OS and keeping it and all applications patched with the latest and greatest releases is also key.
However, this frequently runs into issues where the danger of manufacturers introducing new errors, and hence new vulnerabilities, must be weighed against the vulnerabilities that are being patched. It requires an understanding of what is being patched, a process to test before production, and a way to roll back if an unacceptable error is discovered.
Another way of controlling the desktop environment is application whitelisting, where only known applications are permitted to run. This can impede productivity, but where feasible it can go a long way toward reducing the chance of malware and of inadvertent disclosure.
As ever, the deployment of defence in depth is best practice and some control at the gateway of a network is an extra precaution that is easy to deploy, manage and monitor. There also remains little alternative to good desktop security.
In the end, the security of data in use is about risk mitigation. However, with the current targeted attacks and the proliferation of zero day threats, the risk level is high. It is necessary that action is taken to implement the required precautions that reduce the risk to an acceptable level.
Cross-posted from Redscan
Infineon Technologies and IBM have demonstrated a prototype 16Mbit MRam (magnetoresistive Ram) chip, bringing the power-saving technology one step closer to commercial availability.
MRam is a non-volatile memory technology which can store information for an extended period of time without power. The technology is seen as a future replacement for flash memory, another type of non-volatile memory, in mobile applications, such as mobile phones, and could even supplant volatile DRam (dynamic Ram) in PCs further down the road.
Made using magnetoresistive materials, MRam stores data by applying a magnetic field that causes memory cells to enter one of two magnetic states. By comparison, existing memory technologies, such as flash, SRam (static Ram) and DRam, use an electric charge to store data.
Besides its non-volatile characteristics, another attractive feature of MRam is cost. Unlike flash memory, which requires a specialised CMos process to manufacture, MRam chips can be produced using standard CMos processes. This could make MRam cheaper to manufacture if yield rates - the percentage of working chips produced on a single silicon wafer - are high.
MRam is expected to first be used in mobile applications as a replacement for flash. At some point in the future, MRam could also replace SRam and DRam. DRam is cheap to manufacture but is slower than MRam and requires a constant power supply to retain stored data. Using MRam instead of DRam in notebook PCs could help extend battery life, according to Infineon and IBM.
Infineon and IBM have been working together to develop MRam chips since 2000 and the technology could be available within a few years.
Sumner Lemon writes for IDG News Service
The Federal Trade Commission announced this week that it will host a workshop to explore potential privacy and security implications raised by the increasing use of facial recognition technology. The discussion will take place on December 8, 2011 in Washington, DC.
According to the FTC, the workshop, which is free and open to the public, may focus on topics including:
- What are the current and future uses of facial recognition technology?
- How can consumers benefit from the technology?
- What are the privacy and security concerns surrounding the adoption of the technology; for example, have consumers consented to the collection and use of their images?
- Are there special considerations for the use of this technology on or by children and teens?
- What legal protections currently exist for consumers regarding the use of the technology, both in the United States and internationally?
- What consumer protections should be provided?
Facial recognition technologies are being used more pervasively as a law enforcement and security tool, and technology companies increasingly are including facial recognition capabilities as a software feature to enhance applications and services. Recent articles also have highlighted potential uses of facial recognition technology to target advertisements toward particular demographics—for example, age-appropriate shoes or a particular restaurant recommendation.
Concerns expressed by consumer groups and resulting congressional attention likely have prompted the FTC to focus on the expanding uses of this technology. Check back for Inside Privacy's report on the workshop when it takes place.
Fu D., Zhang D., Xu G., Li K. (China Agricultural University), and 6 more authors. Animal Science Journal | Year: 2015
Beijing-you is a Chinese local chicken which is raised for both meat and eggs. In the present study, we examined the effects of different rearing systems on growth, slaughtering performance and meat quality of Beijing-you chickens at 26-40 weeks of age. Six hundred Beijing-you hens were randomly allocated into two groups at 16 weeks of age and raised in free range or battery cage systems. The body weight, slaughtering performance and meat quality were measured for each group at the ages of 26, 30, 35 and 40 weeks. Some of the traits were dramatically influenced by the two systems, although most of them did not show significant changes. For the meat fiber microstructure, we found that the diameter of thigh and breast muscle fiber in the free range group was significantly greater than in the cage group (P<0.05) at 26 weeks of age. The ratio of fast muscle fiber in thigh muscle samples of the free range group was significantly reduced compared to that of the cage group at both 35 (P<0.01) and 40 (P<0.01) weeks of age, indicating that the free range system could promote the transformation of fast muscle fiber into slow muscle fiber. © 2014 Japanese Society of Animal Science.
Wang C. (CAS Institute of Zoology), Luo J. (CAS Institute of Zoology), Wang J. (U.S. Centers for Disease Control and Prevention), Su W. (CAS Institute of Zoology), and 9 more authors. Integrative Zoology | Year: 2014
Outbreaks of H7N9 avian influenza in humans in 5 provinces and 2 municipalities of China have reawakened concern that avian influenza viruses may again cross species barriers to infect the human population and thereby initiate a new influenza pandemic. Evolutionary analysis shows that human H7N9 influenza viruses originated from the H9N2, H7N3 and H11N9 avian viruses, and that it is a novel reassortant influenza virus. This article reviews current knowledge on 11 subtypes of influenza A virus that can cause human infections. © 2013 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
The 20th of this month marks the 40th anniversary of the first Moon landing, Apollo 11, which famously took Neil Armstrong and Buzz Aldrin on man's first Moon walk. The Lunar Excursion Module (LEM) was fundamentally a simple machine, designed to fall to the Moon rather than fly, with one giant rocket motor underneath and some smaller attitude thrusters that allowed the spacecraft to be rotated so that the main engine could point in any direction when it fired. Control of the descent was therefore by means of a number of rocket engine "burns" that could slow the fall of the LEM. This certainly is a "brute force" method of flying; as with Earth flying machines like a helicopter or the Harrier jet, if you have a powerful enough engine it's possible to bludgeon the laws of physics into submission.
Later on (in the late 1970s), the idea of landing the LEM inspired a series of popular computer games, probably most famously the "Lunar Lander" arcade game from Atari. I first saw versions of lunar lander running on programmable calculators, and specifically the Science of Cambridge MK14 (an early single-board microcomputer), the first computer that I ever programmed. In the computer game, the program modelled the amount of fuel in the craft, the altitude and the speed, and of course the moon's gravitational pull of 1.6 N/kg. By pressing a button you could "burn", which used fuel and slowed descent. If you studied Physics or Applied Maths at school you would have had the formulae needed to create this program, and you could even do the necessary calculations by hand. Where the computer becomes important is in the dynamic nature of the calculations: as you burn fuel, the mass of the craft decreases, and therefore the force of the engine creates more acceleration as the flight continues.
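For anyone tempted to rewrite the game, here is a minimal sketch of that physics loop in Python; the gravity, masses and starting altitude are the figures quoted in this article, while the thrust, burn rate and crude autopilot rule are invented round numbers purely for illustration:

GRAVITY = 1.6        # m/s^2, the Moon's pull (1.6 N/kg)
DRY_MASS = 4000.0    # kg, the LEM without fuel
THRUST = 45000.0     # N, a made-up main-engine thrust
BURN_RATE = 15.0     # kg of fuel consumed per second while burning

def step(altitude, velocity, fuel, burning, dt=1.0):
    # Velocity is negative while falling; a burn pushes it back toward zero.
    mass = DRY_MASS + fuel
    accel = -GRAVITY
    if burning and fuel > 0:
        accel += THRUST / mass   # less fuel means a lighter, livelier craft
        fuel = max(0.0, fuel - BURN_RATE * dt)
    velocity += accel * dt
    altitude += velocity * dt
    return altitude, velocity, fuel

altitude, velocity, fuel = 15000.0, 0.0, 11000.0
while altitude > 0:
    burning = velocity < -50     # a crude autopilot stands in for the button
    altitude, velocity, fuel = step(altitude, velocity, fuel, burning)
print("touchdown at", round(velocity, 1), "m/s with", round(fuel), "kg left")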
The real LEM had a dry weight of around 4,000kg, with another 11,000kg of fuel at the start of the flight, and the descent started from a height of 15km. Unlike the game, of course, the Apollo 11 descent had two men's lives depending on the outcome, and the flight did not go smoothly. Armstrong famously landed the LEM (call sign "Eagle") with only a few seconds of fuel left in the tank, after deciding that the landing site was too rocky and choosing to fly along the surface for a while, looking for a new site. If you've ever played "Lunar Lander", you'll know that flying along at constant height is a very expensive operation in terms of fuel, so this was a high-risk strategy.
The LEM also experienced some computer problems during the short flight, with the Apollo Guidance Computer giving "program alarm 1202" repeatedly, causing Armstrong to ask Mission Control whether he should abort the landing. In subsequent analysis, the experts from MIT concluded that the computer overloaded because of the data coming from both the rendezvous radar and the ground radar at the same time. The boffins imagined that only the ground radar would be on during the descent (to give accurate height readings), while the rendezvous radar would be used after takeoff. Armstrong, being a test pilot, was planning for possible emergencies; if the landing were aborted, he wanted to be able to find "Columbia" (the command/service module) as quickly as possible as they burned away from the Moon's surface. You might say that the user exercised the software in a way that the programmers had not foreseen; a problem that's still all too common in software engineering today.
I would like to think that with today's technology it would be much easier to go to the Moon: we have faster, smaller computers; superior materials like plastics and carbon fibre; more sophisticated fuels and engine technologies. Certainly the one thing that hasn't changed in the last 40 years is the courage that it would take to land on the Moon, and we have to pay tribute to the 12 men that have done it.
For those interested, check out this link: http://wechoosethemoon.org/
It recreates the Apollo 11 launch and Moon landing in a real-time interactive website to celebrate the 40th anniversary.
Future Look: Real-Time Individual Test Inspection
Do you remember the little pieces of paper you sometimes find in the pocket of your new shirt or pants? The paper says something like "Inspected by No. 11." I believe it simply means that Inspector No. 11 found that piece of clothing to be built properly, and placed the little piece of paper in the pocket. It feels good to me to know that the shirt was individually inspected and found to meet some standard. That's good quality control and good customer value.
Even though you don’t see such inspection notices on everything you buy, you certainly expect that when you buy something new, it has been inspected before you use it.
So what does this have to do with IT certification tests? Well, let me try to explain it by starting at the end, with the final output of a certification test: the test score.
Unfortunately, some test scores are good and others are bad (that is, defective). They are good if they actually represent the knowledge or skills of the test-taker. Otherwise they are bad. What would cause a score to be bad and not represent the skills of the test-taker? There are several causes, actually:
- The person cheated in some way. Perhaps he only used a brain-dump site to prepare for the test. The score would represent what was memorized from the brain-dump site, but not overall knowledge or skill.
- The person hired someone else to take the test. The score would then represent the hired person, not the name associated with the score.
- The test-taker didn’t take the test seriously, perhaps because she just wanted to see what the questions were like or was sent in by an illegal test-prep company to steal questions.
- The person could have been sick during the test, making it difficult to think clearly and work hard during the test.
- Something happened during the administration of the test to distract the test-taker, causing strange errors during the test.
So, from my point of view, a score is not good or bad because it is a high score or a low score (although I’m sure that’s one way you could look at it). It is good if it is valid and bad if it is not valid. I would like every score report issued following IT certification exams to say: “This score is valid and can be used for certification decisions.” During the actual test, it would be ideal if algorithms were running that evaluated the performance of the test and test-taker, and were able to judge properly and make that statement. It’s like the clothing inspector looking over the garment and then using the little piece of paper.
A bad test would not even provide the test score. It would simply state, “This test event is not valid.” How can a bad test score be identified and exposed? It’s possible using statistical methods and comparing unusual test events to the majority of normal tests taken.
Normal test-takers–whether they are high scorers, low scorers or somewhere in between–take the test seriously, do not cheat, are not sick, do not hire others to take the tests and are not distracted during the test. They work through the test in a normal fashion, reading each question, spending a typical amount of time to work things out and selecting or creating an answer.
Other test-takers answer questions very strangely, sometimes picking very unlikely answers, sometimes getting very difficult questions correct and missing easy ones. They also spend unusual amounts of time on questions, using less time on questions that should take more time, and vice versa. I saw a test record once where the test-taker spent less than three seconds on each of 50 questions and passed because he got most of the questions correct! How it was done, I haven’t a clue, but I do know that the test produced a score that should not have been accepted and used for certification. Using statistics properly, such a test would be very easy to detect.
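As a sketch of how simple such a screen can start out, the pool of normal test-takers defines a baseline, and sessions far outside it get flagged; the three-sigma threshold below is arbitrary, and real detection programs use much richer models than this:

from statistics import mean, stdev

def flag_suspicious(session_times, threshold=3.0):
    # session_times holds one list of per-question seconds per test-taker.
    averages = [mean(times) for times in session_times]
    mu, sigma = mean(averages), stdev(averages)
    return [i for i, avg in enumerate(averages)
            if abs(avg - mu) > threshold * sigma]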
It benefits us all, particularly you and the rest of the vast majority of normal test-takers out there, if we can provide a way to evaluate each testing event to make sure the score was properly produced. Unusual test events (such as those produced by cheating, illness or equipment malfunction) will then no longer have the negative impact they have today, because defective scores will not be accepted and used for certification decisions.
David Foster, Ph.D., is president of Caveon (www.caveon.com) and is a member of the International Test Commission, as well as several measurement industry boards.
OpenSSH is a common tool for most of network and system administrators. It is used daily to open remote sessions on hosts to perform administrative tasks. But, it is also used to automate tasks between trusted hosts. Based on public/private key pairs, hosts can exchange data or execute commands via a safe (encrypted) pipe. When you ssh to a remote server, your ssh client records the hostname, IP address and public key of the remote server in a flat file called “known_hosts“. The next time you start a ssh session, the ssh client compares the server information with the one saved in the “known_hosts” file. If they differ, an error message is displayed. The primary goal of this mechanism is to block MITM (“Man-In-The-Middle“) attacks.
But, this file (stored by default in your "$HOME/.ssh" directory) introduces security risks. If an attacker has access to your home directory, he will have access to the file, which may contain hundreds of hosts on which you also have access. Did you ever hear about the "Island Hopping" attack? Wikipedia defines this attack as follows:
“In computer security, for example in intrusion detection and penetration testing, island hopping is the act of entering a secured system through a weak link and then “hopping” around on the computer nodes within the internal systems. In this field, island hopping is also known as pivoting.“
A potential worm could take advantage of the information stored in the file to spread across multiple hosts. OpenSSH introduced a countermeasure against this attack in version 4.0. The ssh client is able to store the host information in a hashed format. The old format was:
host.rootshell.be,10.0.0.2 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0ei6KvTUHnmCjdsEwpCCaOHZWvjS \ jytm/5/Vv1Dc6ToaxTnqJ7ocBb7NI/HUQEc23eUYjFrZQDS0JRml3RnsG0UzvtIfAPDP1x7h6HHy4ixjAP7slXgqj3c \ fOV5ThNjYI0mEbIh1ezGWovwoy0IxRK9Lq29CacqQH8407b1jEj/zfOzUi3FgRlsKZTsc3UIoWSY0KPSSPlcSTInviG \ oNi+9gC8eqXHURsvOWyQMH5K5isvc/Wp1DiMxXSQ+uchBl6AoqSj6FTkRAQ9oAe8p1GekxuLh2PJ+dMDIuhGeZ60fIh \ eq15kzZGsDWkNF6hc/HmkJDSPn3bRmo3xmFP02sNw==
With version 4.0, hosts are stored in this new format:
|1|U8gOHG/S5rH9uRH3cXgdUNF13F4=|cNimv6148Swl6QcwqBOjgRnHnKs= ssh-rsa AAAAB3NzaC1yc2EAAAABIw \ AAAQEAvAtd04lhxzzqW57464mhkubDixZpy+qxvXBVodNmbM8culkfYtmq0Ynd+1G1s3hcBSEa8XHhNdcxTx51MbIjO \ dCbFyx6rbvTIU/5T2z0/TMjeQyL3SZttbYWM2U0agKp/86FdaQF6V87loNcDq/26JLBSaZgViZS4gKZbflZCdD6aB2s \ 2sqEV4k7zU2OMHPy7W6ghNQzEu+Ep/44w4RCdI5OYFfids9B0JSUefR9eiumjRwyI0dCPyq9jrQZy47AI7oiQJqSjvu \ eMIwZrrlmECYSvOru0MiyeKwsm7m8dyzAE+f2CkdUh6tQleLRLnEMH+25EAB56AhkpWSuMPJX1w==
As you can see, the hostname is not readable anymore. To achieve this result, a new configuration directive was added in version 4.0 and above: "HashKnownHosts [Yes|No]". Note that this feature is not enabled by default. Some Linux distributions (and other UNIX flavors) enable it by default. Check your configuration. If you switch the hashing feature on, do not forget to hash your existing known_hosts file:
$ ssh-keygen -H -f $HOME/.ssh/known_hosts
Hashing ssh keys is definitely the right way to go but it introduces problems. First, the good guys cannot easily manage their SSH hosts! How do you perform a cleanup? (My "known_hosts" file has 239 entries!) In case of security incident management or forensics investigations, it can be useful to know the list of hosts to which the user connected. It's also an issue for pentesters. If you have access to a file containing hashed SSH hosts, it can be interesting to discover the hostnames or IP addresses and use the server to "jump" to another target. Remember: people are weak and re-use the same passwords on multiple servers.
By looking into the OpenSSH client source code (more precisely in "hostfile.c"), I found how the hostnames are hashed. Here is an example:
"|1|" is the HASH_MAGIC. The first part between the separators "|" is the salt, encoded in Base64. When a new host is added, the salt is generated randomly. The second part is the hostname HMAC ("Hash-based Message Authentication Code"), generated via SHA1 using the decoded salt as the key and then encoded in Base64. Once the hashing is performed, it's not possible to decode it. Like UNIX passwords, the only way to find back a hostname is to apply the same hash function and compare the results.
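The scheme is easy to reproduce. The script below is written in Perl, but here is a sketch of the core comparison in Python, taking just the hashed-hostname field (the first token of a known_hosts line):

import base64
import hashlib
import hmac

def matches(entry, candidate):
    # entry looks like: |1|base64(salt)|base64(HMAC-SHA1(salt, hostname))
    _, _magic, salt_b64, digest_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, candidate.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))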
I wrote a Perl script to bruteforce the "known_hosts" file. It generates hostnames or IP addresses, hashes them and compares the results with the information stored in the SSH file. The script syntax is:
$ ./known_hosts_bruteforcer.pl -h
Usage: known_hosts_bruteforcer.pl [options]
-d <domain>   Specify a domain name to append to hostnames (default: none)
-f <file>     Specify the known_hosts file to bruteforce (default: $HOME/.ssh/known_hosts)
-i            Bruteforce IP addresses (default: hostnames)
-l <integer>  Specify the hostname maximum length (default: 8)
-s <string>   Specify an initial IP address or hostname (default: none)
-v            Verbose output
-h            Print this help, then exit
Without arguments, the script will bruteforce your $HOME/.ssh/known_hosts by generating hostnames with a maximum length of 8 characters. If a match is found, the hostname is displayed with the corresponding line in the file. If your hosts are FQDNs, a domain can be specified using the flag "-d". It will be automatically appended to all generated hostnames. By using the "-i" flag, the script generates IP addresses instead of hostnames. To spread the load across multiple computers, or if you know the first letters of the used hostnames or the first bytes of the IP addresses, you can specify an initial value with the "-s" flag.
Examples: If your server names are based on the template "srvxxx" and belong to the rootshell.be domain, use the following syntax:
$ ./known_hosts_bruteforcer.pl -d rootshell.be -s srv000
If your DMZ uses IP addresses in the range 192.168.0.0/24, use the following syntax:
$ ./known_hosts_bruteforcer.pl -i -s 192.168.0.0
When hosts are found, they are displayed as below:
$ ./known_hosts_bruteforcer.pl -i -s 10.255.0.0 *** Found host: 10.255.1.17 (line 31) *** *** Found host: 10.255.1.74 (line 165) *** *** Found host: 10.255.1.75 (line 69) *** *** Found host: 10.255.1.76 (line 28) *** *** Found host: 10.255.1.78 (line 56) *** *** Found host: 10.255.1.91 (line 51) *** ^C
My first idea was to bruteforce using a dictionary. Unfortunately, hostnames are sometimes based on templates like "svr000" or "dmzsrv-000", which makes a dictionary unreliable. And what about performance? I'm not a developer and my code could for sure be optimized. The performance is directly related to the size of your "known_hosts" file. Be patient! The script is available here. Comments are always welcome.
Usual disclaimer: this code is provided "as is" without any warranty or support. It is provided for educational or personal use only. I'll not be held responsible for any illegal activity performed with this code.
I’m sure you’ve all heard of 4G, but what does it mean and what can it do for you? 4G is the fourth generation wireless service that offers between 4 and 10 times the performance of 3G networks. 4G is the new generation of mobile broadband and offers faster download speeds to enhance demanding applications while on-the-go. The two main technologies of 4G are WiMax and LTE (Long Term Evolution). According to PC World Magazine:
- WiMax providers are advertising download speeds between 2 mbps and 6 mbps, with peak speeds of 10 mbps or more
- LTE is expected to offer download speeds in the 5 mbps to 12 mbps range
Most 3G networks deliver 1.5 mbps. With this enhanced speed you’ll truly be able to perform no matter where the job takes you.
I’m sure there will be interesting developments in the coming months, so stay tuned for more information about the 4G evolution. | <urn:uuid:71fc2ed6-4f20-4488-adf9-f2416c94efb2> | CC-MAIN-2017-04 | http://blog.decisionpt.com/4g-wireless-basics | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00135-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932701 | 205 | 2.515625 | 3 |
[Article published in The Review (Feb. 2010)]
An innovative scheme in Ghana uses cellphones to improve the quantity and quality of care for expectant mothers and newborn babies
One effect of the spread of cellphones in the developing world is that their use in mobile health initiatives has become increasingly popular.
In West Africa, for example, the Ghanaian government, in partnership with the Grameen Foundation (a non-profit organization that funds access to microfinance and technology for people living in poverty), is planning to use cellphones to increase the quantity and quality of antenatal and neonatal care in rural areas of the country.
The two-and-a-half-year Mobile Technology for Community Health (MoTeCH) scheme will develop a suite of services, delivered over basic cellphones, that provide relevant health information to pregnant women and encourage them to seek antenatal care from local facilities. At the same time, MoTeCH will help community health workers to identify women and newborn babies in their area who are in need of healthcare services, and automate the process of tracking patients who have received care.
Pregnant women will register by providing their phone number, the area in which they live, their estimated due date and their language preference. They will then begin to receive SMS and/or voice messages that provide information about their pregnancy (milestones in fetal development, for example), the location of their nearest health center, and specific treatments that they should receive during their pregnancy, such as tetanus vaccinations. Once her child is born, the mother will continue to receive messages and information about essential vaccinations for her child and how to manage critical childhood illnesses.
The system will also include a facility that enables mothers to send health-related questions via SMS and receive responses by the same medium. There will also be specific tools for community health workers, who will be able to enter data on patients into a national patient register using their cellphones. This will make daily record-keeping simpler for them, as well as enabling the Ghana Health Service to track the delivery of antenatal services and send timely and important messages to both health workers and patients.
MoTeCH, which is funded by a grant from the Bill & Melinda Gates Foundation, is a collaboration between Columbia University’s Mailman School of Public Health and the Ghana Health Service. All the parties involved hope that the end result will be a fall in infant mortality and disease across the country.
Last week, Montana Attorney General Mike McGrath announced a new Web site, www.safeinyourspace.org, designed to educate young people -- and the adults in their lives -- about some of the kinds of dangers they might face online.
McGrath made the announcement at an event at the Attorney General's Office in Helena. He was joined by Superintendent of Public Instruction Linda McCulloch and representatives of the Montana Safe Schools Center at the University of Montana. The site was designed in cooperation with the center.
"We think this site will help start the conversation between young people and the adults in their lives," McGrath said. "Safe in YourSpace encourages children, parents and teachers to talk with one another about how to stay safe online."
"The Internet is a valuable educational tool that gives students access to resources around the world," McCulloch said. "This Web site will help educate students, parents, educators and community members on ways to keep our students safe while they are surfing the Web at home or at school."
The site has specific information for teens, parents and teachers. It covers a variety of topics, including cyberbullying, Internet predators and technical issues for teachers. The section for teens has information and tips on e-mail, instant messaging, social networking and peer-to-peer networking. It also includes a glossary of terms and links to state and national organizations.
McGrath noted that sometimes, young people are more technically savvy than their parents.
"We know kids are going to use technology, and we need to encourage that," he said. "But while young people may know how to navigate pages and sites, they don't necessarily know how to make good decisions about some of what they face on the Net."
The Safe Schools Center at the University of Montana has provided training with the National Center for Missing and Exploited Children and i-SAFE, a nonprofit foundation that focuses on Internet safety.
“Online predation, identity theft, cyberbullying - these are issues young people have to be prepared for on a daily basis,” said Rick van den Pol, director of the Safe Schools Center.
Since the automobile industry’s earliest days, innovation has been a fundamental element of progress. A requirement, really. Pioneering a better way to get around doesn’t just happen on its own.
In fact, the entire industry is an innovation in itself. Automobile inventors originally were seeking alternatives for steam locomotives, the dominant method of transportation in the 1800s.
The vehicles we drive today—while a far cry from the first automobile, a three-wheeled, steam-engine-powered cart, and even Henry Ford’s Model T—exist because of the idea of YES. Yeses that fueled the industry’s evolution.
Perhaps the most newsworthy innovation lately has been the rise of self-driving cars.
Uber and Volvo partnered to pilot the first-ever self-driving car service in Pittsburgh. Drivers will still be present, for the time being, in modified Volvo XC90s but will serve merely as supervisors. The SUVs are “outfitted with dozens of sensors that use cameras, lasers, radar, and GPS receivers,” according to a Bloomberg article.
Volvo and Uber signed a $300 million agreement to develop a fully self-driving car that will be road-ready by 2021, according to the Bloomberg article. To further its autonomous innovation, Uber recently announced that it acquired self-driving truck startup Otto.
Google and many other companies are also innovating in this space. Google currently has prototypes on the road in California, Texas, Washington, and Arizona, and claims to have driven more than 1.5 million miles.
And startup Drive.ai is using deep learning to create new ways for self-driving cars to communicate with people outside—to give them a voice of sorts.
Companies and startups aren’t the only ones having a hand in the self-driving-car revolution.
A recent New York Times article reported that Michigan Senators Gary Peters and Debbie Stabenow and Representative Debbie Dingell are trying to open a self-driving car test center where automakers and regulators can collaborate to design and test the vehicles.
“[Autonomous vehicles] are inevitable. It’s only a matter of time,” Andrew Chatham, senior staff engineer and off-board software lead for Google’s self-driving car program, told FastCompany. “They are the most logical next step and will have socio-economic and other effects on our society—some good and some bad—quite possibly changing our whole way of life…”
It will be interesting to continue to watch how all these innovators will continue to collectively say yes to evolving the auto industry and to finding new and better ways of getting from point A to point B.
When the RMS Titanic set sail on her maiden voyage from Southampton, England, to New York, on April 10, 1912, she was considered the ultimate passenger liner -- unparalleled in luxury, size and technology.
The legendary British ocean liner that struck an iceberg and sank hours later in the North Atlantic Ocean on April 15, 1912, clearly wouldn't be considered a high-tech vessel today. But when the ship set sail with 2,228 passengers and crew members amid great fanfare on an April afternoon 100 years ago, the Titanic was a marvel of state-of-the-art technology that captured the world's interest.
"At the time, it was the most advanced ship," said Joseph Vadus, IEEE Life Fellow and the leader of the team that discovered the Titanic in 1985. "The crew had a lot of confidence in their ship and the technology that it had. They were bragging on how good this ship was. Trouble was unthinkable. There were people on board who were experts at different technical problems -- engineers, electricians, plumbers -- and they did their best, but their best was not enough."
The Titanic, for instance, had an electrical control panel that was 30 to 40 feet long. The panel controlled all of the fans, generators and lighting on the ship. It also controlled the condensers that turned steam back into water, along with the few machines that took salt out of ocean water to make it drinkable.
"Their electrical control panel, to us, would seem enormous, complicated and wasteful," said Tim Trower, a self-styled maritime historian who focuses his research on the Titanic. "It would be pretty primitive today. A simple desktop computer would handle everything that was down on this massive control panel."
The Titanic also had a master-and-slave setup for all of the clocks onboard. The central clock was on the bridge, and as the captain adjusted the time on that one clock, all the clocks on the ship would register the change as the ship sailed through different time zones.
There also were four elevators on the Titanic, which was fairly new technology on a ship. A few first-class cabins also had telephones, although the phone could not make ship-to-shore calls.
The crowning technical glory on the Titanic was the advanced wireless communications setup for Morse Code, which was considered the most powerful setup in use at the time.
The main transmitter was housed in what was dubbed the Marconi Room, named after Italian inventor Guglielmo Marconi, who was known as the father of long-distance radio transmissions. The transmitter's antenna was strung between the ship's masts, some 250 feet above the ocean's surface.
Most ships of the day could transmit messages a distance of 100 to 150 miles during the day, according to Trower, a contributor to the Titanic Commutator, the Titanic Historical Society's magazine. However, the Titanic's wireless system was capable of transmitting messages for 500 miles during the day and 2,000 miles at night.
"They had the very best, the very latest in wireless equipment," Trower said. "There were only two wireless operators onboard, both young men. They were the computer geeks of the day. These guys ate, slept and breathed wireless. Think of computer nerds sitting in the basement in their underwear surfing the Internet. These were those kinds of guys. They were good at what they did, but it was still slow."
“Why doesn’t this PING work!?!”
Here is a simple 3 router configuration, well at least it is simple on 2 of the 3 routers. R1 and R3 are configured quite traditionally, but R2 is a bit more involved.
Here is the diagram.
Here are the details.
R2 is using a VRF which includes both LAN interfaces. R2 is also acting as a Zone Based Firewall in transparent mode, allowing all ICMP traffic in both directions, as well as SSH from the inside to the outside networks. R2 has a bridged virtual interface in the 10.123.0.0/24 network. All are running OSPF, but pings issued from R2 to the loopbacks of R1 and R3 are failing.
Can you identify why?
RFC, or Request for Comments, are documents published that describe various items surrounding computer networking. Generally, these are memorandums published by the Internet Engineering Task Force.
RFCs can be a great resource. For some unknown reason, most candidates preparing for the CCIE don't take the time to review these documents, which can be very helpful in understanding the how and why of various networking components. Perhaps the language is a bit dry, or they prefer books with shiny covers.
You have just been given a shiny, new router to configure. As part of the configuration, you are asked to configure an outbound access list which will only permit traffic through to specific destinations. Here are the requirements that you are given for your access-list:
Match (and permit) the following destinations using an access-list. Your access list should use the fewest number of lines, and should not overlap any other address space.
Anything within the 10.0.0.0/8 address space.
Anything within the 172.16.0.0/12 address space.
Anything within the 192.168.0.0/16 address space.
Anything within the 169.254.0.0/16 address space.
Be warned, it is estimated that a very high percentage of readers will NOT have the correct answer.
The leading question:
“Is it possible (and if so, how) to redistribute or originate a default route based on time of day?”
The short answer is “Sure, why not?”… But the longer answer has to do with how do we warp the forces of the universe to make that happen???
Well, start with what we know. We know we can do time-ranges in access-lists, right? Can we do them in standard access-lists (what we see used for redistribution all the time)?
Rack1R1(config-if)#exit
Rack1R1(config)#access-list 1 permit 172.16.0.0 0.15.255.255 ?
  log  Log matches against this entry
  <cr>
Rack1R1(config)#
Nope. There's a bummer. So we will need to use EXTENDED ACLs in order to make this work. So now we are reaching the point where the answer is "Yes, it can be done, but it will make my head hurt."
First, as a little review, check out a blog we did last year providing some information on that sort of thing in conjunction with a distribute-list in different routing protocols.
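To sketch one possible shape of the answer (all names and addresses here are invented, and the behavior deserves lab verification, in particular whether redistribution re-evaluates the ACL when the time-range flips): anchor a static default, match it with an extended ACL tied to a time-range, and hang that off the redistribution:

ip route 0.0.0.0 0.0.0.0 192.0.2.1
!
time-range BUSINESS-HOURS
 periodic weekdays 8:00 to 18:00
!
ip access-list extended DEFAULT-ONLY
 permit ip host 0.0.0.0 host 0.0.0.0 time-range BUSINESS-HOURS
!
route-map STATIC-TO-OSPF permit 10
 match ip address DEFAULT-ONLY
!
router ospf 1
 redistribute static subnets route-map STATIC-TO-OSPF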
Hello faithful blog readers. We all know there are some real treasures in the DOC-CD that can assist dramatically in the lab exam. Here are some of our reader’s favorites. Thanks to my friend Ruhann over in South Africa for the post idea!
All navigation begins from http://www.cisco.com/cisco/web/psa/configure.html?mode=prod
I. Bridging and Switching
a. Integrated and Concurrent Routing and Bridging
Cisco IOS Software – 12.4 Family – 12.4 Mainline – C.G. – Cisco IOS Bridging and IBM Networking Configuration Guide, Release 12.4 – Part 1: Bridging – Configuring Transparent Bridging
II. IP IGP Routing
a. Best Path Selection
Cisco IOS Software – 12.2 Family- 12.2 Mainline - C.G. – Cisco IOS IP Configuration Guide, Release 12.2 – Part 2: IP Routing Protocols – Configuring BGP – How BGP Selects Paths
Answers for Part II
So the answers to the exciting tasks at hand….
There was a good amount of activity surrounding answers submitted for the contest! It was good to see that many people interested in them! Now, it’s time to go through the answers and stretch the imagination a bit! Be prepared for some stretching as well!
One quick thing to point out before we get started, there was a question asked about why /24 routes won’t have a “.255″ as the fourth octet. This really depends on how we are using the ACL. If we are doing traffic filtering, where packets will obviously come from hosts INSIDE the /24, then yes, I’d use a “.255″ mask.
However, when the entry is being used for a routing filter, and it’s a /24 route… The fourth octet will, by definition, always be “.0″ and shouldn’t be changed. So the mask of “.0″ prevents anything from changing!
Now… On to the answers!
Thank you to everyone who participated… It was my first time running a little contest on the blog, and I’m sorry to say it didn’t quite work as I expected! The comments were not supposed to be seen until a day later, but I think I forgot to share that with the other folks here! My bad!
Anyway, there are a variety of answers that we received in the commentary, and remember that I said all three must be correct. That was the catchy part, as even the first few people were almost there. Almost, but not quite! It’s a good learning curve though!
Andrew Dempsey was the first person to actually get all three of them correct! Congratulations! Andrew, pop me an e-mail and we’ll figure out how to get the tokens to you.
To everyone else, I promise to have all the kinks worked out by tomorrow when I post Part II with a much more exciting set of things to think about!
1. Start picking a few and finding similarities again…
Just like with our even/odd example, we look at the constants. The last two bits will ALWAYS be “00″. So a mask of 11111100 would fit.
access-list 16 permit 188.8.131.52 0.0.252.255
2. Our differences here are in the third octet.
There are three bits of difference between the six values there. The 2-bit, 4-bit and 8-bit positions. But 2^3 would yield eight matches.
140 and 142 are missing there. So we have two ways of looking at this.
access-list 17 permit 184.108.40.206 0.0.6.0
access-list 17 permit 220.127.116.11 0.0.2.0
access-list 17 deny 18.104.22.168 0.0.2.0
access-list 17 permit 22.214.171.124 0.0.14.0
Either way, two lines is our best bet!
3. This is a little more complicated than it looks! Namely because there’s a gap in the middle and we aren’t allowed to use any “deny” statements! Go figure!
This exercise is just like creating subnets though. How well did you know your bit boundaries?
So let’s make as many major blocks as we can.
access-list 18 permit 126.96.36.199 0.0.0.63
access-list 18 permit 188.8.131.52 0.0.0.15
access-list 18 permit 184.108.40.206 0.0.0.7
access-list 18 permit 220.127.116.11 0.0.0.3
access-list 18 permit 18.104.22.168 0.0.0.0
access-list 18 permit 22.214.171.124 0.0.0.0
access-list 18 permit 126.96.36.199 0.0.0.3
access-list 18 permit 188.8.131.52 0.0.0.15
access-list 18 permit 184.108.40.206 0.0.0.127
After a while you have to think about all the different bit boundaries. On the bright side, Windows Calculator is available to help if necessary!
Back in a day’ish!
As CCIE candidates, we are asked to do all sorts of things with access lists. We have them in lots of different places, and use them in lots of different ways. So many, sometimes, that it becomes very confusing to follow things!
Access-lists themselves aren’t really that bad. Or are they? When we use them with route-maps, occasionally we see permits in an access-list that are really used to deny the packet/route/whatever. What’s up with that?
Remember that ACLs are simply a matching mechanism. Something either DOES (permit) or DOES NOT (deny) match the list. Now, what we do with that matching status depends on how we use the ACL. As an interface-based packet filter, it truly is a permit or deny of the packet! In a route-map, it would depend on whether the route-map clause is a permit one or a deny one, in which case anything MATCHING the ACL would follow that route-map permit or deny ability.
Now, within an access-list, we also have some difficult things to grasp. I like to consider that lesson in binary math. That may oversimplify it, because really all we are doing is counting a 0 or a 1. How difficult can it be to count to one???
All by itself, it's not that difficult. But then we'll get some really obnoxious words thrown in like "in as few lines as possible" or "minimal configuration". That means we have to start thinking about how the router sees things.
Let’s start with some groupings or summaries, and get the ball rolling.
In as few lines as possible (which translates to "no overlap") summarize 172.16.31.0/24 through 172.16.34.0/24:
There’s only four networks, so it should be a /22, right? We learned that back in CCNA! Well, not quite so fast, because there’s a bit-boundary in our way. The best way to start with any of these tasks is to work with the binary and start to see the patterns.
Now, we may have seen some documents about performing an XOR function between the different entries. This is kinda-sorta true. From a pure logical construct, XOR measures between two different things. Here we have four. So would it be an XXXOR or an XOROROR? Either way, the point is we are beyond the basic logical formula!
But it really isn’t as bad as it may sound! Think about it in very simplistic terms! Next, we look at what things are the same and which are different. Well, between 31 and 32 we see LOTS of things that are different. 6 bits, in particular, have different values. When we create an ACL binary mask, the 0-bit means “stay the same” while a 1-bit means “different” or “don’t care”.
So just on two values with 6 bits of difference, we could come up with a mask of 00111111 and it would work. The problem we create, though, is the over-matching. If you take 2 to the power of the total number of 1-bits in your mask, you’ll find the total number of matches for the mask.
In this case, 2^6 yields 64 matches to that mask. We only have four things to group, so that’s not cool. So we won’t be able to get our summary in one line!! Leave 31 by itself then. Look at 32, 33 and 34.
There are two bits (the 1-position and 2-position) that are different. Here with a mask of 00000011 and two bits set to the 1-bit value, there will be a total of four matches to the mask (2^2). Still more than we want, as there’s only three lines left after setting aside “31″!
But, here’s where we start to look at multiple ways to accomplish the task!
access-list 10 permit 172.16.31.0 0.0.0.255
access-list 10 permit 172.16.32.0 0.0.1.255
access-list 10 permit 172.16.34.0 0.0.0.255
access-list 10 permit 172.16.31.0 0.0.0.255
access-list 10 deny 172.16.35.0 0.0.0.255
access-list 10 permit 172.16.32.0 0.0.3.255
Both results give us a total of three lines as the tightest configuration we can get. The difference is that one of them over-permits, but we deny those non-listed things first! So if your lab task says there “must be at least one ‘deny’ statement” then this is it.
The bottom line is no more, no less though! So let’s add to that list.
In as few lines as possible (which translates to "no overlap") summarize 172.16.31.0/24 through 172.16.37.0/24:
We know we’ll run into the same basic quandary with “31″ and the others as we did before. But what about the rest? Back to binary.
Counting 31 off on its own, we notice there are three bits that have varying values from 32 through 37. The 1-bit, 2-bit and 4-bit positions. So if we used a mask of 00000111, that would cover all of those three bits.
2^3 to check the mask though tells us there would be eight matches. There’s only six values listed to match. We want no more, no less! Notice that 32 through 35 has two bits varying between them. A mask of 00000011 would match ONLY those four. And a mask of 00000001 would match ONLY the 36 and 37.
So we can use:
access-list 11 permit 172.16.31.0 0.0.0.255
access-list 11 permit 172.16.32.0 0.0.3.255
access-list 11 permit 172.16.36.0 0.0.1.255
What if we were told that we must have at least one DENY statement in the ACL?
Oh, those obnoxious requirements! Think like we did with the first one with the over-permitting. Let’s go back to the 00000111 mask. What extra values come into play there?
With a mask, we are saying we permit any and all variants of the "don't care" bits. No matter where they fall in the mask, we should substitute values in to see them!
With three bits, we need: 000 (.32), 001 (.33), 010 (.34), 011 (.35), 100 (.36), 101 (.37), 110 (.38) and 111 (.39).
The “110″ and “111″ matches are not in our list. That would be 38 and 39.
Those can be summarized as well, so here’s a list with a deny:
access-list 11 permit 172.16.31.0 0.0.0.255
access-list 11 deny 172.16.38.0 0.0.1.255
access-list 11 permit 172.16.32.0 0.0.7.255
Exactly the same list is being permitted as before. With no more or no less.
Now, let’s start talking about non-contiguous matching! Because that’s where life becomes more interesting. We’re following the same rules though.
What if we are instructed to pick only the even /24 networks from 192.168.0.0/16?
As I said, the rules don’t change here, it’s just “different” than we may be used to building masks for. We just aren’t drawing a “line” to separate network from host. That’s the CCNA version of access-lists. While it’s technically true, it’s not the entire truth! As a CCIE we are expected to know more!
So we can go back and start breaking all the even numbers up into binary to see our patterns: 0 = 00000000, 2 = 00000010, 4 = 00000100, 6 = 00000110, and so on. You get the idea. Keep going if you want, but you should already see the pattern: the ONLY bit that WILL NEVER change is the 1-bit position, which will always be a zero.
Our mask will consist of seven "don't care" bits and one "must be the same" bit, so 11111110 will work perfectly fine. Now, here's the tricky part: what do you put as the "network" portion of the ACL?
Well, let's expand on that "network" name. Again, that's a very CCNA explanation. What is really happening is that your ACL has a "binary starting point" and a "binary mask" to go with it.
So we need to SET our values, then provide the rules for what can change or what cannot.
access-list 12 permit 192.168.0.0 0.0.254.0 will permit all of the EVEN /24s there.
Why did we put ".0" as the fourth octet? Because the question asked for /24s, which implies this will be used in a routing protocol. All /24 network advertisements must have ".0" as the fourth octet, and it cannot change. If it were a security question asking for hosts in the even-numbered networks, we would use a 0.0.254.255 mask instead.
What if we wanted the ODD networks out of the same range?
You can work out the binary if you want, but you will find that the mask will actually be the same thing. The only difference here is that our STARTING POINT changes.
access-list 13 permit 192.168.1.0 0.0.254.0 will permit all of the ODD /24s!
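The same helper idea confirms the non-contiguous masks: with a wildcard of 254 in the third octet, only values whose low bit agrees with the starting point will match, so the starting point alone flips the list between evens and odds (again, an illustrative sketch, not router output):

```python
def third_octet_matches(start: int, wildcard: int) -> list:
    care = ~wildcard & 0xFF  # bits that must equal the starting point
    return [v for v in range(256) if v & care == start & care]

evens = third_octet_matches(0, 0b11111110)
odds = third_octet_matches(1, 0b11111110)
print(evens[:4], evens[-1])  # [0, 2, 4, 6] 254
print(odds[:4], odds[-1])    # [1, 3, 5, 7] 255
```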
So our big lessons here are:
- All bits are treated individually (no "line" to draw)
- The least number of lines may include over-permission, with a deny placed first
- It's not a "network," but rather a binary starting point
- Don't forget to check the mask using the 2^(# of 1-bits in the mask) formula!
Some extra ones to think about, and we’ll see who gets the answers first.
1. Allow packets from all hosts in every fourth /24 network from 220.127.116.11/16
2. In as few lines as possible, permit only the following networks (assume it will be a distribute-list):
3. In as few lines as possible, allow access from all hosts in the 18.104.22.168/24 network except .93 through .106. You are not allowed to use any “deny” statements.
Be sure to comment with your answers, no comments will show up until the contest ends. The first person with all three answers correct will win 60 tokens!!! Whether you are renting racks of equipment for any track or working on graded Mock Labs, those tokens sure come in handy!
Stay tuned for the answers, and for Part II in a few days! Good Luck!!!
This may seem to be a basic topic, but many people are still confused by the difference between these two concepts. Let's clear up that confusion at once!
Brian, first off, thanks for this great website and the great effort. One question about the CCIE R&S: is grading affected by executing show or debug commands? In many cases I configure elements, am pretty sure they will work, and omit the verification stage. In other words, does the proctor/script look at the monitoring commands I executed, and if I didn't run any, will I be marked down simply for not monitoring even though the configuration is fully functional? Kind regards, M. Khonji
The proctors don't monitor your progress during the day for grading purposes; i.e., if you configure OSPF and never issue "show ip ospf neighbor," it won't make a difference as long as your configuration is functional. On the other hand, grading itself can be based on either the configuration or the result. Simple feature configurations, such as IP Accounting, SNMP or RMON, would most likely be graded by looking at your running configuration, as there is usually only one way to accomplish a particular goal with such features. More complex configurations like BGP or redistribution, however, may be graded not on your configuration but on its results; i.e., if you are asked to modify the BGP bestpath selection process, the easiest way to check this is to view the "show ip bgp" output and then send traffic through the network with traceroute to see where it actually goes. Based on this fact, I would highly recommend verifying as much of the results of your configurations as possible in the real lab before leaving at the end of the day. Good luck in your preparation! | <urn:uuid:ff60cff5-0c32-4bd3-9aba-5e0b86f1a4a6> | CC-MAIN-2017-04 | http://blog.ine.com/category/ccie-routing-switching/general/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922334 | 4,717 | 2.734375 | 3 |
What if researchers could access and share scientific simulation and modeling tools as easily as YouTube videos with the power of the cloud to drive it all? That’s the underlying premise for the HUBzero Platform for Scientific Collaboration, a cyberinfrastructure developed at Purdue University.
HUBzero was created to support nanoHUB.org, an online community for the Network for Computational Nanotechnology (NCN), which the U.S. National Science Foundation has funded since 2002 to connect the theorists who develop simulation tools with the experimentalists and educators who might use them.
Since 2007, HUBzero’s use has expanded to support more than 30 hubs — and growing — in fields ranging from microelectromechanical systems and volcanology to translating lab discoveries into new medical treatments and the development of assistive technologies for people with disabilities.
HUBzero is now supported by a consortium including Purdue, Indiana, Clemson and Wisconsin. Researchers at Rice, the State University of New York system, the University of Connecticut and Notre Dame use hubs. Purdue offers a hub-building and -hosting service and the consortium also supports an open source release, allowing people to build and host their own. HUBbub2010, the first of planned annual HUBzero conferences, drew more than 100 people from 33 institutions as far away as Korea, South Africa and Quebec, along with U.S. universities nationwide.
Although they serve different communities, the hubs all support collaborative development and dissemination of computational models running in an infrastructure that leverages cloud computing resources and makes it easier to take advantage of them. Meanwhile, built-in social networking features akin to Facebook create communities of researchers, educators and practitioners in almost any field or subject matter and facilitate communication and collaboration, distribution of research results, training and education.
“Contributors can structure their material and upload it without an inordinate amount of handholding; that’s really a key because you want people to contribute,” says Purdue chemical engineering Professor Gintaras Reklaitis. He’s the principal investigator for pharmaHUB.org, a National Science Foundation-supported Virtual Engineering Organization for advancing the science and engineering of pharmaceutical product development and manufacturing.
One could cobble some of this functionality together with commercial Web software, but HUBzero integrates everything in a single package. Add the research tool-enabling features and research-oriented functions like tracking the use of tools (useful for quantifying outreach) and citation tracking and you have something quite different — and powerful.
HUBzero can be a prime tool for satisfying cyberinfrastructure requirements, such as data management and access, of granting agencies like the NSF. HUBzero's emphasis on interdisciplinary collaboration only makes it more attractive in funding proposals. A hub is central to the Purdue-based Network for Earthquake Engineering Simulation (NEES), a $105 million NSF program announced in 2009, the largest single award in Purdue history. Purdue's PRISM Center for micro-electro-mechanical systems and C3Bio biofuels research center, both funded by the U.S. Department of Energy, are other recent major award winners employing hubs.
Such an infrastructure can have an impact on scientific discovery, as nanoHUB.org clearly shows.
As of December 2010, NCN identified 719 citations in the scientific literature that referenced nanoHUB.org. In addition, user registration information indicates that more than 480 classes at more than 150 institutions have utilized nanoHUB. Because the site is completely open and notification of classroom usage is voluntary, the actual classroom usage undoubtedly exceeds these numbers. There are nanoHUB.org users in the top 50 U.S. universities (per the listing by U.S. News and World Report) and in 18 percent of the 7,073 U.S. institutions carrying the .edu extension in their domain name. “Nano” is a tiny area in science and technology, but nanoHUB is big in many institutions.
The nanoHUB.org community Web site now has more than 740 contributors and 195 interactive simulation tools. In 2010 more than 9,800 users ran 372,000 simulations. In addition to online simulations, the site offers 52 courses on various nano topics as well as 2,300 seminars and other resources, which have propelled the annual user numbers to more than 170,000 people in 172 countries.
Likewise, the cancer care engineering hub cceHUB.org, one of the early hubs following nanoHUB, has proven to be the linchpin in building an online data tracking, access and statistical modeling community aimed at advancing cancer prevention and care.
“We were looking for a solution for sample tracking and data storage that would not cost $5 million and it was a true logistical challenge needing a comprehensive cyberinfrastructure support system,” says Julie Nagel, managing director of the Oncological Sciences Center in Purdue’s Discovery Park. “The hub is the core of the CCE project and has brought the project forward so much faster than we could have if we had started from scratch.”
The success of nanoHUB.org is what attracted the attention of Noha Gaber, who was seeking a good way to facilitate collaboration in the environmental modeling field when she came across the thriving international resource for nanotechnology research and education.
HUBzero, the technology powering nanoHUB, could obviously be used to build a Web-based repository of models and related documentation for projecting the spread and impact of pollutants. It also had built-in features, such as wiki space, enabling environmental researchers to share ideas and information. But the ability to make the models operable online, right in a Web browser window, and allow researchers to collaborate virtually in developing and using models was the deal closer.
“It’s not just providing a library of models, but providing direct access to these tools,” says Gaber, executive director of the U.S. Environmental Protection Agency’s Council for Regulatory Environmental Modeling. She’s a driving force behind the new iemHUB.org for integrated environmental modeling.
Under the hood, HUBzero is a software stack developed by Purdue (and being refined continuously by the consortium and hub users) and designed to work with open source software supported by active developer communities. This includes Debian GNU/Linux, Apache HTTP Server, LDAP, MySQL, PHP, Joomla and OpenVZ.
HUBzero’s middleware hosts the live simulation tool sessions and makes it easy to connect the tools to supercomputing clusters and cloud computing infrastructure to solve large computational problems. HUBzero’s Rappture tool kit helps turn research codes written in C/C++, Fortran, Java, MATLAB, and other languages into graphical, Web-enabled applications.
On the surface, the simulation tools look like simple Java applets embedded within the browser window, but they’re actually running on cluster or cloud hosts and projected to the user’s browser using virtual network computing (VNC). Each tool runs in a restricted lightweight virtual environment implemented using OpenVZ, which carefully controls access to file systems, networking, and other server processes. A hub can direct jobs to national resources such as the TeraGrid, Open Science Grid and Purdue’s DiaGrid as well as other cloud-style systems. This delivers substantial computing power to thousands of end users without requiring, for example, that they log into a head node or fuss with proxy certificates.
The tools on each hub come not from the core development team but from hundreds of other researchers scattered throughout the world. HUBzero supports the workflow for all of these developers and has a content management system for tool publication. Developers receive access to a special HUBzero “workspace,” which is a Linux desktop running in a secure execution environment and accessed via a Web browser (like any other hub tool). There, they create and test their tools in the same execution environment as the published tools, with access to the same visualization cluster and cloud resources for testing. HUBzero can scale to support hundreds of independent tool development teams, each publishing, modifying, and republishing their tool dozens of times per year.
If a tool already has a GUI that runs under Linux, it can be deployed as-is in a matter of hours. If not, tool developers can use HUBzero’s Rappture toolkit to create a GUI with little effort. Rappture reads an XML description of the tool’s inputs and outputs and then automatically generates a GUI. The Rappture library supports approximately two dozen objects — including numbers, Boolean values, curves, meshes, scalar/vector fields, and molecules — which can be used to represent each tool’s inputs and outputs. The input and output values are accessed within a variety of programming languages via an Application Programming Interface (API). Rappture supports APIs for C/C++, Fortran, Java, MATLAB, Python, Perl, Ruby, and Tcl, so it can accommodate various modeling codes. The results from each run are loaded back into the GUI and displayed in a specialized viewer created for each output type. Viewers for molecules, scalar and vector fields, and other complex types can be automatically connected to a rendering farm for hardware-accelerated 3-D data views.
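To give a flavor of the developer experience, here is a minimal tool body in Python, modeled on Rappture's published examples; the XML paths (input.number(x), output.number(y)) are illustrative assumptions rather than details taken from this article:

```python
import sys
import Rappture  # HUBzero's toolkit; available inside a hub workspace

# Rappture hands the tool a "driver" XML file containing the GUI's current inputs.
io = Rappture.library(sys.argv[1])

x = float(io.get('input.number(x).current'))    # read an input value
io.put('output.number(y).current', str(x * x))  # write a computed result back

Rappture.result(io)  # tell Rappture to load the results into the GUI viewers
```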
HUBzero sites also provide ways for colleagues to work together. For example, because of the unique way the HUBzero middleware hosts tool sessions, a single session can be shared among any number of people. A group of people can look at the same session at the same time and discuss ideas over the phone or instant messaging. If some of the people aren’t online or available, they can access the session later from their My Hub page and follow up at their convenience. Some commercial collaboration tools, such as Adobe Presenter, also work within HUBzero (hub builders are required to license these).
As people are using the tools, questions arise and sometimes things go wrong. HUBzero supports many ways for users to find help and help one another and includes a built-in trouble report system. Users also can post questions in a community forum modeled after Amazon.com’s Askville or Yahoo! Answers. In practice, many tickets aren’t really problems but are actually requests for new features. HUBzero supports a wish list capability for collecting, prioritizing and acting on such requests. Users can post an idea to the wish list associated with each tool or to the general list associated with the hub itself.
HUBzero’s unique blend of simulation power and social networking seems to resonate across engineering and science communities. As hub use continues to grow, a goal is to develop new capabilities to connect related content so that tools published on one hub can be easily found on all others. Another goal is to improve tool interconnection, so that one tool’s output can be used as input to another, letting developers solve larger problems by connecting a series of models from independent authors.
Michael McLennan is the senior research scientist and hub technology architect at Purdue. Greg Kline is the science and technology writer for Information Technology at Purdue (ITaP). | <urn:uuid:7e67d6de-a5ab-4f1a-9316-62451e997a73> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/02/28/hubzero_paving_the_way_for_the_third_pillar_of_science/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921674 | 2,327 | 2.796875 | 3 |
Residual Fingerprints on Touchscreens May Pose Security Risk
According to InformationWeek, the researchers examined two different Android smartphones, the HTC G1 and the HTC Nexus One. The research showed a complete smudge pattern was retrievable two-thirds of the time, with researchers able to partially identify one 96 percent of the time.
However, Android phones do have the protection of making a user enter his or her Google username and password to authenticate after 20 failed password attempts. The article notes:
The good news is that for now, even with a smudge attack, an attacker typically wouldn't be able to reduce the password space to 20 or fewer possibilities. But going forward, don't rule out the possibility that enterprising attackers may add on additional techniques to help see through smudges. | <urn:uuid:67988889-bed3-4b4e-bbc3-b5ea23dcd564> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/residual-fingerprints-touchscreens-may-pose-security-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931711 | 161 | 2.53125 | 3 |
Technology Briefing: DHCP
Anyone who maintains an IP-based network knows the headaches. Each device on the network, including routers, printers, firewalls, and workstations requires a unique IP address. As networks expand, most network managers find that they quickly run out of addresses. In addition, maintaining tables of IP addresses takes time. Managers must add subnets to increase the number of IP addresses, and they need to update their tables of IP addresses. Changes to the infrastructure also force the manager to reassess and reassign IP addresses. Further, the increase in mobile computers and remote connections can force a continual expansion of IP addresses.
An Easier Way
Dynamic Host Configuration Protocol (DHCP) offers a solution. Using DHCP, network managers can create a flexible, self-configuring network. DHCP works on the principle that most users and devices do not need a constant connection with the server. When a user logs onto the network, the server assigns an IP address to that device. This IP address remains in effect for a period of time and, if it is not active when that time expires, the server releases the IP address. The server can then reassign that address to another device.
In general, DHCP networks support a mix of three modes of operation to allocate IP addresses.
- Manual -- network administrators assign IP addresses for each group of devices on a network. When a device requires IP services, it polls the servers to get its IP address. Managers can use this to "share" an IP address between multiple devices that never access the server at the same time. This also allows managers to reserve specific IP addresses for devices.
- Automatic -- each device gets an IP address from the server when it first contacts the server. The IP address, however, remains with that device, and the server does not release it for use by another device. This is useful for initially configuring a static network.
- Dynamic -- when a device connects with the server, it receives an IP address that remains in effect for a pre-set time period. When the time expires, the device or workstation must request another IP address. This represents the most flexible use of DHCP.
How It Works
Under DHCP, each device or workstation that connects to the network requests an IP address from the server. The process of negotiating this address includes the following steps (a toy simulation follows the list):
- DHCPDiscover -- As a device or workstation connects to the network, it broadcasts a request for an IP address. This request is sent after a random delay to avoid simultaneous submissions from multiple devices on the network.
- DHCPOffer -- The server receives the DHCPDiscover message and responds with an IP address. Multiple DHCPOffers can be generated if more than one server resides on the network.
- DHCPRequest -- The device or workstation receives the DHCPOffer and generates the DHCPRequest message for the IP address it selects. As a checkpoint, DHCP also can verify that the IP address is not currently in use.
- DHCPAck -- The server responds to the DHCPRequest with a message that sets the parameters of the session. This information includes the length of time (lease time) that the IP address will remain active. The device or workstation now operates using the assigned IP address.
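The exchange is easy to picture as code. The toy Python model below is purely illustrative (real DHCP speaks binary packets over UDP ports 67/68), but it mirrors the Discover/Offer/Request/Ack flow:

```python
class ToyDhcpServer:
    def __init__(self, pool):
        self.free = list(pool)   # unallocated addresses
        self.leases = {}         # MAC address -> assigned IP

    def offer(self, mac):
        """DHCPDiscover arrives; answer with a DHCPOffer."""
        return self.leases.get(mac) or self.free[0]

    def ack(self, mac, ip, lease_secs=3600):
        """DHCPRequest arrives; reserve the address and DHCPAck it."""
        if ip in self.free:
            self.free.remove(ip)
        self.leases[mac] = ip
        return {"ip": ip, "lease_seconds": lease_secs}

server = ToyDhcpServer(f"10.0.0.{n}" for n in range(10, 20))
offered = server.offer("aa:bb:cc:dd:ee:ff")   # client broadcasts a Discover
print(server.ack("aa:bb:cc:dd:ee:ff", offered))
# {'ip': '10.0.0.10', 'lease_seconds': 3600}
```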
DHCP, in theory, seems simple. However, it takes time to set up a DHCP-based network. Older devices may not support DHCP. In some cases, these devices only support BOOTP, an older, simplified version of DHCP. Although many DHCP-enabled servers can support these devices, managers will need to configure the server. In other cases, some devices require a permanent IP address, and these must be identified and assigned. Some network managers prefer to manually assign IP addresses to routers, printers and other "permanent" devices.
In addition, managers using dynamic allocation techniques must take time to calculate the proper lease time for the IP addresses. The server verifies each connection when the lease time reaches the halfway point. If a network supports multiple remote sessions that last a relatively short amount of time, the lease time can be set in minutes. This ensures that IP addresses will be released and available for subsequent users. For more stable networks, a lease time can be set for several hours or days. The lease time can affect network performance, so the manager must consider this parameter carefully.
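As a rule of thumb when choosing lease times: DHCP clients typically try to renew at 50 percent of the lease (the T1 timer) and fall back to rebinding at 87.5 percent (T2), which is why the server sees traffic at the halfway point. A quick back-of-the-envelope check, assuming a hypothetical eight-hour lease:

```python
lease = 8 * 3600                     # an 8-hour lease, in seconds
t1, t2 = lease * 0.5, lease * 0.875  # common DHCP renew/rebind points
print(f"renew (T1) at {t1 / 3600:.1f} h, rebind (T2) at {t2 / 3600:.1f} h")
# renew (T1) at 4.0 h, rebind (T2) at 7.0 h
```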
Managers also need to consider the impact of service interruptions. Scheduled server maintenance or server failures can create havoc in a DHCP configuration. Longer lease times generally recover better from interruptions, but managers can implement multiple servers that share a pool of IP addresses to help resolve the problem. Managers can implement servers that share all available IP addresses, or they can select a subset of addresses to share among servers. Each approach requires that the servers synchronize their database of IP assignments, and this requires server processing.
Security also presents a problem. Firewalls, for example, generally allow managers to configure a list of acceptable IP addresses. If these addresses are dynamically assigned, it is more difficult to determine whether the device connecting to the network is authorized. Similarly, DHCP does not specify links to authentication programs, so managers may encounter difficulties implementing these types of security.
What to Look For
Managers seeking a DHCP solution need to consider several functions, including:
- Configurable parameters -- depending on the DHCP version, managers can set such parameters as the lease time, establish groups of users with different parameters, and enhance security by limiting the MAC addresses of devices allowed to access the network. In addition, support for BOOTP devices and other types of named servers can increase the flexibility of DHCP.
- Multiple server support -- coordinating DHCP across multiple servers requires the servers to synchronize the IP allocation tables. Support for this function helps ensure that the network operates correctly and that multiple devices do not accidentally receive the same IP address.
- Administration features -- as is the case with most network administration utilities, managers generally prefer a centralized approach. For networks with multiple servers or geographically dispersed networks, centralized control is necessary. Support for setting parameters through scripts and programming languages also help managers maintain a network more effectively, and most managers prefer a graphical interface.
- Import capability -- if a network already supports static IP addresses, the ability to import these addresses can simplify the DHCP conversion. This feature also helps managers maintain a network that has group of IP addresses that change infrequently.
- Global settings -- managers need to change parameters in DHCP as the network evolves. The ability to apply these changes to every session or groups of sessions eases the configuration process.
- Reports -- an audit trail that includes a log of the IP addresses granted allows the manager to monitor the network operation, enhance security, and anticipate problems.
The Bottom Line
Most network managers like the idea of DHCP, but they fear the complexity of setting up such a system. The decision to move to DHCP revolves around time. For managers with a static network, DHCP provides little benefit. However, managers who spend time maintaining complex IP tables, and managers who expect to expand their networks in the future, will want to seriously consider implementing this standard.
Gerald Williams serves as Director of Quality Assurance for Dolphin Inc., a software development company. Williams has extensive background in technology and testing, previously serving as Editorial Director with National Software Testing Labs (NSTL), Executive Editor with Datapro Research, and Managing Editor of Datapro's PC Communications reference service.
Each CrossNodes Briefing is designed to act as a reference on an individual technology, providing a knowledge base and guide to networkers in purchasing and deployment decisions. | <urn:uuid:7927fec3-1247-4c64-ae40-c957729ea0fa> | CC-MAIN-2017-04 | http://www.cioupdate.com/print/reports/article.php/11050_940561_2/Technology-Briefing-DHCP.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905832 | 1,665 | 3.4375 | 3 |
Networking 101: Understanding IP Addresses
Networks don't work without addresses: Whenever you are sending something, you need to specify where it should go and where it came from. To be an effective network engineer or administrator, you need to understand IP addresses backward and forward: you need to be able to think on your feet. If something breaks, likely as not some address assignment has been screwed up. And spotting the problem quickly is likely to be the difference between being the hero, or the guy who "takes a long time to fix the problem." Before covering subnetting in the next Networking 101 installment, we'd like to thoroughly explore IP addresses in their primal form. This is crucial to understanding subnets.
IPv4 Addresses and 32-bit Numbers
IP addresses are just 32-bit binary numbers, but they're important binary numbers: you need to know how to work with them. When working with subnet masks, new network administrators often get confused by the ones they haven't memorized. All a subnet mask amounts to is moving the boundary between the part of the address that represents a "network" and the part that represents a "host." Once you're comfortable with this way of thinking about IP addresses and masks, you've mastered IP addressing.
Binary is quite simple. In binary the only digits are zeros and ones, and a 32-bit number holds 32 of them. We're all used to base-10 numbers, where each place in a number can hold any digit from 0-9. In binary each place holds either a zero or a one. Here's the address 255.255.255.0 in binary: 11111111.11111111.11111111.00000000
For convenience, network engineers typically break IP addresses into four 8-bit blocks, or octets. In an 8-bit number, if all of the bits are set to 1, then the number is equal to 255. In the previous address, 11111111 represents 255 and 00000000 represents zero.
The way binary really works is based on powers of two. Each bit represents a different power of two. Starting at the left-hand side (the most significant bit), the place values are 128, 64, 32, 16, 8, 4, 2 and 1 (that is, 2^7 down to 2^0).
The result is additive, meaning that if all bits are set, you simply add up the power-of-two value for each place. For example, if we have the 8-bit number 11111111, we simply add: 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = 255
Try a non-trivial example now: 11110000
We can see that four bits are "set" in the above 8-bit number. Summing the power-of-two values in those places yields: 2^7 + 2^6 + 2^5 + 2^4 = 128 + 64 + 32 + 16 = 240
It is just that simple. If you can convert a binary number to decimal form, you can easily figure out subnet masks and network addresses, and we'll show you how in the next edition of Networking 101.
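If you want to practice conversions against something that checks your work, Python's built-ins handle base 2 directly (a quick sketch, not from the original article):

```python
print(int("11110000", 2))  # 240 -- binary string to decimal
print(bin(240))            # '0b11110000' -- and back again

# The same additive power-of-two rule, spelled out explicitly:
bits = "11110000"
print(sum(2 ** p for p, bit in enumerate(reversed(bits)) if bit == "1"))  # 240
```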
Focusing on 32-bit IPv4 addresses themselves now, there are a few different types that need to be understood. All IP addresses can be in the range 0.0.0.0 to 255.255.255.255, but some have special uses.
Loopback: Packets that will not leave the host (i.e., they will not traverse an external network interface). Example: 127.0.0.1
Unicast: Packets that are destined for a single IP address. Example: 126.96.36.199
Multicast: Packets that will be duplicated by the router, and eventually routed by multicast routing mechanisms. Example: 224.0.0.1
Limited broadcast: A broadcast packet, sent to every host, limited to the local subnet. Example: 255.255.255.255
Directed broadcast: Packets that are routed to a specific subnet, and then broadcast. Example, assuming we are not on this subnet: 192.168.1.255 (for 192.168.1.0/24)
There are also some special cases of IP addresses, including private and multicast addresses. Addresses in the range 224.0.0.0 - 239.255.255.255 are reserved for multicast. Everything below that range is fair game on the Internet, excluding addresses reserved by RFC 1918 and a few other special-purpose assignments. These "1918 addresses" are private addresses, meaning Internet routers will not route them. The ranges include: 10.0.0.0 - 10.255.255.255 (10.0.0.0/8), 172.16.0.0 - 172.31.255.255 (172.16.0.0/12) and 192.168.0.0 - 192.168.255.255 (192.168.0.0/16).
These IP addresses can be assigned locally to as many computers as you want, but before those computers access the Internet, the addresses must be translated to a globally routable address. This is commonly done via Network Address Translation (NAT). The 1918 addresses aren't the only reserved spaces, but they are defined to be "site local." Multicast also has a reserved range of addresses that aren't designed to escape onto the Internet: 224.0.0.0 - 224.0.0.255 are multicast "link-local" addresses.
To give the necessary background for the next issue of Networking 101, we need to make sure everyone understands the concept of a local subnet. Once we have assigned a valid IP address to a computer, it will be able to speak to the local network, assuming the subnet mask is configured properly. The subnet mask tells the operating system which IP addresses are on the local subnet and which are not. If an IP we wish to talk to is located on the local subnet, then the operating system can speak directly to it without using the router. In other words, it can ARP for the machine and just start talking. IP address and subnet mask configuration is fairly straightforward for general /24 networks. The standard 255.255.255.0 mask means that the first three octets are the network address, and the last octet is reserved for hosts. For example, a computer assigned the IP of 10.0.0.1 and a mask of 255.255.255.0 (a /24, or 24 bits if you write it out in binary) can talk directly to any host from 10.0.0.1 through 10.0.0.254, with 10.0.0.255 serving as the subnet's broadcast address.
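Python's standard ipaddress module applies exactly this mask logic, which makes it a handy scratchpad for checking whether a destination is on the local subnet (a minimal sketch):

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")        # the 255.255.255.0 case above
print(ipaddress.ip_address("10.0.0.77") in subnet)  # True  -- ARP and talk directly
print(ipaddress.ip_address("10.0.1.5") in subnet)   # False -- traffic goes via the router
print(subnet.broadcast_address)                     # 10.0.0.255
```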
Be sure to digest everything here, because next we'll get to the meat of subnetting with CIDR.
In a Nutshell
- IP addresses are just 32-bit numbers. Subnet masks are just a "cover" that can be arbitrarily slid up and down the IP address's bits to create larger or smaller networks.
- The "network" portion of an IP address tells the host how large its local subnet is, which in turn tells it who can be spoken to directly.
- Unicast packets go to one computer, broadcast packets go to many.
When he's not writing for Enterprise Networking Planet or riding his motorcycle, Charlie Schluting works as the VP of Strategic Alliances at the US Division of LINBIT, the creators of DRBD. He also operates OmniTraining.net, and recently finished Network Ninja, a must-read for every network engineer. | <urn:uuid:b9de5e55-1dfb-46cb-9e7a-d956edfa6b82> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3561551/Networking-101-Understanding-IP-Addresses.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00494-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929224 | 1,458 | 4.0625 | 4 |
Having a military background, I tend to look at all security issues with the perspective of someone who’s served in the armed forces. That means using a thorough investigation process that doesn’t treat any action as accidental or an attack as a stand-alone incident and looking for links between seemingly unconnected events.
This method is used by law enforcement agencies to investigate acts of terrorism, which, sadly, are happening more frequently. While terror attacks in the physical world are making headlines, the virtual world is also under attack by sophisticated hackers. However, not much is said about the similarities between investigating both types of attacks or what security researchers can learn from their law enforcement counterparts. I've had this thought for a while and, fearing that I'd be seen as insensitive to recent events, debated whether to write this blog. After much thought, I decided that the stakes are too high to remain silent and continue treating each breach as a one-off event without greater security implications.
The parallels between cyber and terror attacks are numerous: both involve well-coordinated adversaries who have specific goals and plan intricate campaigns months in advance. No matter how strong the target's security measures are, they can be exploited. Preventing cyber and terror attacks is difficult, given the numerous vectors an adversary can use. Discovering one component of either type of attack can lead to clues that reveal an even larger, more detailed operation. But the methods used to investigate cyber attacks often fall short at establishing links between different events and possibly preventing hackers from striking again.
Cyber attacks targeting infrastructure are happening
To date, we haven’t experienced a cyber attack that has caused the same devastation of what’s happened in the physical world. Having your credit card number stolen doesn’t compare to lives being lost. But this doesn’t mean we won’t see cyber attacks that cause major disruptions by targeting critical infrastructure.
In fact, they’re already happening. Just last week the U.S. Department of Justice accused seven Iranians of hacking the computer control system of a dam in New York and coordinating DDoS attacks against the websites of major U.S. banks. According to the DOJ, the hackers would have been able to control the flow of water through the system had a gate on the dam not been disconnected for repairs. Then in December, hackers used malware to take over the control systems of two Ukraine energy plants and cut power to 700,000 people. I’m not trying to spread fear of a cyber apocalypse by mentioning these incidents. Fear mongering isn’t applicable if the events have occurred.
When examining terror attacks, police conduct forensic investigations on evidence found at the scene. If suspects are arrested, the police confiscate their smartphones (as we’ve seen with the iPhone used by the shooter in the San Bernardino, Calif., attack) and computers and review information like call logs and browsing histories. These procedures may provide investigators with new information that could lead to other terror plots being exposed, the arrest of additional suspects and intelligence on larger terrorist networks.
Applying an IT perspective to breaches won’t reveal complete cyber attacks
Cyber attacks, on the other hand, are investigated in a manner that isn’t as effective. They’re handled as individual incidents instead of being viewed as pieces of a larger operation. I’ve found that too many security professionals are overly eager to remediate an issue. Considering the greater security picture isn’t factored into the process, nor is it culturally acceptable within most organizations to do so. Corporate security teams have been conditioned to resolve security incidents as quickly as possible, re-image the infected machine and move on to the next incident.
Cyber attacks, though, are multi-faceted and the part that’s the most obvious to detect sometimes serves as a decoy. Adversaries know security teams are trained to quickly shut down a threat so they include a component that’s easy to discover. While this allows a security professional to report that a threat has been eliminated, this sense of security is false. Shutting down one known threat means exactly that: you’re acting on a threat that was discovered. But campaigns contain other threats that are difficult to discover, allowing the attack to continue without the company’s knowledge.
Unfortunately, most companies don’t approach cyber security with either a military or law enforcement perspective. They use IT-based methods and try to block every threat and prevent every attack, approaches that are unrealistic and ineffective given the sophisticated adversaries they’re facing. The clues security teams need to discover, eliminate and mitigate the damage from advanced threats is contained in the incidents they have been resolving.
Cyber security stands to learn a lot from law enforcement when it comes to investigating attacks. Next time they’re looking into a breach, security professionals should:
Not treat a security incident as an individual event. Try to place it in the greater context of what else is occurring in your IT environment. View the attack as a clue that, if followed, can reveal a much larger, more complex operation.
Instead of immediately remediating an incident, consider letting the attack execute to gather more intelligence about the campaign and the adversary.
Remember the threat that’s the most obvious to detect is often used as a decoy to shield a more intricate operation.
While there will always be terrorists and hackers, remembering these points helps us stay ahead of them, minimize the impact of their attacks and regain a sense of control.
| <urn:uuid:290f127e-eba6-4b2a-ae43-665b952ff924> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3048846/security/what-terrorism-investigations-can-teach-us-about-investigating-cyber-attacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00494-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958521 | 1,176 | 2.734375 | 3 |
Being connected all the time is more of a necessity than an urge today. Life changes every minute, every second. In this fast changing world of thoughts/ communication, the information puzzle is too tempting to resist!
A common man, for example, spends 60 minutes a day on the road in the car. If these 60 minutes can be spent getting the latest updates from Twitter, sending voice-based SMS, listening to emails and getting traffic updates, why would one not want to ride in a car all the time? And if that "connection" can be combined with some good for the environment, a subscription-based business model and connectivity through the vehicle or smartphone, nothing like it!
Though all this may look cloudy (hazy) at this point of time, but the fact is the “Cloud” above us would help us achieve the dream of getting connected while in CAR
Read on to understand how Cloud will change the way you drive
Electric vehicle owners can have access to information on the nearest charging stations; monitor charge levels and plan their trips. The owners of a particular brand of cars would be able to compare their fuel efficiencies; connected through cloud. It would be similar to “Green Leaves” in Ford’s smart gauge, which is basically an indication of the driver’s style of driving; more leaves mean greener, more efficient driving. Rapid acceleration and hard braking would cause leaves to drop from the vines!
Once connected drivers can have up to date information on weather, traffic, point of interests and location based service content. The drivers will have access to information on road blocks/ traffic and accidents. The information is basically transmitted from one vehicle to another. The drivers will also be able to understand the safe and economical route to their destinations.
The car owners (if they allow the car emission to be monitored) may get certain incentives from the government for economical driving, for causing less pollution!
Vehicle to Grid (V2G) technology (under investigation)
“Electric-drive vehicles, whether powered by batteries, fuel cells, or gasoline hybrids, have within them the energy source and power electronics capable of producing the 60 Hz AC electricity that powers our homes and offices. When connections are added to allow this electricity to flow from cars to power lines, we call it "vehicle to grid" power, or V2G. Cars pack a lot of power. One typical electric-drive vehicle can put out over 10kW, the average draw of 10 houses. The key to realizing economic value from V2G is precise timing of its grid power production to fit within driving requirements while meeting the time-critical power "dispatch" of the electric distribution system.” Since most vehicles are parked an average of 95 percent of the time, their batteries could be used to let electricity flow from the car to the power lines and back. Owners can get information on charge in their vehicles and the power put back to the grid.
The cloud will make the onboard database residing in the car completely obsolete a few years from now. Cloud-based services will let users customize their preferences. Android, app-store connectivity and Wi-Fi will be in-car, and smartphones will start communicating with your car!
The painful truth is that the infotainment system will have to keep pace with the changes happening in the Consumer electronics industry. The connectivity/ portability and the usability have to be thought through!
Minimizing Driver Distraction
Though the cloud will bring a revolution to the automotive industry, it is also true that in a few years' time driver distraction will be one of the biggest issues. Governments will impose stringent rules on the OEMs.
One way to reduce driver distraction is to provide minimal options to the driver (while the passengers have unlimited connectivity). Human Machine Interface (HMI) design will also be important, as it determines eye dwell time on mobile apps!
HCL - Technology that touches lives (in car)!
User experience is the foremost focus of HCL Engineering Services. All services are aimed at providing a total passenger experience. HCL is continuously investing in tools, technology, alliances and resources that help enhance the driving experience.
HCL Automotive vertical highlights include-
- Agora- HCL solution for metering, billing and monitoring for software provided as a service
- Standard operating systems - Android, Microsoft, QNX and other open source OS
- Support OEMs in connecting the car to app stores
- Wireless solutions
- Significant investments in CoE- Android and HMI
- Alliance with Genivi, a non-profit industry alliance committed to driving the broad adoption of an In-Vehicle Infotainment (IVI) reference platform
- Investment in solutions in the infotainment space
- Leveraging expertise and experience from adjacent verticals such as Consumer Electronics
I have possibly mentioned only the tip of the iceberg; the cloud as a companion definitely has a lot in store for you and me. The "cloud" will virtually bring the whole world inside your car, and people will wish each other "Happy clouding"! | <urn:uuid:6d0c1100-6d2f-4c08-bd32-4eea4a28dd4b> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/engineering-and-rd-services/cloud-above-your-car | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00548-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952147 | 1,067 | 2.890625 | 3 |
Advocates of responsible computer recycling warn that PCs and monitors can be a threat to health and the environment when they are broken apart. The toxic chemicals inside don’t pose a danger when the PC is in normal use, but at the end of its life the PC becomes hazardous waste.
1. The Monitor
Cathode-ray tubes contain 4-8 pounds of lead in the radiation shielding of the glass and in solder on wires and connections. Barium is also used in the glass shielding. There is phosphorus in the inside coating of the faceplate. Hexavalent chromium is applied on galvanized steel parts for corrosion protection.
2. Circuit Boards
Most manufacturers use lead solder to connect semiconductors and other components and wires to motherboards and integrated chip sets. Beryllium is commonly found on boards and connectors. Printed wiring boards contain mercury. Cadmium can be found in semiconductors and resistors.
3. PC Chassis
Hexavalent chromium is used on steel plates to prevent corrosion.
4. Cables and Wires
The plastic covers of the wires inside and outside of a PC contain both PBDE and PVC.
5. Plastic Shell
Polybrominated diphenylethers (PBDE) are used as a flame retardant in computer plastics. Polyvinyl chloride (PVC) components, when burned, can give off dioxin fumes.
- Lead: Toxic to the kidneys, damages nervous and reproductive systems, inhibits mental development in infants and young children.
- Barium: Exposure can cause brain swelling, muscle weakness and damage to the heart, liver and spleen.
- Hexavalent chromium: Can cause DNA damage and asthmatic bronchitis.
- Phosphorus: Health effects aren’t fully understood, but the U.S. Navy brands it “extremely toxic.”
- Beryllium: Recently classified as a human carcinogen.
- Mercury: High levels of exposure contribute to brain and kidney damage and can cause birth defects.
- PBDE: Can potentially harm a developing fetus.
- Dioxin: Can cause cancer and damage to the immune system.
Information courtesy of WeRecycle, LLC. | <urn:uuid:bf608242-2385-4fba-b09b-9b123c6ff1b7> | CC-MAIN-2017-04 | http://4thbin.com/e-waste-facts/toxins-in-a-pc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00301-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.883447 | 476 | 3.5 | 4 |
Solana is the first solar plant in the U.S. with a thermal energy storage system and it can produce enough electricity to power 70,000 households.
The switch has been flipped on a massive solar array field near Phoenix, producing up to 280 megawatts of electricity - enough to power 70,000 households.
Arizona's largest public utility, Arizona Public Service (APS), will purchase all of the electricity produced by the solar plant for 30 years through a power purchase agreement with Abengoa Solar, the company that built the array.
The solar array, financed in part by a Department of Energy loan guarantee, is the country's first large-scale solar plant with a thermal energy storage system. The thermal energy storage system can provide electricity for six hours without the concurrent use of the solar field.
Solana's solar array field covers three square miles with about 3,200 mirrored parabolic trough collectors. Each collector is about 25 feet wide, 500 feet long, and 10 feet high.
The Solana solar plant will generate enough clean energy to power 70,000 households and will prevent about half a million tons of CO2 from being emitted into the atmosphere per year, according to Abengoa Solar. Solana received a federal loan guarantee of $1.45 billion to build the plant.
The construction of the plant created more than 2,000 jobs and a national supply chain that spans 165 companies in 29 states.
The Solana solar power plant.
The energy storage system is seen as a turning point for renewable energy, as it is a tangible demonstration that solar energy can be stored and dispatched on demand.
Construction of the Solana solar array, which is about 70 miles southwest of Phoenix, began in 2010 and had a final cost of $2 billion.
A video showing how the plant was constructed and generates solar power.
Abengoa Solar described the array as the world's largest parabolic trough plant. The solar arrays use parabolic-shaped mirrors mounted on moving structures that track the sun and concentrate its heat. That heat is used to turn water into steam, which then drives a conventional steam turbine. Being able to store the power allows the plant to continue distributing energy when the sun goes down or is blocked by poor weather.
"These six hours will satisfy Arizona's peak electricity demands during the summer evenings and early night time hours," Abengoa Solar said in a statement. "Dispatchability also eliminates intermittency issues that other renewables, such as wind and photovoltaics, contend with, providing stability to the grid and thus increasing the value of the energy generated by [the plant]."
The parabolic mirrors of the plant.
Abengoa Solar has two commercial solar power towers, 13 50MW trough plants, a solar-gas combined-cycle plant and five photovoltaic plants in commercial operation worldwide. Abengoa has concentrated solar power plants under construction in the U.S., South Africa, Spain and the United Arab Emirates, with a total capacity of 810 megawatts.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is email@example.com.
This story, "U.S. flips switch on massive solar power array that also stores electricity" was originally published by Computerworld. | <urn:uuid:e85d5170-e7ce-408b-9a9d-80e2335efac7> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2170768/data-center/u-s--flips-switch-on-massive-solar-power-array-that-also-stores-electricity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00211-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931763 | 720 | 3.203125 | 3 |
There are a few common reasons why wireless networks aren't found by devices. To isolate and fix the problem, confirm that you can connect to the Internet, that your wireless is ON, that you're looking for the correct network, that you're within range of your modem, that nothing is interfering with your signal and that your network isn't hidden.
Connect to the Internet
Confirm you have signal and can connect to the Internet. Do this by using your Ethernet cable. Plug it into your computer and then the modem. If you don't have a connection, read "Troubleshooting your Internet connection" and/or "Troubleshooting common wireless connection problems."
Turn the wireless ON
- Check your modem's wireless by looking at the lights on the front of it. For CenturyLink modems, a flickering-green wireless indicator light means your wireless is ON and transmitting data. (If you have a non-CenturyLink modem, refer to your user guide or manufacturer's website.) If your wireless is OFF, turn it ON and try finding your network again.
- Check that your computer's wireless is also ON. Many laptops (and tablets) have a button on the front (or side) that you can conveniently hit to turn your wireless ON (or OFF). It's easy to bump these types of buttons and accidentally turn OFF your wireless. Double check that this hasn't happened.
Tip: While less common, it's possible your device is in Airplane Mode. If it is, you won't be able to find any local networks until you turn this setting OFF.
Find the correct network
- If you're using the modem's default settings, your network name is likely to be something very generic like "CenturyLink0001" (and easy to confuse with your neighbor's network, say, "CenturyLink0002"). To confirm your network name, look for your default credentials on a sticker on the bottom of your modem. (If you don't see a sticker, you can still find your network name.)
- It's possible at some point you (or someone in your household) changed your network name to make it easier to remember. If this is the case (and you now can't remember it), you can reset the modem settings and restore them to their factory defaults. For how to do this (as well as some words of caution), read "Modem RESET: Understanding what it does and when to use it."
Get within range of the modem
Wireless networks have a maximum broadcast range. If you're on the periphery, your reception may be slow or intermittent. When you're completely out of range, you won't be able to access it at all. If you're using a portable device, try moving closer to your modem and then look for your network list again. If you can't physically move closer, you may need something to boost your wireless signal, such as an external antenna or a wireless repeater.
Remove any interference
Without a signal, you aren't able to see a network list. It might surprise you which everyday items (e.g., cordless phones, microwaves) can interfere with your signal. Read "Improve the Performance of Your Wireless Connection" for tips on strengthening your signal.
Unhide (or unblock) the network
- Hidden networks. It's possible to hide a network so it doesn't show up in network lists. People do this for security reasons, though not all experts agree how effective hiding a network really is. If you think your network could be hidden, search the Internet for "SSID broadcasting" or refer to your user guide for how to unhide it.
- Blocked networks. If someone else manages your network, that person could be intentionally blocking the network you're trying to access. Contact the person in charge to see if this is the case. | <urn:uuid:b6ae2dec-202d-494c-9bbf-a69894d06475> | CC-MAIN-2017-04 | http://www.centurylink.com/home/help/repair/modem-and-wifi/what-to-do-if-your-wireless-network-isnt-showing-up.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939186 | 787 | 2.53125 | 3 |
"Ignorance is bliss" may be true in some situations, but not when it comes to health literacy. A study conducted by Kaiser Permanente and published recently in the Journal of the American Medical Association found that patients with congestive heart failure and low health literacy are three times more likely to die in a given year than patients with better health literacy skills.
For instance, patients, with high deductible health plans, might be avoiding even basic preventive care like annual checkup, etc., simply because they do not know that preventive care does not attract any out of the pocket expenses as it is covered by the plan. It might also be because these patients have not understood the benefits of their plan and hence avoid visiting the hospital.
Increasingly, stakeholders across the health care system have recognized the important link between health literacy and health status, and are advocating the necessity of "clear communication" to provide consumer health and benefits information that:
Is easy to access, understand, and act upon
Promotes consumer’s engagement in their own health
Results in better health outcomes
So what are the health plans doing to improve health literacy of the consumers?
Some common strategies that could be employed by various health plans to promote health literacy are:
Assessment of an organization to see if infrastructure exists to provide clear, easy to use information
Awareness sessions for the personnel who are involved in either written or spoken communication to promote health literacy
Adopt a target reading level for all communications, within and outside the organization
Standardize the jargon and acronyms used across organizations, which would require a joint effort from multiple organizations
In our next blog post we will examine how improved health literacy among Americans will impact the health of patients and reduce the overall cost of health care. Read more about health care consulting. | <urn:uuid:b0e33363-99ad-4a4a-877e-b6ca10f83c94> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/health-literacy-%E2%80%93-why-it-important | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946284 | 379 | 2.96875 | 3 |
A cooperative international report was released last week outlining Internet and mobile best practices aimed at curtailing malware, phishing, spyware, bots and other Internet threats. “Best Practices to Address Online and Mobile Threats” is a comprehensive assessment of Internet security as it stands today and explains in non-technical language the proactive steps that can help mitigate risks, according to the report's two major contributors, the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG) and the London Action Plan (LAP). It also provides an extensive review of current and emerging threats.
The report is also one of the first global efforts to encourage governments to deploy best practices, which are more often associated with businesses. It focuses on four major areas of concern: malware and botnets, social engineering and phishing, IP and DNS exploits, and mobile threats. To encourage government participation, it has been presented for review to the 34 member countries of the OECD (Organisation for Economic Co-operation and Development).
| <urn:uuid:14dda09f-b435-4993-8a82-ed51171f5ef8> | CC-MAIN-2017-04 | http://www.circleid.com/posts/20121030_m3aawg_london_action_plan_release_best_practices_online_and_mobile/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.810226 | 272 | 2.640625 | 3 |
A fiber optic cable packages a number of optical fiber cores inside protective jacketing and an outer sheath to form a communication line for transmitting optical signals. Fiber optic cable is the transfer medium for the many information networks of today's information society. If the Internet is the "information superhighway," then the fiber optic cable network is the cornerstone of that highway: the cable network is the physical routing of the Internet. Once a cable is damaged or blocked, that stretch of the "information superhighway" is cut. Beyond the usual telephone, telex and fax traffic, optical fiber carries large volumes of television signals, bank transfers and stock market data from moment to moment, none of which can tolerate interruption. Cable therefore plays an important role in today's information society; once a cable is damaged, communication stops working properly, affecting both work and daily life.
So, in today's information society, how to ensure the reliable and stable operation of ordinary optical fiber cable is an important topic that we cannot ignore.
The first point: choose the path scientifically and rationally. Ordinary aerial cable is a mainstay of operating optical power networks. To keep a cable running safely and stably, a suitable path must be chosen first; after construction, maintenance personnel must also inspect and maintain the route regularly to ensure the stable operation of the cable in the future. Where possible, the path should follow the highway or the outer side of village roads, while also taking other environmental factors into account.
The second point: the losses of the fiber optic cable and their solutions. Transmission loss is one of the most important factors determining the stability, reliability and reach of optical fiber transmission. Fiber transmission loss has many causes; in the construction and maintenance of an optical fiber communication network, the most noteworthy questions are how these losses arise and how to reduce them. Transmission loss falls into two main categories: splice loss (the inherent loss of fusion splices and the loss of mechanical splices) and non-splice loss (bending loss and other losses caused by construction practices and the application environment).
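To make the loss categories concrete, a simple link-budget calculation adds up fiber attenuation, splice loss and connector loss over a route. The figures below are illustrative textbook-style assumptions, not measurements of any particular cable:

```python
# Illustrative optical link loss budget (all figures are assumptions).
FIBER_LOSS_DB_PER_KM = 0.35   # typical single-mode attenuation at 1310 nm
SPLICE_LOSS_DB = 0.1          # per fusion splice
CONNECTOR_LOSS_DB = 0.5       # per connector pair

def link_loss_db(length_km: float, splices: int, connectors: int) -> float:
    """Total end-to-end loss: fiber attenuation plus splice and connector losses."""
    return (length_km * FIBER_LOSS_DB_PER_KM
            + splices * SPLICE_LOSS_DB
            + connectors * CONNECTOR_LOSS_DB)

# A hypothetical 40 km aerial route with 10 splices and 2 connector pairs:
loss = link_loss_db(40, 10, 2)
print(f"Estimated link loss: {loss:.1f} dB")  # 40*0.35 + 10*0.1 + 2*0.5 = 16.0 dB
```

A budget like this, compared against the transmitter power and receiver sensitivity, shows how much margin remains for the bending and environmental losses discussed above.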
Cable design and planning call for rational distribution, and construction experience needs to be continually explored and accumulated to further improve fiber optic cable construction programs. By eliminating defects found while the cable is in service and constantly summarizing the problems discovered in operation, we can improve the quality of optical transmission, extend the service life of the fiber optic cable, and adapt to the needs of system communications, development and construction. At FiberStore, we provide several types of bulk fiber optic cable, including indoor cables, outdoor cables, FTTH cables, armored cables, LSZH cables and some special cables, in varieties such as aerial cables, building cables, direct-buried cables, duct cables and underwater/submarine cables. If you have any questions about fiber optic cable, welcome to contact us at Sales@fs.com. | <urn:uuid:69ad1e9d-765f-4a61-9841-34cbba33694e> | CC-MAIN-2017-04 | http://www.fs.com/blog/how-to-improve-the-reliable-and-stable-operation-of-the-optical-fiber-cable.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91178 | 639 | 2.9375 | 3 |
A new HPC center will be launched by year's end in Massachusetts. The Massachusetts Green High-Performance Computing Center (MGHPCC) will be outfitted with terascale hardware and aims to deliver that capability in an environmentally friendly way. Beyond promises of power efficiency and a reduced carbon footprint, the center is deviating from typical facility models: it will act as a shared resource among multiple universities, requiring users to develop new strategies of implementation.
Last week, IEEE Spectrum likened the facility to a Thanksgiving table of HPC for colleges. University members include MIT, Harvard University, Boston University, Northeastern University and the University of Massachusetts system. The $95 million center will be located in the town of Holyoke and provide the necessary infrastructure to house and remotely access compute resources. This includes power, network and cooling systems. The universities will provide their own hardware and migrate research computing to the center.
Francine Berman, professor of computer science at Rensselaer Polytechnic Institute (and former SDSC director), equated the design to building a city instead of a skyscraper. She expects to see social challenges between member parties. “As hard as the technical, computational, power, and software problems are, social engineering of the stakeholders is dramatically difficult,” said Berman. Potential conflicts may involve the center’s ability to produce groundbreaking research and papers versus expanding its user base, which typically receives less of a spotlight.
John Goodhue, MGHPCC’s executive director, wants the facility to be simple and highly accessible to its users. This provides somewhat of a challenge as the universities currently house compute resources locally. Transitioning to an external facility adds a number of layers between the users and their equipment. He seems confident that the model will work though, saying the center will provide “ping, power and pipe” for its members.
The challenge, he says, is making the physical hardware behave as a set of private local machines for the various users. But thanks to high bandwidth network pipes and machine virtualization, that should now be possible.
The facility is undertaking an ambitious mission. Assuming it can meet the technical needs of its users, the center will also need to deal with universities that share different priorities. If the project turns out to be a success, it could become the model for future collaborative efforts. | <urn:uuid:b3b67979-6e6a-4856-a60c-467bdd0c8e45> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/05/29/massachusetts_offers_a_new_model_for_academic_hpc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947361 | 474 | 2.65625 | 3 |
Bertassoli B.M. (Federal University of Minas Gerais); Santos A.C., De Oliveira F.D. and De Oliveira D.M. (University of Sao Paulo); and 2 more authors.
Ciencia Animal Brasileira | Year: 2013
Many species of opossums are bred and used in laboratories, opening a wide field for studying and acquiring knowledge about the habits, diseases, diet and reproduction of these animals. This research aimed at describing the macroscopic and microscopic morphology of the trachea and larynx of opossums. In this study, five opossums (Didelphis sp.) were used. The trachea and larynx of the opossums were extracted, measured, processed by routine histology and stained with hematoxylin-eosin, Picrosirius and Masson's trichrome. The larynx can be divided into the cricoid, shaped like a "V"; the thyroid, shaped like a shield; the arytenoid, shaped like a shell; and the epiglottis, shaped like a sheet. The first three structures showed hyaline cartilage, and the last showed elastic cartilage. The trachea was cylindrical and consisted of 25 incomplete "C"-shaped cartilaginous rings, similar to amphibians, snakes, lizards and pigs, and different from domestic animals. | <urn:uuid:44eeea16-dd25-4a9d-8e51-5c4ee684e49b> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/centro-universitario-da-fundacao-of-ensino-octavio-bastos-484743/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.885888 | 321 | 2.6875 | 3 |
The draft new Computer Science GCSE is due to be submitted for approval to Ofqual next week, with the intention of rolling out the course to secondary schools in the UK in September 2016.
One distinctive feature of the new GCSE is a focus on cyber security, including security best practices, phishing, malware, firewalls and people as the "weak point" in secure systems, which learners will study for the first time at this level, as well as the ethical and legal concerns around computer science technologies.
Since last September, computing has been a compulsory part of the curriculum, with a move away from using computer applications to learning how to create them. Central to the new GCSE is a greater emphasis on ‘computational thinking’, which represents 60% of the content.
You can read the press release here. | <urn:uuid:9e700c93-7e7d-40de-a891-f69ac7f0a1bc> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/school-lessons-cyber-security-skills.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00137-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956534 | 173 | 2.96875 | 3 |
DOD Proposes Disposable Satellites To Aid Soldiers
DARPA's SeeMe program aims to use small disposable satellites to provide soldiers in remote locations with images of their surrounding terrain.
The Department of Defense (DOD) plans to add new satellite technology to its efforts to create better communications for warfighters in remote locations.
The Defense Advanced Research Projects Agency (DARPA) is working on small, disposable satellites that will give soldiers images of their surrounding location via handheld mobile devices, according to the agency. This information is often difficult for them to access from remote locations with limited satellite coverage.
The Space Enabled Effects for Military Engagements (SeeMe) program aims to create constellations of up to two dozen satellites, each lasting 60 to 90 days in orbit not far above the earth, according to the agency. After their useful time is up, the satellites will de-orbit and burn up without leaving space debris.
Soldiers will use handheld devices to communicate with the satellites, basically pressing a button requesting that a satellite "see me" to download location images in less than 90 minutes, according to DARPA.
[ DARPA is very active in developing satellite technology. Read DARPA Seeks Satellite Programs That Stick. ]
To keep the cost of the satellites to $500,000 apiece or less, DARPA aims to use off-the-shelf components--such as those used by the mobile phone industry--to develop the technology, said DARPA program manager Dave Barnhart in a statement. It also aims to develop advanced optics, power, propulsion and communications technologies to keep the size and weight of the satellites down, he said.
SeeMe will be a companion technology to the DOD's use of unmanned aerial vehicles (UAVs) to provide location information and images for soldiers; UAVs, however, are limited by their need to refuel, Barnhart said.
"With a SeeMe constellation, we hope to directly support warfighters in multiple deployed overseas locations simultaneously with no logistics or maintenance costs beyond the warfighters' handhelds," he said.
To meet potential bidders and generate ideas about how to proceed with the project and meet its low-cost and development goals, DARPA will hold an industry day on March 27.
DARPA already has a number of satellite projects under way, and SeeMe may leverage one--the Airborne Launch Assist Space Access (ALASA)--that's developing a better launch system for small satellite payloads, the agency said. Typically, smaller satellites must hitch rides on rockets carrying larger satellite payloads, but the agency wants to build a dedicated system for rapid and less expensive launch of payloads under 100 pounds.
SeeMe joins other DARPA efforts aimed at giving soldiers in remote locations better communications capabilities. DARPA recently unveiled a pair of wireless networking projects to that end--one called Mobile Hotspots to create a scalable, mobile, millimeter-wave communications backbone, and another called Fixed Wireless at a Distance to build a fixed-mobility infrastructure to connect limited-range warzone mobile networks to provide more reliable mobile device coverage.
| <urn:uuid:7d6f7b10-ac1f-4156-a784-e3ea17e31f94> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/dod-proposes-disposable-satellites-to-aid-soldiers/d/d-id/1103346 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91499 | 707 | 2.703125 | 3 |
NASA has sent the very first spacecraft into an orbit around Mercury, the closest planet on our solar system to the Sun.
Now that it is there, NASA's satellite MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) will orbit Mercury about 730 times in the next 12 months, beaming back pictures and never-before-available pictures and data on the planet.
The $446 million MESSENGER will orbit the planet once every 12 hours for the duration of its mission. The spacecraft will orbit Mercury at an altitude of about 200 kilometers (124 miles). At the time of orbit insertion, MESSENGER will be 46.14 million kilometers (28.67 million miles) from the Sun and 155.06 million kilometers (96.35 million miles) from Earth, NASA stated.
More on NASA: 20 projects that kept NASA hopping in 2010
According to NASA, MESSENGER's equipment will be checked out in the first few days after orbit is achieved and then on March 24 its instruments will be turned on and checked out. Then on April 4 the science phase of the mission will begin and the first data from Mercury will be beamed to Earth.
A quick look at MESSENGER:
Size: Main spacecraft body is 1.44 meters (57 inches) tall, 1.28 meters (50 inches) wide, and 1.85 meters (73 inches) deep; a front-mounted ceramic-fabric sunshade is 2.54 meters tall and 1.82 meters across (100 inches by 72 inches); two rotatable solar panel "wings" extend about 6.14 meters (20 feet) from end to end across the spacecraft.
Launch weight: Approximately 1,107 kilograms (2,441 pounds), including 599.4 kilograms (1,321 pounds) of propellant and 507.6 kilograms (1,119 pounds) of "dry" spacecraft and instruments.
Power: Two body-mounted gallium arsenide solar panels and one nickel-hydrogen battery. The power system generated about 490 watts near Earth and will generate its maximum possible output of 720 watts in Mercury orbit.
Propulsion: Dual-mode system with one bipropellant (hydrazine and nitrogen tetroxide) thruster for large maneuvers; 4 medium-sized and 12 small hydrazine monopropellant thrusters for small trajectory adjustments and attitude control.
The Johns Hopkins University Applied Physics Laboratory built and operates the MESSENGER spacecraft for NASA.
MESSENGER has a variety of tools at its disposal. For example, MESSENGER has two cameras -- one wide-angle, and one narrow-angle -- to help the "two-eyed" Mercury Dual Imaging System (MDIS) create a map of the planet's landforms, NASA said. It will also trace different features on the surface. A special pivoting platform will let scientists point the MDIS in whatever direction they choose, NASA said.
The Mercury Laser Altimeter (MLA) will create topographic maps of the planet's surface in unprecedented detail, NASA stated. When the laser shines down and reflects off Mercury's surface, a sensor will gather the light, allowing scientists to track variations in the distance from the surface to the spacecraft. A Radio Science experiment will use the Doppler Effect to track the changes in MESSENGER's velocity, and translate them into clues to how the planet's mass is distributed and where the crust is thicker or thinner, NASA said.
Three instruments will rely on a process called spectroscopy to tell scientists what elements are present in the rocks and minerals around the planet. The X-ray Spectrometer (XRS) will detect X-rays emitted by certain elements in Mercury's crust. The Gamma Ray and Neutron Spectrometer (GRNS) works in much the same way, detecting gamma rays and neutrons emitted by various elements. GRNS may also help to determine if water ice really exists in permanently-shadowed craters at the planet's north and south poles -- as previous observations suggest. The Mercury Atmospheric and Surface Composition Spectrometer will determine the composition of Mercury's atmosphere and also detect minerals on the surface. The instrument is extremely sensitive to light from the infrared to the ultraviolet, NASA said.
To get into its proper orbit, MESSENGER has taken a six-year scenic route through the solar system, including one flyby of Earth, two flybys of Venus, and three flybys of Mercury.
Some other Mercury facts from NASA:
- Mercury has a diameter of 3,032 miles, about two-fifths of Earth's diameter. Mercury orbits the sun at an average distance of about 36 million miles (58 million kilometers), compared with about 93 million miles for Earth.
- Because of Mercury's size and proximity to the brightly shining sun, the planet is often hard to see from the Earth without a telescope. At certain times of the year, Mercury can be seen low in the western sky just after sunset. At other times, it can be seen low in the eastern sky just before sunrise.
- Mercury travels around the Sun in an oval-shaped orbit. The planet is about 28,580,000 miles from the sun at its closest point, and about 43,380,000 miles from the sun at its farthest point. Mercury is about 48,000,000 miles from Earth at its closest approach.
- Mercury moves around the Sun faster than any other planet. The ancient Romans named it Mercury in honor of the swift messenger of their gods. Mercury travels about 30 miles per second, and goes around the sun once every 88 Earth days. The Earth goes around the sun once every 365 days, or one year.
- As Mercury moves around the Sun, it rotates on its axis, an imaginary line that runs through its center. The planet rotates once about every 59 Earth days -- a rotation slower than that of any other planet except Venus. As a result of the planet's slow rotation on its axis and rapid movement around the Sun, a day on Mercury -- that is, the interval between one sunrise and the next -- lasts 176 Earth days. | <urn:uuid:c1496df1-b868-4744-a71f-3d5fd12abd3b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2228792/wireless/nasa-satellite-goes-where-no-other-spacecraft-has-gone-before--mercury-orbit.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902915 | 1,269 | 3.640625 | 4 |
Even as it hurtles towards an August 5 rendezvous with the red planet, NASA's Mars Science Laboratory (MSL) is being fine-tuned for a more precise landing and better operations once it reaches its destination.
NASA today gave a status report for the MSL, which was launched in November 2011 and is still more than 17.5 million kilometers away from Mars. Of major interest was NASA's announcement that it has narrowed the landing target for the Mars rover Curiosity, letting it touch down closer to its ultimate destination for science operations, but also closer to the foot of a mountain slope that poses a landing hazard, the agency said.
"We're trimming the distance we'll have to drive after landing by almost half," said Pete Theisinger, Mars Science Laboratory (MSL) project manager at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. It was possible to adjust landing plans because of increased confidence in precision landing technology aboard the MSL spacecraft, which is carrying the rover.
According to NASA, the landing target had been an ellipse approximately 12 miles wide and 16 miles long (20 kilometers by 25 kilometers). Continuing analysis of the new landing system's capabilities has allowed mission planners to shrink the area to approximately 4 miles wide and 12 miles long (7 kilometers by 20 kilometers), assuming winds and other atmospheric conditions as predicted.
NASA said Curiosity's landing site is near the base of a mountain known as Mount Sharp inside Gale Crater, near the Martian equator. Rock layers located in the mountain are the prime location for research with the rover. Researchers plan to use Curiosity to study layers in the mountain that hold evidence about wet environments of early Mars. According to NASA, Mount Sharp rises about 5 kilometers above the landing target on the crater floor, higher than Mount Rainier above Seattle, though broader and closer.
. "However, landing on Mars always carries risks, so success is not guaranteed. Once on the ground we'll proceed carefully. We have plenty of time since Curiosity is not as life-limited as the approximate 90-day missions like NASA's Mars Exploration Rovers and the Phoenix lander." noted Dave Lavery, MSL program executive.
Some other updates in the mission:
- Software upgrades: The Lab will use an upgraded version of flight software installed on its computers during the past two weeks. Additional upgrades for Mars surface operations will be sent to the rover about a week after landing.
- Drill bits: NASA has gotten a better understanding of how the debris generated from the Lab's drill might sully the rock samples NASA is interested in. Experiments at JPL indicate that Teflon from the drill could mix with the powdered samples. Testing will continue past landing with copies of the drill. The rover will deliver the samples to onboard instruments that can identify mineral and chemical ingredients. "The material from the drill could complicate, but will not prevent analysis of carbon content in rocks by one of the rover's 10 instruments. There are workarounds," said John Grotzinger, MSL project scientist at the California Institute of Technology in Pasadena. "Organic carbon compounds in an environment are one prerequisite for life. We know meteorites deliver non-biological organic carbon to Mars, but not whether it persists near the surface. We will be checking for that and for other chemical and mineral clues about habitability."
- Two NASA Mars orbiters along with a European Space Agency orbiter will be in position to listen to radio transmissions as MSL descends through Mars' atmosphere.
| <urn:uuid:795fc6c4-0c29-49b8-8c52-efebd8cdabaf> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2222564/security/nasa-mars-lab-mission-gets-inflight-software-upgrade--more-specific-landing-spot.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00283-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948166 | 728 | 3.203125 | 3 |
Introduction by George Kupczak of the AT&T Archives and History Center
The film opens by showing the sun as the basic source of power on earth -- making possible the growth of plants and crops which sustain life. The sun is shown as the source of mechanical power -- how it affects the winds and water power -- and as the source of the energy locked in coal and oil. Man's dreams of someday converting the sun's rays directly into usable power were realized, in part, when the solar battery (we'd call it a solar cell today) was invented in 1954 by three Bell Laboratories scientists: Gerald Pearson, Calvin Fuller, and Daryl Chapin. Here, the camera takes us into the laboratory, and we see how a "Solar Battery" is made and how it works. The film then shows how the solar battery may be used in the phone system as a source of power and explains that the battery holds great promise for the future in other fields as well.
Interestingly enough, Alexander Graham Bell was fascinated with the uses of solar power back in the 1800s. His photophone--a solar telephone--is an example of this. In an interview late in his life, he speculated that people in the future might heat their homes with solar panels.
The first solar cell design from Bell Laboratories was tested in the field on a telephone carrier system in Georgia in 1955. The Bell System used 3,600 solar cells to power the satellite Telstar in 1962, and since then solar has been standard in design for space. However, it took decades for the price of solar cells to come down to where they could compete with other forms of energy--and they are still slow to be adopted in regions with strong sun patterns. Efficiency of solar technology has also greatly improved along the way.
Audience: public and schools
An MPO Production
Footage courtesy of AT&T Archives and History Center, Warren, NJ | <urn:uuid:c13368bf-e19a-4360-b898-a2b825996193> | CC-MAIN-2017-04 | http://techchannel.att.com/play-video.cfm/2011/4/18/AT&T-Archives-The-Bell-Solar-Battery | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00191-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953191 | 403 | 3.640625 | 4 |
Approaching IT in an eco-friendly way will not only result in a more environmentally-friendly organization, it will also enable your company to operate more efficiently and save money on such basic costs as paper and heating and cooling. To gain these dual benefits, organizations may want to consider implementing these green IT steps.
Adopt Server Virtualization
Virtualization decreases the number of servers required, thus saving on power and A/C. Virtual servers enable organizations to run multiple operating systems and multiple applications simultaneously on less physical hardware. As an example, think of collapsing four physical servers into four "virtual machines" that all run on only one physical server. The lack of need for three more physical servers fosters energy savings and lower capital expenses due to more efficient use of a firm's resources, as well as better server management, increased security and improved disaster recovery processes.
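As a rough sketch of the energy math behind consolidation, the example below estimates annual electricity savings from collapsing four physical servers onto one host. The wattage, PUE and electricity-price inputs are assumptions for illustration only:

```python
# Rough annual savings from consolidating servers (all inputs are assumptions).
WATTS_PER_SERVER = 400   # assumed average draw of one physical server
PUE = 1.8                # power usage effectiveness (cooling/overhead multiplier)
COST_PER_KWH = 0.10      # assumed electricity price in USD

def annual_savings(servers_before: int, servers_after: int) -> float:
    """Dollar savings per year from retiring (before - after) physical servers."""
    removed = servers_before - servers_after
    kwh = removed * WATTS_PER_SERVER / 1000 * 24 * 365 * PUE
    return kwh * COST_PER_KWH

# Collapsing four physical servers onto one host:
print(f"~${annual_savings(4, 1):,.0f} saved per year")  # ~$1,892 under these inputs
```

Even with modest assumptions, the electricity and cooling savings compound across every rack of servers removed.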
Establish IT Operations in a Hosted Environment
Hosting saves on electricity usage by reducing the number of redundant data center environments, thereby reducing the organization's carbon footprint. How would an organization transition to an IT hosting environment? First, choose a managed services provider who will work with you every step of the process – finding appropriate space, moving servers, coordinating system downtime during transition, and performing tests to ensure the hosted environment is functioning properly. Becoming part of a larger data center allows firms to avoid duplicating energy consumption by operating an entirely separate center just for their own servers.
Utilize Software-as-a-Service (SaaS)
With SaaS (Software-as-a-Service), a software vendor hosts a Web-based software application and operates it for use over the Internet. This helps enable telecommuting and potentially vastly extends the life of PCs, resulting in energy savings as well as a reduction in the disposal rate of computer equipment.
By consolidating software into one location with a SaaS model, firms no longer need to maintain the systems infrastructure (i.e., building a server, retaining space to house the server, investing in server room air conditioning, etc.) to maintain the software separately, thereby reducing energy consumption. To take advantage of the SaaS model, firms simply choose a provider and utilize the service for a predictable monthly fee. Users also get anytime, anywhere access to their SaaS applications from any Internet-enabled computer.
Modify Printing Habits
One of the simplest ways to go green is to be mindful of the way in which you handle hard copy documents.
- Work towards a paperless office. Read and review documents and spreadsheets online, if possible, instead of printing them;
- Recycle paper you don't need, and purchase recycled paper for your office;
- Make it a habit to print on both sides or use the back side of old documents for faxes, scrap paper, or drafts;
- Avoid color printing and print in draft mode whenever feasible.
Recycle toner and ink cartridges and buy remanufactured ones. Refurbished printer toner cartridges cost 30-50% less than new ink cartridges and tests indicate that remanufactured printer cartridges produce more copies at a lower price per copy. These simple approaches to printing can go a long way in saving trees and energy. In fact, if offices throughout the country increased the rate of two-sided photocopying from 20% to 60%, they could save the equivalent of 15 million trees.
Becoming a more environmentally-friendly organization typically consists of making small changes over a period of time. Simply modifying your approach to hard copy documents is a good start. Little by little, your firm can save money, operate IT more efficiently and reduce your overall carbon footprint by implementing some, if not all, of the suggested "green" IT methods above. It's a win-win for everyone. | <urn:uuid:9150631a-068d-4533-a0f3-5712291aeb3a> | CC-MAIN-2017-04 | http://www.mindshift.com/Blog/2012/February/Green-IT-Best-Practices.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91953 | 760 | 2.546875 | 3 |
There’s an article in the Wall Street Journal today that takes on how virtual currency in internationally popular games, such as the near ubiquitous Pokémon Go, can cause interesting financial dilemmas for their creators. The article, “Pokémon Go Illustrates a Currency Problem,” highlights how Nintendo, the company behind Pokémon Go, could face making less money on in-app purchases in places like Mexico, where the value of the peso is less than say, the yen in Japan.
You know who else notices things like this? Fraudsters.
One way a fraudster can exploit this difference in value is a technique called currency arbitrage. Basically, a fraudster simulates his or her presence in different countries using proxy servers, purchases virtual goods with virtual currency in one location (the one with the weaker currency, in this case Mexico), resells them at another location (the one with the stronger currency, this time Japan), and pockets the price difference.
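A toy calculation shows where the profit comes from. Every price and exchange rate below is hypothetical, chosen only to illustrate the mechanics:

```python
# Toy currency-arbitrage margin (all prices and rates are hypothetical).
USD_PER_MXN = 0.055    # assumed peso-to-dollar rate
USD_PER_JPY = 0.0095   # assumed yen-to-dollar rate

item_cost_mxn = 179.0      # what the fraudster pays via a Mexican storefront
resale_price_jpy = 1200.0  # what a Japanese buyer pays under the table

cost_usd = item_cost_mxn * USD_PER_MXN        # ~$9.85
revenue_usd = resale_price_jpy * USD_PER_JPY  # ~$11.40
print(f"Margin per item: ${revenue_usd - cost_usd:.2f}")  # ~$1.56 per item
```

A margin of a dollar or two per item sounds small, but fraudsters automate this across thousands of accounts, which is why regional pricing gaps attract them.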
In the case of Pokémon Go, you currently can't transfer goods in exchange for money. According to Niantic's Pokemon Go Terms of Service, “Trading Items may be traded with other Account holders for other Trading Items, but Trading Items can never be sold, transferred, or exchanged for Virtual Money, Virtual Goods, "real" goods, "real" money, or "real" services, or any other compensation or consideration from us or anyone else.” Even though transferring items for money is not allowed, players can still transfer items and exchange money under the table. That can result in yet another loss for the game, since those players are less likely to spend money in the game. This is just one of the many ways fraudsters can exploit both game companies and their players, and one of the issues with virtual currency.
If you’re interested in learning more about ways fraudsters are making real money in this land of virtual goods, take a look at this post our CEO authored for TechCrunch: There Is Real Fraud In The Underground Market For In-Game Virtual Goods. | <urn:uuid:388a69d6-a4b6-4c54-bf18-645ab6487c3d> | CC-MAIN-2017-04 | https://www.datavisor.com/quick-takes/spend-money-to-make-money-virtual-currency-arbitrage/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00127-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928163 | 430 | 2.625 | 3 |
NoSQL solutions are solutions that do not accept the SQL language against their data stores. Ancillary to this is the fact that most do not store data in the structure SQL was built for - tables. Though the solutions are "no SQL", the idea is that "not only" SQL solutions are needed to solve information needs today. The Wikipedia article states "Carlo Strozzi first used the term NoSQL in 1998 as a name for his open source relational database that did not offer a SQL interface". Some of these NoSQL solutions are already becoming perilously close to accepting broad parts of the SQL language. Soon, NoSQL may be an inappropriate label, but I suppose that's what happens when a label refers to something that it is NOT.
So what is it? It must be worth being part of. There are currently at least 122 products claiming the space. As fine-grained as my information management assessments have had to be in the past year routing workloads across relational databases, cubes, stream processing, data warehouse appliances, columnar databases, master data management and Hadoop (one of the NoSQL solutions), there are many more viable categories and products in NoSQL that actually do meet real business needs for data storage and retrieval.
Commonalities across NoSQL solutions include high volume data which lends itself to a distributed architecture. The typical data stored is not the typical alphanumeric data. Hence the synonymous nature of NoSQL with "Big Data". Lacking full SQL generally corresponds to a decreased need for real-time query. And many use HDFS for data storage. Technically, though columnar databases such as Vertica, InfiniDB, ParAccel, InfoBright and the extensions by Teradata 14, Oracle (Exadata), SQL Server (Denali) and Informix Warehouse Accelerator deviate from the "norm" of full-row-together storage, they are not NoSQL by most definitions (since they accept SQL and the data is still stored in tables).
They all require specialized skill sets quite dissimilar to traditional business intelligence. This dichotomy in the people who perform SQL and NoSQL within an organization has already led to high walls between the two classes of projects and an influx of software connectors between "traditional" product data and NoSQL data. At the least, a partnership with Cloudera and a connector to Hadoop seems to be the ticket to claiming Hadoop integration.
NoSQL solutions fall into categories. These labels may (I dare say should) replace "NoSQL" as the operative term since, despite the similarities, the divergences are many and are widening. Whereas once all this data was excluded from management (or force-fit into relational databases), NoSQL solutions handle this data better, cost less, and avoid per-CPU pricing models. Naturally, many of the solutions are open source and embraced by various vendors with value-added code, training, support, etc.
The categories (and future industries) are:
Key-value stores (KVS) like Redis store each datum paired with its key, accessible via a navigable tree structure or a hash table. KVS support dynamic online activity with unstructured data.
Document Stores like mongoDB and CouchDB support schema-less sharding for guaranteed availability.
While sharing the concept of column-by-column storage of columnar databases and columnar extensions to row-based databases, column stores like HBase and Cassandra do not store data in tables but store the data in massively distributed architectures.
Graph Stores like Bigdata represent connections across nodes and are useful for relationships among associative data sets.
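At their core, the key-value stores in the first category expose little more than a hash table keyed by an application-chosen string. The following minimal in-memory sketch (illustrative only) shows that access model; real products such as Redis add persistence, distribution and replication on top:

```python
# A toy in-memory key-value store in the spirit of the KVS category.
# Illustrative only; production KVS add durability, sharding, replication.
class KVStore:
    def __init__(self):
        self._table = {}          # hash table: key -> value

    def put(self, key: str, value: bytes) -> None:
        self._table[key] = value  # schema-less: any bytes under any key

    def get(self, key: str) -> "bytes | None":
        return self._table.get(key)

store = KVStore()
store.put("session:42", b'{"user": "alice", "cart": ["sku-1"]}')
print(store.get("session:42"))  # the value comes back opaque, as stored
```

The absence of tables, joins and schemas is exactly what lets these stores shard and scale so easily -- and what keeps them outside the reach of full SQL.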
Posted September 14, 2011 6:22 PM
| <urn:uuid:8c424635-6391-4657-b121-bb2e713b7e3c> | CC-MAIN-2017-04 | http://www.b-eye-network.com/blogs/mcknight/archives/nosql/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00035-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926902 | 757 | 2.53125 | 3 |
Charles Walton, 89 (November)
Known as the "Father of RFID," Walton created technology in the 1970s and 1980s that is now common everywhere from warehouses to retail stores to public libraries. RFID technology beat out barcodes for many applications and is paving the way for technologies such as near-field communications (NFC) being used for eWallets.
According to a story on the history of radio frequency identification (RFID) technology in RFID Journal, Mario Cardullo received a patent in 1973 for an active RFID tag with rewritable memory, and that same year “Charles Walton, a California entrepreneur, received a patent for a passive transponder used to unlock a door without a key. A card with an embedded transponder communicated a signal to a reader near the door. When the reader detected a valid identity number stored within the RFID tag, the reader unlocked the door. Walton licensed the technology to Schlage, a lock maker, and other companies.” Like many wireless pioneers, Walton got a start working with such technology for the military – in his case, the Army Signal Corps, after studying electrical engineering in college. He later spent a decade at IBM, then started his own company called Proximity Devices to make devices based on his wireless patents. The first patent to mention RFID, for a "portable radio frequency emitting identifier," was awarded to Proximity in 1983. | <urn:uuid:4f6b5323-24c3-46b9-87eb-fdbc4ee27edf> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2869064/data-center/2011-s-notable-tech-industry-deaths.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959155 | 290 | 3.296875 | 3 |
In a recent research paper [pdf], we analyzed the security features of the APCO Project 25 (P25) digital two-way radio system. P25 radios are widely deployed in the United States and elsewhere by state, local and federal agencies, first responders, and other public safety organizations.
The P25 security features, in which voice traffic can be encrypted with a secret key to frustrate unauthorized eavesdropping, are used to protect sensitive communications in surveillance and other tactical law enforcement, military and national security operations. Because radio signals are inherently easy to detect and intercept, encryption is the primary mechanism used to secure sensitive P25 traffic.
Our analysis found significant — and exploitable — security deficiencies in the P25 standard and in the products that implement it. These weaknesses, which apply even when encryption is properly configured, leak data about the identity of transmitting radios, enable active tracking and direction finding of idle (non-transmitting) users, allow highly efficient (low-energy) malicious jamming and denial of service, and permit injection of unauthenticated traffic into secured channels.
Unfortunately, many of these vulnerabilities result from basic design flaws in the P25 protocols and products, and, until the standard is changed and products are upgraded, cannot be effectively defended against by end users or P25 system administrators. While we are unaware of incidents of criminals carrying out the active attacks we discovered, the hardware resources required to conduct them are relatively modest. As technology advances, these attacks will demand increasingly fewer resources and less sophistication to carry out.
However, in addition to active attacks against P25, we also discovered a serious practical problem that can be exploited easily today against fielded P25 systems: a significant fraction of sensitive traffic that users believe is encrypted is actually being sent in the clear. In the metropolitan areas we sampled, we intercepted literally thousands of unintended clear transmissions each day, often revealing highly sensitive tactical, operational, and investigative data.
In every tactical system we monitored, encryption was available and enabled in the radios’ configurations (and, indeed, was used correctly for the majority of traffic). Yet among the encrypted traffic were numerous sensitive transmissions sent in the clear, without their users’ apparent knowledge. Virtually every agency using P25 security features appears to suffer from frequent unintended clear transmission, including federal law enforcement and security agencies that conduct operations against sophisticated adversaries.
This unintended clear sensitive traffic can be monitored easily by anyone in radio range, including surveillance targets and other adversaries, using only readily available, inexpensive, unmodified off-the-shelf equipment, including many of the latest generation of “scanner” radios aimed at the hobby market. Unintended cleartext therefore represents a serious practical threat to communications security for agencies that rely on P25 encryption.
P25 encryption usability deficiencies
As noted in our paper, we found two distinct causes for unintended sensitive cleartext in federal P25 systems, each accounting for about half the clear transmissions we intercepted:
Ineffective feedback to the user about whether encryption is enabled. Subscriber radios are generally configured to enable encryption of their transmissions via a two-position switch (located on the control head of mobile radios or near the channel selector of portable radios). The switch controls only the encryption of outbound transmissions; clear transmissions can still be received when the radio is in encrypted mode, and encrypted transmissions can still be received when the radio is in clear mode (as long as the correct keys are available). This means that if a radio is inadvertently placed in the clear mode, it will still appear to work normally, interoperating with encrypted radios in its network even while actually transmitting in the clear.
Unavailable or expired key material. Many systems expire or “re-key” their encryption keys at frequent intervals, in the belief that this makes encrypted traffic more secure against attack. (In fact, this is a myth; modern ciphers such as the AES algorithm used in federal P25 systems are designed to remain secure even if a single key is used to protect many years worth of traffic, and, as we discuss below, the problem of key compromise in law enforcement environments is negligible.) But the effect of frequent rekeying is that one or more users in a group can be left without current key material. When this happens, the entire group must switch to clear mode in order to communicate.
Fortunately, although the default configuration of most P25 radios exacerbates these problems, P25 system administrators can configure radios and adjust keying practices to mitigate these problems and reduce the incidence of unintended clear transmission of sensitive traffic in their systems.
Configuring P25 systems for more reliable security
The user interfaces of most P25 radios are highly configurable by an agency’s radio technicians, through the use of “customer programming” software provided by the manufacturer. We found it to be possible to configure existing P25 radios to have much more reliable security behavior, with better feedback to the user and more intuitive operation, than the default configuration provides. We recommend that encrypted radios used in tactical law enforcement operations be configured according to the guidelines in this section.
We use the Motorola Astro25 radios (e.g., the XTL-5000 mobile radio and the XTS-5000 portable) for terminology and illustration. Most other vendors’ P25 radios have similar configuration capabilities, but they may use different terminology from Motorola’s for the configurable features; contact your radio vendor for specific information on how to accomplish a particular configuration.
1. Disable the “secure” switch
The behavior of the “secure” switch is a source of confusion among even trained users. Aside from its obscure labeling (a zero for clear mode and a zero with a slash for encrypted mode), it is often out of view, can change position if touched, and does not provide direct feedback tied to the objective of communicating.
Instead, we recommend that encryption be a permanently enabled or disabled function of the selected channel. That is, if an agency has a frequency called Tac1 in which both encrypted and clear communication take place, radios should be configured with two Tac1 channels, one with encryption always enabled and the other with encryption always disabled. The two channel names (as displayed on the radio screen) should reflect this, e.g., Tac1 Secure and Tac1 Clear.
On the Motorola Astro25 radios, the secure/clear switch can be disabled in the “Radio Configuration” menu under “switches”; set the switch’s function to “blank”. Channels can then be “strapped” for “clear” or “secure” mode in the “Personality” menu for the channel.
2. Prevent mixed encrypted/clear communication with separate NACs
Current P25 radios do not tie the decryption behavior of their receiver to the encryption behavior of their transmitter. That is, as long as a receiving radio has the correct key loaded, it will decrypt and play all incoming encrypted transmissions it receives on the current channel, even if it is itself set to transmit in clear mode. Similarly, even if a radio is set to transmit in encrypted mode, it will still receive clear transmissions on the current channel. This behavior runs counter to many users’ expectations, and means that if a user in an encrypted network has his or her encryption switch in the wrong position, communication still occurs as if it were encrypted. The error is thus unlikely to be detected. (Some radios can be configured with a cleartext “beep” warning, but we found it to be ineffective at actually alerting users).
Acceptance of received clear traffic in encrypted mode and received encrypted traffic in clear mode is a basic feature of the P25 architecture; it cannot be disabled through most radios’ configuration software. However, it is possible to use P25’s Network Access Code (NAC) mechanism to segregate encrypted and clear traffic and achieve effectively the same result. (NACs are the P25 equivalent of the sub-audible CTCSS tones used in analog FM systems.) P25 signals always include a 12-bit NAC code; P25 receivers can be configured to mute received transmissions that do not carry the correct code.
To prevent encrypted users from receiving clear traffic (and vice-versa), simply configure different NACs on the clear and encrypted versions of each channel. That is, Tac1 Clear might use a NAC code of “A01”, while the Tac1 Secure version of the channel could use NAC code “A02”. Even though both channels use the same frequency, users set to the encrypted version of the channel will not hear the transmissions of those on the clear version, nor will users on the clear channel hear the encrypted transmissions, even if they have the correct keys.
This configuration prevents the (common) scenario where a single user accidentally repeatedly transmits in the clear as part of an otherwise encrypted group. Communication simply cannot occur until all users are set to either encrypted or clear mode. (Note that this configuration prevents only accident, not malice. An attacker can still transmit clear traffic with the “encrypted” NAC to inject false messages).
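The receiver-muting behavior that makes this segregation work can be shown with a toy simulation. This is illustrative Python, not vendor code; the NAC values follow the example above:

```python
# Toy model of P25 receiver muting by Network Access Code (illustrative only).
class Radio:
    def __init__(self, name, tx_nac, rx_nac, encrypted):
        self.name, self.tx_nac, self.rx_nac = name, tx_nac, rx_nac
        self.encrypted = encrypted

    def receive(self, sender):
        # A P25 receiver mutes any transmission whose NAC does not match.
        if sender.tx_nac != self.rx_nac:
            return f"{self.name}: muted (wrong NAC)"
        return f"{self.name}: plays {'encrypted' if sender.encrypted else 'clear'} audio"

secure = Radio("Tac1 Secure unit", tx_nac="A02", rx_nac="A02", encrypted=True)
clear = Radio("Tac1 Clear unit", tx_nac="A01", rx_nac="A01", encrypted=False)

print(secure.receive(clear))   # muted: clear traffic never reaches secure users
print(clear.receive(secure))   # muted: encrypted traffic never reaches clear users
```

Because neither group hears the other even on the same frequency, a lone radio left in the wrong mode simply cannot communicate, which forces the error to be noticed and corrected.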
The disadvantage of segregating clear and encrypted traffic on separate NACs is that, in an emergency, it may be more difficult for an unkeyed user to communicate with encrypted radios. But the behavior of radios configured in this way is ultimately much more intuitive, making the “encrypted” or “clear” mode a more reliable indicator of the state of the receiver as well as of the transmitter.
On the Motorola Astro25 radios, the transmit and receive NAC codes are set in the “Zone Channel Assignment” menu. Note that repeaters must also be configured to accept both NACs (or to operate in transparent mode). Trunked P25 systems may require additional configuration to use multiple NACs.
3. Use long-term, non-volatile keys
Many federal systems use the P25 “OTAR” protocol to manage and distribute keys. For a variety of reasons, this protocol is unreliable in practice. The result is that users frequently do not have current keys, and are unable to successfully rekey. When users without key material must communicate with a group, the only option is for the entire operation to switch to the clear. That is, attempting to centrally manage keys via OTAR has the effect of forcing many sensitive operations to use clear mode.
Exacerbating the situation is the practice of using short-lived, volatile keys that are changed frequently (monthly or even weekly). This practice has its origin in military operations, where keyed radios are occasionally captured by enemy forces. (Once the network re-keys, captured radios become useless). But captured radios are not a significant threat in the law enforcement tactical environment. Here, the practice of re-keying results in less security, not more, especially given the unreliability of P25 OTAR systems.
Rather than short-lived keys refreshed via OTAR, we strongly recommend that agencies simply load a small set of semi-permanent keys into all radios used for sensitive communication. These keys should be changed (and radios re-keyed) only in the (rare) event that a radio is discovered to be lost or stolen. (Even in the unlikely case that a radio is stolen and not detected, a system with long-lived keys is still more secure with a small number of compromised radios than a system using an unreliable keying scheme that frequently forces users to operate in the clear).
Note that radios configured for “volatile” keying can lose their key material if their battery is disconnected and under certain other conditions. When this happens, radios can only operate in the clear until they are re-keyed. To prevent accidental key erasure, we recommend that Motorola Astro25 radios be configured for “Infinite Key Retention” in the “Security” menu of the programming software. We also suggest that provisions be made for deploying keyloading devices in the field to quickly re-key radios if keys are accidentally deleted.
For further information, see our paper [pdf].
The configuration changes here are intended to address only one (albeit perhaps the most immediate and serious) of the P25 security vulnerabilities that we discovered — that of unintentional transmission of sensitive cleartext. However, we emphasize that configuring radios as we recommend does not prevent other attacks we discovered (such as low-energy jamming or active tracking). Until these problems are addressed in the standard and implemented in new products, we urge agencies that use P25 for sensitive traffic, in addition to configuring radios as we recommend here, not to regard P25 communication as reliably secure against modestly sophisticated adversaries.
We have made the federal tactical and public safety radio community aware of the attacks we discovered and of the problem of unintended cleartext, but it is possible that some sensitive P25 users are not yet aware of the risks and mitigations that are possible. While we cannot provide extensive consulting services, we are happy to discuss specific issues and mitigation strategies with agencies whose communication may be at risk.
Contact the University of Pennsylvania P25 Security Research Group via email, blaze (atsign) cis.upenn.edu | <urn:uuid:b2caf7fc-0b45-4301-8874-0d0bdb17f18d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/08/10/p25-security-mitigation-guide/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925815 | 2,733 | 2.578125 | 3 |
Flash memory shines on reads: it reads 100 times faster than a disk. But its performance advantage is much weaker on writes, and its write endurance is much lower than disk’s. Therefore, Nimble OS uses flash only for accelerating reads, aka “read caching”. It uses NVRAM (a DRAM-based device) for accelerating writes, aka “write caching”.
On the other hand, a few storage systems use flash memory for write caching. Here I describe what compels these systems to use flash in this manner and the cost-benefit tradeoff it entails.
In general, storage systems implement write caching using a non-volatile "write buffer." On a write request, the system stores the data into the write buffer and acknowledges the request. In the background, as the buffer fills up, the system drains the buffer to the underlying storage. The speed at which the write buffer can be drained to underlying storage constrains the sustainable write throughput.
The write buffer helps in following ways:
- It enables the storage system to acknowledge a write request with very low latency.
- It can absorb a high-throughput burst of writes, while it drains less speedily to disk-based storage over a longer period of time.
- It absorbs overwrites (multiple writes to the same blocks), thereby reducing the amount of drainage, which may support a higher write throughput.
- It allows the data being drained to be sorted by logical addresses, thereby improving the sequentiality of drainage, which may improve the speed of draining and support a higher write throughput (this and the previous effect are illustrated in the sketch after this list).
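The last two benefits, overwrite absorption and address sorting, are easy to see in a toy model. The following Python sketch is illustrative only, not any vendor's implementation:

```python
# Toy write buffer (illustrative): overwrites are absorbed because the dict
# keeps only the latest data per block, and draining in sorted LBA order
# turns scattered writes into a more sequential stream for the disk.
class WriteBuffer:
    def __init__(self):
        self._pending = {}                 # logical block address -> latest data

    def write(self, lba: int, data: bytes) -> None:
        self._pending[lba] = data          # a second write to the same LBA replaces the first

    def drain(self):
        for lba in sorted(self._pending):  # sort by address before going to disk
            yield lba, self._pending[lba]
        self._pending.clear()

buf = WriteBuffer()
for lba, data in [(90, b"x"), (7, b"old"), (42, b"y"), (7, b"new")]:
    buf.write(lba, data)
print(list(buf.drain()))  # [(7, b'new'), (42, b'y'), (90, b'x')] -- 4 writes became 3
```

The larger the buffer, the more overwrites it can absorb and the more thoroughly it can sort, which is exactly the appeal of a big secondary buffer when the underlying disk layout drains slowly.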
The latency advantage depends on the buffering medium. NVRAM (DRAM made non-volatile with battery backup or flash backup) provides latency of a few tens of microseconds. Flash a few hundreds of microseconds. Disk a few milliseconds. Most storage systems use NVRAM for write buffering. However, file systems that are not tied to a hardware platform cannot assume the availability of NVRAM, and may buffer writes on flash or even on disk. E.g., the write buffer in ZFS, called ZFS Intent Log (ZIL), is generally stored on flash or disk.
A few storage systems now use flash as a secondary write buffer in addition to using NVRAM. E.g., EMC “FAST cache” uses flash as both a read cache and a write buffer. In such systems, written data is staged through the NVRAM-based buffer, the flash-based buffer, and finally to disk. The flash-based buffer is much bigger than the NVRAM-based buffer, and therefore provides higher levels of burst absorption, overwrite absorption, and sequentiality improvement, which in turn may support a higher write throughput. These advantages are based on the assumption that the NVRAM-based buffer cannot be drained directly to disk-based storage at high throughput.
Most storage systems employ a simplistic disk layout such that draining the write buffer results in random writes on disk. Furthermore, these systems amplify the IO load in order to support parity RAID and copy-on-write snapshots. The resulting load cripples the speed at which data can be drained to disk. (NetApp’s WAFL performs better by concatenating random data blocks and writing them into free space, but it too degenerates gradually as the free space becomes fragmented.) Because these systems cannot drain to disk at high speed, they stand to benefit from adding a larger write buffer. Even so, this benefit is limited because it does not eliminate random writes to disk—it only reduces them by some modest amount.
Furthermore, many of these storage systems could instead use a disk-based write buffer, which would be similar to a write-ahead log used in database systems. The log is written sequentially, which disks perform just as well as flash drives (about 100MB/s per drive). One advantage of a flash-based buffer over a disk-based buffer is that it also serves as a read cache for newly written data. However, as described later, there are cheaper ways of building a read cache. Another advantage is that the draining process can read the flash-based buffer in random order, so it supports a more thorough sorting of the data, thereby extracting more sequentiality.
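By way of contrast, a disk-based write-ahead log is just an append-only file, and appending is exactly the sequential access pattern that lets a disk run at its full streaming bandwidth. The record framing below is an assumption for illustration, not the layout of the ZIL or of any database log.

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Toy write-ahead log: records are appended sequentially.
class WriteAheadLog {
public:
    explicit WriteAheadLog(const char* path)
        : out_(path, std::ios::binary | std::ios::app) {}

    // Frame each record as [lba][length][payload] and append it.
    void append(uint64_t lba, const std::vector<uint8_t>& data) {
        uint32_t len = static_cast<uint32_t>(data.size());
        out_.write(reinterpret_cast<const char*>(&lba), sizeof lba);
        out_.write(reinterpret_cast<const char*>(&len), sizeof len);
        out_.write(reinterpret_cast<const char*>(data.data()), len);
        out_.flush();  // a real log would also fsync for durability
    }

private:
    std::ofstream out_;
};
```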
Now consider the cost of write buffering. A flash-based buffer is expensive. First, because it holds the only copy of newly written data, it must employ the more expensive forms of flash and controllers, and also some RAID-like redundancy in the form of parity or mirroring. (In fact, a flash-based buffer needs to be even more reliable than an NVRAM-based buffer, because it is larger and the overwrite-absorption and re-sorting might make it difficult to recover the system to a consistent state upon loss.) On the other hand, a read cache does not ever store the only copy of any data, so it can be constructed inexpensively without sacrificing reliability: add a checksum to every block, verify the checksum on every read, and toss the cached block if the checksum does not match. Second, pushing the writes through flash burns through its limited write endurance, again requiring expensive, high-endurance, flash. Third, to obtain a significant edge over NVRAM-based log, the flash-based log must be much bigger. E.g., it may need to be large enough to absorb all writes during a busy period lasting hours.
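The checksum-verified read cache described above is simple to sketch. Again, this is illustrative: the checksum function is a placeholder, and the contract of falling back to the authoritative copy on disk is assumed.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Toy read cache that never holds the only copy of data, so it can
// use cheap media: verify a checksum on every read and toss the
// entry on mismatch; the caller then re-reads from disk.
class ReadCache {
public:
    void insert(uint64_t lba, const std::vector<uint8_t>& data) {
        Entry e; e.data = data; e.sum = checksum(data);
        entries_[lba] = e;
    }

    // Returns true and fills 'out' only if the cached copy verifies.
    bool read(uint64_t lba, std::vector<uint8_t>& out) {
        std::unordered_map<uint64_t, Entry>::iterator it = entries_.find(lba);
        if (it == entries_.end()) return false;
        if (checksum(it->second.data) != it->second.sum) {
            entries_.erase(it);  // corrupt: discard, do not repair
            return false;
        }
        out = it->second.data;
        return true;
    }

private:
    struct Entry { std::vector<uint8_t> data; uint32_t sum; };

    // Placeholder checksum; a real system would use CRC32C or similar.
    static uint32_t checksum(const std::vector<uint8_t>& d) {
        uint32_t h = 2166136261u;  // FNV-1a
        for (size_t i = 0; i < d.size(); ++i) { h ^= d[i]; h *= 16777619u; }
        return h;
    }

    std::unordered_map<uint64_t, Entry> entries_;
};
```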
The questionability of using flash as a write cache for disk is epitomized by a research paper, Extending SSD Lifetimes with Disk-Based Write Caches, which states the following:
“We present Griffin, a hybrid storage device that uses a hard disk drive (HDD) as a write cache for a Solid State Device (SSD).”
In other words, the authors are proposing just the opposite: a disk-based write cache for flash! These authors are reputable researchers from academia and Microsoft Research, and they exhibit a deep understanding of flash characteristics as a storage medium. There are practical issues with following their proposal, but the mere existence of the proposal calls into question the wisdom of using flash for write caching.
Nimble’s CASL™ filesystem uses the entire disk storage as a log, and always writes data to disk in large sequential chunks. This enables it to drain data from NVRAM buffer to disk storage at high throughput. This avoids the need for a secondary write buffer. It is as if the entire disk subsystem is at once a write buffer and the end point of storage.
In summary, flash-based write caching addresses burst throughput but only partially improves sustained throughput, while a write-optimized disk layout addresses both with little cost. However, systems with legacy disk layouts are forced to cache writes in flash as a costly fix to improve their write performance partially.
- Umesh Maheshwari
IP Routing (e) - Flash
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
As the communications industry transitions to wireless and wireline converged networks to support voice, video, data and mobile services over IP, a solid understanding of IP and its role in networking is essential. IP is to data transfer as a dial tone is to a wireline telephone. A fundamental knowledge of IPv4 and IPv6 networking, along with the use of routing, is a must for all telecom professionals, and a solid foundation in IP and routing has become a basic job requirement in the carrier world. Understanding IP routing protocols is an important part of building this foundation. Starting with a basic definition, the course provides a focused, base-level introduction to the fundamentals of IP routing and associated protocols such as OSPF, BGP, and VRRP. It is a modular introductory course covering only IP routing, part of the overall eLearning IP fundamentals curriculum.
This course is intended for those seeking a basic level introduction to IP routing and the common associated protocols.
After completing this course, the student will be able to:
• Define the differences between IP routing and forwarding
• Distinguish between Interior Gateway Protocols and Exterior Gateway Protocols and give examples of each
• Explain Open Shortest Path First (OSPF) and how it is used
• List the main types of Link State Advertisements in OSPF
• Describe Border Gateway Protocol (BGP) and how it is used
• Show how route reflectors simplify network configuration and reduce routing overhead
• Explain how PING can be used to verify end-to-end connectivity in an IP Network
• Describe how Traceroute can be used to track down routing errors in a network
1. What is IP routing?
1.1. IP routing basics
1.2. Routing and forwarding
1.3. Routing protocols
2. Open Shortest Path First (OSPF)
2.1. OSPF basics
2.2. A closer look at OSPF
3. Border Gateway Protocol (BGP)
3.1. BGP basics
3.2. A closer look at BGP
3.3. Scaling BGP
4. Redundancy Protocols
5. Debugging Tools and Utilities
Weather satellite program appears back on track
- By Frank Konkel
- Mar 03, 2014
Four of the six state-of-the-art scientific instruments that will launch with the National Oceanic and Atmospheric Administration’s first next-generation geostationary satellite are ready for spacecraft integration set to begin this month.
The instruments include the advanced baseline imager, solar ultraviolet imager, extreme ultraviolet and X-ray irradiance sensors, and the space environment in-situ suite. Those four instruments will be installed on the Geostationary Operational Environmental Satellite (GOES-R) at Lockheed Martin’s facility in Littleton, Colo.
Two other instruments, the magnetometer and the geostationary lightning mapper, are on schedule to be delivered later in 2014. On-time delivery of all the instruments for integration is a key requirement for the first GOES-R spacecraft to launch by early 2016. The original GOES-R launch date was delayed in 2013, a big reason why NOAA has been slammed in critical reports by inspectors general and the Government Accountability Office.
However, the instruments’ timely completion is an indication that the $11 billion GOES-R program is back on track.
"Together, these tools will improve NOAA's ability to observe terrestrial and space weather from geostationary orbit in near real-time," GOES-R System Program Director Greg Mandt said in a statement. "These deliveries, and the start of integration with the spacecraft bus, demonstrate the continued strength of the program as it moves towards launch in 2016."
When it is launched, the GOES-R will be the most advanced civilian spacecraft in orbit, producing four times more continuous data than existing geostationary satellites. Sometime around 2017, the NOAA constellation of satellites will produce on the order of 20 terabytes of weather data per day. Much of that data will be generated by the instruments aboard GOES-R, which include:
- The geostationary lightning mapper, which will provide for the first time a continuous surveillance of total lightning over the western hemisphere from space.
- The space environment in-situ suite, which consists of sensors that will monitor radiation hazards that can affect satellites and communications for commercial airline flights over the poles.
- The solar ultraviolet imager, a high-powered telescope that observes the sun, monitoring for solar flares and other solar activity that could affect Earth.
- The magnetometer, which will provide measurements of the space environment magnetic field that controls charged particle dynamics in the outer region of the magnetosphere. These particles can be dangerous to spacecraft and human spaceflight.
- The advanced baseline imager, which is GOES-R’s primary instrument for scanning the planet’s weather, oceans and environment, offering faster imaging at higher resolutions than current space-based technology. The instrument also offers NOAA new forecast products for severe weather, volcanic ash advisories, fire, smoke monitoring and other types of hazards.
- The extreme ultraviolet and X-ray irradiance sensors, which will monitor solar behavior and alert ground crews to solar storms.
Frank Konkel is a former staff writer for FCW.
Flash! Supercomputing goes solid-state
Lawrence Livermore lab is shaping the next generation of supercomputers
- By Henry Kenyon
- Jun 24, 2010
A prototype computer system is demonstrating the use of flash memory in supercomputing. The Hyperion Data Intensive Testbed at Lawrence Livermore National Laboratory uses more than 100 terabytes of flash memory.
Hyperion is designed to support the development of new computing capabilities for the next generation of supercomputers as part of the Energy Department's high-performance computing initiatives. Specifically, it will help test the technologies that will be a part of Lawrence Livermore’s upcoming Sequoia supercomputer.
The Hyperion testbed is an 1,152-node Linux cluster, said Mark Seager, assistant department head for advanced technology at Lawrence Livermore. It was delivered in 2008, but is only now at the point where serious operational testing can begin with the recent addition of the solid-state flash input/output memory.
Flash memory is a key component of the Hyperion system, Seager said. The memory is in the form of 320-gigabyte enterprise MLC ioMemory modules and cards developed by Fusion-io.
Supercomputers access data from long-term memory stored on disks to augment what is in their active memory. Designers typically use dynamic random access memory (DRAM) chips to serve as a temporary repository for active data in use before it is stored. Shortening this transfer time between long-term storage and accessible memory is key to higher supercomputer speeds. Flash memory eliminates the need for DRAMs, shortening the transfer time; it also greatly reduces the amount of hardware needed, thereby significantly cutting space and power requirements. Unlike DRAM, flash memory chips retain data when electrical current is cut off.
Seager said that the testbed is a partnership between Lawrence Livermore and 10 participating commercial firms that are testing technologies that will be used in Sequoia. He noted that Red Hat has been testing its Linux kernel and Oracle has been testing and developing its Lustre 1.8 and 2.0 releases on the machine for six months. Other Linux-based technologies being evaluated include cluster distributions of Linux software and the Infiniband software stack.
Testing for the Hyperion system will include trials of the Lustre object storage code on the array’s devices. Seager said the goal is to see how much faster various processes can be made to operate by using flash memory. He added that Lawrence Livermore researchers also want to use an open source project called FlashDisk, which combines flash memory with rotating media in a transparent, hierarchical storage device behind the Lustre server.
Seager said that the project will also examine methods to directly use flash memory without a file system. “We think that that will probably give us the best random [input/output operations per second] performance,” he said. Achieving a performance in excess of 40 million IOPS is a key goal of the effort.
The Hyperion system uses 80 1U servers that occupy two racks without even filling them. A similar system using conventional data storage technology would occupy about 46 racks, Seager said. This provides a power savings that is an order of magnitude better than current systems, he added.
All of these technologies are used to support Lawrence Livermore’s large, high-performance computing efforts. The data intensive testbed extension of Hyperion was designed to meet the goals of the Sequoia next generation advanced strategic computing system being built by IBM and scheduled for delivery in mid-2011.
Sequoia will be a third-generation Blue Gene system with a compute capability of about 20 petaflops and 1.6 petabytes of memory. Another goal is achieving one terabyte per second random IO bandwidth performance.
When Hyperion’s technologies are used in Sequoia, the supercomputer will take up relatively little space and save power. Seager said that IBM’s Blue Gene line is focused on exceptional flops per watt, with high-end performance at low power as a key goal of the product line.
The Lawrence Livermore research is funded by the National Nuclear Security Administration. Lawrence Livermore, Sandia National Laboratories and Los Alamos National Laboratory will use Sequoia to support the Stockpile Stewardship mission of testing the security and reliability of the nation’s nuclear stockpile without the need for underground testing.
Addressing the Vast World of Robotics
The world of robotics is vast and diverse. Robots are used for aerospace, industrial, residential, consumer, gaming, entertainment and many other purposes. Robots are created to reduce human involvement in certain tasks. Getting the most visual information allows the robot to reconstruct its environment in 3-D and operate more efficiently.
RAID configuration on the IBM Power platform
RAID stands for Redundant Array of Independent Disks, and it serves two key design goals: increased data reliability and increased input/output (I/O) performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array. The array distributes data across multiple disks, but it appears to the user and the operating system as a single disk. RAID can be set up to serve several different purposes.
Different types of RAID levels
Different types of RAID levels are available. Some are basic RAID levels and some are a combination of basic levels.
- RAID 0
- RAID 1
- RAID 5
- RAID 6
- RAID 10
- RAID 50
- RAID 60
Here, RAID 0, RAID 1, and RAID 5 are the basic RAID levels; the remaining levels (RAID 6, RAID 10, RAID 50, and RAID 60) are combinations of the basic levels.
Each RAID level is defined for a specific purpose. Read through the following table to get a better understanding about the various RAID levels.
|RAID level||Minimum drives||Protection||Description||Strengths||Weakness|
|RAID 0||2||None||Data striping without redundancy||Highest performance||No data protection; if one drive fails, all data is lost|
|RAID 1||2||Single-drive failure||Disk mirroring||Very high performance; very high data protection; very good write performance||High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required|
|RAID 5||3||Single-drive failure||Block-level data striping with distributed parity||Best cost/performance for transaction-oriented networks; very high performance and data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests||Write performance is slower than RAID 0 or RAID 1|
|RAID 6||4||Two-drive failure||Same as RAID 5, with parity distributed a second time across an extra drive||Solid performance with the added fault tolerance of keeping data available even if two disks in a RAID group fail; using more drives per RAID group is recommended to offset the performance and disk-utilization hit relative to RAID 5||Two of the drives hold parity, so disk utilization is not as high as RAID 5; performance is slightly lower than RAID 5|
|RAID 10||4||One drive per mirrored pair (not both drives of the same mirror)||Combination of RAID 0 (data striping) and RAID 1 (mirroring)||Highest performance; highest data protection (can tolerate multiple drive failures)||High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required; requires a minimum of four drives|
|RAID 50||6||One drive per RAID 5 span||Combination of RAID 0 (data striping) and RAID 5 (distributed parity)||High performance and data protection (can tolerate one drive failure in each span)||Parity overhead reduces usable capacity; requires a minimum of six drives|
|RAID 60||8||Two drives per RAID 6 span||Combination of RAID 0 (data striping) and RAID 6 (dual distributed parity)||High performance and data protection (can tolerate two drive failures in each span)||Higher parity overhead; requires a minimum of eight drives|
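The redundancy overheads in the table follow from simple arithmetic. The sketch below estimates the usable fraction of raw capacity for the basic levels, assuming equal-size drives; the function name and the set of levels covered are illustrative, not part of any IBM tooling.

```cpp
#include <stdexcept>

// Usable fraction of raw capacity for common RAID levels,
// assuming n equal-size drives.
double usableFraction(int level, int n) {
    switch (level) {
        case 0:  return 1.0;                 // striping, no redundancy
        case 1:  return 0.5;                 // mirroring: half the capacity
        case 5:  if (n < 3) throw std::invalid_argument("RAID 5 needs >= 3 drives");
                 return double(n - 1) / n;   // one drive's worth of parity
        case 6:  if (n < 4) throw std::invalid_argument("RAID 6 needs >= 4 drives");
                 return double(n - 2) / n;   // two drives' worth of parity
        case 10: return 0.5;                 // striped mirrors
        default: throw std::invalid_argument("level not covered here");
    }
}
// Example: six 2 TB drives in RAID 5 yield 6 * 2 TB * 5/6 = 10 TB usable.
```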
Supported RAID levels in IBM Power platforms
The following RAID levels are supported by IBM Power hardware.
- RAID 0
- RAID 5
- RAID 6
- RAID 10
Configuring RAID on the Power platform
Perform the following steps to configure RAID 5 on the Power platform.
- Get the supported diagnostics CD for the specific hardware. Here
I'm going to configure RAID on the Power platform; hence I have
used the following media.
Version 22.214.171.124 (For selected Power/PowerPC based systems)
- Create the logical partition (LPAR) by assigning the RAID controller to it. Note that you cannot combine two or more disk controllers in a single RAID array configuration.
- Start the LPAR with the diagnostics CD.
- Type 2 and then press Enter, as mentioned in the console screen.
- Press Enter to continue.
- On the FUNCTION SELECTION page, select the third option.
- Enter the terminal type, preferably vt100 and press Enter.
- From the tasks selection list, select RAID Array Manager and press Enter.
- From the list of available disk controllers, select an appropriate disk array manager and press Enter.
- In the disk array manager, we can get different options for different operations, such as listing, creating, deleting and so on. Select List SAS Disk Array Configuration.
- Then, select the appropriate RAID adapter. To do so, move the
cursor to the required option and press Esc+7.
A list of disks that is available in the selected controller is displayed.
- Now, press F3 to move back to the main screen. Then, select the Create an Array Candidate pdisk and Format to 528 Byte Sectors option and press Enter. It is mandatory to create an array candidate.
- Select the Small Computer System Interface (SCSI) controller for selecting disks to create array candidates.
- Press F7 or Esc+7 to mark the disks as an array candidate.
- After selecting the disk, press Enter to begin formatting.
- Press Enter to continue.
- Now, create the array using the array candidates.
- Select the required RAID level. In this example, I've selected RAID 5.
- Select the stripe size (256 Kb is the default and recommended) and press Enter.
- Select the array candidates on which to create RAID and press Enter.
- After your configuration is complete, press Enter. The following screen is displayed.
- Now we are ready with the RAID configuration. Press F3 to go to the main screen.
- For checking the array configuration status, select List
SAS Disk Array Configuration.
After the hdisk is available, it is ready for use by assigning it to any LPAR.
General usage of this setup
This kind of setup is mainly for hardware redundancy with respect to disks.
- Hardware data redundancy with RAID 5 is more stable than OS-level mirroring.
- This setup is best suited when we assign a disk from Virtual I/O Server (VIOS) to many LPARs.
- No need to configure the OS-level mirror in all LPARs.
From living preferences to workforce needs, each generation of Americans has its own characteristics.
By most estimates, millennials recently surpassed baby boomers as the nation's largest generation. As of 2012, roughly 28 percent of Americans were millennials, while boomers accounted for about a quarter of the population.
The prevalence of each generation does vary somewhat among states, though.
Using recent Census estimates, Governing computed state population totals for millennials, Generation X, baby boomers and the Silent Generation. The following maps and data summarize the top states for each generation, current as of 2012. (See definitions used for generations below)
Washington, D.C., has served as a magnet for millennials, particularly over the past decade. A third of the District's residents fall into this age bracket -- more than any state.
Just behind D.C. is Utah, the nation's youngest state in terms of median age, followed by Alaska, North Dakota and Texas. In general, data suggests western states tend to have higher concentrations of millennials.
Here's a map showing millennials' share of each state's total population.
Gen Xers, those in their mid-to-late 30s and 40s, account for about a fifth of the population in most states.
After D.C., the Census estimates suggest this generation is most common in Georgia. The Peach State, one of the younger states, similarly ranks eighth for millennials. Gen Xers also make up about 22 percent of the population in Nevada and Colorado.
Interestingly, Gen Xers are least prevalent in North Dakota, which has one of the highest tallies for millennials.
You'll find the greatest concentration of baby boomers in three northeastern states: Maine, New Hampshire and Vermont. Boomers make up just under 30 percent of the population in these states, which are also the nation's oldest in terms of overall population.
For the most part, slightly fewer baby boomers tend to reside in southern states. This could soon change, though, if historical migration patterns hold true and boomers opt to move south as they retire.
With less than 11 percent of the total population, the Silent Generation is the country's smallest age group.
It's no surprise that Florida is home to the largest share of these Americans, born in the late 1920s up through 1945. States where the Silent Generation is most prevalent also include West Virginia, Maine and Montana.
Arkansas (8th most) and Arizona (9th most) have higher shares of these older residents, despite the fact that their total populations are younger than the majority of other states.
The Census Bureau published population estimates tallying the number of residents living in states for each age number as of July 1, 2012. The following definitions for generations were used to calculate percentages of the population for this snapshot of data:
Millennials: Age 11 to 30 (born 1981-2000)
Generation X: Age 31 to 46 (born 1965-1980)
Baby Boomers: Age 47 to 65 (born 1946-1964)
Silent Generation: Age 66 to 84 (born 1928-1945)
This story was originally published by Governing.
Over 80% of UK children have witnessed online hate over the past year with a quarter specifically targeted, according to a new study designed to raise awareness of these issues as global initiative Safer Internet Day (SID) kicks off today.
Under the banner this year of “Play your part for a better internet,” the event is being held in over 100 countries around the world and aims to make the net a safer, more inclusive place.
However, an online poll of 1500 young people conducted in the UK has found there is still much work to do.
More than four in five said they had seen or heard offensive, mean or threatening behavior aimed at someone because of their race, religion, disability, gender, sexual orientation or transgender identity.
Although a resounding majority of respondents (94%) said that this kind of thing should never happen, around three-quarters (74%) feel the need to self-censor what they say online for fear of inciting more hate speech.
Over 1000 organizations across the UK will take part in awareness-raising activities today in support of SID, with big names such as the BBC, Google, Instagram, Microsoft and the Premier League taking part alongside government ministers and celebrities.
“While it is encouraging to see that almost all young people believe no one should be targeted with online hate, and heartening to hear about the ways young people are using technology to take positive action online to empower each other and spread kindness, we were surprised and concerned to see that so many had been exposed to online hate in the last year,” said SID director, Will Gardner.
“It is a wake-up call for all of us to play our part in helping create a better internet for all, to ensure that everyone can benefit from the opportunities that technology provides for building mutual respect and dialogue, facilitating rights, and empowering everyone to be able to express themselves and be themselves online – whoever they are.”
Coinciding with the event, cybersecurity accreditation body CREST has joined forces with the National Crime Agency (NCA) to produce a new paper designed to show young people the consequences of getting involved in cybercrime.
It’s hoped the paper will highlight the benefits to youngsters of turning their technical ability instead towards a career in cybersecurity.
Several vendors have released research today to show the darker side of the internet.
An Intel Security study of 1000 children in the UK found that over a quarter (28%) have had a conversation with a stranger online, while one third said they aren’t supervised by their parents when surfing the web.
“Teaching children the best practices for safe online behavior right from the start will be invaluable to them as they grow up,” argued Intel Security consumer vice president, Nick Viney.
“We all have a responsibility – parents, teachers and technology experts – to ensure children understand how to protect themselves from the potential risks online, and that comes as a result of greater education and by having ongoing conversations with children.”
Java is a programming language developed by Sun Microsystems. It implements a strong security model, which prevents compiled Java programs from illicitly accessing resources on the system where they execute or on the network. Popular World-Wide Web browsers, as well as some World-Wide Web servers and other systems implement Java interpreters. These are used to display interactive user interfaces, and to script behaviour on these systems.
While implementation problems have opened security vulnerabilities in some Java interpreters (Java Virtual Machines, or JVMs), the design of the language makes it at least theoretically possible to execute a program with reasonable assurances about its security, and in particular about its ability to cause harm.
A new study released earlier this week evaluated each of the 50 U.S. states on their spending transparency. This year marks the first time all 50 states provided some form of checkbook-level information on state spending online.
The “Following the Money 2013” study, released by the independent research and education organization U.S. PIRG Education Fund, was the fourth annual study, which awarded states letter grades ranging from “A” to “F”.
States that earned “A” grades are Texas, Massachusetts, Florida, Illinois, Kentucky, Michigan and Oklahoma, while Wyoming, Wisconsin, Hawaii, California and North Dakota earned “F” grades. According to U.S. PIRG, states with “F” grades had websites that are “limited in scope, lack comprehensiveness and are difficult to navigate.”
According to the organization, three years ago, only 32 states provided checkbook-level information online. Thirty-nine state transparency websites now include reports about government spending through tax-code deductions, exemptions and credits – a significant increase from just eight states three years ago.
Clinical Data Management
What is clinical data management?
Clinical Data Management is the process of accumulating, documenting and storing the data generated by clinical trials and other clinical procedures, data that is critical to pharmaceutical manufacturers and biotech companies. Clinical data is usually stored in a Clinical Data Repository, which accumulates data from multiple sources and organizes it in a patient-centric fashion. Clinical data repositories may form clinical data warehouses when the data they store is organized specifically for analytics. Commonly used Clinical Data Management tools are:
• Oracle Clinical
• eClinical Suite
Most organisations outsource the clinical data management process to IT service providers or partners in order to focus on their core competencies of pharmaceutical or clinical research. Outsourcing clinical data management helps pharmaceutical and clinical research companies reduce their operational expenditure and allocate capital expenditure to more useful activities such as research and development.
Virtualization Coming of Age
Virtualization creates a logical abstraction of the physical computer hardware, allowing a single computer to appear and function as many virtual machines, each having its own view of the system's resources. Some virtualization products, notably Virtual Iron's VFe, aggregate multiple machines into the appearance of one. Leading players in the virtualization arena currently include VMware; Microsoft's Virtual PC, formerly Connectix; Sun Microsystems Inc.'s Solaris Containers, formerly N1 Grid Containers (for OS sharing, not hardware sharing); and open-source initiatives including Xen.

"IBM's rHype creates partitions, subdivisions of a machine," said Illuminata's Eunice. "In a sense, it's a very lightweight virtual machine ... much closer to the LPARs [Logical Partitions] in IBM's pSeries products." The difference, Eunice said, is that "while virtual machines tend to abstract a lot of the I/O elements of a system, partitions do much less of that. For example, in VMware, if you want to copy a VM from system to system, you can do so, and copy all of the files with it, the local storage files. That is harder to do in partitioning products; it's not built in. So partitions tend to be about slicing up CPU time or memory, not about building an entire envelope to contain running applications."

Some, like Microsoft's, run only on Windows, and some, like Sun's Containers, will only create more instances of a system's operating system, rather than allow a range of operating systems and versions. One of the new players, VFe, from Virtual Iron, is a data center virtualization solution that can aggregate multiple devices as well as divide individual ones. The experience of the company's founders includes work on Digital Equipment Corp.'s VAX clusters and storage virtualization technology, as well as at Thinking Machines Inc., so the "carve many from one, combine many into one" approaches make perfect sense.

"The problem that clusters and grids have is that they have to be made application-aware," said Alex Vasilevsky, co-founder and chief scientist at Virtual Iron. "VFe hides the cluster and presents one single computer that consists of virtual processors, so you don't need to make them cluster- or grid-aware, or buy clustered file systems."

There are even application-level virtualization tools, like Softricity's SoftGrid. For example, according to Raghu Raghuram, senior director of strategy and market development at VMware, which pioneered virtualization on x86 platforms seven years ago, "If you deploy an application on your Windows desktop, it makes changes to your registry. A second application may need to make competing changes. Softricity solves that by making a virtual registry at Windows level."

For enterprises, what's significant is not so much any one of the recent announcements as the rapid speed and broad contribution into open virtualization, said Eunice. "Whether it's Xen or rHype or Intel's Vanderpool or AMD's Pacifica, everyone is focusing on getting virtualization to the masses." And companies large and small can benefit from virtualization of their servers and data centers, Eunice said. "The smaller the server, the worse the problems are for fault isolation and security breaches." Coming to the rescue: "There's a whole class of mainframe technologies that will start to become available in the next two years. Virtualization will be a part of the solution ... you'll be able to do partitions, virtual machines, and get good quality of isolation out of the box, and probably for a fairly low price as well."
The result: "Better reliability, from a fault, failure and security point of view," predicted Eunice. "And much better utilization."

"Virtualization is a powerful mechanism," said VMware's Raghuram. "Today, you can use it to turn your data center into one flexible compute pool. Another use is for more cost-effective disaster recovery. Because virtualization [with VMware] takes the applications and operating system, and abstracts them from the hardware, you can bring up a VM instantly, and you don't have to keep your primary and secondary hardware identical, which saves in operational costs."

While consolidating servers brings benefits, "that was for a single box," said Nigel Dessau, vice president of Virtualization Solutions at IBM. "The question now is, how to do it across the enterprise."
The various virtualization and partitioning technologies have similarities and differences.
The public agrees that immigrants work harder than most blacks at low-paying jobs, but the margin is not as wide as when the comparison is made with white workers; 56% of adults agree that most immigrants work harder than most blacks; 30% disagree and 14% say they don’t know.
Analyzing the responses to both sets of questions by the race of the respondent yields some interesting patterns. Blacks are even more likely than whites to say that their fellow blacks are out-worked by immigrants at low-wage jobs. Some 64% of blacks hold this view, compared with just 55% of whites.
There’s a simple explanation for whites not agreeing with the statement as much as blacks: they’re simply afraid to say what they believe for fear of feeling like, or being labeled, a racist. Blacks have immunity in this regard due to the fact that they’re talking about themselves, thus they are more free to say what they really think.
University researchers are claiming a bandwidth breakthrough with the first light-based microprocessor that communicates with conventional electronic circuitry.
While optical computing is hardly a new concept, researchers at University of Colorado-Boulder, Massachusetts Institute of Technology, and University of California, Berkeley claim to have made it work on a more practical level. The photonic transmissions are built onto a single chip that also integrates traditional electronics, so it could in theory work with other standard electronic components and integrate into current manufacturing processes.
“It’s the first processor that can use light to communicate with the external world,” Vladimir Stojanović, the University of California professor who led the collaboration, said in a press release. "No other processor has photonic I/O in the chip."
The big benefit of light-based computing is that it’s faster at transferring data within the space it’s given, with the new chip touting a density of 300 gigabits per second per square millimeter. That’s 10 to 50 times better than traditional electrical microprocessors. Light-based processors also promise to be more energy efficient, as they can transfer data over longer distances without using more power.
The lab processor isn’t especially powerful, as it packs just two computing cores, but researchers are hoping it could be a boon for networking chips, and could pave the way for faster computing overall. As such, they’ve set up a pair of startups to help commercialize the technology. But like so many other exciting university research projects, the timeframe for seeing light-based processors in actual products is murky at best.
Why this matters: Granted, CPU bandwidth is just one of many potential bottlenecks that computing systems can run into, and it always pays to be a bit skeptical of lab-based technological breakthroughs. But by slotting photonics into places where electronics would normally go, it sounds like the researchers are on a path to faster networking with far lower energy consumption.
This story, "Groundbreaking light-based photonic processor could lead to ultra-fast data transfers" was originally published by PCWorld.
Black Box Explains...Media converters
Media converters interconnect different cable types such as twisted pair, fiber, and coax within an existing network. They are often used to connect newer Ethernet equipment to legacy cabling. They can also be used in pairs to insert a fiber segment into copper networks to increase cabling distances and enhance immunity to electromagnetic interference (EMI).
Traditional media converters are purely Layer 1 devices that only convert electrical signals and physical media. They don’t do anything to the data coming through the link so they’re totally transparent to data. These converters have two ports—one port for each media type. Layer 1 media converters only operate at one speed and cannot, for instance, support both 10-Mbps and 100-Mbps Ethernet.
Some media converters are more advanced Layer 2 Ethernet devices that, like traditional media converters, provide Layer 1 electrical and physical conversion. But, unlike traditional media converters, they also provide Layer 2 services—in other words, they’re really switches. This kind of media converter often has more than two ports, enabling you to, for instance, extend two or more copper links across a single fiber link. They also often feature autosensing ports on the copper side, making them useful for linking segments operating at different speeds.
Media converters are available in standalone models that convert between two different media types and in chassis-based models that connect many different media types in a single housing.
Rent an apartment
Standalone converters convert between two media. But, like a small apartment, they can be outgrown. Consider your current and future applications before selecting a media converter. Standalone converters are available in many configurations, including 10BASE-T to multimode or single-mode fiber, 10BASE-T to Thin coax (ThinNet), 10BASE-T to thick coax (standard Ethernet), CDDI to FDDI, and Thin coax to fiber. 100BASE-T and 100BASE-FX models that connect UTP to single- or multimode fiber are also available. With the development of Gigabit Ethernet (1000 Mbps), media converters have been created to make the transition to high-speed networks easier.
...or buy a house.
Chassis-based or modular media converters are normally rackmountable and have slots that house media converter modules. Like a well-planned house, the chassis gives you room to grow. These are used when many Ethernet segments of different media types need to be connected in a central location. Modules are available for the same conversions performed by the standalone converters, and 10BASE-T, 100BASE-TX, 100BASE-FX, and Gigabit modules may also be mixed.
Introduction to the C++11 feature: delegating constructors
In C++98, if a class has multiple constructors, these constructors usually perform identical initialization steps before executing individual operations. In the worst scenario, the identical initialization steps are copied and pasted in every constructor. See the following example:
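A minimal sketch of such a class (the member names and initialization steps here are illustrative, not the original listing):

```cpp
#include <vector>

// C++98 style: each constructor repeats the same initialization steps.
class A {
public:
    A()             { i_ = 0; j_ = 0; cache_.resize(kCacheSize); }
    A(int i)        { i_ = i; j_ = 0; cache_.resize(kCacheSize); }
    A(int i, int j) { i_ = i; j_ = j; cache_.resize(kCacheSize); }
private:
    static const int kCacheSize = 16;
    int i_, j_;
    std::vector<int> cache_;
};
```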
These three constructors have the same function body. The duplicated codes make maintenance difficult. If you want to add more members or change the type of existing members, you have to make the same changes three times. To avoid code duplication, some programmers move the common initialization steps to a member function. The constructors achieve the same function by calling this member function. Let us revise the example as follows:
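A sketch of that revision, using the same illustrative names:

```cpp
#include <vector>

// The common steps move into init(), which every constructor calls.
class A {
public:
    A()             { init(0, 0); }
    A(int i)        { init(i, 0); }
    A(int i, int j) { init(i, j); }
private:
    void init(int i, int j) { i_ = i; j_ = j; cache_.resize(kCacheSize); }
    static const int kCacheSize = 16;
    int i_, j_;
    std::vector<int> cache_;
};
```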
This revision eliminates code duplication, but it brings new problems of its own: because init() assigns to members rather than initializing them, it cannot handle const or reference members; and because it is an ordinary member function, nothing prevents it from being called again on an already-constructed object.
C++11 proposed a new feature called delegating constructors to solve this existing problem. You can concentrate common initialization steps in a constructor, known as the target constructor. Other constructors can call the target constructor to do the initialization. These constructors are called delegating constructors. Let us use this new feature for the program above:
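Continuing the same illustrative class, the delegating version might look like this:

```cpp
#include <vector>

class A {
public:
    A() : A(0) {}          // delegates to A(int)
    A(int i) : A(i, 0) {}  // delegates to A(int, int)
    A(int i, int j)        // target constructor: initialization
        : i_(i), j_(j), cache_(kCacheSize) {}  // happens in one place
private:
    static const int kCacheSize = 16;
    int i_, j_;
    std::vector<int> cache_;
};
```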
You can see that delegating constructors make the program clear and simple. In this example, A() delegates to A(int i), so A(int i) is the target constructor of A(). A(int i) delegates to A(int i, int j), so A(int i, int j) is the target constructor of A(int i). Delegating and target constructors do not need special labels or disposals to be delegating or target constructors. They have the same interfaces as other constructors. As you haven seen from the example, a delegating constructor can be the target constructor of another delegating constructor, forming a delegating chain. Target constructors are chosen by overload resolution or template argument deduction. In the delegating process, delegating constructors get control back and do individual operations after their target constructors exit. See the following example:
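A sketch that makes the ordering visible (the class and message names are illustrative):

```cpp
#include <iostream>

class B {
public:
    B(int x) : x_(x) { std::cout << "target constructor body\n"; }
    B() : B(42)      { std::cout << "delegating constructor body\n"; }
private:
    int x_;
};

int main() {
    B b;  // the target body runs first; the delegating body runs after it exits
    return 0;
}
```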
Although we can use delegating chains in our program, we should avoid recursive calls of target constructors. For example:
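A sketch of such a recursive chain (the constructor signatures are illustrative):

```cpp
// Ill-formed (no diagnostic required): the two constructors delegate
// to each other, so the chain of target constructors never reaches a
// constructor that actually initializes the object.
class C {
public:
    C(int i)  : C(static_cast<char>(i)) {}  // delegates to C(char)
    C(char c) : C(static_cast<int>(c)) {}   // delegates back to C(int)
};
```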
In the preceding example, a recursive chain of delegation exists in the program. The program is ill-formed.
The output of the example is shown below (it matches the control-flow sketch above, whose message strings are illustrative):
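```
target constructor body
delegating constructor body
```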
As described, the delegating constructors feature can be easily understood and used. This feature helps reduce the code size and make your program more readable and maintainable.
Cloud Computing Guide
By Thor Olavsrud
May 11, 2010
Cloud computing is a method of provisioning computing resources, including both hardware and software, that relies on sharing those resources rather than using local servers or personal devices to handle applications.
In theory, cloud computing is a way for IT departments to increase capacity or add capabilities on the fly, without having to invest in new infrastructure, train new personnel or license new software.
In the cloud computing model, workers log into a Web-based service that hosts all the programs the users need for their jobs, from e-mail to word processing to complex Business Intelligence software. The service, and the hardware it runs on, is deployed, maintained and upgraded by another company, which generally offers service-level agreements (SLAs) to guarantee the service.
Examples of cloud computing include using Google App Engine to run your company's e-mail services, contracting with Amazon Web Services to host your infrastructure, or contracting with Salesforce.com to provision your company's customer relationship management (CRM) software.
The number of cloud computing vendors is growing rapidly, with a continual fresh crop of cloud computing start-ups. Indeed, because of the potential power and influence, the fierce struggle to dominate the cloud computing market is prompting titanic investments.
Cloud Computing Definitions
The definition of cloud computing remains somewhat fuzzy, as it is often used to describe an array of related concepts. More than a few people are uncertain as to what separates cloud computing from related concepts such as utility computing, software-as-a-service (SaaS) and virtualization.
The vagary led Larry Ellison, Oracle's chief executive officer, to famously say of cloud confusion, "The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?
"We'll make cloud computing announcements. I'm not going to fight this thing. But I don't understand what we would do differently in light of cloud."
That hasn't stopped analysts at research firm IDC from attempting to establish a more concrete definition. For those who continue to seek more useful terminology, IDC distinguishes between Cloud Services and Cloud Computing.
"When most people talk about "cloud computing," they usually refer to online delivery and consumption models for business and consumer services," said IDC's Frank Gens. "These services include IT services—like software-as-a-service (SaaS) and storage or server capacity as a service—but also many, many "non-IT" business and consumer services.
"Indeed, the vast majority of these online services are not, in the mind of the user, IT or "computing" at all—they are about shopping, banking, selling, collaborating, communicating, being entertained, etc. In other words, most people using these services are not "computing," they are living! These customers are not explicitly buying "cloud computing," but "cloud services" that are enabled by cloud computing environments; cloud computing is hidden underneath the business or consumer service."
Under IDC's definitional framework, cloud computing is the IT environment that encompasses the stack of IT and network products that enables the development, delivery and consumption of cloud services.
Cloud Services, contrast with Cloud Computing
According to IDC, there are eight key attributes that define cloud services.
Cloud Computing, contrast with Cloud Services
Meanwhile, cloud computing is the IT foundation for cloud services, and IDC has compiled a partial list of cloud computing attributes as well.
Cloud Computing vs. Utility Computing vs. SaaS
But IDC's definitions are not universally shared. According to 3Tera Chairman Barry X Lynn, cloud computing enables users and developers to utilize services without knowledge of, expertise with, nor control over the technology infrastructure that supports them. Meanwhile, utility computing provides on-demand infrastructure with the ability to control, scale and configure that infrastructure.
SaaS, according to Lynn, is a software-enabled service that is offered on the Web on a month-to-month subscription or a pay-per-use basis, rather than having to purchase or license the software.
Under IDC's rubric, utility computing falls under the umbrella of the cloud computing definition, while SaaS falls under the umbrella of cloud services.
Cloud Computing Benefits
So what potential benefits does cloud computing offer? According to Forrester analyst Ted Schadler, there are three key benefits of cloud computing:
• First, cloud computing promises speed. IT departments don't have to go through the lengthy process of building IT infrastructure and deploying software to multitudes of computers when using cloud services. Instead, they subscribe to services and receive them.
• Second, the cloud computing provider is responsible for maintaining and upgrading the infrastructure, meaning that the customer's IT staff can focus on more important core business processes.
• Third, firms only have to pay for the resources they use. Instead of buying hardware, software and consultants to set up and run applications, businesses can pay a cloud-based provider by the user by the month.
Disaster recovery is another benefit. Cloud providers are offsite, maintain the servers in their own data centers, fix any problems, manage disaster recovery planning and continually upgrade the software.
Cloud computing offers a number of other benefits as well.
Cloud Computing Concerns
There are a number of issues surrounding cloud computing as well, including security, privacy, compliance and vendor lock-in.
Cloud computing security is an oft-cited reason for wariness toward cloud services. Skeptics argue that once your data exists out in the cloud, you are hard-pressed to ensure no one else has access to it. This issue may be causing more than a few firms to delay adoption of cloud services.
Burton Group analyst Eric Maiwald said firms considering cloud services should resolve these issues with providers before jumping in: how data is encrypted and stored, how e-discovery can be done if need be, what controls there are and whether the cloud provider has passed a SAS-70 audit.
According to a report released by the Cloud Security Alliance in March, the top threats to cloud computing include the following:

• Abuse and nefarious use of cloud computing

• Insecure interfaces and APIs

• Malicious insiders

• Shared technology issues

• Data loss or leakage

• Account or service hijacking

• Unknown risk profile
Cloud Computing's Five Myths
San Francisco's Golden Gate University moved to a largely cloud-based infrastructure several years ago. According to Anthony Hill, the CIO of Golden Gate University who oversaw the move to cloud, not all is as it seems when it comes to both the benefits and problems of cloud computing.
Hill said that while security is often the first concerned mentioned when it comes to cloud computing, he doesn't see it as a big concern. He told InformationWeek's Frederic Paul that he's more comfortable with his data in an Oracle data center than he is with it in his own data center. He noted that no midmarket organization could bring to bear the level of security and redundancy on its own that Oracle offers.
On the other hand, Hill said that cloud-computing champions often trumpet the ability to avoid capital expenditures (CapEx) in favor of operating expenditures (OpEx). He noted that in the real world it is sometimes advantageous to bury IT costs in the capital budget and keep them off the P&L for a few years.
Another benefit of cloud computing is the ability to offload management risk. The vendor becomes responsible for reliability, availability and so forth. However, Hill noted that companies should remember that they are exchanging management risk for vendor risk. Organizations are still vulnerable to business disruptions if the vendor experiences problems, or, worse, goes out of business.
Some cloud-computing advocates also suggest that the model provides a great deal of agility. But Hill said switching vendors is not a trivial task. In fact, Richard Stallman, outspoken founder of the Free Software Foundation, has called cloud computing a "trap" that forces people into locked, proprietary systems.
Finally, Hill said it is a myth that "tight" SLAs, which measure breach of contract by specific metrics, are the key to a successful relationship with a cloud-computing vendor. Instead, Hill suggested cloud-computing customers negotiate SLAs that allow for termination based on lack of customer satisfaction.
What kind of imagery does the term “cyber security landscape” conjure up for you? Chances are, it’s not the wide-open, rolling fields you see on farmlands. In fact, that is the kind of scenery you’d expect to be far removed from the world of hackers and complicated data networks.
But it’s precisely this setting where we’re seeing cyber security concerns rapidly grow.
Like any other type of business, farms are using various technologies to enhance their operations and stay competitive. Common enhancements range from digital storage of crop data to sophisticated “precision agriculture” tools that use GPS and networked data to support more precise management of individual fields.
Farms' uptake of digital tools has been fairly rapid. In a 2014 survey conducted by the American Farm Bureau Federation, more than half of respondents indicated that they were planning to invest in precision agriculture tools within two years. That's all well and good on its own. However, only 5 percent knew whether the partners storing their farm data had incident response plans in case of a data breach. In a similar finding, 87 percent of survey respondents indicated that their own farms had no incident response plans in place.
It's clear that standard cyber security measures are lacking at farms, even as tech solutions grow quickly. Even so, you might wonder: How big a problem is this? What kind of sensitive data could farms be holding that hackers might want to get their hands on? What kind of damage could a breach cause?
In short, compromising farms’ digital data could cause serious trouble, not only to the farms themselves, but to the entire nation.
Just as the federal government and financial regulators worry about hackers sabotaging the U.S. infrastructure, those involved in the agriculture industry should consider what could happen if hackers were to sabotage the data used to manage the nation’s crops. The net effect could be even more devastating.
As fits the potential severity of the situation, the FBI in March issued a warning to farmers about the cyber security risks of precision agriculture tools. Along with the call for awareness, the bureau offered a set of recommendations to help mitigate those risks. These include:
- Monitoring employee logins after hours (see the sketch after this list)
- Using two-factor authentication
- Conducting regular privacy training
- Setting up a VPN
- Monitoring outgoing data and unusual traffic
- Closing unused ports
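As a minimal illustration of the first recommendation, the sketch below flags after-hours logins. The log format (user plus ISO timestamp) and the business-hours window are assumptions for illustration, not FBI guidance.

```python
# Sketch: flag logins that occur outside business hours.
# The log format (user, ISO timestamp) is an assumption for illustration.
from datetime import datetime

BUSINESS_HOURS = range(6, 19)  # 6:00-18:59 local time

log_entries = [
    ("jsmith", "2016-09-12T14:22:00"),
    ("contractor1", "2016-09-13T02:47:00"),
]

for user, stamp in log_entries:
    hour = datetime.fromisoformat(stamp).hour
    if hour not in BUSINESS_HOURS:
        print(f"ALERT: after-hours login by {user} at {stamp}")
```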
Lunarline helps clients from all types of industry backgrounds, both large and small, control their risks with secure network and policy development, training programs, secure configuration tools and more. To learn more about these solutions and how they help protect against data breaches, contact us today or visit us online at Lunarline.com.
This is the first article in a three-part series on agriculture and cyber security.
June 17, 2010—Over the last 20 years, scientists have collected vast amounts of data about climate change, much of it accessible on the Web.
Now the challenge is figuring out how to integrate all that information into coherent datasets for further analysis—and a deeper understanding of the Earth’s changing climate.
In 2000, more than 20 countries began deploying an array of drifting, robotic probes called Argo floats to measure the physical state of the upper ocean. The floats, which look a little like old-fashioned hospital oxygen tanks with antennas, are designed to sink nearly a mile below the surface.
After moving with the currents for about 10 days, they gradually ascend, measuring temperature, salinity, and pressure in the top 2,000 meters of the sea as they rise. At the surface, they transmit the data to a pair of NASA Jason satellites orbiting 860 miles above the equator, then sink again to repeat the process.
So far more than 3,000 floats have been deployed around the world. The data they’ve collected has provided a big leap forward in understanding the upper levels of the Earth’s oceans—and their effect on global climate change—in the same way as early weather balloons expanded understanding of the earth’s atmosphere. What’s more, the data they collect is available in near real time to anyone interested, without restrictions, in a single data format.
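As a minimal illustration of what that single-format profile data enables, here is a sketch that summarizes one ascent profile; the tuple layout is an assumption for illustration, not the actual Argo format.

```python
# Minimal sketch: summarize one (hypothetical) Argo ascent profile.
# Each reading is (pressure_dbar, temperature_c, salinity_psu); the
# real Argo format is richer -- this only illustrates the idea.
profile = [
    (2000.0, 2.1, 34.9),
    (1000.0, 4.0, 34.6),
    (500.0, 8.2, 34.8),
    (100.0, 14.5, 35.1),
    (10.0, 18.3, 35.3),
]

surface_layer = [t for p, t, s in profile if p <= 100.0]
mean_temp = sum(surface_layer) / len(surface_layer)
print(f"Mean temperature above 100 dbar: {mean_temp:.1f} C")
```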
Dr. Thomas Peterson, a scientist at the U.S. National Oceanographic and Atmospheric Administration (NOAA)’s National Climatic Data Center in Asheville, North Carolina, has been with the data center since 1991. “Back then,” says Peterson, “people came to us for integrated climate information because it was so hard to find the large amounts of data they needed to derive the information themselves. With the Internet, people can just download the data from the Web.”
The Argo floats are funded by some 50 agencies around the world. The program is one example among thousands of the ways in which the Web is facilitating scientists’ understanding of global climate change. Without the Web, in fact, the float system would not exist.
Studying human-caused change
Humans have probably been studying weather at least since they began raising crops. But rigorous climatology—the study of weather patterns over decades, centuries, or even millennia—dates only from the late 1800s. The study of anthropogenic, or human-caused, climate change is much younger. Until the 1950s, few suspected the earth’s climate might be changing as a result of human activity. And if a scientist in, say, Germany did suspect it, it would have been difficult indeed for him to work with scientists in England or China to explore the possibility.
By the 1980s, as evidence began accumulating of rising levels of atmospheric carbon dioxide, scientists who were pursuing particular aspects of climate change independently began holding international conferences to exchange information. But not until the 1990s did the Web enable them to collaborate remotely, in real time. That collaboration, along with the enormous amounts of data collected using web technologies, has revolutionized the field.
Today, climate scientists conduct studies with colleagues on the other side of the world, hold marathon webinars, and co-author papers with dozens or even hundreds of collaborators, all via the Web. Scientists use the Web to access, monitor, and share everything from in situ data collected by such means as the Argo floats and a worldwide network of 100,000 weather stations, to remote data from radar and satellites, to paleoclimatologic indicators like tree rings and core samples from glaciers and ancient lake beds.
A staggering amount of data
The sheer volume of scientific data on climate is staggering, collected around the world by government agencies, the military, universities, and thousands of other institutions.
NOAA stores about 3,000 terabytes of climate information, roughly equal to 43 Libraries of Congress. The agency has digitized weather records for the entire 20th century and scanned records older than that, including some kept by Thomas Jefferson and Benjamin Franklin. All of it is accessible on the Web. As part of its educational mission, NOAA has even established a presence in the Second Life virtual world, where members can watch 3D data visualizations of a glacier melting, a coral reef fading to white, and global weather patterns evolving.
The U.S. National Aeronautics and Space Administration (NASA) is an equally important player in climate research. Its Earth Observing System (EOS) of satellites collects data on land use, ocean productivity, and pollution and makes its findings available on the Web. There is even a NASA-sponsored program involving a network of beekeepers to collect data on the time of spring nectar flows, which appears to be getting earlier (http://honeybeenet.gsfc.nasa.gov).
The U.N.’s World Meteorological Organization’s Group on Earth Observations (GEO)—launched by the 2002 World Summit on Sustainable Development and the G8 leading industrialized countries—is developing a Global Earth Observation System of Systems, or GEOSS, both to link existing climatological observation systems and to support new ones.
Its intent is to promote common technical standards so that data collected in thousands of studies by thousands of instruments can be combined into coherent data sets. Users would access data, imagery, and analytical software through a single Internet access point called GEOPortal. The timetable is to have the system in place by 2015.
But—and this is a huge but—despite the wealth of information that’s been collected bearing on climate change, finding specific datasets among the thousands of formats and locations in which they’re stored can be daunting or even impossible.
How MIT’s DataSpace could help
Stuart Madnick, who is the John Norris Maguire Professor of Information Technology at MIT’s Sloan School of Management, believes a new MIT-developed approach called DataSpace could help. “Right now, papers on hundreds of subjects are published, but the data that backs them up often stays with the researcher,” says Madnick. “We want DataSpace to become the Google for multiple heterogeneous sets of data from a variety of distributed locations. It wouldn’t necessarily work the way Google does, but it would be as useful, scalable, and easy to use, and it would allow scientists to access, integrate, and re-use data across disciplines, including climate change.”
As a simple example of how DataSpace could work with respect to climatology, Madnick posits that a scientist wants to know the temperature and salinity of the water around Martha’s Vineyard, Massachusetts, over the past 20 years. Data that could answer the question could exist in all kinds of locations, from nearby Woods Hole Oceanographic Institute, to NOAA, to international fishing fleets. But right now, there is little or no integration of that data. DataSpace could perform that integration, which can require adjustments ranging from such simple things as reconciling Centigrade data with Fahrenheit, to compensating for differences in the ways various instruments measure.
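The Fahrenheit/Celsius example maps directly to code. The sketch below merges two hypothetical datasets whose field names and values are invented for illustration, performing the kind of unit reconciliation DataSpace would automate.

```python
# Sketch of the reconciliation DataSpace would automate: merging
# temperature records stored with different units and field names.
woods_hole = [{"year": 2005, "temp_f": 52.3}, {"year": 2006, "temp_f": 53.1}]
noaa = [{"year": 2005, "temp_c": 11.4}, {"year": 2006, "temp_c": 11.8}]

def to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

merged = {}
for rec in woods_hole:
    merged.setdefault(rec["year"], []).append(to_celsius(rec["temp_f"]))
for rec in noaa:
    merged.setdefault(rec["year"], []).append(rec["temp_c"])

for year in sorted(merged):
    readings = merged[year]
    avg = sum(readings) / len(readings)
    print(year, round(avg, 2), "C (average of", len(readings), "sources)")
```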
Semantic Web technologies
DataSpace would incorporate “reasoning systems” that would “understand” disparate data in a way that now requires human intervention. Often called Semantic Web technologies, such linked-data systems would collect unstructured data, interpret data that is structured but not interpreted, and interpret what the data means.
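As a flavor of what such linked data looks like, here is a toy example using rdflib, a common Python RDF library; the vocabulary URI and station identifier are made up for illustration.

```python
# Toy linked-data example: a station described as machine-readable triples
# that other reasoning systems can consume. URIs below are invented.
from rdflib import Graph, Literal, Namespace, URIRef

CLIMATE = Namespace("http://example.org/climate#")
g = Graph()
station = URIRef("http://example.org/stations/marthas-vineyard")
g.add((station, CLIMATE.measuredProperty, Literal("sea_surface_temperature")))
g.add((station, CLIMATE.units, Literal("celsius")))

# Serialize as Turtle text (rdflib 6+ returns a string here).
print(g.serialize(format="turtle"))
```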
How would such Semantic Web technologies be used to study climate change? Madnick provides an example. “Microbes are the most abundant and widely distributed organisms on Earth. They account for half of the world’s biomass and have been integral to life on Earth for more than 3.5 billion years. Marine microbes affect climate and climate affects them. In fact, they remove so much carbon dioxide from the atmosphere that some scientists see them as a potential solution to global warming. Yet many of the feedbacks between marine biogeochemistry and climate are only poorly understood. The next major step in the field involves incorporating the information from environmental genomics, targeted process studies, and the systems observing the oceans into numerical models. That would help to predict the ocean’s response to environmental perturbations, including climate change.”
Madnick believes such integration of disparate data, including genetics, populations, and ecosystems, is the next great challenge of climatology, and that Semantic Web technologies will be needed to meet the challenge.
A Wikipedia for climate change?
Another approach being developed at MIT is the Climate Collaboratorium, part of MIT’s Center for Collective Intelligence. MIT Sloan School Professor Thomas Malone describes the Climate Collaboratorium, still in its formative stage, as “radically open computer modeling to bring the spirit of systems like Wikipedia and Linux to global climate change.” His hope is that thousands of people around the world—from scientists, to business people, to interested laypeople—will take part via the Web to discuss proposed solutions in an organized and moderated way and to vote on proposed solutions.
Malone has written, “The spectacular emergence of the Internet and associated information technology has enabled unprecedented opportunities for such interactions. To date, however, these interactions have been incoherent and dispersed, contributions vary widely in quality, and there has been no clear way to converge on well-supported decisions concerning what actions—both grand and ground-level—humanity should take to solve its most pressing problem.” Malone says the Collaboratorium will not endorse positions, but be “an honest broker of the discussion.”
Asked what the biggest challenges are facing climate change scientists, NOAA’s Tom Peterson answers, “Communication. Too much is written by scientists for scientists, so it is often too dense for laypeople to understand. It’s rare for a scientist to take time out of trying to make progress on scientific questions to rigorously disprove some of the widely propagated errors about climate change.”
Still, what is truly remarkable about how the study of climate has changed over the past 20 years is the way the Web has given scientists from around the world, in disparate areas of research, a new way to collaborate. Thanks to the Web, many millions of people not only within but also beyond the scientific community now have access to an enormous tapestry of information. And new technologies like the Semantic Web will undoubtedly enrich that tapestry.
For at least one hundred years, warehousing has referred to the short- or long-term storage of items in a specially designed facility. Originally this general definition described the storage of inventory and other physical items. In the early 1990s, Bill Inmon advocated creating specialized data warehouses for decision support applications. The term data warehousing refers to the process of creating and maintaining a data warehouse.
A data warehouse is a database designed to support decision making in organizations. It is updated in batches or in real time, and it is structured for rapid online queries and for providing managerial summaries. Data warehouses contain large amounts of historical data. According to Inmon, a data warehouse is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management's decision-making process. Ralph Kimball defines a data warehouse as "a copy of transaction data specifically structured for query and analysis (p. 310)."
The data warehousing process has changed in the past 10 years. Builders have fewer concerns related to data storage capacity and processing speed, but the task of creating a data warehouse remains difficult.
In 1997, Anahory and Murray wrote about data warehousing in the real world. They positioned their book as "a practical guide for building decision support systems." They define a data warehouse "in its simplest perception ... as no more than a collection of the key pieces of information used to manage and direct the business for the most profitable outcome (p. 4)." More technically, they define a data warehouse as the data and the "processes involved in getting that data from source to table, and in getting the data from table to analysts (p. 4)." Let's review the process they prescribed.
The process for delivering an enterprise data warehouse is "a variant of the joint application development approach ... the entire delivery process is staged in order to minimize risk (p. 9)."
First, understand the business case for investment. Identify the projected business benefits from using the data warehouse.
Second, experiment with the concept of data analysis and learn about the value of a data warehouse.
Third, specify the business requirements.
Fourth, develop an overall system architecture.
Fifth, quickly load some data to produce an initial production deliverable that satisfies the "most pressing business requirement for data analysis (p. 12)."
Sixth, finish loading required historical data into the data warehouse.
Seventh, "configure an ad hoc query tool to operate against the data warehouse (p. 13)."
Eighth, automate operational data management processes like extracting and loading new data, backing up data, and generating data aggregations.
Ninth, if there are additional business requirements, extend the scope of the data warehouse.
Tenth, monitor business requirements. During the life of a data warehouse "business requirements will constantly change (p. 14)."
Some of the potential benefits of putting data into a data warehouse include:
- Improving turnaround time for data access and reporting;
- Standardizing data across the organization so there will be one view of the "truth;"
- Merging data from various source systems to create a more comprehensive information source;
- Lowering costs to create and distribute information and reports;
- Sharing data and allowing others to access and analyze the data;
- Encouraging and improving fact-based decision making.
The major limitations associated with data warehousing are related to user expectations, lack of data and poor data quality. Building a data warehouse creates some unrealistic expectations that need to be managed. A data warehouse doesn't meet all decision support needs. If needed data is not currently collected, transaction systems need to be altered to collect the data. If data quality is a problem, the problem should be corrected in the source system before the data warehouse is built. Software can provide only limited support for cleaning and transforming data. Missing and inaccurate data cannot be "fixed" using software. Historical data can be collected manually, coded and "fixed," but at some point source systems need to provide quality data that can be loaded into the data warehouse without manual clerical intervention.
Data warehousing tasks and deliverables have changed in terms of the technical tools used, the risks and concerns and the time needed to complete some tasks, but the above process is still a good starting point for planning an enterprise data warehouse for an organization.
According to Westerman (2001), "To understand what is needed for your data warehouse, you have to speak with the business people. This is not an option; it is a requirement (p. 61)."
The easiest way to get started with data warehousing is to analyze some existing transaction processing systems and see what type of historical trends and comparisons might be interesting to examine to support decision making. See if there is a "real" user need for integrating the data. If there is, then IS/IT staff can develop a data model for a new schema, load it with some current data and start creating a decision support data store using a database management system (DBMS). Find some software for query and reporting and build a decision support interface that's easy to use. Although the initial data warehouse/data-driven DSS may seem to meet only limited needs, it is a "first step." Start small and build more sophisticated systems based upon experience and successes.
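A first step of that sort can be prototyped in minutes. The sketch below uses Python's built-in sqlite3 module to build a toy dimensional schema (one fact table, one dimension), load a few rows, and run the kind of managerial summary query a data warehouse exists to answer; all table names and figures are invented for illustration.

```python
# Minimal star-schema sketch with sqlite3: one fact table, one dimension,
# loaded with toy rows, then queried for a managerial summary.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE fact_sales (
    product_id INTEGER, sale_date TEXT, amount REAL,
    FOREIGN KEY (product_id) REFERENCES dim_product(product_id))""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "Widget"), (2, "Gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, "2003-01-15", 120.0), (1, "2003-02-03", 80.0),
                 (2, "2003-01-20", 200.0)])

# A historical comparison query of the sort a data warehouse supports.
for row in cur.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY p.name ORDER BY 2 DESC"""):
    print(row)
```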
References
Anahory, S. and D. Murray. Data Warehousing in the Real World: A Practical Guide for Building Decision Support Systems.
Inmon, W. H. Building the Data Warehouse.
Kimball, R. The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses.
Power, D. "What are the advantages and disadvantages of Data Warehouses?" DSS News, Vol. 1, No. 7, July 31, 2000.
Power, D. "What do I need to know about Data Warehousing/OLAP?" DSS News, Vol. 4, No. 5, March 2, 2003.
Westerman, P. Data Warehousing: Using the Wal-Mart Model.
SOURCE: Data Warehousing 101
37.3 million users around the world were subjected to phishing attacks in the last year, an 87 percent increase over the number of users targeted in 2011-2012.
According to the results of Kaspersky Lab research into the evolution of phishing attacks, they were most frequently launched from the U.S., the U.K., Germany, Russia and India. The users most often targeted are in Russia, the U.S., India, Germany, Vietnam, the U.K., France, Italy, China and Ukraine, which together represent 64 percent of all phishing attack victims within the observed period.
Yahoo!, Google, Facebook and Amazon are the top targets of malicious users. Online game services, online payment systems, and the websites of banks and other credit and financial organizations are also common targets, as are email services, social networks, online stores and auction venues, blogs, IT company websites, and telecom operator websites.
The number of fraudulent websites and servers used in attacks has more than tripled since 2012, and more than 50 percent of the total number of individual targets were fake copies of the websites of banks and other credit and financial organizations.
The top 30 websites copied most often by phishers are mostly services and companies whose names are known to a mass audience. The number of attacks against a given online resource corresponds directly to its popularity.
The most-targeted sites vary by country, typically reflecting local user preferences.
For example, in the U.S. the top three most targeted sites are Yahoo!, Facebook and Google. The list for Russia goes like this: Odnoklassniki.ru, VKontakte, and Google Search.
Internet users can encounter links to phishing sites either while surfing the web or via email, but according to the research, the overwhelming majority of phishing attacks are launched against users as they browse, taking the form of banners on legitimate websites, messages on forums and blogs, and private messages on social networks.
Theoretical physicist Dr. John Preskill, who is one of the leading researchers in the areas of quantum information, quantum computing and quantum error correction, presented some of his ideas in a recent Google Tech Talk.
The purpose of the presentation was to offer a higher-level overview of where the fields of quantum physics and information technology intersect and what these merging areas might mean for future technologies.
Preskill is the founder of the Institute for Quantum Information (IQI), which was conceived in 2000 as part of the NSF-led Information Technology Research initiative. Recently the IQI became part of the Institute for Quantum Information and Matter (IQIM) at Caltech, which is among the NSF's Physics Frontiers Centers and is supported by the Gordon and Betty Moore Foundation.
IQIM’s stated aims are “to discover new physics in the quantum realm and to build scientific foundations for designing materials and devices with remarkable properties.” Researchers in these fields are dedicated to exploring large-scale quantum phenomena that are possible when particles such as atoms, photons and electrons are strongly correlated or entangled. IQIM scientists investigate and manipulate entangled systems and materials in order to advance basic science and build the foundations for future technologies including quantum computers.
As IQIM describes, its work comprises research programs that span quantum information science, quantum many-body physics, quantum optics, and the quantum mechanics of mechanical systems. The program's faculty are drawn from Caltech's departments of physics, applied physics, and computer science. IQIM also conducts outreach programs to acquaint high school students and the general public with quantum theory.
Thanks to Hollywood, the media, and even American politics, the lone hacker trope is alive and well. The truth is, there are a multitude of personas that represent hackers, with an equally diverse set of agendas.
Social engineering is a tool as old as war. And a growing number of cybercriminals are using it to manipulate users into giving them what they want.
Today, we wanted to let you know about a new computer security initiative called "100 Cities in 100 Days". The Identity Theft Council created this project to persuade cities to commit to doing at least one thing to aid community awareness about identity theft and online safety. Malwarebytes Labs will be publishing an article a day for five days that describes different aspects of computer security. We are trying to share knowledge with everyone we possibly can to make the most out of this project.
Generic Concepts and Definitions
These questions are based on EXE-101: ITIL Foundation Certification in IT Service Management Version 3
Self Test Software Practice Test
Objective: Generic concepts and definitions.
Sub-objective: Define and explain the following key concepts: event, alert, incident, impact, urgency, priority and service request.
Single answer, multiple-choice
Which type of response should you raise for an event that requires human intervention?
- Alert.
- Event log entry.
- Request for Change (RFC).
- Incident record.
You should raise an alert for an event that requires human intervention. Alerts are notifications to the person(s) responsible that either a threshold has been reached, a failure has happened or something has changed. The person(s) responsible should have the required skills and take steps to handle the event in the stipulated timeframe.
You should not raise an event log entry because logs are recorded in scenarios when information about an event occurrence may be required by technical management staff later.
You should not raise a Request for Change (RFC) because an RFC is raised when an exception has been generated or a change request from a user has been received.
You should not raise an incident record because these are raised when a user reports an incident.
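To make the selection logic above concrete, here is a toy, non-normative sketch; the event fields and function name are invented for illustration and are not part of ITIL.

```python
# Illustrative (not from ITIL itself): mapping event characteristics to
# the response types discussed above.
def select_response(event):
    if event.get("reported_by_user"):
        return "incident record"
    if event.get("is_exception") or event.get("user_change_request"):
        return "Request for Change (RFC)"
    if event.get("requires_human_intervention"):
        return "alert"
    return "event log entry"  # purely informational events are just logged

print(select_response({"requires_human_intervention": True}))  # -> alert
```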
Office of Government Commerce. Foundations of IT Service Management Based on ITIL v3, Glossary, Alert, p. 323.
Office of Government Commerce. Service Operation, Glossary, Alert, p. 223.
Office of Government Commerce. Service Operation, Chapter 4: Service Operation processes, Response Selection, pp. 41-42.
With the introduction of add-on accelerators like GPUs, Intel's upcoming MIC chip, and, to a lesser extent, FPGAs, the foundation of high performance computing is undergoing somewhat of a revolution. But an emerging variant of this heterogeneous computing approach may upend the current accelerator model in the not-too-distant future. And it's already begun in the mobile space.
In October 2011, ARM announced their "big.LITTLE" design, a chip architecture that integrates large, performant ARM cores with small, power-efficient ones. The goal of this approach is to minimize power draw in order to extend the battery life of devices like smartphones and tablets.
The way it works is by mapping an application to the optimal cores based on performance demands and power availability. For mobile devices, big cores would be used for performance-demanding tasks like navigation and gaming, and the smaller cores for the OS and simpler tasks like social media apps. But when the battery runs low, the software can shunt everything to the low-power cores to keep the device operational. ARM claims that battery life can be extended by as much as 70 percent by migrating tasks intelligently.
ARM’s first incarnation of big.LITTLE pairs its large Cortex-A15 design with the smaller Cortex-A7, along with glue technology to provide cache and I/O coherency between the two sets of cores. Companies like Samsung, Freescale, and Texas Instruments, among others, are already signing up.
ARM didn't invent the big core/little core concept, though. This model has been kicked around in the research community for nearly a decade. One of the first papers on the subject was written in 2003 by Rakesh Kumar, along with colleagues at UCSD and HP Labs. He proposed a single-ISA heterogeneous multicore design, but in this case based on the Alpha microprocessors, a CPU line that, at the time, was targeted at high-end workstations and servers.
He found that a chip with four different Alpha core microarchitectures had the potential to "increase energy efficiency by a factor of three… without dramatic losses in performance." He also discovered that most of these gains would be possible with as few as two types of cores.
In a recent conversation with Kumar, he expressed the notion that the time may be ripe for single-ISA heterogeneous chips to find a home in the server arena, even in high performance computing. The driver, once again, is power, or the lack thereof. As server farms and supercomputers expand in size, electricity usage has become a limiting factor. Whether you’re scaling up or scaling out, everyone is now focused on more energy-efficient computers.
“The key insight was that even if you map an application to a little core, it’s not going to perform much worse than running it on a big core,” said Kumar, referring to his earlier research. “But you can save many factors of power.”
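A back-of-the-envelope calculation shows why that trade works. The figures below are illustrative assumptions, not measurements: a little core that takes 1.5 times as long at one quarter the power still uses far less energy for the same task.

```python
# Back-of-the-envelope version of Kumar's point: energy = power x time,
# so a slower-but-frugal core can win decisively on energy.
big_power_w, big_time_s = 4.0, 10.0
little_power_w, little_time_s = 1.0, 15.0

big_energy = big_power_w * big_time_s           # 40 J
little_energy = little_power_w * little_time_s  # 15 J
print(f"big core: {big_energy} J, little core: {little_energy} J")
print(f"energy saved: {100 * (1 - little_energy / big_energy):.0f}%")
```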
The problem with big powerful CPUs like the Xeon, Opteron, and Power is now well known. Although Moore's Law is still working to expand transistor budgets at a good clip, clock frequencies are stagnant. That means performance and, especially, performance-per-watt are increasing more slowly. For these high-end server chips, essentially you have to spend four units of power to deliver one unit of performance on a per-core basis.
That’s a result of the superscalar nature of these big-core microarchitectures, which feature a lot of instruction level parallelism (ILP) and deep pipelines. Such a design reduces execution latency, but at a hefty price in wattage. As Kumar explains it, “It takes a lot of power and a lot of [die] area to squeeze that last 5 to 10 percent of performance.”
The implication is to just switch to smaller, power-efficient cores, with simpler pipelines and less ILP. If you can parallelize an application across many smaller, simpler cores, you get the best of both worlds: better throughput and higher energy efficiency. The problem is that for many applications, decent performance is contingent upon single-threaded performance as well. That has led to the adoption of the types of accelerator-based computing platforms mentioned at the beginning of this article, which pairs a serial CPU chip with a throughput coprocessor.
What the big/little model brings to the table is having both types of cores on the same die. And perhaps more importantly, unlike the CPU-GPU integration that AMD is doing with their Fusion chips and what NVIDIA is planning to do with their “Project Denver” platform, the big/little model consolidates on a homogeneous instruction set.
That has a number of advantages, one of which is easier software development. With a common ISA, there is no need for a complex toolchain with multiple compilers, runtimes, libraries, and debuggers that are needed to deal with two sets of architectures. For supercomputing-type applications though, writing the application is likely to remain challenging, inasmuch as the developer still has to parallelize the code as well as explicitly map the serial work and throughput work to the appropriate cores. Unlike with mobile computing, for HPC, assigning tasks to cores would be more static, since maximizing throughput is the overriding goal.
But where performance has to be compromised because of power or resource constraints, a single ISA chip is a huge advantage. So at run-time, application threads can migrate across the different microarchitectures, as needed, to optimize for throughput, power or both. And since the cores share cache and memory, suspending a thread on one core and resuming it on another is a relatively quick and painless operation.
So, for example, a render server farm equipped with big/little CPUs could shuffle application threads to faster or slower cores depending upon the workload mix, available processor resources, and the turnaround time required. If a service level agreement (SLA) was in effect that allowed the rendering job to meet its deadline without maxing out on the big cores, the server farm could save on its electricity bill by utilizing more of the little cores.
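A placement policy of that sort can be stated in a few lines. This sketch assumes the scheduler already has runtime estimates for each core type; those inputs and the function name are invented for illustration.

```python
# Sketch of an SLA-aware placement policy for a big/little render farm:
# use little cores whenever the job still meets its deadline on them.
def choose_core(est_runtime_little_s, est_runtime_big_s, deadline_s):
    if est_runtime_little_s <= deadline_s:
        return "little"  # meets the SLA at lower power
    if est_runtime_big_s <= deadline_s:
        return "big"     # SLA requires the fast core
    return "big"         # best effort: miss the deadline by the least

print(choose_core(est_runtime_little_s=90, est_runtime_big_s=55, deadline_s=120))
```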
It should be noted that power savings can also be achieved by varying a microprocessor's power supply voltage and clock frequency, otherwise known as voltage/frequency scaling. But as transistor geometries shrink, this technique tends to yield diminishing returns. And as even Intel has concluded, big/little cores — Intel calls them asymmetric cores — seem to deliver the best results.
The most likely architectures to adopt the big/little paradigm over the next few years are x86 and ARM. As mentioned before ARM big.LITTLE implementations are already in the works for mobile computing, but with the unveiling of the 64-bit ARM architecture last year, and with companies like HP delving into ARM-based gear for the datacenter, big/little implementations of ARM servers could appear as early as the middle of this decade.
We may see x86-based big/little server chips even sooner. Intel, in particular, is in prime position to take advantage of this technology. For one thing, the chipmaker is the best in the business at transistor shrinking, which is an important element if you’re interested in populating a die with a useful number of big and little cores. It also has a huge stable of x86 cores designs, from the Atom chip all the way up to the Xeon.
Also, since Intel has little in the way of GPU IP that can be used for computing, the company is most likely to rely on its x86 legacy for throughput cores. For example, it’s not too hard to imagine Intel’s big-core Xeon paired up with its little-core MIC chip in a future SoC geared for HPC duty. The same model, but with a different mix of x86 microarchitectures, could also be used to build more generic enterprise server processors, not to mention its own mobile chips.
Whether Intel intends to go down this path or not remains to be seen. But a recent patent the company filed regarding mixing asymmetric x86 cores in a processor suggests the chipmaker has indeed given serious thought to big/little products. And since both AMD and NVIDIA are pursuing their own heterogeneous SoCs, which by the way could also incorporate this technology, Intel is not likely to cede any advantage to its competitors.
The big/little approach won't be a panacea for energy-efficient computing, but it looks like one of the most promising approaches, at least at the level of the CPU. The fact that it incorporates the advantages of a heterogeneous architecture, but with a simpler model, has much to recommend it. And while big/little CPUs may be seen as somewhat of a threat to GPU computing, it can also be viewed as a complementary technology. What is certain is that the days of one-size-fits-all architectures are coming to a close.
For many Californians, summertime is synonymous with sunshine, sunscreen and smog. Yes, smog, the low-hanging air pollution created partly by vehicle emissions and that's more visible and hazardous when temperatures spike.
While air quality levels can easily be dismissed on clear, sunny days, they're of legitimate concern for many Californians whose daily activities are affected. Just like checking the weather before going to work, those who care for children or the elderly, those who work outdoors and those who are just curious can check their region's air quality levels before stepping outside. California offers this service online, in real time, through open source software.
"We're just on the cusp of going into smog season in California," said Gennet Paauwe, spokeswoman for the California Environmental Protection Agency (EPA) Air Resources Board. "We expect the site to get hit more by people interested in air quality in their area."
What began as a graduate project at California State University, Chico, ended up at the state EPA's Air Resources Board nearly 10 years ago. That project, dubbed the Air Quality and Meteorological Information System, combines historical weather and smog data, maps and graphs that illustrate air quality in specific regions and ozone and fine particle levels. Most recently, a Google maps feature was added in late 2009 that allows users to visually display air quality and ozone levels.
"People who want to know what the air quality is in their town can now find out with the click of a mouse," an Air Resources Board press release said. "The data also plays a vital role in optimizing daily air resources decisions such as agricultural burning and other smoke-related activities."
While users can zoom in and out and focus on specific areas, Air Resources Board Supervisor Mena Shah said the hope is to add features that will let a user input a ZIP code, allowing a more direct search. Another plan is to have the state divided geographically by air basin, Shah said. "We are always looking to enhance and make the site better and more usable," she said.
Among other enhancements planned is a graphical tool that shows how the wind is blowing, Air Resources Board engineer Jagjeet Arce said. Collecting and disseminating upper air data -- the quality of air 5,000 meters or more above ground -- is another potential addition, she said.
And, in what one day may read just like a weather forecast, the board will determine whether it can forecast regions' air quality, Arce said.
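One forecasting-adjacent calculation is easy to illustrate: the rolling 8-hour mean ozone concentration that air regulators track. The hourly readings and the 75 ppb flag threshold below are assumptions for illustration, not Air Resources Board data.

```python
# Sketch: rolling 8-hour mean ozone, with exceedances flagged.
# Readings (ppb, hourly) and the threshold are illustrative assumptions.
hourly_ozone_ppb = [55, 62, 70, 78, 85, 90, 92, 88, 80, 66]

WINDOW = 8
THRESHOLD_PPB = 75
for i in range(len(hourly_ozone_ppb) - WINDOW + 1):
    window = hourly_ozone_ppb[i:i + WINDOW]
    avg = sum(window) / WINDOW
    flag = "EXCEEDS" if avg > THRESHOLD_PPB else "ok"
    print(f"hours {i}-{i + WINDOW - 1}: {avg:.1f} ppb {flag}")
```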
Since the website's launch several years ago, its popularity and usage has risen in concert with its enhanced features and availability of data, Paauwe said. During peak summer days, it can garner around 10,000 hits a day, and many users are researchers, in academia, the public school system and analysts, Shah said.
"We get a lot of different types of requests," she said. One such example is someone installing solar roof panels, who wanted to know how often they'd need to be cleaned based on soot levels. "By using our site, they can guess how dusty it will be in the area with solar panels," Shah said. | <urn:uuid:43b53f1b-ec6f-474d-8b7a-8c962333e072> | CC-MAIN-2017-04 | http://www.govtech.com/health/Californias-Real-Time-Air-Quality-Readings-Online.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959651 | 644 | 2.921875 | 3 |
Computer systems are being tasked with addressing a proliferation of graph-based, data intensive problems in areas ranging from medical informatics and social networks. As a result, there has been an ongoing emphasis on research that addresses these types of problems.
A four-year National Science Foundation project is taking aim at developing a new computer system that will focus on solving complex graph-based problems that will push supercomputing into the exascale era.
At the root of the project is Jeanine Cook, an associate professor in New Mexico State University's department of Electrical and Computer Engineering and director of the university's Advanced Computer Architecture Performance and Simulation Laboratory.
Cook specializes in micro-architecture simulation, performance modeling and analysis, workload characterization and power optimization. In short, as Cook describes, she creates “software models of computer processor components and their behavior to use these models to predict and analyze performance of future designs.”
Her team has developed a model that could improve the way current systems work with large unstructured datasets using applications running on Sandia systems.
It was her work while on sabbatical with Sandia’s Algorithms and Architectures group in 2009 that led to the $2.7 million NSF collaborative project. Cook developed processor and simulation tools and statistical performance models that identified performance bottlenecks in Sandia applications.
As Cook explained during a recent interview:
"Our system will be created specifically for solving [graph-based] problems. Intuitively, I believe that it will be an improvement. These are the most difficult types of problems to solve, mainly because the amount of data they require is huge and is not organized in a way that current computers can use efficiently."
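A few lines of breadth-first search show why such data defeats conventional machines: each hop through the adjacency list lands at a data-dependent, irregular memory location, which caches built for sequential access handle poorly.

```python
# Tiny illustration of the irregular access pattern of graph problems:
# BFS visits neighbors in a data-dependent order.
from collections import deque

graph = {0: [1, 4], 1: [2], 2: [], 3: [0], 4: [2, 3]}

def bfs(start):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:  # neighbors may live anywhere in memory
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(bfs(0))  # -> [0, 1, 4, 2, 3]
```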
Low density polyethylene (LDPE) is a chemical-resistant material. In the form of a closed-cell foam, it is not totally solid; it is semi-rigid and flexible. There are two types of LDPE foam materials: cross-linked low-density polyethylene foam and extruded low-density polyethylene foam.
Cross-linked LDPE foam is created in batches. Heat is applied so that the cross-linking agent blends with solid polyethylene; further heating drives cross-linking between the agent and the polyethylene, and still more heat causes foaming. Extruded LDPE foam is manufactured by a continuous process: polyethylene is melted and combined with a halogenated hydrocarbon, which acts as a foaming agent. The mix is pressurized and fed into a screw extruder, yielding polyethylene with foaming agent distributed throughout the material.
This material has many useful properties, including resistance to chemicals and water, buoyancy, and energy-absorbing and cushioning behavior. Compressive strength depends on the density of the foam, with denser foams having more compressive strength. Electrical products also use this material for its dielectric strength. Because of its high water resistance, it can be used in many applications where exposure to water would spoil other materials.
The packaging industry also uses this material, as it can absorb energy and provide cushioning, while its chemical resistance slows degradation.
This report aims to estimate the North America low-density polyethylene foam market for 2014 and to project the expected demand by 2019. The study provides a detailed qualitative and quantitative analysis of the market, including a comprehensive review of its major drivers and restraints, and segments it by major application and geography.
An in-depth market share analysis, in terms of revenue, of the top companies is also included in the report. These numbers are arrived at based on key facts, annual financial information from SEC filings, annual reports, and interviews with industry experts and key opinion leaders such as CEOs, directors, and marketing executives. Some of the major companies in this market are BASF-YPC Co., Dow Chemicals, DuPont, Asahi-Dow Ltd., Exxon Mobil, LG Chemicals, Norchem, IRPC and Union Carbide.
IBM says it has packed an integrated circuit about the size of a nickel with technology that can enable gigabit-per-second mobile data rates and clutter-cutting radar imaging applications.
The integrated circuit takes advantage of millimeter-wave spectrum which spans the 30 GHz to 300 GHz range, 10 to 100 times higher than the frequencies used for mobile phones and Wi-Fi. Frequencies in the range of 90-94GHz are well suited for short and long range, high-resolution radar imaging, IBM said.
IBM says that the chip is based on silicon germanium (SiGe) and "the transceiver operates at frequencies in the range of 90-94GHz and is implemented as a unit tile, integrating four phased array integrated circuits and 64 dual-polarized antennas. By tiling packages next to one another on a circuit board, scalable phased arrays of large aperture can be created while maintaining uniform antenna element spacing. The beamforming capabilities enabled by hundreds of antenna elements will allow for communications and radar imaging applications that will extend over a range of kilometers."
"Each of the four phased-array integrated circuits in a tile integrates 32 receive and 16 transmit elements with dual outputs to support 16 dual polarized antennas. Multiple operating modes are supported, including the simultaneous reception of horizontal and vertical polarizations. Fabricated using an advanced IBM SiGe semiconductor process, the ICs also integrate frequency synthesis and conversion as well as digital control functions. The complete scalable solution, which includes antennas, packaging, and transceiver ICs, transforms signals between millimeter-wave and baseband, all in a form factor smaller than an American nickel," IBM stated.
The two primary applications IBM envisions for the chip include mobile backhaul and radar.
"Today's E-band solutions consist of multi-chip modules and bulky mechanically aligned antennas. The newly developed compact scalable phased array technology provides electronic beam steering and the bandwidth to support Gb/s wireless communications, IBM stated.
"Weather, debris and other vision impairing obstructions often leave aircraft pilots helpless, but 94GHz radar imaging technology could alleviate this problem. Moreover, the design's support for two antenna polarizations-with minimal increase in footprint-provides a further advantage while navigating through fog and rain. The chip allows radar technology to be scaled down, giving pilots the ability to penetrate fog, dust and other vision impairing obstructions," IBM stated.
While the technology sounds pretty cool, in writing about the IBM work the GigaOM consultancy notes that millimeter-wave broadband, while fast, has significant limitations: it is power hungry, it can't go very far because the signals deteriorate, and the equipment is expensive.
In recent years, there has been a change in the computer data storage industry. While hard drives (HDDs) have historically been the champions of storage, solid state drives (SSDs) have seen tremendous market growth due to a combination of rising storage capacities and falling prices. Technological advances are a large reason for this change, with advances such as 3D NAND allowing for greater data storage density, which over time contributes to a lower cost per gigabyte. With more SSDs flooding the global market, forensic examiners are likely to see more of these devices as time goes on, making SSD forensics an increasingly important expertise area for forensics labs.
Solid State Drive Overview
Solid state drives are remarkable pieces of technology. Unlike a hard drive’s rapidly spinning platters with magnetic substrate to store data, SSDs don’t have any moving parts. Instead they rely on electrons for data storage. By default, the transistors in the NAND chips of an SSD don’t contain electrons. When writes are performed to the drive, electrons are sent to the individual memory cells. Once electrons are in a cell, the charge state is altered and the cell is therefore storing data. By varying the number of electrons in a cell, multiple charge states are achievable and data storage density can be increased. This is called MLC, or Multi-level cell NAND.
MLC NAND is used in most solid state drives in production today, though SLC (single-level cell), which is NAND with only two potential charge states per cell, is favored in the enterprise space for its increased reliability and performance. SLC is considered more reliable because there is a greater tolerance between charge states, as the cell can only be on or off. With MLC NAND, more charge states with smaller error tolerances for each means a greater probability of errors occurring. The tradeoff is that users can store much more data in MLC drives than in SLC drives.
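The capacity arithmetic behind that tradeoff is simple: n bits per cell requires 2^n distinguishable charge states, so each added bit doubles the number of levels that must fit within the same error tolerance. A two-line sketch:

```python
# Charge states vs. capacity: n bits per cell needs 2**n levels, which is
# why MLC stores more than SLC but tolerates less error per level.
for bits_per_cell, name in [(1, "SLC"), (2, "MLC")]:
    states = 2 ** bits_per_cell
    print(f"{name}: {states} charge states -> {bits_per_cell} bit(s) per cell")
```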
One thing to note with SSDs is the difference between reading/writing and erasing data. While data can be read or written to individual pages of data, it can only be erased in blocks, which are composed of multiple pages. To put it simply, it’s a little like using an etch-a-sketch. Users can make little black lines on the screen wherever they want, but those lines can only be erased if the whole screen is erased by shaking the etch-a-sketch. In this case, the pages are the little black lines and the screen is the block. This difference between pages and blocks makes erasing much more time intensive than reading or writing to a drive. It can also cause some complications for examiners in SSD forensics cases due to features such as garbage collection and TRIM.
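The etch-a-sketch analogy maps directly onto a toy model: pages are written individually, but freeing space means relocating live pages and erasing the whole block. The class below is a simplification for illustration, not a real flash translation layer.

```python
# Toy model of the page/block asymmetry: pages are written one at a time,
# but reclaiming space means saving live pages and erasing the whole block.
class FlashBlock:
    PAGES_PER_BLOCK = 4

    def __init__(self):
        self.pages = [None] * self.PAGES_PER_BLOCK  # None = empty page

    def write_page(self, index, data):
        if self.pages[index] is not None:
            raise ValueError("NAND pages cannot be overwritten in place")
        self.pages[index] = data

    def erase_block(self):
        live = [p for p in self.pages if p is not None]
        self.pages = [None] * self.PAGES_PER_BLOCK
        return live  # garbage collection must re-home these elsewhere

block = FlashBlock()
block.write_page(0, "file-a")
block.write_page(1, "file-b")
print("relocated on erase:", block.erase_block())
```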
Challenges in SSD Forensics
While by no means an exhaustive list of potential issues for forensic examiners, garbage collection and TRIM are particularly relevant because they are unique to solid state drives. Garbage collection is a function of the firmware of the drive and is used to help free up space where files have been erased by the operating system. To explain the concept of erased files more, erased files aren’t really gone until garbage collection actually resets a block. Before garbage collection resets the block, erased files are simply marked as free space by the operating system. This is one of the reasons deleted files are sometimes able to be recovered.
Typically, the garbage collection program doesn’t know about erased files until the operating system tries to save new files over them. Since the space with the erased files isn’t free yet, it moves the new files to another location and marks the previously erased files for garbage collection. Since blocks are the smallest groups of data that can be erased, garbage collection has to first migrate all the good data in the block somewhere else before it is able to wipe the whole block. Wear leveling can be initiated at this point, but not always.
So how does garbage collection affect forensic examiners? Since garbage collection is a function of the drive itself, it can occur whenever power is supplied to the drive and therefore can run whether or not forensic examiners want it to. This can mean a few things. First, hashes may be different when acquiring multiple images of a drive since the garbage collection feature might move some data and erase other data as it sees fit. Second, erased data might not be recoverable even if data has been previously located in unallocated blocks. While not always the case, garbage collection could begin during a recovery after powering the device on, eradicating the deleted data and making a recovery impossible.
This problem is exacerbated by TRIM, which is a function of the OS. TRIM helps the process of garbage collection by marking erased files and letting the drive know they are ready for garbage collection. Instead of the drive having to stumble upon deleted files, it can proactively get rid of them since the operating system has told it to do so. Basically, it increases the chances that garbage collection will occur on deleted files and decreases the chances of successfully recovering that deleted data. On the positive side, since TRIM is a function of the OS and isn’t compatible with certain drives/operating systems, it can either be disabled or simply isn’t a factor in some SSD forensics cases.
SSD Forensics Services by Gillware Digital Forensics
Our engineers have years of experience working with solid state drives and are familiar with the many issues that can arise when working on them, including the ones described above. Though not all of these issues are avoidable, every precaution is taken by our engineers to ensure that any preventable issues don’t occur when working on a case. In fact, our Director of Research and Development, Greg Andrzejewski, pioneered many of the SSD data recovery techniques that are used in our lab today. His expertise goes hand in hand with President Cindy Murphy’s 17 years working in digital forensics, meaning SSD-based cases are in good hands with Gillware Digital Forensics.
With our world-class digital forensics experts and the right tools to handle difficult cases, use Gillware Digital Forensics for all your SSD forensics needs. To get started on a case, follow the link below to request an initial consultation with Gillware Digital Forensics.
A group of Caltech researchers has created an integrated circuit that can repair itself after sustaining significant damage. Here is how it works: a secondary processor springs into action and determines the best course of action to finish the task at hand. Alongside that second processor, the chip carries roughly 100,000 transistors, plus sensors that let it diagnose its own health. The group tested the self-healing abilities by hitting the chip with a laser and destroying half of the transistors; reports indicate it took only a few milliseconds for the chip to adjust itself and continue functioning on task. Notably, the chip even improved its efficiency after the laser blast by reducing power to the remaining transistors.
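The report gives no implementation details, but the sense-diagnose-reconfigure loop it describes can be sketched in a few lines. Everything below is invented for illustration and bears no relation to the actual Caltech hardware, which does this in custom circuitry within milliseconds:

```python
# Illustrative sketch of a sense -> diagnose -> reconfigure loop. All names
# and numbers are made up; the real chip implements this in hardware.

def surviving(blocks):
    """Sensor sweep: return only the transistor blocks still reporting healthy."""
    return [b for b in blocks if b["alive"]]

def reconfigure(blocks, power_budget):
    """Spread the fixed power budget across survivors, mirroring the
    reported efficiency gain after damage."""
    healthy = surviving(blocks)
    for b in healthy:
        b["power"] = power_budget / len(healthy)
    return healthy

blocks = [{"alive": True, "power": 0.5} for _ in range(4)]
blocks[0]["alive"] = blocks[1]["alive"] = False   # laser destroys half the chip
healthy = reconfigure(blocks, power_budget=2.0)
print(f"{len(healthy)} blocks remain, at {healthy[0]['power']:.1f} W each")
```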
Self-healing circuits have plenty of uses in wear-and-tear items such as laptops, cell phones and robots, allowing them to take more abuse and keep functioning. The technology may also prove beneficial as more equipment is carried into war zones and battlefields, where devices need to withstand far rougher treatment. | <urn:uuid:b5ad2af8-1850-4b2e-bf17-c05e996fe2a9> | CC-MAIN-2017-04 | http://www.bvainc.com/self-healing-self-monitoring-chip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00396-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.976416 | 231 | 3.421875 | 3 |
The storage industry celebrates Nobel Prize for discovery that triggered disk drive capacity boom.
The disk drive industry had big news to celebrate on Oct. 12.
Drs. Albert Fert and Peter Grünberg were awarded the $1.5 million 2007 Nobel Prize in physics for discovering GMR (giant magnetoresistance), a key physical effect now widely used in HDD (hard disk drive) development and manufacturing.
GMR is a quantum mechanical effect observed in structures just a few atoms thick that are composed of alternating ferromagnetic and nonmagnetic metal layers.
Grünberg and Fert were the first to discover how to use GMR to manipulate the magnetic and electrical properties of thin layers of metal atoms to store much more data on spinning disks than had ever been stored before, leading to vast innovations in the industry.
"The MP3 and iPod industry would not have existed without this discovery," said Börje Johansson, a member of the Royal Swedish Academy which awarded the prize Oct. 11, according to The Associated Press. "You would not have an iPod without this effect."
Fert, from the Université Paris-Sud in Orsay, France, and Grünberg, with the Institute of Solid State Research at the Jülich Research Center in Germany, will share the prize money.
Fert, 69, and Grünberg, 68, were working independently in 1988 when they discovered the GMR effect, in which tiny changes in a magnetic field produce huge changes in electrical resistance.
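The size of the effect is conventionally quoted as a magnetoresistance ratio: the relative change in resistance between the antiparallel and parallel magnetic alignments of the layers. A quick illustration in Python, using made-up resistance values rather than anything from the 1988 experiments:

```python
def gmr_ratio(r_antiparallel, r_parallel):
    """Magnetoresistance ratio, usually quoted as (R_AP - R_P) / R_P."""
    return (r_antiparallel - r_parallel) / r_parallel

# Illustrative values in ohms, not measured data.
print(f"{gmr_ratio(1.5, 1.0):.0%}")  # 50% -- "giant" compared with the
# few-percent change ordinary magnetoresistance produces
```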
"Youve leveraged a weak bit of magnetism into a robust bit of electricity," Phillip Schewe, of the American Institute of Physics, told The New York Times.
The scanning heads in today's hard drives consist of alternating layers, only a few atoms thick, of a magnetic metal and a nonmagnetic metal. At that small size, quantum physics comes into play, and new properties suddenly become available.
The GMR effect is an important asset to modern hard drives as they record data (text, audio, video and graphics) as a dense magnetic patchwork of zeros and ones, which is then scanned by a small head and converted to electrical signals that can be read onscreen.
"A long time ago, when we were just doing heads, wed read and write with an inductive head," Mark Re, senior vice president of research at Seagate Technologythe worlds largest maker of disk drivestold eWEEK. "So you would have a magnetic core with some coils around it, and you would put a current into the coils to represent the data, and that would generate fields that would write on the disk.
"Then, when you pass the head over that information, the fields coming off the disk could get sent in the reverse direction. So the flux would go through the magnetic material and generate a current in the coil that you could read out."
That took disk drives from the beginning (IBM invented them in 1953) to around 1990, Re said.
"Then, the industry separated the write and read head, so we still write inductively, meaning we pass a current through a coil and generate some fields; but now we read, starting back then, with something that was a sensing device whose resistance would change with field direction. And that was called magnetoresistance," Re said.
That carried the industry for another period of time, Re said.
"Then in the late 80s, the two gentlemen (Grünberg and Fert) who won the Nobel Prize came up with an affect that was a little bit different, called giant magnetoresistance. Its basically a higher signal amplitude that you could get from this device," he said.
As with most advancements in technology, a lot of practical work had to be done to take this theory from the labs and get it working in devices, Re said.
"It became commercially available, oh, on the order of about 10 years after their discovery. That seems like a long time, but to go from a Nobel Prize-winning discovery to high-volume manufacturing within a decade is really pretty impressive," Re said.
Fert and Grünberg did all the basic physics work using thin-film iron, a magnetic material, separated by chromium, which is nonmagnetic, Re said.
"When it actually got into the devices, it turned out to be different materials; so the first ones were nickel-iron alloy separated by copper. But the basic idea is what they came up with, and thats what the Nobel Prize celebrates," Re said.
Check out eWEEK.com for the latest news, reviews and analysis on enterprise and small business storage hardware and software. | <urn:uuid:0d350672-0e86-44b6-857e-58739893df07> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/Disk-Drive-Industry-Celebrates-Nobel-Prize | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00240-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965249 | 979 | 2.734375 | 3 |
In a series of special online exhibits of powerful photographs, documents and more, the Google Cultural Institute brings the 1944 D-Day invasion on France's Normandy coast back to life on its 70th anniversary.
To commemorate the 70th anniversary of the D-Day invasion on the Normandy coast in France on June 6, 1944, the Google Cultural Institute has created a series of special online exhibits to illustrate the emotions, power and destruction of an epic and successful World War II battle that likely changed the course of the war.
The online exhibits include an in-depth look into the Normandy landings on June 6, 1944, featuring some 470 new documents and images, according to a June 4 post by Sixtine Fabre, associate program manager for the Google Cultural Institute, on the Google Public Policy Blog.
"On June 6, 1944, the largest air, naval and military operation in history took place on the coast of Normandy," wrote Fabre. "To commemorate the 70th anniversary of D-Day, we've partnered with a number of cultural institutions and veterans from the U.S., U.K. and France to help share the stories of the Normandy Landings through the Cultural Institute."
The new exhibits commemorating D-Day and its events range from photos of important preparations that were made for months before the invasion to special meetings of Allied leaders that were arranged prior to the battle. Also featured in the new online exhibits are images of the Allied soldiers in action, as well as images of original and historic documents such as President Franklin Delano Roosevelt's D-Day Prayer and a top secret progress report from Gen. Dwight Eisenhower to Gen. George C. Marshall, wrote Fabre. "These pieces have been curated into digital exhibits that present a timeline of events for those who want to be guided through the content. For visitors who have a specific photo or document in mind, the search function allows users to find specific archival material."
The exhibits were assembled with the help of several partners, including The National Archives, The George C. Marshall Research Foundation, The Imperial War Museum and the Bletchley Park codebreaker center.
"Technology allows us to bring together information from around the world to showcase different perspectives on one moment in time," wrote Fabre. A special Google+ Hangout on Air (in French and available for replay) was also featured in a broadcast from the Caen War Memorial in France on June 4, featuring American, French and British D-Day veterans as they told their stories about the invasion. "Whether it's through the Cultural Institute or Hangouts on Air, we hope you'll take the chance to learn more about D-Day and remember this important piece of our history."
The Google Cultural Institute was established in 2010 to help preserve and promote culture online, making important cultural material accessible to everyone and digitally preserving it to educate and inspire future generations. It has been actively adding to its growing collections.
In April 2014, the Institute began offering virtual tours of the opulent Palais Garnier opera house in Paris, using Google Street View images to showcase the beautiful and grand opera house, which has been hosting performances since it opened in 1875.
Earlier in April, the Institute helped to highlight the U.S. civil rights movement through a fascinating online collection of documents, photographs and film clips in commemoration of the 50th anniversary of the Civil Rights Act of 1964. Among the highlights of the online collection are an emotionally worded telegram from Dr. Martin Luther King Jr. to President Kennedy from June 1963 and a personal request, sent in August 1963 by one of the organizers of the March on Washington, to meet with Kennedy on the day of the March. Also included is a copy of the Civil Rights Act of 1964 itself. The collection, with its photos, documents and other content, is moving as it describes and re-creates the turmoil of the nation during the period, which also included the shocking assassination of President Kennedy on Nov. 22, 1963.
A "Women in Culture
" project was launched by the Institute in March 2014 that tells the stories of known and unknown women who have impacted our world to commemorate International Women's Day.
In November 2013, the Google Cultural Institute showcased the five handwritten versions of Abraham Lincoln's Gettysburg Address online in commemoration of the 150th anniversary of his famous and moving 272-word speech. The five versions were placed online in a special gallery for viewers to read and review. Lincoln wrote five different copies of the Gettysburg Address and gave them to five different people; each copy is named for the person to whom it was given, according to AbrahamLincolnOnline.org. | <urn:uuid:ddb630d6-b507-49f1-a718-23b355179782> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/google-showcases-d-days-70th-anniversary-in-tribute.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957528 | 961 | 2.796875 | 3 |
Around the world, community, industry and academic leaders bemoan the “skills gap,” the divide between the profile of those seeking employment and the actual requirements of the marketplace. A number of studies have reported that during the next decade, there will be millions of available jobs in so-called STEM fields (science, technology, engineering and mathematics) and not enough qualified candidates to fill those positions.
The National Academy of Sciences, National Academy of Engineering, and the Institute of Medicine describe STEM as “high-quality, knowledge-intensive jobs…that lead to discovery and new technology,” benefiting the US economy and standard of living. The US may be short by as many as three million of these highly-skilled workers by 2018, putting national competitiveness at risk.
The National Math + Science Initiative refers to this shortage as a STEM crisis, which they say creates a chilling effect in research and the economy.
Some data points:
- The demand for STEM skills has risen dramatically. STEM-based jobs grew at over three times the pace of non-STEM jobs between 2000 and 2010 and are expected to grow almost twice as fast by 2018.
- As of February 2012, more than half of the 30 fastest growing occupations require training beyond a high-school diploma. But American students aren't keeping pace with their foreign counterparts: American universities award only about a third as many bachelor's degrees in science and engineering as Asian universities.
- 25 years ago, the US led the world in high school and college graduation rates. Today, the US has dropped to 20th and 16th, respectively. The decline in education relative to other countries has a troubling effect on R&D. By 2009, for the first time, over half of US patents were awarded to non-US companies.
President Obama's administration maintains that STEM education is vital to keeping the nation competitive. The President has supported efforts to train young people for technologically driven careers, but government funding is tight and many states are facing budget cuts. As a result, there is a greater emphasis on collaborative endeavors: public-private partnerships where vendors share some of the cost and then benefit from the research through technology-transfer programs.
The business sector has also aligned with communities and schools to encourage interest in science-based careers. Intel and Lockheed Martin, for example, have helped inspire young talent by sponsoring tournaments, science fairs and other innovation challenges. The Intel Foundation hosts some of the world’s largest pre-college science fair competitions and also runs the Educators Academy, an online community for K-12 educators. Lockheed Martin is also doing its part to advance STEM education, by sponsoring outreach activities for students from elementary school through college.
The Gender Factor
The gender disparity in the science and math-driven disciplines continues, but hidden in this problem is a source of immense potential. While women make up 51 percent of the overall workforce, they comprise only 26 percent of STEM workers. Solving for this disparity would go a long way to minimizing the skills gap, and helping the United States meet its projected skilled employment needs.
The computer science field highlights the slow pace of change. While the past decades' attention to female equality has paid off as higher participation in most STEM fields, the number of women in the computational sciences has actually fallen. Recent Census Bureau findings show the share of female computer workers, employed in such roles as developers, programmers and security analysts, has been on a 20-plus-year decline. In 1990, a full third of computer workers were women, but that share has now dropped to 27 percent.
An article at The Atlantic about the "Brogrammer Effect" delves further into the data, noting that women in computer science are more likely to be Web developers (40 percent) than software developers (22 percent). The author makes the connection that fewer women are entering the field because they're not pursuing computer science degrees: women's participation in computer science education peaked in the 1980s. So why the lack of interest?
There are certainly cultural implications. Where male nerdism is accepted, embraced even, geeky women don’t have quite the same cachet. And while it’s easy to think of geek-chic role models like Steve Jobs or Mark Zuckerberg, their female equivalents don’t spring as readily to mind.
According to a recent US Census survey, computer workers make up about a half of STEM employment, and STEM pays well. Students who pursue a degree in a field pertaining to computers, mathematics, statistics or engineering are the most likely to secure full-time, year-round employment and the least likely to be unemployed. Earnings paralleled employment rates, with engineering majors averaging earnings of $92,000 per year and those coming from arts and humanities fields making about $55,000 annually.
Even the social studies, the arts and humanities, which tend to be more female-dominated, are becoming more technology-driven and are tapping the benefits of computer science. New research fields are springing up with names like “petascale humanities.” In fact, a new acronym has arisen that reflects the importance of the arts in the national curricula and the new economy. Proponents of “STEAM” (the “A” is for Arts) point out that creativity is an essential component of innovation.
Women continue to earn less than their male counterparts across every field of degree. Still, women in high-tech jobs earn about 25 percent more than those in non-science fields. Advocates should not be afraid to play the money card, observes Connie Chow, executive director of the nonprofit group Science Club for Girls, in this New York Times piece on the dearth of women scientists. That earning potential can have a strong motivating effect, especially for students in low-income communities.
Preparation and Inspiration
Community and education leaders maintain that increasing student engagement in STEM subjects and addressing the shortage of qualified STEM teachers are necessary to ensure the future success of the US. For all students, and for women and minorities especially, early exposure to STEM subjects is critically important, as is being surrounded by a community of STEM professionals.
Central to this strategy is recruiting qualified teachers and giving them the means to develop into effective instructors. Studies confirm the common sense idea that there is a strong link between teacher performance and student success. The President’s Council of Advisors on Science and Technology (PCAST) estimates that the US will need more than 100,000 STEM teachers over the next decade.
The authors of the report advise: “To meet our needs for a STEM-capable citizenry, a STEM-proficient workforce, and future STEM experts, the nation must focus on two complementary goals: We must prepare all students, including girls and minorities who are underrepresented in these fields, to be proficient in STEM subjects. And we must inspire all students to learn STEM and, in the process, motivate many of them to pursue STEM careers.” | <urn:uuid:9efb3a34-f328-46d0-8854-1cdabc2623f2> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/19/rising_to_the_stem_challenge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95067 | 1,446 | 3.4375 | 3 |
CSID’s Andy Thomas argues that while the Internet of Things (IoT) has gained momentum over the past year, the convenience of such connectivity could come at the cost of security.
Connected cars have made the headlines this year. Vehicles have long been computerised, but only recently linked to the Internet, and some manufacturers have shown a lack of security expertise. In April, cyber-security experts revealed a software flaw in the Jeep Cherokee’s entertainment system, which allowed them to take control of the vehicle on the move using a laptop at home. The hack, which they described as “fairly easy” and “a weekend project,” enabled them to alter the vehicle’s speed, change its braking capability, and manipulate the radio and windscreen wipers.
More recently, researchers hacked a Tesla Model S – once again via the car’s entertainment system, although it took closer to a year to achieve. They were able to apply the hand brake, lock and unlock the car, and control the touch screen displays. Tesla quickly developed a fix, which has been sent to all affected vehicles.
The thought of a hacker taking control of your steering wheel is rather daunting; the idea of them hijacking your refrigerator is probably less so. However, apparently innocuous devices such as “smart fridges” and “connected toasters” warrant equal consideration, because they are a point of entry to your network. It’s like leaving a window open in your spare room: it allows access to the rest of the house, whether or not there’s anything of value in the room itself.
The recently exposed vulnerability of a Samsung smart refrigerator is a case in point: its calendar integration functionality provided hackers with access to the owner’s network and the ability to steal linked Gmail login credentials. Similarly, weaknesses in smart light bulbs have allowed hackers to obtain the passwords for the connecting Wi-Fi network as they were passed from one bulb to another.
Meanwhile, there are plenty of unfounded security fears around things like smart medical devices. In fact, most use Bluetooth; they aren’t connected to the Internet at all. Generally speaking, they’re too small to incorporate a phone connection, and consumer concerns over phone transmitters in the body restrict development of the technology. Pacemaker hacking is highly unlikely at present.
The majority of smart watches don’t connect directly to the Internet, either. HP has found major areas of concern in many smart watches, including insufficiently robust authentication, vulnerability to man-in-the-middle attacks, and poor firmware updates. However, the real weakness is the mobile phone the wearable links to, which holds vastly more personal data and exhibits many of the same vulnerabilities.
Certain wearables are a problem, due to the information they hold. For example, some music festivals allow participants to load their wristband pass with credit card information. Simply holding the wristband up to the vendor’s reader pays for drinks, food and merchandising. It sounds cool and convenient, but lose the wristband, or sell it at the exit, and the new owner has only to crack the wristband’s four-digit PIN to gain access to the credit card information.
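Some back-of-the-envelope arithmetic shows why a four-digit PIN is such thin protection (the guess rate below is an assumption; real devices throttle attempts differently):

```python
# Keyspace of a 4-digit PIN and the worst-case time to exhaust it.
digits, length = 10, 4
keyspace = digits ** length              # 10,000 possible PINs
guesses_per_second = 10                  # assumed attacker rate
worst_case_minutes = keyspace / guesses_per_second / 60
print(f"{keyspace:,} PINs; exhausted in at most {worst_case_minutes:.0f} minutes")
```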
Wearable technology risks
Wearables, particularly fitness trackers, have taken off in the last few years. Figures for 2015 show that 14 percent of UK adults own a wearable device or smart watch (compared to 63 percent who own a smartphone or tablet), and the market for fitness devices and apps has doubled in the past year. All this wearable tech creates new opportunities for collecting private data, and Symantec threat researcher Candid Wueest believes that developers of wearable devices are not prioritising security and privacy. His research found some devices sending data to a staggering 14 IP addresses; and at a Black Hat demonstration, he identified six Jawbone and Fitbit users in the audience, along with specific details about their movements, down to the time they left or entered the room.
In short, the IoT is here. But before placing an order for a fancy new fitness tracker, that swanky smart fridge, or sensors for your business, take a moment to consider these points:
Prioritise security – Back up data as you would with any other tech device. Too many people don’t back up regularly until they lose their photos/tax returns in a hard drive crash.
Be aware – Read reviews and technical documents. Ensure you know what a device does, how it is secured, and how to minimise opportunities for others to misuse its data. Look for wearables with remote-lock capabilities, so that you can lock or erase data if they’re stolen; and always protect your devices with a password, or biometric authentication if possible.
Andy Thomas is Managing Director of CSID Europe | <urn:uuid:442c7873-fe81-4684-bcce-dd94cfafa81a> | CC-MAIN-2017-04 | https://internetofbusiness.com/beware-trade-off-iot-convenience-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00020-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939949 | 1,002 | 2.859375 | 3 |
From tweens and teens to silver surfers, more people are jumping onto multiple Internet-connected devices every day. It's very exciting to see the proliferation of information sharing and communication between people on these devices at any time. Since October is National Cyber Security Awareness Month, it's a good time to remember the challenge we still face: many Americans don't stop to think about minimum security needs before they connect.
The latest online safety survey of U.S. consumers, which the National Cyber Security Alliance (NCSA) partnered with McAfee to conduct, shows the scope of the issue. As more devices are used on the go and by employees, the survey reveals everyday disconnects between consumers' online safety perceptions and their actual practices.
Much as with keeping ourselves healthy, Americans are aware that they need safe and secure Internet-connected devices, but they aren't keeping up with the actions required to stay safe online. The survey offers telling signs of the additional education needed to foster a safer Internet experience:
- A Safe Internet is Crucial to U.S. Economy: Ninety percent of Americans agree that a safe and secure Internet is crucial to our nation's economic security.
- The Internet is Vital to American Jobs: Fifty-nine percent say their job is dependent on a safe and secure Internet, and 79 percent say losing Internet access for 48 consecutive hours would be disruptive, with 33 percent saying it would be extremely disruptive.
- Yet a Majority of Americans Do Not Feel Completely Safe Online: Ninety percent say they do not feel completely safe from viruses, malware and hackers while on the Internet.
- Smartphone Use Grows, Security Lags: 63% feel their smartphones are safe from hackers yet – pointing to a strong disconnect – 57% have never backed up their devices by storing the information or data elsewhere and 63% have never installed security software or apps to protect against viruses or malware.
- Bring Your Own Device Policies Lacking: 48% of employed Americans are allowed to use a personal tablet, smartphone or laptop to perform job functions and 31% can connect to their work network using these personal devices. However, 44% of employed say their employers do not have formal BYOD policies.
- 25% Notified Data Was Exposed in Data Breach: One in four received notification by a business, online service provider or organization that their personally identifiable information (e.g. password, credit card number, email address, etc.) was lost or compromised because of a data breach.
The need for consumers to stay educated is greater now than ever, with nearly nine in ten Americans using their computers for banking, stock trading or reviewing personal medical information. A recent McAfee study of unprotected rates among PC users globally ranked the United States the fifth least protected country. It also found that 19.32 percent of Americans browse the Internet without any protection: 12.25 percent have no security software installed at all, and 7.07 percent have security software installed but disabled.
I hope these concerned netizens take the time during the next few weeks to learn more about keeping their devices, privacy and information protected. McAfee is very excited to work with NCSA once again to bring these issues to the forefront and continue these efforts to educate the public on these very real threats to consumer’s privacy, identity and overall online safety.
For more information about this survey and tips for consumers, you can check out the:
- Full study and a fact sheet at: http://www.staysafeonline.org/stay-safe-online/resources/
- Press release at: http://www.staysafeonline.org/about-us/news/details/?id=815
- Remarks from federal, state and local officials, and cyber industry leaders from companies to kick off the month’s efforts will be broadcast via Facebook Live today beginning at 10:00 a.m. ET at: https://www.facebook.com/FacebookDC/app_105217732913495?ref=ts | <urn:uuid:9ca51fbf-4258-4248-b408-4338906ce892> | CC-MAIN-2017-04 | https://securingtomorrow.mcafee.com/consumer/online-safety-survey2012/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00012-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93218 | 835 | 2.609375 | 3 |
Around the world, nations have written constitutions that rule them, and a Google-aided project has now placed some 177 constitutions online.
Google's Ideas think tank has helped create a fascinating online collection of constitutions for scores of nations around the world, where visitors can explore and learn about how the documents change over time.
The new site, called Constitute, was built by the Comparative Constitutions Project and supported with the help of the Google Ideas think tank, Sara Sinclair Brody, the Google Ideas product manager, wrote in a Sept. 23 post on the Google Official Blog.
"Constitutions are as unique as the people they govern, and have been around in one form or another for millennia," wrote Brody. "But did you know that every year approximately five new constitutions are written, and 20 to 30 are amended or revised? Or that Africa has the youngest set of constitutions, with 19 out of the 39 constitutions written globally since 2000 from the region?"
Those are the kinds of facts and lessons that can be gleaned from this intriguing collection of some 177 constitutions, which includes original and amended versions from many nations.
So far, the collection includes constitutions over time from countries including Afghanistan, Albania, Australia, Belarus, Belgium, Bolivia, Cambodia, China, Cuba, Finland, Gambia, Greece, Haiti, Honduras, India, Ireland, Italy, Japan, Kuwait, Libya, Madagascar, Mexico, Moldova, the Netherlands, Norway, Pakistan, Romania, Russia, Serbia, Singapore, South Africa, Spain, Turkey, the Ukraine, the United States, Yemen and Zambia.
"The process of redesigning and drafting a new constitution can play a critical role in uniting a country, especially following periods of conflict and instability," wrote Brody. "In the past, it's been difficult to access and compare existing constitutional documents and language–which is critical to drafters–because the texts are locked up in libraries or on the hard drives of constitutional experts. Although the process of drafting constitutions has evolved from chisels and stone tablets to pens and modern computers, there has been little innovation in how their content is sourced and referenced."
That's where the new Constitute site comes in as a resource for scholars, historians, political leaders and others to probe constitutions from around the world and the changes that have followed them after they were originally written.
"Constitute enables people to browse and search constitutions via curated and tagged topics, as well as by country and year," wrote Brody. "The Comparative Constitutions Project cataloged and tagged nearly 350 themes, so people can easily find and compare specific constitutional material."
The theme topics include amendments, citizenship, elections, cultures, international law, regulations and more.
"Our aim is to arm drafters with a better tool for constitution design and writing," she wrote. "We also hope citizens will use Constitute to learn more about their own constitutions, and those of countries around the world."
Google Ideas provided a grant to the University of Texas at Austin for the Comparative Constitutions Project, according to the group. Other funding was also provided by the Indigo Trust. Since 2005, the Comparative Constitutions Project has also received contributions from the National Science Foundation, the Cline Center for Democracy, the University of Texas, the University of Chicago and the Constitution Unit at University College London. | <urn:uuid:46cf93c1-ab8f-4f96-94b8-44536c7fdcd5> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/google-think-tank-helps-build-virtual-museum-for-world-constitutions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00434-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957542 | 710 | 2.828125 | 3 |
After a year and a half of culling through 6,100 applicants, NASA has chosen four men and four women to train to become astronauts and potentially travel to an asteroid or even Mars.
One of the astronaut trainees is a physicist and chief technology officer.
"These new space explorers asked to join NASA because they know we're doing big, bold things here -- developing missions to go farther into space than ever before," said NASA Administrator Charles Bolden, in a written statementI. "They're excited about the science we're doing on the International Space Station and our plan to launch from U.S. soil to there on spacecraft built by American companies. And they're ready to help lead the first human mission to an asteroid and then on to Mars."
The space agency reported that the eight-member 2013 astronaut candidate class comes from the second largest number of applications NASA has ever received. The group will receive technical training at space centers around the world to prepare for missions to low-Earth orbit, an asteroid and Mars.
In 2004, President George W. Bush called on NASA to send humans back to the moon by 2020 in a move that he said would prepare the space agency for a manned-mission to Mars.
More recently, President Barack Obama formulated a new plan that calls on NASA to build next-generation heavy-lift engines and robotics technology for use in space travel.
In April, scientists at the University of Washington reported they are working on a rocket that they say could enable astronauts to reach Mars in just 30 days. Using current technology, a round-trip human mission to Mars would take more than four years.
As soon as 2016, NASA plans to send an unmanned robotic spacecraft to an asteroid. The $800 million effort will be the first U.S. mission to carry asteroid samples back to Earth.
"This year we have selected eight highly qualified individuals who have demonstrated impressive strengths academically, operationally, and physically" said Janet Kavandi, director of Flight Crew Operations at Johnson Space Center, in a statement. "They have diverse backgrounds and skill sets that will contribute greatly to the existing astronaut corps. Based on their incredible experiences to date, I have every confidence that they will apply their combined expertise and talents to achieve great things for NASA and this country in the pursuit of human exploration."
The astronaut candidates will begin training at Johnson Space Center in Houston this August. They are:
- Josh A. Cassada, 39, who is originally from White Bear Lake, Minn. Cassada is a former naval aviator who is a physicist by training. Today he serves as co-founder and Chief Technology Officer for Quantum Opus, which focuses on quantum optics research.
- Victor J. Glover, 37, of Pomona, Calif. and Prosper, Texas, a Lt. Commander with the U.S. Navy. He currently serves as a Navy Legislative Fellow in the U.S. Congress.
- Tyler N. Hague, 37, of Hoxie, Kan., who is a Lt. Colonel with the U.S. Air Force. Hague is supporting the Department of Defense as deputy chief of the Joint Improvised Explosive Device Defeat Organization.
- Christina M. Hammock, 34, of Jacksonville, N.C., who serves as National Oceanic and Atmospheric Administration (NOAA) Station Chief in American Samoa.
- Nicole Aunapu Mann, 35, from Penngrove, Calif., who is a Major in the U.S. Marine Corps. Mann is an F/A-18 pilot, currently serving as an Integrated Product Team Lead at the U.S. Naval Air Station.
- Anne C. McClain, 34, originally from Spokane, Wash., who is a Major with the U.S. Army. She is an OH-58 helicopter pilot, and a recent graduate of U.S. Naval Test Pilot School at Naval Air Station.
- Jessica U. Meir, Ph.D., 35, who is from Caribou, Maine. She is an Assistant Professor of Anesthesia at Harvard Medical School, Massachusetts General Hospital, in Boston.
- Andrew R. Morgan, M.D., 37, from New Castle, Penn., who is a Major with the U.S. Army. He has experience as an emergency physician and flight surgeon for the Army special operations community, and currently is completing a sports medicine fellowship.
This article, "NASA's new astronauts could one day blast off to Mars," was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:5b11faf9-469f-4606-b82e-d6831d95c113> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2497851/government-it/nasa-s-new-astronauts-could-one-day-blast-off-to-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955604 | 988 | 3.046875 | 3 |